WorldWideScience

Sample records for modeling hlm analyses

  1. LANL High-Level Model (HLM) database development letter report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-10-01

Traditional methods of evaluating munitions have been able to successfully compare the capabilities of like munitions. On the modern battlefield, however, many different types of munitions compete for the same set of targets. Assessing the overall stockpile capability and proper mix of these weapons is not a simple task, as their use depends upon the specific geographic region of the world, the threat capabilities, the tactics and operational strategy used by both the US and Threat commanders, and of course the type and quantity of munitions available to the CINC. To sort out these types of issues, a hierarchical set of dynamic, two-sided combat simulations is generally used. The DoD has numerous suitable models for this purpose, but rarely are the models focused on munitions expenditures. Rather, they are designed to perform overall platform assessments and force mix evaluations. However, in some cases, the models could be easily adapted to provide this information, since it is resident in the model's database. Unfortunately, these simulations' complexity (their greatest strength) precludes quick-turnaround assessments of the type and scope required by senior decision-makers.

  2. Hierarchical linear modeling (HLM) of longitudinal brain structural and cognitive changes in alcohol-dependent individuals during sobriety

    DEFF Research Database (Denmark)

    Yeh, P.H.; Gazdzinski, S.; Durazzo, T.C.

    2007-01-01

Background: Hierarchical linear modeling (HLM) can reveal complex relationships between longitudinal outcome measures and their covariates under proper consideration of potentially unequal error variances. We demonstrate the application of HLM to the study of magnetic resonance imaging (MRI)-der...

  3. How to do Meta-Analysis using HLM software

    OpenAIRE

    Petscher, Yaacov

    2013-01-01

This is a step-by-step presentation of how to run a meta-analysis using HLM software. Because it is a variance-known model, it is run not through the GUI but in batch mode. These slides show how to prepare the data and run the analysis.
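The "variance-known" (V-known) model mentioned here weights each study's effect size by the inverse of its known sampling variance. As an illustration only (the effect sizes and variances below are invented, not taken from the slides), the fixed-effect version of that weighting can be sketched in Python:

```python
# Hedged sketch: fixed-effect inverse-variance weighting, the core of the
# "variance-known" meta-analysis model HLM fits in batch mode.

def fixed_effect_meta(effects, variances):
    """Pool study effect sizes using inverse-variance weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # variance of the pooled estimate
    return pooled, pooled_var

effects = [0.30, 0.45, 0.10]    # hypothetical study effect sizes
variances = [0.04, 0.09, 0.02]  # known sampling variances (hence "V-known")
pooled, pooled_var = fixed_effect_meta(effects, variances)
```

HLM's batch mode extends this with study-level predictors; the sketch shows only the unconditional pooled estimate.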

  4. An extensible analysable system model

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

Analysing real-world systems for vulnerabilities with respect to security and safety threats is a difficult undertaking, not least due to a lack of availability of formalisations for those systems. While both formalisations and analyses can be found for artificial systems such as software…, this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security… are based on (quite successful) ad-hoc techniques. We believe they can be significantly improved beyond the state of the art by pairing them with static analysis techniques. In this paper we present an approach to both formalising those real-world systems, as well as providing an underlying semantics, which…

  5. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating.

  6. Current and future research on corrosion and thermalhydraulic issues of HLM cooled reactors and on LMR fuels for fast reactor systems

    International Nuclear Information System (INIS)

    Knebel, J.U.; Konings, R.J.M.

    2002-01-01

    Heavy liquid metals (HLM) such as lead (Pb) or lead-bismuth eutectic (Pb-Bi) are currently investigated world-wide as coolant for nuclear power reactors and for accelerator driven systems (ADS). Besides the advantages of HLM as coolant and spallation material, e.g. high boiling point, low reactivity with water and air and a high neutron yield, some technological issues, such as high corrosion effects in contact with steels and thermalhydraulic characteristics, need further experimental investigations and physical model improvements and validations. The paper describes some typical HLM cooled reactor designs, which are currently considered, and outlines the technological challenges related to corrosion, thermalhydraulic and fuel issues. In the first part of the presentation, the status of presently operated or planned test facilities related to corrosion and thermalhydraulic questions will be discussed. First approaches to solve the corrosion problem will be given. The approach to understand and model thermalhydraulic issues such as heat transfer, turbulence, two-phase flow and instrumentation will be outlined. In the second part of the presentation, an overview will be given of the advanced fuel types that are being considered for future liquid metal reactor (LMR) systems. Advantages and disadvantages will be discussed in relation to fabrication technology and fuel cycle considerations. For the latter, special attention will be given to the partitioning and transmutation potential. Metal, oxide and nitride fuel materials will be discussed in different fuel forms and packings. For both parts of the presentation, an overview of existing co-operations and networks will be given and the needs for future research work will be identified. (authors)

  7. Modelling and analysing oriented fibrous structures

    International Nuclear Information System (INIS)

    Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J

    2014-01-01

A mathematical model for fibrous structures using a direction-dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth.

  8. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...
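As a hedged, simplified illustration of why hierarchical structure matters (the data below are simulated, not from the paper): when observations are nested in groups, the intraclass correlation (ICC) quantifies how much variance lies between groups, which is exactly the kind of variance-covariance parameter HLM estimates.

```python
import random
from statistics import mean, pvariance

random.seed(1)

# Simulate two-level ecological data: plots (level 1) nested in sites (level 2).
sites, plots_per_site = 20, 30
true_site_means = [random.gauss(10.0, 2.0) for _ in range(sites)]  # between-site spread
data = [(s, random.gauss(true_site_means[s], 1.0))                 # within-site noise
        for s in range(sites) for _ in range(plots_per_site)]

# One-way variance decomposition: between-site vs. within-site variance.
site_means = {s: mean(y for g, y in data if g == s) for s in range(sites)}
between = pvariance(site_means.values())
within = mean(pvariance([y for g, y in data if g == s]) for s in range(sites))
icc = between / (between + within)  # share of variance at the site level
```

A large ICC (here driven by the simulated between-site SD of 2 versus within-site SD of 1) signals that a pooled ordinary regression would misstate standard errors, which is what motivates the hierarchical model.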

  9. Modelling and Analysing Socio-Technical Systems

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Ivanova, Marieta Georgieva; Nielson, Flemming

    2015-01-01

Modern organisations are complex, socio-technical systems consisting of a mixture of physical infrastructure, human actors, policies and processes. An increasing number of attacks on these organisations exploits vulnerabilities on all different levels, for example combining a malware attack with social engineering. Due to this combination of attack steps on technical and social levels, risk assessment in socio-technical systems is complex. Therefore, established risk assessment methods often abstract away the internal structure of an organisation and ignore human factors when modelling and assessing attacks. In our work we model all relevant levels of socio-technical systems, and propose evaluation techniques for analysing the security properties of the model. Our approach simplifies the identification of possible attacks and provides qualified assessment and ranking of attacks based…

  10. Properties of unirradiated HTGR core support and permanent side reflector graphites: PGX, HLM, 2020, and H-440N

    International Nuclear Information System (INIS)

    Engle, G.B.

    1977-05-01

    Candidate materials for HTGR core supports and permanent side reflectors--graphite grades 2020 (Stackpole Carbon Company), H-440N (Great Lakes Carbon Corporation), PGX (Union Carbide Corporation), and HLM (Great Lakes Carbon Corporation)--are described and property data are presented. Properties measured are bulk density; tensile properties including ultimate strength, modulus of elasticity, and strain at fracture; flexural strength; compressive properties including ultimate strength, modulus of elasticity, and strain at fracture; and chemical impurity content

  11. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

System models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outsiders. … It is a difficult, if not impossible, task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate…

  12. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
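The propagation scheme described here (sample each model's empirical residual distribution and push the samples through the model chain) can be sketched as follows; the two-stage chain, coefficients and residual pools are invented stand-ins for the six models named above:

```python
import random

random.seed(0)

def model_chain(ghi, resid_poa, resid_dc, n_samples=2000):
    """Monte Carlo propagation: each stage adds a residual resampled
    from that model's empirical residual pool."""
    outputs = []
    for _ in range(n_samples):
        poa = 1.1 * ghi + random.choice(resid_poa)  # stand-in POA irradiance model
        dc = 0.18 * poa + random.choice(resid_dc)   # stand-in DC power model
        outputs.append(dc)
    return outputs

# Hypothetical empirical residual pools (in the report these come from
# validation data for each model in the sequence).
resid_poa = [random.gauss(0.0, 5.0) for _ in range(500)]
resid_dc = [random.gauss(0.0, 2.0) for _ in range(500)]
samples = model_chain(800.0, resid_poa, resid_dc)  # empirical output distribution
```

The spread of `samples` is the propagated uncertainty; re-running with one residual pool zeroed out apportions that uncertainty to individual models, which is the kind of comparison that identified the POA and effective-irradiance models as dominant.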

  13. Modelling and Analyses of Embedded Systems Design

    DEFF Research Database (Denmark)

    Brekling, Aske Wiid

We present the MoVES language: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi-core). We give a formal model for MoVES that captures and gives semantics to the elements of specifications in the MoVES language. We show that even for seemingly simple systems, the complexity of verifying real-time constraints can be overwhelming, but we give an upper limit to the size of the search-space that needs examining. Furthermore, the formal model exposes… Model-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems.

  14. Radiobiological analyses based on cell cluster models

    International Nuclear Information System (INIS)

    Lin Hui; Jing Jia; Meng Damin; Xu Yuanying; Xu Liangfeng

    2010-01-01

The influence of cell cluster dimension on EUD and TCP for targeted radionuclide therapy was studied using the radiobiological method. The radiobiological features of tumors with an activity-lacking core were evaluated and analyzed by associating EUD, TCP and SF. The results show that EUD increases with tumor dimension under a homogeneous activity distribution. If the extra-cellular activity is taken into consideration, EUD will increase by 47%. With activity lacking in the tumor center and the requirement of TCP=0.90, the α cross-fire of ²¹¹At could make up a maximum (48 μm)³ activity-lack for the Nucleus source, but (72 μm)³ for the Cytoplasm, Cell Surface, Cell and Voxel sources. In clinical practice, the physician could prefer the suggested dose of the Cell Surface source to secure local tumor control in case of under-dose. Generally, TCP can well exhibit the effect difference between under-dose and due-dose, but not between due-dose and over-dose, which makes TCP more suitable for therapy plan choice. EUD can well exhibit the differences between different models and activity distributions, which makes it more suitable for research work. When using EUD to study the influence of inhomogeneous activity distribution, one should keep the configuration and volume of the former and latter models consistent. (authors)

  15. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague, (Czech Republic)

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  16. Seismic Soil-Structure Interaction Analyses of a Deeply Embedded Model Reactor – SASSI Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Nie J.; Braverman J.; Costantino, M.

    2013-10-31

This report summarizes the SASSI analyses of a deeply embedded reactor model performed by BNL and CJC and Associates, as part of the seismic soil-structure interaction (SSI) simulation capability project for the NEAMS (Nuclear Energy Advanced Modeling and Simulation) Program of the Department of Energy. The SASSI analyses included three cases: 0.2 g, 0.5 g, and 0.9 g, all of which refer to nominal peak accelerations at the top of the bedrock. The analyses utilized the modified subtraction method (MSM) for performing the seismic SSI evaluations. Each case consisted of two analyses: input motion in one horizontal direction (X) and input motion in the vertical direction (Z), both of which utilized the same in-column input motion. Besides providing SASSI results for use in comparison with the time domain SSI results obtained using the DIABLO computer code, this study also leads to the recognition that the frequency-domain method should be modernized so that it can better serve its mission-critical role for analysis and design of nuclear power plants.

  17. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or
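The idea of replacing a single global score with a time-resolved performance series can be illustrated with a rolling RMSE over synthetic data (the window choice and series below are illustrative, not the paper's):

```python
import math

def rolling_rmse(obs, sim, window):
    """RMSE computed in a moving window, yielding a time series of model
    performance instead of one global number."""
    return [
        math.sqrt(sum((o - s) ** 2 for o, s in
                      zip(obs[i:i + window], sim[i:i + window])) / window)
        for i in range(len(obs) - window + 1)
    ]

obs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]       # observed reference series (synthetic)
sim = [1.0, 2.0, 3.0, 5.0, 7.0, 9.0]       # simulation that degrades late
scores = rolling_rmse(obs, sim, window=3)  # model performance over time
```

A global RMSE would average the early perfect fit with the late drift; the rolling series localises the error in time, which is the kind of insight a single fit statistic cannot provide.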

  18. STH-CFD Codes Coupled Calculations Applied to HLM Loop and Pool Systems

    Directory of Open Access Journals (Sweden)

    M. Angelucci

    2017-01-01

This work describes the coupling methodology between a modified version of RELAP5/Mod3.3 and ANSYS Fluent CFD code developed at the University of Pisa. The described coupling procedure can be classified as “two-way,” nonoverlapping, “online” coupling. In this work, a semi-implicit numerical scheme has been implemented, giving greater stability to the simulations. A MATLAB script manages both the codes, oversees the reading and writing of the boundary conditions at the interfaces, and handles the exchange of data. A new tool was used to control the Fluent session, allowing a reduction of the time required for the exchange of data. The coupling tool was used to simulate a loop system (NACIE facility and a pool system (CIRCE facility, both working with Lead Bismuth Eutectic and located at ENEA Brasimone Research Centre. Some modifications in the coupling procedure turned out to be necessary to apply the methodology in the pool system. In this paper, the comparison between the obtained coupled numerical results and the experimental data is presented. The good agreement between experiments and calculations evinces the capability of the coupled calculation to model correctly the involved phenomena.

  19. Modeling Citable Textual Analyses for the Homer Multitext

    Directory of Open Access Journals (Sweden)

    Christopher William Blackwell

    2016-12-01

The 'Homer Multitext' project (HMT) is documenting the language and structure of Greek epic poetry, and the ancient tradition of commentary on it. The project’s primary data consist of editions of Greek texts; automated and manually created readings analyze the texts across historical and thematic axes. This paper describes an abstract model we follow in documenting an open-ended body of diverse analyses. The analyses apply to passages of texts at different levels of granularity; they may refer to overlapping or mutually exclusive passages of text; and they may apply to non-contiguous passages of text. All are recorded with explicit, concise, machine-actionable canonical citation of both text passage and analysis in a scheme aligning all analyses to a common notional text. We cite our texts with URNs that capture a passage’s position in an 'Ordered Hierarchy of Citation Objects' (OHCO2). Analyses are modeled as data-objects with five properties. We create collections of ‘analytical objects’, each uniquely identified by its own URN and each aligned to a particular edition of a text by a URN citation. We can view these analytical objects as an extension of the edition’s citation hierarchy; since they are explicitly ordered by their alignment with the edition they analyze, each collection of analyses satisfies the OHCO2 model of a citable text. We call these texts that are derived from and aligned to an edition ‘analytical exemplars’.
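As a sketch only (the field names beyond the two URN properties are assumptions, not the project's actual schema), an "analytical object" with five properties, aligned to an edition by URN citation and ordered so that a collection behaves like a citable text, might be modeled as:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AnalyticalObject:
    """One analysis record; property names other than the two URNs are
    hypothetical stand-ins for the five properties described."""
    analysis_urn: str   # unique identifier of this analysis
    passage_urn: str    # passage of the aligned edition it applies to
    sequence: int       # position in the exemplar's ordering
    analysis_type: str  # e.g. "morphology" (hypothetical category)
    value: str          # content of the analysis

objs = [
    AnalyticalObject("urn:a:2", "urn:cts:greekLit:tlg0012.tlg001:1.2", 2,
                     "morphology", "verb"),
    AnalyticalObject("urn:a:1", "urn:cts:greekLit:tlg0012.tlg001:1.1", 1,
                     "morphology", "noun"),
]
# Ordering by alignment with the edition makes the collection itself
# behave like a citable (OHCO2-style) text: an "analytical exemplar".
exemplar = sorted(objs, key=lambda o: o.sequence)
```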

  20. Use of flow models to analyse loss of coolant accidents

    International Nuclear Information System (INIS)

    Pinet, Bernard

    1978-01-01

    This article summarises current work on developing the use of flow models to analyse loss-of-coolant accident in pressurized-water plants. This work is being done jointly, in the context of the LOCA Technical Committee, by the CEA, EDF and FRAMATOME. The construction of the flow model is very closely based on some theoretical studies of the two-fluid model. The laws of transfer at the interface and at the wall are tested experimentally. The representativity of the model then has to be checked in experiments involving several elementary physical phenomena [fr

  1. Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach

    Directory of Open Access Journals (Sweden)

    Alistair McNair Senior

    2016-01-01

Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition. Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
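As a minimal, hypothetical illustration of the network-metric step (the win/loss matrix below is invented, not simulation output), a simple dominance metric can be computed from agent interactions and used to rank agents:

```python
# wins[i][j]: times agent i displaced agent j in simulated foraging
# conflicts (hypothetical data, standing in for agent-based model output).
wins = [
    [0, 5, 6, 7],
    [1, 0, 4, 5],
    [0, 2, 0, 3],
    [0, 1, 1, 0],
]

def dominance_index(wins):
    """Fraction of each agent's interactions that it won: a simple
    node-level metric of the directed dominance network."""
    n = len(wins)
    scores = []
    for i in range(n):
        won = sum(wins[i])
        lost = sum(wins[j][i] for j in range(n))
        scores.append(won / (won + lost))
    return scores

scores = dominance_index(wins)
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])  # hierarchy order
```

In the paper's framework, such node-level metrics are the quantities compared between simulated and empirical dominance networks, and used to predict agent fitness.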

  2. SVM models for analysing the headstreams of mine water inrush

    Energy Technology Data Exchange (ETDEWEB)

    Yan Zhi-gang; Du Pei-jun; Guo Da-zhi [China University of Science and Technology, Xuzhou (China). School of Environmental Science and Spatial Informatics

    2007-08-15

The support vector machine (SVM) model was introduced to analyse the headstream of water inrush in a coal mine. The SVM model, based on a hydrogeochemical method, was constructed for recognising two kinds of headstreams and the H-SVMs model was constructed for recognising multi-headstreams. The SVM method was applied to analyse the conditions of two mixed headstreams and the value of the SVM decision function was investigated as a means of denoting the hydrogeochemical abnormality. The experimental results show that the SVM is based on a strict mathematical theory, has a simple structure and a good overall performance. Moreover the parameter W in the decision function can describe the weights of discrimination indices of the headstream of water inrush. The value of the decision function can denote hydrogeochemistry abnormality, which is significant in the prevention of water inrush in a coal mine. 9 refs., 1 fig., 7 tabs.
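A minimal sketch of the two-headstream recognition step, assuming scikit-learn's SVC as a stand-in for the paper's SVM and using invented two-ion hydrogeochemical training data:

```python
from sklearn.svm import SVC

# Hypothetical features: two ion concentrations per water sample;
# labels: 0 = limestone aquifer, 1 = sandstone aquifer (invented headstreams).
X = [[120, 300], [130, 320], [125, 310], [40, 60], [45, 70], [50, 65]]
y = [0, 0, 0, 1, 1, 1]

# Linear kernel: the weight vector of the decision function plays the role
# of the parameter W described in the abstract.
clf = SVC(kernel="linear").fit(X, y)
prediction = clf.predict([[128, 315], [42, 68]])  # classify new samples
```

For the multi-headstream case, the paper's H-SVMs correspond to a multi-class scheme; SVC handles multi-class data via one-vs-one classifiers by default.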

  3. Performance of neutron kinetics models for ADS transient analyses

    International Nuclear Information System (INIS)

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. 

  4. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Du, Qiang [Pennsylvania State Univ., State College, PA (United States)

    2014-11-12

    The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next

  5. Integration efficiency for model reduction in micro-mechanical analyses

    Science.gov (United States)

    van Tuijl, Rody A.; Remmers, Joris J. C.; Geers, Marc G. D.

    2017-11-01

    Micro-structural analyses are an important tool to understand material behavior on a macroscopic scale. The analysis of a microstructure is usually computationally very demanding and there are several reduced order modeling techniques available in literature to limit the computational costs of repetitive analyses of a single representative volume element. These techniques to speed up the integration at the micro-scale can be roughly divided into two classes; methods interpolating the integrand and cubature methods. The empirical interpolation method (high-performance reduced order modeling) and the empirical cubature method are assessed in terms of their accuracy in approximating the full-order result. A micro-structural volume element is therefore considered, subjected to four load-cases, including cyclic and path-dependent loading. The differences in approximating the micro- and macroscopic quantities of interest are highlighted, e.g. micro-fluctuations and stresses. Algorithmic speed-ups for both methods with respect to the full-order micro-structural model are quantified. The pros and cons of both classes are thereby clearly identified.

  6. A 1024 channel analyser of model FH 465

    International Nuclear Information System (INIS)

    Tang Cunxun

    1988-01-01

The FH 465 is a renewed version of the model FH 451 1024-channel analyser. In addition to the simple operation and fine display featured by its predecessor, the core memory has been replaced by semiconductor memory and the integration has been improved; the wide employment of low-power 74LS devices has not only greatly decreased the cost, but also allows easy interfacing with Apple-II, Great Wall-0520-CH or IBM-PC/XT microcomputers. The operating principle, main specifications and test results are described.

  7. Multi-state models: metapopulation and life history analyses

    Directory of Open Access Journals (Sweden)

    Arnason, A. N.

    2004-06-01

Multi-state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004), but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi-state models, by permitting estimation of stage-specific survival and transition rates, can help assess trade-offs between life history mechanisms (e.g. Yoccoz et al., 2000). These trade-offs are also important in meta-population analyses where, for example, the pre- and post-breeding rates of transfer among sub-populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al. 2003; Breton et al., in review). Further examples of the use of multi-state models in analysing dispersal and life-history trade-offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi-state models to address problems arising from the violation of mark-recapture assumptions leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi-state Mark-Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such

  8. Comparison between tests and analyses for ground-foundation models

    International Nuclear Information System (INIS)

    Moriyama, Ken-ichi; Hibino, Hirosi; Izumi, Masanori; Kiya, Yukiharu.

    1991-01-01

The laboratory tests were carried out on two ground models made of silicone rubber (hard and soft ground models) and a foundation model made of aluminum in order to confirm experimentally the embedment effects on the soil-structure interaction system. The details of the procedure and the results of the tests are described in the companion paper. Analytical studies of the embedment effect on the seismic response of buildings have been performed in recent years, and the analysis tools have been used in the seismic design procedure for nuclear power plant facilities. In this paper, the embedment effects on the soil-structure interaction system are confirmed by simulation analysis, and the analysis tools are verified through the simulation analyses. The following conclusions can be drawn from the comparison between laboratory test results and analysis results. (1) The effects of embedment, such as the increase in the impedance functions and in the rotational component of the foundation input motions, were clarified by the simulation analyses and laboratory tests. (2) The axisymmetric FEM results showed good agreement with the test results processed by means of the transient response to eliminate reflected waves, and the analysis tools were thus confirmed experimentally. (3) The excavated portion of the soil affected the foundation input motion rather than the impedance function, since there was little difference between the impedance functions obtained by wave propagation theory and those obtained by the axisymmetric FEM, while the rotational component of the foundation input motions increased significantly. (J.P.N.)

  9. The Effect of Adherence to Dietary Tracking on Weight Loss: Using HLM to Model Weight Loss over Time

    Directory of Open Access Journals (Sweden)

    John Spencer Ingels

    2017-01-01

Full Text Available The role of dietary tracking in weight loss remains unexplored despite being part of multiple diabetes and weight management programs. Hence, participants of the Diabetes Prevention and Management (DPM) program (12 months, 22 sessions) tracked their food intake for the duration of the study. A scatterplot of days tracked versus total weight loss revealed a nonlinear relationship. The number of possible tracking days was therefore divided to create three groups of participants: rare trackers (<33% of total days tracked), inconsistent trackers (33-66%), and consistent trackers (>66% of total days tracked). After controlling for initial body mass index, hemoglobin A1c, and gender, only consistent trackers had significant weight loss (−9.99 pounds), following a linear relationship with consistent loss throughout the year. In addition, the weight loss trend for the rare and inconsistent trackers followed a nonlinear path, with the holidays slowing weight loss and the onset of summer increasing weight loss. These results show the importance of frequent dietary tracking for consistent long-term weight loss success.
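The three-way adherence split described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the <33% / 33-66% / >66% cut-offs, the function name and the sample data are our assumptions.

```python
# Hypothetical sketch of the adherence grouping; the cut-offs are assumed
# from the abstract's three-group split, not taken from published code.

def tracking_group(days_tracked, possible_days):
    """Classify a participant by the fraction of possible days tracked."""
    frac = days_tracked / possible_days
    if frac < 1 / 3:
        return "rare"
    if frac <= 2 / 3:
        return "inconsistent"
    return "consistent"

# Toy participants: (days tracked, possible tracking days)
participants = [(40, 365), (150, 365), (300, 365)]
print([tracking_group(d, n) for d, n in participants])
# → ['rare', 'inconsistent', 'consistent']
```

A hierarchical model would then let the weight-loss trajectory over time vary by group and by participant, which is what HLM provides.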

  10. A theoretical model for analysing gender bias in medicine

    Directory of Open Access Journals (Sweden)

    Johansson Eva E

    2009-08-01

Full Text Available Abstract During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  11. A theoretical model for analysing gender bias in medicine.

    Science.gov (United States)

    Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina

    2009-08-03

During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  12. Comparison of two potato simulation models under climate change. I. Model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Wolf, J.

    2002-01-01

    To analyse the effects of climate change on potato growth and production, both a simple growth model, POTATOS, and a comprehensive model, NPOTATO, were applied. Both models were calibrated and tested against results from experiments and variety trials in The Netherlands. The sensitivity of model

  13. Impact of sophisticated fog spray models on accident analyses

    International Nuclear Information System (INIS)

    Roblyer, S.P.; Owzarski, P.C.

    1978-01-01

The N-Reactor confinement system release dose to the public in a postulated accident is reduced by washing the confinement atmosphere with fog sprays. This allows a low-pressure release of confinement atmosphere containing fission products through filters and out an elevated stack. The current accident analysis required revision of the CORRAL code and other codes such as CONTEMPT to properly model the N Reactor confinement as a system of multiple fog-sprayed compartments. In revising these codes, more sophisticated models for the fog sprays and iodine plateout were incorporated to remove some of the conservatism in the steam condensing rates, fission product washout and iodine plateout used in previous studies. The CORRAL code, which was used to describe the transport and deposition of airborne fission products in LWR containment systems for the Rasmussen Study, was revised to describe fog spray removal of molecular iodine (I2) and particulates in multiple compartments for sprays having individual characteristics of on-off times, flow rates, fall heights, and drop sizes in changing containment atmospheres. During postulated accidents, the code determined the fission product removal rates internally rather than from input decontamination factors. A discussion is given of how the calculated plateout and washout rates vary with time throughout the analysis. The results of the accident analyses indicated that more credit could be given to fission product washout and plateout. An important finding was that the release of fission products to the atmosphere and adsorption of fission products on the filters were significantly lower than previous studies had indicated.
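The internally computed removal rates described above amount to first-order washout, dC/dt = -λ(t)·C, with a rate that is nonzero only while a spray is active. A minimal single-compartment sketch with invented numbers (not the CORRAL implementation):

```python
# First-order spray washout sketch: dC/dt = -lam(t) * C, forward Euler.
# One compartment, constant removal rate while the spray is on; all
# numerical values here are made up for illustration.

def washout(c0, lam, t_on, t_total, dt=0.01):
    """Airborne concentration after t_total s, spray active for the first t_on s."""
    c, t = c0, 0.0
    while t < t_total:
        rate = lam if t < t_on else 0.0   # spray on-off schedule
        c -= rate * c * dt                # first-order removal
        t += dt
    return c

c_end = washout(c0=1.0, lam=0.1, t_on=30.0, t_total=60.0)
print(c_end)  # ≈ exp(-0.1 * 30) ≈ 0.0498
```

A multi-compartment model couples several such equations through inter-compartment flows and lets λ vary with drop size, fall height and atmosphere conditions.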

  14. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gunzburger, Max [Florida State Univ., Tallahassee, FL (United States)

    2015-02-17

    We have treated the modeling, analysis, numerical analysis, and algorithmic development for nonlocal models of diffusion and mechanics. Variational formulations were developed and finite element methods were developed based on those formulations for both steady state and time dependent problems. Obstacle problems and optimization problems for the nonlocal models were also treated and connections made with fractional derivative models.

  15. Improved analyses using function datasets and statistical modeling

    Science.gov (United States)

    John S. Hogland; Nathaniel M. Anderson

    2014-01-01

    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...

  16. An electrodynamic model to analyse field emission thrusters

    Energy Technology Data Exchange (ETDEWEB)

    Cardelli, E.; Del Zoppo, R.; Venturini, G.

    1987-12-01

    After a short description of the working principle of field emission thrusters, a surface emission electrodynamic model, capable of describing the required propulsive effects, is shown. The model, developed according to cylindrical geometry, provides one-dimensional differential relations and, therefore, easy resolution. The characteristic curves obtained are graphed. Comparison with experimental data confirms the validity of the proposed model.

  17. A didactic Input-Output model for territorial ecology analyses

    OpenAIRE

    Garry Mcdonald

    2010-01-01

This report describes a didactic input-output modelling framework created jointly by the team at REEDS, Universite de Versailles and Dr Garry McDonald, Director, Market Economics Ltd. There are three key outputs associated with this framework: (i) a suite of didactic input-output models developed in Microsoft Excel, (ii) a technical report (this report) which describes the framework and the suite of models, and (iii) a two-week intensive workshop dedicated to the training of REEDS researcher...

  18. Modelling, singular perturbation and bifurcation analyses of bitrophic food chains.

    Science.gov (United States)

    Kooi, B W; Poggiale, J C

    2018-04-20

Two predator-prey model formulations are studied: the classical Rosenzweig-MacArthur (RM) model and the Mass Balance (MB) chemostat model. When the growth and loss rates of the predator are much smaller than those of the prey, these models are slow-fast systems, leading mathematically to a singular perturbation problem. In contrast to the RM-model, the resource for the prey is modelled explicitly in the MB-model, but this comes with additional parameters. These parameter values are chosen such that the two models become easy to compare. Both models exhibit a transcritical bifurcation, a threshold above which invasion of the predator into the prey-only system occurs, and a Hopf bifurcation, where the interior equilibrium becomes unstable, leading to a stable limit cycle. The fast-slow limit cycles are called relaxation oscillations, which for increasing differences in time scales lead to the well-known degenerate trajectories: concatenations of slow parts and fast parts of the trajectory. In the fast-slow version of the RM-model a canard explosion of the stable limit cycles occurs in the oscillatory region of the parameter space. To our knowledge this type of dynamics has not previously been observed for the RM-model, nor for more complex ecosystem models. When a bifurcation parameter crosses the Hopf bifurcation point, the amplitude of the emerging stable limit cycles increases. However, depending on the perturbation parameter, the shape of this limit cycle changes abruptly from one consisting of two concatenated slow and fast episodes with small amplitude, to one with large amplitude whose shape is similar to the relaxation oscillation, the well-known degenerate phase trajectory consisting of four episodes (a concatenation of two slow and two fast). The canard explosion point is accurately predicted by using an extended asymptotic expansion technique in the perturbation and bifurcation parameter simultaneously, where the small
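The slow-fast structure described here can be reproduced with a few lines of numerical integration. The sketch below uses the standard Rosenzweig-MacArthur equations with a small parameter eps scaling the predator dynamics; the parameter values and the forward-Euler scheme are our own illustrative choices, not those of the paper.

```python
# Slow-fast Rosenzweig-MacArthur model: logistic prey with Holling type II
# predation (fast), predator growth and loss scaled by a small eps (slow).
# Parameter values are arbitrary illustrative choices.

def rm_step(x, y, dt, a=5.0, b=3.0, d=0.8, eps=0.05):
    """One forward-Euler step; x = prey density, y = predator density."""
    response = a * x / (1.0 + b * x)      # Holling type II functional response
    fx = x * (1.0 - x) - response * y     # fast prey equation
    fy = eps * (response - d) * y         # slow predator equation
    return x + dt * fx, y + dt * fy

x, y = 0.5, 0.1
for _ in range(20000):                    # integrate up to t = 200
    x, y = rm_step(x, y, dt=0.01)
print(x, y)  # the trajectory stays bounded on a relaxation-type cycle
```

Decreasing eps further sharpens the separation between the slow and fast episodes of the cycle, which is the regime where the canard explosion appears.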

  19. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; van Deursen, A.; Pinzger, M.

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  20. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  1. Analysing Models as a Knowledge Technology in Transport Planning

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2011-01-01

Models belong to a wider family of knowledge technologies, applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame... critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic device to illuminate how such an analytic scheme may allow patterns of insight about the use, influence and role of models in planning to emerge. The main contribution of the paper is to demonstrate that concepts and terminologies from knowledge use literature can provide interpretations of significance...

  2. GOTHIC MODEL OF BWR SECONDARY CONTAINMENT DRAWDOWN ANALYSES

    International Nuclear Information System (INIS)

    Hansen, P.N.

    2004-01-01

This article introduces a GOTHIC version 7.1 model of the Secondary Containment Reactor Building post-LOCA drawdown analysis for a BWR. GOTHIC is an EPRI-sponsored thermal hydraulic code. This analysis is required by the Utility to demonstrate an ability to restore and maintain the Secondary Containment Reactor Building negative pressure condition. The technical and regulatory issues associated with this modeling are presented. The analysis includes the effect of wind, elevation and thermal impacts on pressure conditions. The model includes a multiple-volume representation which includes the spent fuel pool. In addition, heat sources and sinks are modeled as one-dimensional heat conductors. The leakage into the building is modeled to include both laminar as well as turbulent behavior as established by actual plant test data. The GOTHIC code provides components to model heat exchangers used to provide fuel pool cooling as well as area cooling via air coolers. The results of the evaluation are used to demonstrate the time that the Reactor Building is at a pressure that exceeds external conditions. This time period is established with the GOTHIC model based on the worst-case pressure conditions on the building. For this time period the Utility must assume the primary containment leakage goes directly to the environment. Once the building pressure is restored below outside conditions the release to the environment can be credited as a filtered release
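Combined laminar and turbulent in-leakage of the kind mentioned above is commonly written as a two-term function of the pressure difference, one term linear in ΔP and one proportional to √ΔP. A minimal sketch with invented coefficients (in practice these would be fitted to plant test data):

```python
import math

# Two-term in-leakage model: laminar (linear in dp) plus turbulent (sqrt of dp).
# The coefficients below are placeholders, not plant data.

def leakage(dp, c_lam=2.0, c_turb=5.0):
    """In-leakage flow for pressure difference dp >= 0 (arbitrary units)."""
    return c_lam * dp + c_turb * math.sqrt(dp)

print(leakage(0.0))  # → 0.0  (no pressure difference, no leakage)
print(leakage(4.0))  # → 18.0 (= 2*4 + 5*sqrt(4))
```

The square-root term dominates at small ΔP, which is why the turbulent component matters most near the point where building pressure approaches outside conditions.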

  3. Plasma-safety assessment model and safety analyses of ITER

    International Nuclear Information System (INIS)

    Honda, T.; Okazaki, T.; Bartels, H.-H.; Uckan, N.A.; Sugihara, M.; Seki, Y.

    2001-01-01

    A plasma-safety assessment model has been provided on the basis of the plasma physics database of the International Thermonuclear Experimental Reactor (ITER) to analyze events including plasma behavior. The model was implemented in a safety analysis code (SAFALY), which consists of a 0-D dynamic plasma model and a 1-D thermal behavior model of the in-vessel components. Unusual plasma events of ITER, e.g., overfueling, were calculated using the code and plasma burning is found to be self-bounded by operation limits or passively shut down due to impurity ingress from overheated divertor targets. Sudden transition of divertor plasma might lead to failure of the divertor target because of a sharp increase of the heat flux. However, the effects of the aggravating failure can be safely handled by the confinement boundaries. (author)

  4. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)

  5. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
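At the core of the SPCT is a loss field: an element-wise error between a candidate slip model and the reference model, whose mean is then tested for significance. The sketch below shows only the loss-field step, using a squared-difference loss and toy 2-D grids of our own choosing; the actual test additionally accounts for spatial correlation of the loss values.

```python
# Element-wise loss field between two 2-D slip distributions (squared error).
# Toy grids; the SPCT itself goes on to test the mean loss for significance.

def loss_field(model, reference):
    """Return the grid of squared slip differences between two 2-D fields."""
    return [[(m - r) ** 2 for m, r in zip(mrow, rrow)]
            for mrow, rrow in zip(model, reference)]

reference = [[1.0, 2.0], [0.5, 0.0]]   # reference slip (e.g. benchmark solution)
candidate = [[1.5, 1.5], [0.5, 1.0]]   # inverted slip model to be ranked
field = loss_field(candidate, reference)
mean_loss = sum(sum(row) for row in field) / 4
print(mean_loss)  # → 0.375
```

Ranking several candidate models by such mean losses, with the spatial-correlation correction, is what allows the test to say whether two inversions differ significantly.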

  6. Compound dislocation models (CDMs) for volcano deformation analyses

    Science.gov (United States)

    Nikkhoo, Mehdi; Walter, Thomas R.; Lundgren, Paul R.; Prats-Iraola, Pau

    2017-02-01

    Volcanic crises are often preceded and accompanied by volcano deformation caused by magmatic and hydrothermal processes. Fast and efficient model identification and parameter estimation techniques for various sources of deformation are crucial for process understanding, volcano hazard assessment and early warning purposes. As a simple model that can be a basis for rapid inversion techniques, we present a compound dislocation model (CDM) that is composed of three mutually orthogonal rectangular dislocations (RDs). We present new RD solutions, which are free of artefact singularities and that also possess full rotational degrees of freedom. The CDM can represent both planar intrusions in the near field and volumetric sources of inflation and deflation in the far field. Therefore, this source model can be applied to shallow dikes and sills, as well as to deep planar and equidimensional sources of any geometry, including oblate, prolate and other triaxial ellipsoidal shapes. In either case the sources may possess any arbitrary orientation in space. After systematically evaluating the CDM, we apply it to the co-eruptive displacements of the 2015 Calbuco eruption observed by the Sentinel-1A satellite in both ascending and descending orbits. The results show that the deformation source is a deflating vertical lens-shaped source at an approximate depth of 8 km centred beneath Calbuco volcano. The parameters of the optimal source model clearly show that it is significantly different from an isotropic point source or a single dislocation model. The Calbuco example reflects the convenience of using the CDM for a rapid interpretation of deformation data.

  7. Model analyses for sustainable energy supply under CO2 restrictions

    International Nuclear Information System (INIS)

    Matsuhashi, Ryuji; Ishitani, Hisashi.

    1995-01-01

This paper aims at clarifying key points for realizing sustainable energy supply under restrictions on CO2 emissions. For this purpose, the possibility of a solar breeding system is investigated as a key technology for sustainable energy supply. The authors describe their mathematical model simulating global energy supply and demand in the ultra-long term. Depletion of non-renewable resources and constraints on CO2 emissions are taken into consideration in the model. Computed results have shown that the present energy system based on non-renewable resources shifts to a system based on renewable resources in the ultra-long term with appropriate incentives.

  8. Vegetable parenting practices scale: Item response modeling analyses

    Science.gov (United States)

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...

  9. A Hamiltonian approach to model and analyse networks of ...

    Indian Academy of Sciences (India)

    2015-09-24

Over the past twelve years, ideas and methods from nonlinear dynamics system theory, in particular, group theoretical methods in bifurcation theory, have been ... In this manuscript, a review of the most recent work on modelling and analysis of two seemingly different systems, an array of gyroscopes and an ...

  10. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    DEFF Research Database (Denmark)

    Feng, Cai-ping; Mundy, J.

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions, TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also...

  11. Capacity allocation in wireless communication networks - models and analyses

    NARCIS (Netherlands)

    Litjens, Remco

    2003-01-01

This monograph has concentrated on capacity allocation in cellular and Wireless Local Area Networks, primarily with a network operator’s perspective. In the introductory chapter, a reference model has been proposed for the extensive suite of capacity allocation mechanisms that can be applied at

  12. Theoretical modeling and experimental analyses of laminated wood composite poles

    Science.gov (United States)

    Cheng Piao; Todd F. Shupe; Vijaya Gopu; Chung Y. Hse

    2005-01-01

    Wood laminated composite poles consist of trapezoid-shaped wood strips bonded with synthetic resin. The thick-walled hollow poles had adequate strength and stiffness properties and were a promising substitute for solid wood poles. It was necessary to develop theoretical models to facilitate the manufacture and future installation and maintenance of this novel...

  13. Complex accident scenarios modelled and analysed by Stochastic Petri Nets

    International Nuclear Information System (INIS)

    Nývlt, Ondřej; Haugen, Stein; Ferkl, Lukáš

    2015-01-01

This paper is focused on the usage of Petri nets for effective modelling and simulation of complicated accident scenarios, where the order of events can vary and some events may occur anywhere in an event chain. These cases are hardly manageable by traditional methods such as event trees – e.g. one pivotal event must often be inserted several times into one branch of the tree. Our approach is based on Stochastic Petri Nets with Predicates and Assertions and on an idea which comes from the area of Programmable Logic Controllers: an accident scenario is described as a net of interconnected blocks, which represent parts of the scenario. The scenario is first divided into parts, which are then modelled by Petri nets. Every block can easily be interconnected with other blocks by input/output variables to create complex ones. In the presented approach, every event or part of a scenario is modelled only once, independently of the number of its occurrences in the scenario. The final model is much more transparent than the corresponding event tree. The method is shown in two case studies, of which the advanced one contains dynamic behavior. - Highlights: • Event and fault trees have problems with scenarios where the order of events can vary. • The paper presents a method for modelling and analysis of dynamic accident scenarios. • The presented method is based on Petri nets. • The proposed method solves the mentioned problems of traditional approaches. • The method is shown in two case studies: simple and advanced (with dynamic behavior)

  14. A Formal Model to Analyse the Firewall Configuration Errors

    Directory of Open Access Journals (Sweden)

    T. T. Myo

    2015-01-01

Full Text Available The firewall is widely known as a brandmauer (security-edge gateway). To provide the demanded security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, even skilled administrators may make mistakes when configuring, which result in a lowered level of network security and in the infiltration of undesirable packets into the network. The network can be exposed to various threats and attacks. One of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, using a security policy, controls packets passing through the borders of a secured network. The security policy represents a set of rules. Packet filters work in stateless mode: they investigate packets as independent objects. Rules take the following form: (condition, action). The firewall analyses the entering traffic based on the IP addresses of the sender and recipient, the port numbers of the sender and recipient, and the protocol used. When a packet meets a rule's conditions, the action specified in the rule is carried out: allow or deny. The aim of this article is to develop tools to analyse a firewall configuration with inspection of states. The input data are a file with the set of rules. It is required to present the analysis of the security policy in an informative graphic form as well as to reveal inconsistencies in the rules. The article presents a security policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, the concept of an equivalence region is introduced. Our task is for the program to display the results of rule actions on packets in a convenient graphic form as well as to reveal contradictions between the rules. One of the problems is the large number of dimensions. As noted above, the following parameters are specified in a rule: source IP address, destination IP
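The (condition, action) rule model described in the abstract can be made concrete with a first-match evaluator. The field names, the default-deny policy and the example rules below are illustrative assumptions, not the authors' tool:

```python
# First-match, stateless packet filtering over (condition, action) rules.
# A condition is a dict of field/value pairs that must all match the packet.

def evaluate(packet, rules, default="deny"):
    """Return the action of the first rule whose condition matches the packet."""
    for condition, action in rules:
        if all(packet.get(field) == value for field, value in condition.items()):
            return action
    return default  # no rule matched: fall back to the default policy

rules = [
    ({"proto": "tcp", "dst_port": 22}, "deny"),   # block inbound SSH
    ({"proto": "tcp"}, "allow"),                  # allow any other TCP
]
print(evaluate({"proto": "tcp", "dst_port": 22}, rules))  # → deny
print(evaluate({"proto": "tcp", "dst_port": 80}, rules))  # → allow
print(evaluate({"proto": "udp", "dst_port": 53}, rules))  # → deny (default)
```

The configuration errors the paper targets, such as shadowed or contradictory rules, show up exactly when reordering the rules changes the action returned for some packet.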

  15. Analyses of homologous rotavirus infection in the mouse model.

    Science.gov (United States)

    Burns, J W; Krishnaney, A A; Vo, P T; Rouse, R V; Anderson, L J; Greenberg, H B

    1995-02-20

    The group A rotaviruses are significant human and veterinary pathogens in terms of morbidity, mortality, and economic loss. Despite its importance, an effective vaccine remains elusive due at least in part to our incomplete understanding of rotavirus immunity and protection. Both large and small animal model systems have been established to address these issues. One significant drawback of these models is the lack of well-characterized wild-type homologous viruses and their cell culture-adapted variants. We have characterized four strains of murine rotaviruses, EC, EHP, EL, and EW, in the infant and adult mouse model using wild-type isolates and cell culture-adapted variants of each strain. Wild-type murine rotaviruses appear to be equally infectious in infant and adult mice in terms of the intensity and duration of virus shedding following primary infection. Spread of infection to naive cagemates is seen in both age groups. Clearance of shedding following primary infection appears to correlate with the development of virus-specific intestinal IgA. Protective immunity is developed in both infant and adult mice following oral infection as demonstrated by a lack of shedding after subsequent wild-type virus challenge. Cell culture-adapted murine rotaviruses appear to be highly attenuated when administered to naive animals and do not spread efficiently to nonimmune cagemates. The availability of these wild-type and cell culture-adapted virus preparations should allow a more systematic evaluation of rotavirus infection and immunity. Furthermore, future vaccine strategies can be evaluated in the mouse model using several fully virulent homologous viruses for challenge.

  16. Analysing the Competency of Mathematical Modelling in Physics

    OpenAIRE

    Redish, Edward F.

    2016-01-01

    A primary goal of physics is to create mathematical models that allow both predictions and explanations of physical phenomena. We weave maths extensively into our physics instruction beginning in high school, and the level and complexity of the maths we draw on grows as our students progress through a physics curriculum. Despite much research on the learning of both physics and math, the problem of how to successfully teach most of our students to use maths in physics effectively remains unso...

  17. A workflow model to analyse pediatric emergency overcrowding.

    Science.gov (United States)

    Zgaya, Hayfa; Ajmi, Ines; Gammoudi, Lotfi; Hammadi, Slim; Martinot, Alain; Beuscart, Régis; Renard, Jean-Marie

    2014-01-01

    The greatest source of delay in patient flow is the waiting time from the health care request, and especially from the bed request, to exit from the Pediatric Emergency Department (PED) for hospital admission. It represents 70% of the time that these patients spend in the PED waiting rooms. Our objective in this study is to identify tension indicators and bottlenecks that contribute to overcrowding. Patient flow through the PED was mapped over a continuous 2-year period from January 2011 to December 2012. Our method uses real data collected from visits to the PED of the Regional University Hospital Center (CHRU) of Lille (France) to construct an accurate and complete representation of the PED processes. The result of this representation is a workflow model of the patient journey that reflects as faithfully as possible the reality of the PED of the CHRU of Lille. This model allowed us to identify sources of delay in patient flow and aspects of PED activity that could be improved. The model must be detailed enough to support an analysis that identifies the dysfunctions of the PED and to propose and estimate indicators for preventing periods of strain. Our survey is integrated into the French National Research Agency project titled: "Hospital: optimization, simulation and avoidance of strain" (ANR HOST).

  18. Genomic, Biochemical, and Modeling Analyses of Asparagine Synthetases from Wheat

    Directory of Open Access Journals (Sweden)

    Hongwei Xu

    2018-01-01

    Full Text Available Asparagine synthetase activity in cereals has become an important issue with the discovery that free asparagine concentration determines the potential for formation of acrylamide, a probably carcinogenic processing contaminant, in baked cereal products. Asparagine synthetase catalyses the ATP-dependent transfer of the amino group of glutamine to a molecule of aspartate to generate glutamate and asparagine. Here, asparagine synthetase-encoding polymerase chain reaction (PCR products were amplified from wheat (Triticum aestivum cv. Spark cDNA. The encoded proteins were assigned the names TaASN1, TaASN2, and TaASN3 on the basis of comparisons with other wheat and cereal asparagine synthetases. Although very similar to each other they differed slightly in size, with molecular masses of 65.49, 65.06, and 66.24 kDa, respectively. Chromosomal positions and scaffold references were established for TaASN1, TaASN2, and TaASN3, and a fourth, more recently identified gene, TaASN4. TaASN1, TaASN2, and TaASN4 were all found to be single copy genes, located on chromosomes 5, 3, and 4, respectively, of each genome (A, B, and D, although variety Chinese Spring lacked a TaASN2 gene in the B genome. Two copies of TaASN3 were found on chromosome 1 of each genome, and these were given the names TaASN3.1 and TaASN3.2. The TaASN1, TaASN2, and TaASN3 PCR products were heterologously expressed in Escherichia coli (TaASN4 was not investigated in this part of the study. Western blot analysis identified two monoclonal antibodies that recognized the three proteins, but did not distinguish between them, despite being raised to epitopes SKKPRMIEVAAP and GGSNKPGVMNTV in the variable C-terminal regions of the proteins. The heterologously expressed TaASN1 and TaASN2 proteins were found to be active asparagine synthetases, producing asparagine and glutamate from glutamine and aspartate. The asparagine synthetase reaction was modeled using SNOOPY® software and information from

  19. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    OpenAIRE

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documen...

  20. Using System Dynamic Model and Neural Network Model to Analyse Water Scarcity in Sudan

    Science.gov (United States)

    Li, Y.; Tang, C.; Xu, L.; Ye, S.

    2017-07-01

    Many parts of the world face the problem of water scarcity, and analysing it quantitatively is an important step towards solving it. Water scarcity in a region is gauged by the WSI (water scarcity index), which incorporates water supply and water demand. To obtain the WSI, a Neural Network Model and an SDM (System Dynamic Model) are developed to depict how environmental and social factors affect water supply and demand. The uneven distribution of water resources and water demand across a region leads to an uneven distribution of the WSI within that region. To predict the future WSI, a logistic model, Grey Prediction, and statistical methods are applied to forecast the underlying variables. Sudan suffers from a severe water scarcity problem, with a WSI of 1 in 2014 and unevenly distributed water resources. According to the results of the modified model, after the proposed intervention Sudan's water situation will improve.
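As a hedged sketch of the prediction step: demand can be projected with a logistic curve and combined with supply into a demand/supply ratio, which is one common way to define a WSI. The paper's exact formula is not given in the abstract, and all numbers below are assumptions for illustration.

```python
import math

def logistic(t, K, P0, r):
    """Logistic growth: carrying capacity K, initial value P0, growth rate r."""
    return K / (1 + (K / P0 - 1) * math.exp(-r * t))

def water_scarcity_index(demand, supply):
    """Demand/supply ratio; >= 1 signals that demand exhausts renewable supply."""
    return demand / supply

supply = 37.8  # illustrative renewable supply, km^3/yr (assumed constant)
for years_ahead in (0, 10, 20):
    demand = logistic(years_ahead, K=60.0, P0=37.8, r=0.05)
    print(years_ahead, round(water_scarcity_index(demand, supply), 2))
```

At year 0 the ratio is 1 by construction, matching the reported WSI of 1 in 2014; growing demand against fixed supply then pushes the index above 1.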

  1. Numerical analyses of interaction of steel-fibre reinforced concrete slab model with subsoil

    Directory of Open Access Journals (Sweden)

    Jana Labudkova

    2017-01-01

    Full Text Available Numerical analyses of a contact task were carried out with FEM. The test sample was a steel-fibre reinforced concrete foundation slab model loaded during an experimental loading test. An inhomogeneous half-space was applied in the FEM analyses. The results of the FEM analyses were then compared with the values measured during the experiment.

  2. Analyses of tumor-suppressor genes in germline mouse models of cancer.

    Science.gov (United States)

    Wang, Jingqiang; Abate-Shen, Cory

    2014-08-01

    Tumor-suppressor genes are critical regulators of growth and functioning of cells, whose loss of function contributes to tumorigenesis. Accordingly, analyses of the consequences of their loss of function in genetically engineered mouse models have provided important insights into mechanisms of human cancer, as well as resources for preclinical analyses and biomarker discovery. Nowadays, most investigations of genetically engineered mouse models of tumor-suppressor function use conditional or inducible alleles, which enable analyses in specific cancer (tissue) types and overcome the consequences of embryonic lethality of germline loss of function of essential tumor-suppressor genes. However, historically, analyses of genetically engineered mouse models based on germline loss of function of tumor-suppressor genes were very important as these early studies established the principle that loss of function could be studied in mouse cancer models and also enabled analyses of these essential genes in an organismal context. Although the cancer phenotypes of these early germline models did not always recapitulate the expected phenotypes in human cancer, these models provided the essential foundation for the more sophisticated conditional and inducible models that are currently in use. Here, we describe these "first-generation" germline models of loss of function models, focusing on the important lessons learned from their analyses, which helped in the design and analyses of "next-generation" genetically engineered mouse models. © 2014 Cold Spring Harbor Laboratory Press.

  3. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    International Nuclear Information System (INIS)

    2014-12-01

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise:
    · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population,
    · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included,
    · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps,
    · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details,
    · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and
    · an overview of the

  4. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
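The core idea of an LMM for longitudinal data — a fixed time trend plus subject-specific random intercepts — can be sketched without any specialized package. The simulation below (all numbers are assumed, not Project P.A.T.H.S. data) recovers the fixed slope and the intraclass correlation with simple method-of-moments estimators; in practice one would fit the same model with the SPSS MIXED procedure or a similar routine.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_wave = 200, 6
t = np.arange(n_wave, dtype=float)

# y_ij = 10 + 0.5*t_j + u_i + e_ij, with var(u) = 4 and var(e) = 1
u = rng.normal(0.0, 2.0, n_subj)                    # random intercepts
y = 10.0 + 0.5 * t + u[:, None] + rng.normal(0.0, 1.0, (n_subj, n_wave))

# Fixed effect of time: pooled OLS slope (consistent despite the clustering)
coef = np.polyfit(np.tile(t, n_subj), y.ravel(), 1)
slope = coef[0]

# Variance components from residuals around the fixed trend (one-way ANOVA)
resid = y - np.polyval(coef, t)
subj_means = resid.mean(axis=1)
ms_within = ((resid - subj_means[:, None]) ** 2).sum() / (n_subj * (n_wave - 1))
ms_between = n_wave * (subj_means ** 2).sum() / (n_subj - 1)
var_u = (ms_between - ms_within) / n_wave
icc = var_u / (var_u + ms_within)   # share of variance between subjects

print(round(slope, 2), round(icc, 2))  # close to the true 0.5 and 0.8
```

The nonzero ICC is exactly the violation of independence that makes GLM inappropriate here: observations within a subject are correlated, and the LMM models that correlation instead of ignoring it.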

  5. An improved lake model for climate simulations: Model structure, evaluation, and sensitivity analyses in CESM1

    Directory of Open Access Journals (Sweden)

    Zachary Subin

    2012-02-01

    Full Text Available Lakes can influence regional climate, yet most general circulation models have, at best, simple and largely untested representations of lakes. We developed the Lake, Ice, Snow, and Sediment Simulator (LISSS) for inclusion in the land-surface component (CLM4) of an earth system model (CESM1). The existing CLM4 lake model performed poorly at all sites tested; for temperate lakes, summer surface water temperature predictions were 10–25°C lower than observations. CLM4-LISSS modifies the existing model by including (1) a treatment of snow; (2) freezing, melting, and ice physics; (3) a sediment thermal submodel; (4) spatially variable prescribed lake depth; (5) improved parameterizations of lake surface properties; (6) increased mixing under ice and in deep lakes; and (7) correction of previous errors. We evaluated the lake model predictions of water temperature and surface fluxes at three small temperate and boreal lakes where extensive observational data were available. We also evaluated the predicted water temperature and/or ice and snow thicknesses for ten other lakes where less comprehensive forcing observations were available. CLM4-LISSS performed very well compared to observations for shallow- to medium-depth small lakes. For large, deep lakes, the under-prediction of mixing was improved by increasing the lake eddy diffusivity by a factor of 10, consistent with previously published analyses. Surface temperature and surface flux predictions were improved when the aerodynamic roughness lengths were calculated as a function of friction velocity, rather than using a constant value of 1 mm or greater. We evaluated the sensitivity of surface energy fluxes to modeled lake processes and parameters. Large changes in monthly-averaged surface fluxes (up to 30 W m−2) were found when excluding snow insulation or phase-change physics and when varying the opacity, depth, albedo of melting lake ice, and mixing strength across ranges commonly found in real lakes. Typical

  6. Global analyses of historical masonry buildings: Equivalent frame vs. 3D solid models

    Science.gov (United States)

    Clementi, Francesco; Mezzapelle, Pardo Antonio; Cocchi, Gianmichele; Lenci, Stefano

    2017-07-01

    The paper analyses the seismic vulnerability of two different masonry buildings. It provides both advanced 3D modelling with solid elements and equivalent frame modelling. The global structural behaviour and the dynamic properties of the compound have been evaluated using the Finite Element Modelling (FEM) technique, where the nonlinear behaviour of masonry has been taken into account by proper constitutive assumptions. A sensitivity analysis is performed to evaluate the effect of the choice of the structural models.

  7. Random regression analyses using B-splines to model growth of Australian Angus cattle

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2005-09-01

    Full Text Available Abstract Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error.
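The quadratic B-spline basis behind the preferred model above (knots at 0, 200, 400, 600 and 821 days) can be built with the Cox–de Boor recursion. The sketch below evaluates that basis, checks the partition-of-unity property, and fits a simulated growth curve in place of real weight records; all data values are assumptions.

```python
import numpy as np

def bspline(x, t, i, k):
    """Cox-de Boor recursion: i-th B-spline of degree k on knot vector t."""
    if k == 0:
        # half-open intervals, closed at the right end of the domain
        if t[i] <= x < t[i + 1] or (x == t[-1] and t[i] < t[i + 1] == t[-1]):
            return 1.0
        return 0.0
    left = 0.0 if t[i + k] == t[i] else \
        (x - t[i]) / (t[i + k] - t[i]) * bspline(x, t, i, k - 1)
    right = 0.0 if t[i + k + 1] == t[i + 1] else \
        (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline(x, t, i + 1, k - 1)
    return left + right

deg = 2
knots = [0.0, 200.0, 400.0, 600.0, 821.0]           # days, as in the abstract
t = [knots[0]] * deg + knots + [knots[-1]] * deg    # clamped knot vector
n_basis = len(t) - deg - 1                          # 6 quadratic basis functions

ages = np.linspace(0.0, 821.0, 50)
B = np.array([[bspline(a, t, i, deg) for i in range(n_basis)] for a in ages])
print(np.allclose(B.sum(axis=1), 1.0))              # partition of unity

# Least-squares regression on the basis (fixed-curve analogue of the analysis)
true_w = 30.0 + 0.6 * ages - 2e-4 * ages ** 2       # assumed growth curve, kg
coefs, *_ = np.linalg.lstsq(B, true_w, rcond=None)
print(np.allclose(B @ coefs, true_w, atol=1e-6))    # quadratic lies in the span
```

Because each basis function is nonzero only between a few adjacent knots, estimates at one age are driven by nearby records — the locality that makes splines more robust against "end-of-range" problems than global polynomials.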

  8. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    Full Text Available The article presents the fundamental aspects of linear regression as a toolbox that can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroskedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the interpretations that can be drawn at this level.
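A minimal sketch of the estimation the article describes — OLS slope, intercept and fit for a simple linear model — with simulated data standing in for real macroeconomic series (all numbers assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 100)                # e.g. an explanatory macro indicator
y = 2.0 + 0.7 * x + rng.normal(0.0, 0.5, 100)  # true intercept 2.0, slope 0.7

# OLS estimators: b1 = cov(x, y) / var(x), b0 = mean(y) - b1 * mean(x)
b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
b0 = y.mean() - b1 * x.mean()

resid = y - (b0 + b1 * x)
r2 = 1.0 - resid.var() / y.var()               # goodness of fit

print(round(b0, 2), round(b1, 2), round(r2, 2))
```

A quick plot of `resid` against `x` (constant spread versus a funnel shape) is the usual first screen for the heteroskedasticity the article discusses; formal tests then confirm or reject it.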

  9. Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    2000-01-01

    This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emission. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation.... A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences... for specific sectors such as agriculture. Electricity and heat are produced at heat and power plants utilising fuels which minimise total fuel cost, while the authorities regulate capacity expansion technologies. The effect of fuel taxes and subsidies on fuels is very sensitive to the fuel substitution

  10. Experimental and Computational Modal Analyses for Launch Vehicle Models considering Liquid Propellant and Flange Joints

    Directory of Open Access Journals (Sweden)

    Chang-Hoon Sim

    2018-01-01

    Full Text Available In this research, modal tests and analyses are performed for a simplified and scaled first-stage model of a space launch vehicle using liquid propellant. This study aims to establish finite element modeling techniques for computational modal analyses by considering the liquid propellant and flange joints of launch vehicles. The modal tests measure the natural frequencies and mode shapes in the first and second lateral bending modes. As the liquid filling ratio increases, the measured frequencies decrease. In addition, as the number of flange joints increases, the measured natural frequencies increase. Computational modal analyses using the finite element method are conducted. The liquid is modeled by the virtual mass method, and the flange joints are modeled using one-dimensional spring elements along with the node-to-node connection. Comparison of the modal test results and predicted natural frequencies shows good or moderate agreement. The correlation between the modal tests and analyses establishes finite element modeling techniques for modeling the liquid propellant and flange joints of space launch vehicles.
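The trend the modal tests found — added liquid mass lowering the bending frequencies, joint stiffness raising them — can be illustrated on a lumped two-degree-of-freedom sketch. The stiffness and mass values below are arbitrary assumptions, not the scaled vehicle's properties.

```python
import numpy as np

def natural_frequencies(K, M):
    """Natural frequencies in Hz from the generalized eigenproblem K v = w^2 M v."""
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.real(w2))) / (2.0 * np.pi)

k = 1.0e6                                  # N/m, illustrative spring (flange) stiffness
K = np.array([[2.0 * k, -k], [-k, k]])     # two-mass chain fixed at the base
M_empty = np.diag([10.0, 10.0])            # kg, empty tank
M_full = np.diag([15.0, 15.0])             # kg, liquid added as lumped (virtual) mass

f_empty = natural_frequencies(K, M_empty)
f_full = natural_frequencies(K, M_full)
print(f_full < f_empty)                    # every mode drops with added mass
```

Since frequencies scale as the square root of stiffness over mass, filling the tank lowers each mode while stiffer joint springs raise them — the same qualitative behavior the tests measured.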

  11. Present status of theories and data analyses of mathematical models for carcinogenesis

    International Nuclear Information System (INIS)

    Kai, Michiaki; Kawaguchi, Isao

    2007-01-01

    Reviewed are the basic mathematical models (hazard functions), present trends in model studies, and models for radiation carcinogenesis. Hazard functions of carcinogenesis are described for the multi-stage model and the 2-event model related to cell dynamics. At present, the age distribution of cancer mortality is analyzed, the relationship between mutation and carcinogenesis is discussed, and models for colorectal carcinogenesis are presented. As for radiation carcinogenesis, the Armitage-Doll model and the generalized MVK (Moolgavkar, Venzon, Knudson, 1971-1990) 2-stage clonal expansion model have been applied to analyses of carcinogenesis in A-bomb survivors, workers in uranium mines (Rn exposure), smoking doctors in the UK, and other cases, whose characteristics are discussed. In analyses of A-bomb survivors, the models above are applied to solid tumors and leukemia to see the effect, if any, of stage, age at exposure, time progression, etc. For miners and smokers, the initiation, promotion and progression stages of carcinogenesis are discussed in the analyses. Other work includes analyses of workers at a Canadian atomic power plant and of patients who underwent radiation therapy. Model analysis can help to understand the carcinogenic process in a quantitative aspect rather than merely describe it. (R.T.)
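The classical Armitage–Doll multistage hazard has the power-law form h(t) = c·t^(k−1), so log-hazard is linear in log-age with slope k−1 — the signature used to read the number of rate-limiting stages off cancer mortality curves. A quick numeric check (the stage count and rate constant are assumed for illustration):

```python
import math

def armitage_doll_hazard(t, k, c):
    """Multistage hazard h(t) = c * t**(k - 1) for k rate-limiting stages."""
    return c * t ** (k - 1)

k, c = 6, 1.0e-12                  # assumed stage count and rate constant
t1, t2 = 40.0, 80.0                # ages in years
slope = (math.log(armitage_doll_hazard(t2, k, c))
         - math.log(armitage_doll_hazard(t1, k, c))) / (math.log(t2) - math.log(t1))
print(slope)  # k - 1 = 5: doubling age multiplies the hazard by 2**5
```

The two-stage (MVK) clonal expansion model replaces this single power law with initiation, promotion (clonal growth) and progression rates, which is what makes it suitable for dose- and age-at-exposure-dependent analyses like those of the A-bomb survivors.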

  12. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and the reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic(®), Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements, obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using O3d software were identical.
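The Dahlberg formula used in the comparison above is the double-determination error, sqrt(Σd²/2n), where d is the difference between paired measurements. A small sketch with made-up paired readings (the values are placeholders, not the study's data):

```python
import math

def dahlberg(first, second):
    """Dahlberg's double-determination error: sqrt(sum(d_i^2) / (2 * n))."""
    assert len(first) == len(second)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(first, second)) / (2 * len(first)))

caliper = [8.52, 7.10, 6.95, 8.60, 7.25]   # illustrative tooth widths, mm
digital = [8.50, 7.14, 6.90, 8.62, 7.20]   # same teeth measured on digital models
print(round(dahlberg(caliper, digital), 3))  # 0.027
```

A Dahlberg error that is small relative to the measured dimensions (here ~0.03 mm against ~7–9 mm teeth) supports the paper's conclusion that the two methods are interchangeable.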

  13. Comparison of plasma input and reference tissue models for analysing [(11)C]flumazenil studies

    NARCIS (Netherlands)

    Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.

    2008-01-01

    A single-tissue compartment model with plasma input is the established method for analysing [(11)C]flumazenil ([(11)C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been

  14. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems

    NARCIS (Netherlands)

    Vredenberg, W.J.

    2011-01-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves is presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with

  15. Analysing and controlling the tax evasion dynamics via majority-vote model

    International Nuclear Information System (INIS)

    Lima, F W S

    2010-01-01

    Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabasi-Albert networks, and Erdoes-Renyi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had previously been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust because it can be studied using the equilibrium dynamics of the Ising model as well as the nonequilibrium MVM, on all the topologies cited above, giving the same behavior regardless of the dynamics or topology used.
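The MVM update rule on a square lattice can be sketched as follows: a randomly chosen site adopts the sign of its local majority with probability 1−q and the opposite sign with probability q (ties are broken at random). Lattice size, noise level and sweep count below are arbitrary illustrative choices.

```python
import numpy as np

def mvm_sweep(spins, q, rng):
    """One Monte Carlo sweep of the majority-vote model with noise q."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, 2)
        s = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
             + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        if s != 0:
            maj = 1 if s > 0 else -1
            spins[i, j] = maj if rng.random() > q else -maj
        else:                      # tied neighborhood: choose at random
            spins[i, j] = rng.choice([-1, 1])

rng = np.random.default_rng(42)
L, q = 20, 0.03                    # noise well below the square-lattice q_c
spins = np.ones((L, L), dtype=int)
for _ in range(100):
    mvm_sweep(spins, q, rng)
print(abs(spins.mean()))           # order persists for q < q_c
```

In the tax-evasion application, spin +1/−1 stands for an honest/evading agent, and tuning q near q_c is what lets the authorities-style control damp the fluctuations of the evading fraction.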

  16. Analysing and controlling the tax evasion dynamics via majority-vote model

    Energy Technology Data Exchange (ETDEWEB)

    Lima, F W S, E-mail: fwslima@gmail.co, E-mail: wel@ufpi.edu.b [Departamento de Fisica, Universidade Federal do PiauI, 64049-550, Teresina - PI (Brazil)

    2010-09-01

    Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabasi-Albert networks, and Erdoes-Renyi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had previously been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust because it can be studied using the equilibrium dynamics of the Ising model as well as the nonequilibrium MVM, on all the topologies cited above, giving the same behavior regardless of the dynamics or topology used.

  17. Nurses' intention to leave: critically analyse the theory of reasoned action and organizational commitment model.

    Science.gov (United States)

    Liou, Shwu-Ru

    2009-01-01

    To systematically analyse the Organizational Commitment model and Theory of Reasoned Action and determine concepts that can better explain nurses' intention to leave their job. The Organizational Commitment model and Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to nursing shortage. However, the appropriateness of applying these two models in nursing was not analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly whereas they are operationally defined in the Organizational Commitment model. Predictability of the Theory of Reasoned Action is questionable whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can be concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.

  18. A model finite-element to analyse the mechanical behavior of a PWR fuel rod

    International Nuclear Information System (INIS)

    Galeao, A.C.N.R.; Tanajura, C.A.S.

    1988-01-01

    A model to analyse the mechanical behavior of a PWR fuel rod is presented. We draw attention to the phenomena of pellet-pellet and pellet-cladding contact by taking advantage of an elastic model which includes the effects of thermal gradients, cladding internal and external pressures, swelling and initial relocation. The contact problem gives rise to a variational formulation which employs Lagrangian multipliers. An iterative scheme is constructed and the finite element method is applied to obtain the numerical solution. Some results and comments are presented to examine the performance of the model. (author) [pt]

  19. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

    Full Text Available Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e., measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures are detailed in order to shed light on new and integrative methods.

  20. Joint analyses model for total cholesterol and triglyceride in human serum with near-infrared spectroscopy

    Science.gov (United States)

    Yao, Lijun; Lyu, Ning; Chen, Jiemei; Pan, Tao; Yu, Jing

    2016-04-01

    The development of a small, dedicated near-infrared (NIR) spectrometer has promising potential applications, such as for joint analyses of total cholesterol (TC) and triglyceride (TG) in human serum for preventing and treating hyperlipidemia of a large population. The appropriate wavelength selection is a key technology for developing such a spectrometer. For this reason, a novel wavelength selection method, named the equidistant combination partial least squares (EC-PLS), was applied to the wavelength selection for the NIR analyses of TC and TG in human serum. A rigorous process based on the various divisions of calibration and prediction sets was performed to achieve modeling optimization with stability. By applying EC-PLS, a model set was developed, which consists of various models that were equivalent to the optimal model. The joint analyses model of the two indicators was further selected with only 50 wavelengths. The random validation samples excluded from the modeling process were used to validate the selected model. The root-mean-square errors, correlation coefficients and ratio of performance to deviation for the prediction were 0.197 mmol L−1, 0.985 and 5.6 for TC, and 0.101 mmol L−1, 0.992 and 8.0 for TG, respectively. The sensitivity and specificity for hyperlipidemia were 96.2% and 98.0%. These findings indicate high prediction accuracy and low model complexity. The proposed wavelength selection provided valuable references for the designing of a small, dedicated spectrometer for hyperlipidemia. The methodological framework and optimization algorithm are universal, such that they can be applied to other fields.
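The figures of merit reported above — RMSE of prediction, correlation coefficient, and RPD (the ratio of the reference set's SD to the RMSE) — can be computed as follows; the synthetic data are placeholders for real reference and NIR-predicted values.

```python
import numpy as np

def prediction_metrics(y_ref, y_pred):
    """RMSEP, Pearson correlation, and RPD = SD(reference) / RMSEP."""
    rmse = float(np.sqrt(np.mean((y_ref - y_pred) ** 2)))
    r = float(np.corrcoef(y_ref, y_pred)[0, 1])
    rpd = float(np.std(y_ref, ddof=1) / rmse)
    return rmse, r, rpd

rng = np.random.default_rng(3)
y_ref = rng.uniform(3.0, 7.0, 60)            # e.g. TC reference values, mmol/L
y_pred = y_ref + rng.normal(0.0, 0.2, 60)    # a well-performing model's output

rmse, r, rpd = prediction_metrics(y_ref, y_pred)
print(round(rmse, 2), round(r, 3), round(rpd, 1))
```

By common chemometric rules of thumb, an RPD above roughly 3 indicates a model usable for quantitative prediction, which puts the reported values of 5.6 (TC) and 8.0 (TG) comfortably in that range.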

  1. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    Science.gov (United States)

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.
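A sketch of the statistical side of mediation — the product-of-coefficients estimate a·b from two regressions — on simulated cross-sectional data (all effect sizes assumed). In line with the paper's point, note that obtaining a nonzero a·b this way says nothing by itself about temporal or causal ordering.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)                        # predictor
m = 0.6 * x + rng.normal(size=n)              # mediator: a-path = 0.6
y = 0.5 * m + 0.2 * x + rng.normal(size=n)    # outcome: b-path = 0.5, direct = 0.2

def coefs(X, target):
    """OLS coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(target)), X])
    return np.linalg.lstsq(X, target, rcond=None)[0]

a = coefs(x, m)[1]                            # x -> m
b = coefs(np.column_stack([m, x]), y)[1]      # m -> y, controlling for x
indirect = a * b                              # product of coefficients, true value 0.3
print(round(indirect, 2))
```

Swapping the roles of `m` and `y` in data like these can also yield a "significant" indirect effect, which is exactly why the authors distinguish atemporal associations from genuinely temporal mediational claims.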

  2. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
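
    What the LMMs procedure estimates in the simplest longitudinal case (a random-intercept model) can be illustrated without specialized software. Below is a hedged sketch using the classical one-way ANOVA estimator of the intraclass correlation on synthetic repeated-measures data; the subject SD, residual SD, and sample sizes are invented, and SPSS MIXED would estimate the same variance components by (restricted) maximum likelihood instead:

```python
import random

def icc_oneway(groups):
    """One-way ANOVA estimate of the intraclass correlation (balanced data)."""
    k = len(groups)                  # number of subjects
    n = len(groups[0])               # repeated measures per subject
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

random.seed(7)
# Synthetic longitudinal data: subject-intercept SD 2, residual SD 1,
# so the true ICC is 4 / (4 + 1) = 0.8.
data = [[random.gauss(u, 1.0) for _ in range(6)]
        for u in (random.gauss(0.0, 2.0) for _ in range(200))]
est = icc_oneway(data)
```

    A large ICC like this is exactly the situation in which ignoring the clustering of repeated measures within subjects, as ordinary regression does, gives misleading standard errors.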

  3. Groundwater flow analyses in preliminary site investigations. Modelling strategy and computer codes

    International Nuclear Information System (INIS)

    Taivassalo, V.; Koskinen, L.; Meling, K.

    1994-02-01

    The analyses of groundwater flow formed part of the preliminary site investigations carried out by Teollisuuden Voima Oy (TVO) for five areas in Finland during 1987-1992. The main objective of the flow analyses was to characterize groundwater flow at the sites. The flow simulations were also used to identify and study uncertainties and inadequacies inherent in the results of earlier modelling phases. The flow analyses were performed for flow conditions similar to the present ones. The modelling approach was based on the concept of an equivalent continuum; each fracture zone and the rock matrix between the zones were, however, treated as separate hydrogeologic units. The numerical calculations were carried out with the computer code package FEFLOW, which is based on the finite element method. With the code, two- and one-dimensional elements can also be used by embedding them in a three-dimensional element mesh. A set of new algorithms was developed and employed to create element meshes for FEFLOW. The most useful program in the preliminary site investigations was PAAWI, which adds two-dimensional elements for fracture zones to an existing three-dimensional element mesh. The new algorithms significantly reduced the time required to create spatial discretizations for complex geometries. Three element meshes were created for each site. The boundaries of the regional models coincide with those of the flow models. (55 refs., 40 figs., 1 tab.)

  4. To transform or not to transform: using generalized linear mixed models to analyse reaction time data

    Science.gov (United States)

    Lo, Steson; Andrews, Sally

    2015-01-01

    Linear mixed-effect models (LMMs) are increasingly widely used in psychology to analyse multi-level research designs. Because they do not average across individual responses, LMMs address some of the problems identified by Speelman and McGann (2013) about the use of mean data. However, recent guidelines for using LMMs to analyse the skewed reaction time (RT) data collected in many cognitive psychology studies recommend applying non-linear transformations to satisfy assumptions of normality. Uncritical adoption of this recommendation has important theoretical implications that can yield misleading conclusions. For example, Balota et al. (2013) showed that analyses of raw RT produced additive effects of word frequency and stimulus quality on word identification, which conflicted with the interactive effects observed in analyses of transformed RT. Generalized linear mixed-effect models (GLMMs) provide a solution to this problem by satisfying normality assumptions without the need for transformation. This allows differences between individuals to be properly assessed, using the metric most appropriate to the researcher's theoretical context. We outline the major theoretical decisions involved in specifying a GLMM and illustrate them by reanalysing Balota et al.'s datasets. We then consider the broader benefits of using GLMMs to investigate individual differences. PMID:26300841
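
    The core point, that a transformation can change whether two factors interact, is easy to see with invented cell means; this toy 2x2 example is illustrative only, not Balota et al.'s data:

```python
import math

# Hypothetical mean RTs (ms) for a 2 (word frequency) x 2 (stimulus quality) design:
rt = {("high", "clear"): 500.0, ("high", "degraded"): 600.0,
      ("low", "clear"): 700.0, ("low", "degraded"): 800.0}

def interaction(cells, f=lambda x: x):
    """Difference of differences on a (possibly transformed) scale."""
    return ((f(cells[("low", "degraded")]) - f(cells[("low", "clear")]))
            - (f(cells[("high", "degraded")]) - f(cells[("high", "clear")])))

raw_int = interaction(rt)            # 0 on the raw ms scale -> purely additive
log_int = interaction(rt, math.log)  # non-zero after a log transform -> interactive
```

    The same cell means are additive in milliseconds yet interactive in log-milliseconds, which is why the choice of analysis scale is a theoretical decision, not a cosmetic one.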

  5. Evaluation of Uncertainties in hydrogeological modeling and groundwater flow analyses. Model calibration

    International Nuclear Information System (INIS)

    Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi

    2003-03-01

    This study evaluates uncertainty in hydrogeological modeling and groundwater flow analysis. Three-dimensional groundwater flow at the Shobasama site in Tono was analyzed using two continuum models and one discontinuum model. The study domain covered an area of four kilometers east-west and six kilometers north-south. Moreover, to evaluate how the uncertainties in the hydrogeological structure models and the groundwater simulation results decreased as the investigation progressed, the models were updated and calibrated for several hydrogeological modeling techniques and groundwater flow analysis techniques, based on newly acquired information and knowledge. The findings are as follows. When parameters and structures were set in updating the models to reflect the previous year's conditions, no major differences arose in how the modeling methods were handled. Model calibration was performed by matching numerical simulations with observations of the pressure response caused by opening and closing a packer in the MIU-2 borehole. Each analysis technique reduced the residual sum of squares between observations and simulation results by adjusting hydrogeological parameters; however, each model adjusted different parameters, such as hydraulic conductivity, effective porosity, specific storage and anisotropy. When calibrating models, it is sometimes impossible to explain the phenomena by adjusting parameters alone; in such cases, further investigation may be required to clarify the hydrogeological structure in more detail. Comparing the research from its beginning to this year, the following conclusions are drawn about the investigation: (1) transient hydraulic data are an effective means of reducing the uncertainty of the hydrogeological structure; (2) effective porosity for calculating pore water velocity of

  6. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  7. Generic uncertainty model for DETRA for environmental consequence analyses. Application and sample outputs

    Energy Technology Data Exchange (ETDEWEB)

    Suolanen, V.; Ilvonen, M. [VTT Energy, Espoo (Finland). Nuclear Energy

    1998-10-01

    The computer model DETRA applies a dynamic compartment modelling approach. The compartment structure of each application can be tailored individually. This flexible modelling method makes it possible to consider the transfer of radionuclides in various cases: the aquatic environment and related food chains, the terrestrial environment, food chains in general and foodstuffs, body burden analyses of humans, etc. In an earlier study on this subject, the user interface of the DETRA code was modernized. The new interface works in the Windows environment and the usability of the code has been improved. The objective of this study has been to further develop and diversify the user interface so that probabilistic uncertainty analyses can also be performed with DETRA. The most common probability distributions are available: uniform, truncated Gaussian and triangular, as well as the corresponding logarithmic distributions. All input data related to a considered case can be varied, although this option is seldom needed. The calculated output values can be selected as monitored values at simulation time points defined by the user. The results of a sensitivity run are available immediately after simulation as graphical presentations. These outputs are the distributions generated for varied parameters, density functions of monitored parameters and complementary cumulative density functions (CCDF). An application considered in connection with this work was the estimation of the contamination of milk caused by radioactive deposition of Cs (10 kBq(Cs-137)/m²). The multi-sequence calculation model applied consisted of a pasture modelling part and a dormant-season modelling part. These two sequences were linked periodically, simulating realistic care of domestic animals in Finland. The most important parameters were varied in this exercise. The diversification of the user interface of the DETRA code seems to provide an
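
    The probabilistic option described, drawing a varied parameter from one of the supported distributions and summarizing a monitored output with a CCDF, can be sketched generically. This is illustrative pure Python, not the DETRA code, and the transfer-factor range is invented:

```python
import random

def ccdf(samples, x):
    """Complementary CDF: fraction of sampled outputs exceeding x."""
    return sum(1 for s in samples if s > x) / len(samples)

random.seed(1)
# Vary an assumed deposition-to-milk transfer factor with a triangular
# distribution and propagate it through a toy linear model (illustrative only).
deposition = 10.0  # kBq/m^2, as in the Cs-137 example above
samples = [deposition * random.triangular(0.05, 0.3, 0.1) for _ in range(10_000)]

p_above_1 = ccdf(samples, 1.0)  # probability the monitored value exceeds 1.0
```

    Plotting `ccdf` over a grid of thresholds gives the CCDF curve the abstract mentions as one of the graphical outputs of a sensitivity run.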

  8. BWR Mark III containment analyses using a GOTHIC 8.0 3D model

    International Nuclear Information System (INIS)

    Jimenez, Gonzalo; Serrano, César; Lopez-Alonso, Emma; Molina, M del Carmen; Calvo, Daniel; García, Javier; Queral, César; Zuriaga, J. Vicente; González, Montserrat

    2015-01-01

    Highlights: • The development of a 3D GOTHIC code model of the BWR Mark III containment is described. • Suppression pool modelling is based on the POOLEX STB-20 and STB-16 experimental tests. • LOCA and SBO transients are simulated to verify the behaviour of the 3D GOTHIC model. • A comparison between the 3D GOTHIC model and the MAAP 4.07 model is conducted. • The 3D GOTHIC model accurately reproduces pre-severe-accident conditions. - Abstract: The purpose of this study is to establish a detailed three-dimensional model of the Cofrentes NPP BWR/6 Mark III containment building using the containment code GOTHIC 8.0. This paper presents the model construction, the phenomenology tests conducted and the transients selected for model evaluation. In order to determine the proper settings for the model in the suppression pool, two experiments conducted with the POOLEX experimental installation were simulated, allowing proper behaviour of the model to be obtained under different suppression pool phenomenology. In the transient analyses, a Loss of Coolant Accident (LOCA) and a Station Blackout (SBO) transient were simulated. The main results of these simulations were qualitatively compared with results obtained from the MAAP 4.07 Cofrentes NPP model, used by the plant for simulating severe accidents. From this comparison, the model was verified in terms of pressurization, asymmetric discharges and high-pressure release. The model has proved to adequately simulate the thermal-hydraulic phenomena that occur in the containment during accident sequences

  9. Computational model for supporting SHM systems design: Damage identification via numerical analyses

    Science.gov (United States)

    Sartorato, Murilo; de Medeiros, Ricardo; Vandepitte, Dirk; Tita, Volnei

    2017-02-01

    This work presents a computational model to simulate thin structures monitored by piezoelectric sensors in order to support the design of SHM systems that use vibration-based methods. A new shell finite element model was proposed and implemented via a User ELement (UEL) subroutine in the commercial package ABAQUS™. This model was based on a modified First Order Shear Theory (FOST) for piezoelectric composite laminates. Damaged cantilever beams with two piezoelectric sensors in different positions were then investigated using experimental analyses and the proposed computational model. A maximum difference of 7.45% in the magnitude of the FRFs between numerical and experimental analyses was found near the resonance regions. For damage identification, different levels of damage severity were evaluated by seven damage metrics, including one proposed by the present authors. Numerical and experimental damage metric values were compared, showing good correlation in terms of tendency. Finally, based on comparisons of numerical and experimental results, the potential and limitations of the proposed computational model for supporting SHM systems design are discussed.

  10. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  11. NUMERICAL MODELLING AS NON-DESTRUCTIVE METHOD FOR THE ANALYSES AND DIAGNOSIS OF STONE STRUCTURES: MODELS AND POSSIBILITIES

    Directory of Open Access Journals (Sweden)

    Nataša Štambuk-Cvitanović

    1999-12-01

    Assuming the necessity of analysis, diagnosis and preservation of existing valuable stone masonry structures and ancient monuments in today's European urban cores, numerical modelling becomes an efficient tool for investigating structural behaviour. It should be supported by experimentally determined input data and taken as part of a general combined approach, particularly with non-destructive techniques on the structure/model within it. For structures or details which may require more complex analyses, three numerical models based on the finite element technique are suggested: (1) a standard linear model; (2) a linear model with contact (interface) elements; and (3) a non-linear elasto-plastic and orthotropic model. The applicability of these models depends upon the accuracy of the approach or the type of problem, and is presented on some characteristic samples.

  12. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Directory of Open Access Journals (Sweden)

    Cai-Jun Wu

    2015-01-01

    Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of return of spontaneous circulation (ROSC) in ACA animal models are lower than those in studies that utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for 8 min. Two minutes after the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed on gender, preparation time, the amplitude spectrum area (AMSA) at the beginning of CPR and the pH at the beginning of CPR. A receiver-operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on preparation time, AMSA at the beginning of CPR and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773-0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%). Conclusions: Preparation time, AMSA and pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
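
    The reported ROC analysis, an AUC plus an optimum cut-off, can be reproduced in miniature with the rank-based AUC and Youden's index; the scores below are invented, not the study's AMSA measurements:

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC: probability a positive case outscores a negative one."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def best_cutoff(scores_pos, scores_neg):
    """Cut-off maximising Youden's J = sensitivity + specificity - 1."""
    best = None
    for c in sorted(set(scores_pos + scores_neg)):
        sens = sum(p >= c for p in scores_pos) / len(scores_pos)
        spec = sum(n < c for n in scores_neg) / len(scores_neg)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return best

# Invented AMSA-like scores for animals with and without ROSC:
rosc = [18.1, 21.4, 16.0, 25.3, 19.7, 14.9]
no_rosc = [9.2, 12.5, 15.1, 11.0, 8.4]

area = auc(rosc, no_rosc)
j, cut, sens, spec = best_cutoff(rosc, no_rosc)
```

    The study's confidence interval for the AUC would additionally require a variance estimate (e.g. DeLong's method), which is beyond this sketch.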

  13. Application of Rapid Visco Analyser (RVA) viscograms and chemometrics for maize hardness characterisation.

    Science.gov (United States)

    Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena

    2015-04-15

    This study established that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used with appropriate multivariate data analysis techniques. The RVA can therefore complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Integrated Process Model Development and Systems Analyses for the LIFE Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Meier, W R; Anklam, T; Abbott, R; Erlandson, A; Halsey, W; Miles, R; Simon, A J

    2009-07-15

    We have developed an integrated process model (IPM) for a Laser Inertial Fusion-Fission Energy (LIFE) power plant. The model includes cost and performance algorithms for the major subsystems of the plant, including the laser, fusion target fabrication and injection, fusion-fission chamber (including the tritium and fission fuel blankets), heat transfer and power conversion systems, and other balance of plant systems. The model has been developed in Visual Basic with an Excel spreadsheet user interface in order to allow experts in various aspects of the design to easily integrate their individual modules and provide a convenient, widely accessible platform for conducting the system studies. Subsystem modules vary in level of complexity; some are based on top-down scaling from fission power plant costs (for example, electric plant equipment), while others are bottom-up models based on conceptual designs being developed by LLNL (for example, the fusion-fission chamber and laser systems). The IPM is being used to evaluate design trade-offs, do design optimization, and conduct sensitivity analyses to identify high-leverage areas for R&D. We describe key aspects of the IPM and report on the results of our systems analyses. Designs are compared and evaluated as a function of key design variables such as fusion target yield and pulse repetition rate.

  15. Considerations when loading spinal finite element models with predicted muscle forces from inverse static analyses.

    Science.gov (United States)

    Zhu, Rui; Zander, Thomas; Dreischarf, Marcel; Duda, Georg N; Rohlmann, Antonius; Schmidt, Hendrik

    2013-04-26

    Mostly simplified loads were used in biomechanical finite element (FE) studies of the spine because of a lack of data on muscular physiological loading. Inverse static (IS) models allow the prediction of muscle forces for predefined postures. A combination of both mechanical approaches - FE and IS - appears to allow a more realistic modeling. However, it is unknown what deviations are to be expected when muscle forces calculated for models with rigid vertebrae and fixed centers of rotation, as generally found in IS models, are applied to a FE model with elastic vertebrae and discs. The aim of this study was to determine the effects of these disagreements. Muscle forces were estimated for 20° flexion and 10° extension in an IS model and transferred to a FE model. The effects of the elasticity of bony structures (rigid vs. elastic) and the definition of the center of rotation (fixed vs. non-fixed) were quantified using the deviation of actual intervertebral rotation (IVR) of the FE model and the targeted IVR from the IS model. For extension, the elasticity of the vertebrae had only a minor effect on IVRs, whereas a non-fixed center of rotation increased the IVR deviation on average by 0.5° per segment. For flexion, a combination of the two parameters increased IVR deviation on average by 1° per segment. When loading FE models with predicted muscle forces from IS analyses, the main limitations in the IS model - rigidity of the segments and the fixed centers of rotation - must be considered. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Multi-model finite element scheme for static and free vibration analyses of composite laminated beams

    Directory of Open Access Journals (Sweden)

    U.N. Band

    Abstract A transition element is developed for the local-global analysis of laminated composite beams. It bridges one part of the domain modelled with a higher-order theory and another with a 2D mixed layerwise theory (LWT) used in the critical zone of the domain. The developed transition element makes the analysis of interlaminar stresses possible with significant accuracy. The mixed 2D model incorporates the transverse normal and shear stresses as nodal degrees of freedom (DOF), which inherently ensures continuity of these stresses. Non-critical zones are modelled with a higher-order equivalent single layer (ESL) theory, leading to a global mesh with multiple models applied simultaneously. The use of the higher-order ESL theory in non-critical zones reduces the total number of elements required to map the domain; a substantial reduction in DOF compared to a complete 2D mixed model is obvious. This computationally economical multiple-modelling scheme using the transition element is applied to static and free vibration analyses of laminated composite beams. Results obtained are in good agreement with benchmarks available in the literature.

  17. A chip-level modeling approach for rail span collapse and survivability analyses

    International Nuclear Information System (INIS)

    Marvis, D.G.; Alexander, D.R.; Dinger, G.L.

    1989-01-01

    A general semiautomated analysis technique has been developed for analyzing rail span collapse and survivability of VLSI microcircuits in high ionizing dose-rate radiation environments. Hierarchical macrocell modeling permits analyses at the chip level, and interactive graphical postprocessing provides rapid visualization of voltage, current and power distributions over an entire VLSIC. The technique is demonstrated for a 16k CMOS/SOI SRAM and a CMOS/SOS 8-bit multiplier. The authors also present an efficient method to treat memory arrays, as well as a three-dimensional integration technique to compute sapphire photoconduction from the design layout

  18. Analyses and testing of model prestressed concrete reactor vessels with built-in planes of weakness

    International Nuclear Information System (INIS)

    Dawson, P.; Paton, A.A.; Fleischer, C.C.

    1990-01-01

    This paper describes the design, construction, analyses and testing of two small scale, single cavity prestressed concrete reactor vessel models, one without planes of weakness and one with planes of weakness immediately behind the cavity liner. This work was carried out to extend a previous study which had suggested the likely feasibility of constructing regions of prestressed concrete reactor vessels and biological shields, which become activated, using easily removable blocks, separated by a suitable membrane. The paper describes the results obtained and concludes that the planes of weakness concept could offer a means of facilitating the dismantling of activated regions of prestressed concrete reactor vessels, biological shields and similar types of structure. (author)

  19. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    Science.gov (United States)

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on
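
    The qualitative result, diversify for low aspirations and concentrate for high ones, can be recovered numerically even without the paper's closed-form expressions. A grid-search sketch for two management actions with normally distributed outcomes follows; the means, variances, and budget are invented for illustration:

```python
import math

def p_above(mean, sd, threshold):
    """P(X >= threshold) for X ~ Normal(mean, sd)."""
    return 0.5 * math.erfc((threshold - mean) / (sd * math.sqrt(2)))

def best_allocation(budget, threshold, steps=1000):
    """Grid search over the split of `budget` between two actions whose
    combined outcome is Normal; returns (best probability, spend on A).
    Return rates and risk scalings below are assumed, not from the paper."""
    best = None
    for i in range(steps + 1):
        a = budget * i / steps                           # spend on action A
        b = budget - a                                   # rest on action B
        mean = 3.0 * a + 2.0 * b                         # linear expected returns
        sd = math.sqrt((1.5 * a) ** 2 + (0.5 * b) ** 2)  # independent risks
        p = p_above(mean, sd, threshold)
        if best is None or p > best[0]:
            best = (p, a)
    return best

p_low, a_low = best_allocation(budget=1.0, threshold=1.0)    # low aspiration
p_high, a_high = best_allocation(budget=1.0, threshold=3.0)  # high aspiration
```

    For the low threshold the optimum splits the budget between the safe and risky actions; for the high threshold it moves entirely to the action with the highest expected return, mirroring the pattern described above.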

  20. Sensitivity analyses of a global flood model in different geoclimatic regions

    Science.gov (United States)

    Moylan, C.; Neal, J. C.; Freer, J. E.; Pianosi, F.; Wagener, T.; Sampson, C. C.; Smith, A.

    2017-12-01

    Flood models producing global hazard maps now exist, although with significant variation in the modelled hazard extent. Apart from explicit structural differences, the reasons for this variation are unknown. Understanding the behaviour of these global flood models is necessary to determine how they can be further developed. A preliminary sensitivity analysis was performed using the Morris method on the Bristol global flood model, which has 37 parameters required to translate the remotely sensed data into input for the underlying hydrodynamic model. This number of parameters implies an excess of complexity for flood modelling and should ideally be reduced. The analysis showed an order-of-magnitude difference in parameter sensitivities when comparing total flooded extent. It also showed the influence of the most important parameters to be highly interactive rather than purely direct, and there were surprises in which parameters proved most important. Despite these findings, conclusions about the model are limited because the geoclimatic features of the analysed location are fixed. Hence more locations with varied geoclimatic characteristics must be chosen, so that the consistencies and deviations of parameter sensitivities across these features become quantifiable. Locations are selected using a novel sampling technique, which aggregates the input data of a domain into representative metrics of the geoclimatic features hypothesised to correlate with one or more parameters. Combinations of these metrics are sampled across a range of geoclimatic areas, and the sensitivities found are correlated with the sampled metrics. From this work, we find the main influences on flood risk prediction at the global scale for the model structure used, a methodology that is transferable to the other global flood models.
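
    The Morris method mentioned above screens parameters by "elementary effects": one-at-a-time perturbations averaged over random trajectories. A compact generic sketch on a toy three-parameter function follows; this is not the Bristol model setup, and the trajectory count and step size are arbitrary choices:

```python
import random

def morris_mu_star(f, k, trajectories=50, delta=0.25, seed=0):
    """Mean absolute elementary effect (mu*) per input of f: [0,1]^k -> R."""
    rng = random.Random(seed)
    sums = [0.0] * k
    for _ in range(trajectories):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        base = f(x)
        order = list(range(k))
        rng.shuffle(order)            # perturb inputs in random order
        for i in order:
            x[i] += delta             # one-at-a-time step
            fx = f(x)
            sums[i] += abs(fx - base) / delta
            base = fx
    return [s / trajectories for s in sums]

# Toy model: x0 dominates, x1 has a moderate non-linear effect, x2 is inert.
mu = morris_mu_star(lambda x: 10 * x[0] + 2 * x[1] * x[1] + 0.0 * x[2], 3)
```

    Ranking the `mu` values reproduces the screening use described in the abstract: large values flag influential parameters, and near-zero values flag candidates for fixing, reducing model complexity.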

  1. Analysing the Effects of Flood-Resilience Technologies in Urban Areas Using a Synthetic Model Approach

    Directory of Open Access Journals (Sweden)

    Reinhard Schinke

    2016-11-01

    Flood protection systems, with their spatial effects, play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and often exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches for analysing and integrating such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodical steps in order to calculate flood damage to buildings considering the effects of building-related FReT, and to analyse the area-related reduction of flood risks with geo-information systems (GIS) at high spatial resolution. It includes a civil-engineering-based investigation of characteristic building properties and construction, including a selection and combination of appropriate FReT as a basis for deriving synthetic depth-damage functions. Depending on the actual exposure and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is shown in a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide useful information for advising individuals at risk, supporting the selection and implementation of FReT.
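
    The notion of a synthetic depth-damage function modified by building-level FReT can be sketched with a toy curve; the damage fractions and reduction factors below are invented, not the calibrated functions of the study:

```python
def depth_damage(depth_m, barrier_height_m=0.0, wet_proofing=False):
    """Toy synthetic depth-damage function (damage fraction of building value).
    A demountable barrier keeps water out up to its height; wet-proofing scales
    the residual damage down. All numbers are illustrative, not calibrated."""
    effective = max(0.0, depth_m - barrier_height_m)
    frac = min(1.0, 0.25 * effective)   # linear toy curve, saturating at 1.0
    if wet_proofing:
        frac *= 0.6                     # assumed 40% damage reduction
    return frac

no_fret = depth_damage(1.2)                                        # unprotected
with_fret = depth_damage(1.2, barrier_height_m=0.6, wet_proofing=True)
```

    Applying such functions per building in a GIS, with the implementation level of FReT as an attribute, is the kind of area-wide damage and risk analysis the extended approach enables.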

  2. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery that maintains the logic circuits controlling the opening and/or closure of valves in the RCIC system, in order to control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. Almost all existing station blackout (SBO) analyses assume that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models of RCIC system components are needed to understand extended SBOs for BWRs. As part of the effort to develop the next-generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the mass flow rates through the turbine and pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operating curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components of the primary system of a BWR, as well as the safety

  3. Establishing a Numerical Modeling Framework for Hydrologic Engineering Analyses of Extreme Storm Events

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    2017-08-01

    In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for the derivation of engineering parameters, such as probable maximum precipitation, that are the cornerstone of large water management infrastructure design. Here the framework was built based on a heavy storm that occurred in Nashville (USA) in 2010, and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolutions, initial/boundary conditions (IC/BC), cloud microphysics and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to the choice of IC/BC. Simulations generally benefit from finer resolutions down to 5 km. At the 15 km level, NCEP2 IC/BC produces better results, while NAM IC/BC performs best at the 5 km level. The recommended model configuration from this study is: NAM or NCEP2 IC/BC (depending on data availability), 15 km or 15 km-5 km nested grids, Morrison microphysics and the Kain-Fritsch cumulus scheme. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This optimal framework is proposed in response to emerging engineering demands for extreme storm event forecasting and analyses for the design, operation and risk assessment of large water infrastructures.
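    For orientation, the recommended configuration above could be expressed in a WRF namelist.input fragment roughly as follows. This is a hedged sketch: the scheme indices (Morrison 2-moment as mp_physics = 10, Kain-Fritsch as cu_physics = 1) follow common WRF conventions, but exact option names and values should be checked against the WRF version actually used.

```
&domains
 max_dom           = 2,              ! 15 km parent with a 5 km nest
 dx                = 15000, 5000,
 dy                = 15000, 5000,
 parent_grid_ratio = 1, 3,
/

&physics
 mp_physics = 10, 10,               ! Morrison 2-moment microphysics
 cu_physics = 1,  1,                ! Kain-Fritsch cumulus scheme
/
```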

  4. A conceptual model for analysing informal learning in online social networks for health professionals.

    Science.gov (United States)

    Li, Xin; Gray, Kathleen; Chang, Shanton; Elliott, Kristine; Barnett, Stephen

    2014-01-01

    Online social networking (OSN) provides a new way for health professionals to communicate, collaborate and share ideas with each other for informal learning on a massive scale. It has important implications for ongoing efforts to support Continuing Professional Development (CPD) in the health professions. However, the challenge of analysing the data generated in OSNs makes it difficult to understand whether and how they are useful for CPD. This paper presents a conceptual model for using mixed methods to study data from OSNs and examine their efficacy in supporting the informal learning of health professionals. It is expected that applying this model to the datasets generated in OSNs for informal learning will produce new and important insights into how well this innovation in CPD is serving professionals and the healthcare system.

  5. Transformation of Baumgarten's aesthetics into a tool for analysing works and for modelling

    DEFF Research Database (Denmark)

    Thomsen, Bente Dahl

    2006-01-01

      Abstract: Is this the best form, or does it need further work? The aesthetic object does not possess the perfect qualities; but how do I proceed with the form? These are questions that all modellers ask themselves at some point, and with which they can grapple for days - even weeks - before the inspiration to deliver the form finally presents itself. This was the impetus for our plan to devise a tool for analysing works and for the practical development of forms. The tool is a set of cards with suggestions for investigations that may assist the modeller in identifying the weaknesses of a form, or in convincing him- or herself of its strengths. The cards also contain aesthetic reflections that may serve as inspiration in the development of the form.

  6. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99) data (Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) within a dynamic systems framework. To depict children's growth pattern, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and with above-poverty status had a higher probability of transitioning into the higher ability group.

  7. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper

    2009-01-01

    BACKGROUND: There is increasing awareness that meta-analyses require a sufficiently large information size to detect or reject an anticipated intervention effect. The required information size in a meta-analysis may be calculated from an anticipated a priori intervention effect or from an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis … between-trial variability and a sampling error estimate considering the required information size. D2 is different from the intuitively obvious adjusting factor based on the common quantification of heterogeneity, the inconsistency (I2), which may underestimate the required information size. Thus, D2 and I2 are compared …
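    The D2 adjustment described above can be sketched numerically. The snippet below is only an illustration of the definitions (I2 from Cochran's Q, D2 from the fixed- versus random-effects pooled variances), with a DerSimonian-Laird estimate assumed for the between-trial variance tau2; the trial data are made up.

```python
import numpy as np

def diversity_adjustment(y, v):
    """I^2, D^2 and the D^2-based adjustment factor for the required
    information size (multiply the fixed-effect information size by it).
    y: per-trial effect estimates (e.g. log odds ratios)
    v: per-trial sampling variances
    A DerSimonian-Laird tau^2 is assumed here for illustration."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    mu_f = np.sum(w * y) / np.sum(w)             # fixed-effect pooled estimate
    q = np.sum(w * (y - mu_f) ** 2)              # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    v_fixed = 1.0 / np.sum(w)                    # variance of pooled effect, fixed
    v_random = 1.0 / np.sum(1.0 / (v + tau2))    # variance of pooled effect, random
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    d2 = (v_random - v_fixed) / v_random         # diversity
    af = 1.0 / (1.0 - d2)                        # adjustment factor
    return i2, d2, af

# three heterogeneous trials (made-up numbers)
i2, d2, af = diversity_adjustment([0.10, 0.45, 0.90], [0.02, 0.03, 0.02])
print(i2, d2, af)
```

With the DerSimonian-Laird estimator, D2 comes out at least as large as I2, which is why an I2-based adjustment can understate the required information size.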

  8. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Tappen, J. J.; Wasiolek, M. A.; Wu, D. W.; Schmitt, J. F.; Smith, A. J.

    2002-01-01

    The Nuclear Waste Policy Act of 1982 established the obligations of, and the relationship between, the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e., that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations, in which the NRC adopted the standard, will permit the continued improvement and refinement of biosphere modeling and analyses activities in support of assessment activities.

  9. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Jeff Tappen; M.A. Wasiolek; D.W. Wu; J.F. Schmitt

    2001-01-01

    The Nuclear Waste Policy Act of 1982 established the obligations of, and the relationship between, the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e., that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations, in which the NRC adopted the standard, will permit the continued improvement and refinement of biosphere modeling and analyses activities in support of assessment activities.

  10. Quantifying the Representation Error of Land Biosphere Models using High Resolution Footprint Analyses and UAS Observations

    Science.gov (United States)

    Hanson, C. V.; Schmidt, A.; Law, B. E.; Moore, W.

    2015-12-01

    The validity of land biosphere model outputs relies on accurate representations of ecosystem processes within the model. Typically, a vegetation or land cover type for a given area (several square kilometers or larger resolution) is assumed to have uniform properties. The limited spatial and temporal resolution of models prevents them from resolving the finer-scale heterogeneous flux patterns that arise from variations in vegetation. This representation error must be quantified carefully if models are informed through data assimilation, in order to assign appropriate weighting to model outputs and measurement data. The representation error is usually only estimated, or ignored entirely, due to the difficulty in determining reasonable values. UAS-based gas sensors allow measurements of atmospheric CO2 concentrations with unprecedented spatial resolution, providing a means of determining the representation error for CO2 fluxes empirically. In this study we use three-dimensional CO2 concentration data in combination with high resolution footprint analyses in order to quantify the representation error for modelled CO2 fluxes at typical resolutions of regional land biosphere models. CO2 concentration data were collected using an Atlatl X6A hexa-copter carrying a highly calibrated closed-path infrared gas analyzer based sampling system with an uncertainty of ≤ ±0.2 ppm CO2. Gas concentration data were mapped in three dimensions using the UAS on-board position data and compared to footprints generated using WRF 3.61.
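    As a toy numpy illustration of the representation error concept (not the authors' footprint-based method), one can compare the single flux value a coarse model assigns to a grid cell with the sub-grid variability that value cannot resolve; all numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic 1 km "true" CO2 flux field over one 10 km x 10 km model cell:
# a smooth gradient plus patchy vegetation heterogeneity (illustrative only)
x, y = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
flux_fine = -4.0 + 2.0 * x + rng.normal(0.0, 1.5, size=x.shape)

flux_cell = flux_fine.mean()          # the one value a coarse model "sees"
repr_error = flux_fine.std(ddof=1)    # sub-grid spread the cell value cannot resolve

print(f"cell-mean flux: {flux_cell:.2f}, representation error (1-sigma): {repr_error:.2f}")
```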

  11. Hierarchical Data Structures, Institutional Research, and Multilevel Modeling

    Science.gov (United States)

    O'Connell, Ann A.; Reed, Sandra J.

    2012-01-01

    Multilevel modeling (MLM), also referred to as hierarchical linear modeling (HLM) or mixed models, provides a powerful analytical framework through which to study colleges and universities and their impact on students. Due to the natural hierarchical structure of data obtained from students or faculty in colleges and universities, MLM offers many…
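    A minimal illustration of why nested data call for MLM is the intraclass correlation (ICC), the share of outcome variance lying between groups. The sketch below applies the classical one-way ANOVA estimator to simulated students-in-colleges data (synthetic numbers, not from the article; real analyses would use dedicated HLM/MLM software).

```python
import numpy as np

rng = np.random.default_rng(42)

# simulate students nested in colleges: y_ij = 1 + u_j + e_ij,
# with college-level variance 1 and residual variance 1 (true ICC = 0.5)
n_groups, n_per = 100, 20
u = rng.normal(0.0, 1.0, n_groups)
y = 1.0 + u[:, None] + rng.normal(0.0, 1.0, (n_groups, n_per))

# one-way ANOVA estimator of the intraclass correlation
group_means = y.mean(axis=1)
msb = n_per * group_means.var(ddof=1)                           # between-group mean square
msw = ((y - group_means[:, None]) ** 2).sum() / (n_groups * (n_per - 1))  # within
icc = (msb - msw) / (msb + (n_per - 1) * msw)

print(f"estimated ICC: {icc:.2f}  (true value here is 0.5)")
```

A non-trivial ICC means observations within a college are not independent, so single-level regression understates standard errors, which is exactly the situation MLM handles.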

  12. Challenges of Analysing Gene-Environment Interactions in Mouse Models of Schizophrenia

    Directory of Open Access Journals (Sweden)

    Peter L. Oliver

    2011-01-01

    The modelling of neuropsychiatric disease using the mouse has provided a wealth of information regarding the relationship between specific genetic lesions and behavioural endophenotypes. However, it is becoming increasingly apparent that synergy between genetic and nongenetic factors is a key feature of these disorders that must also be taken into account. With the inherent limitations of retrospective human studies, experiments in mice have begun to tackle this complex association, combining well-established behavioural paradigms and quantitative neuropathology with a range of environmental insults. The conclusions from this work have been varied, due in part to a lack of standardised methodology, although most have illustrated that phenotypes related to disorders such as schizophrenia are consistently modified. Far fewer studies, however, have attempted to generate a “two-hit” model, whereby the consequences of a pathogenic mutation are analysed in combination with environmental manipulation such as prenatal stress. This significant, yet relatively new, approach is beginning to produce valuable new models of neuropsychiatric disease. Focussing on prenatal and perinatal stress models of schizophrenia, this review discusses the current progress in this field, and highlights important issues regarding the interpretation and comparative analysis of such complex behavioural data.

  13. 3D Recording for 2D Delivering - The Employment of 3D Models for Studies and Analyses

    Science.gov (United States)

    Rizzi, A.; Baratti, G.; Jiménez, B.; Girardi, S.; Remondino, F.

    2011-09-01

    In the last years, thanks to advances in surveying sensors and techniques, many heritage sites could be accurately replicated in digital form with very detailed and impressive results. The actual limits are mainly related to hardware capabilities, computation time and the low performance of personal computers. Often the produced models are not viewable on a normal computer, and the only solution for visualizing them easily is offline, using rendered videos. This kind of 3D representation is useful for digital conservation, divulgation purposes or virtual tourism, where people can visit places otherwise closed for preservation or security reasons. But many more potentialities and possible applications become available using a 3D model. The problem is the ability to handle 3D data, as without adequate knowledge this information is reduced to standard 2D data. This article presents some surveying and 3D modeling experiences within the APSAT project ("Ambiente e Paesaggi dei Siti d'Altura Trentini", i.e. Environment and Landscapes of Upland Sites in Trentino). APSAT is a multidisciplinary project funded by the Autonomous Province of Trento (Italy) with the aim of documenting, surveying, studying, analysing and preserving mountainous and hill-top heritage sites located in the region. The project focuses on theoretical, methodological and technological aspects of the archaeological investigation of mountain landscape, considered as the product of sequences of settlements, parcelling-outs, communication networks, resources, and symbolic places. The mountain environment preserves better than others the traces of hunting and gathering, breeding, agricultural, metallurgical and symbolic activities characterised by different lengths and environmental impacts, from Prehistory to the Modern Period. Therefore the correct surveying and documentation of these heritage sites and materials is very important. Within the project, the 3DOM unit of FBK is delivering all the surveying and 3D material to

  14. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 3

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Corum, J.M.; Bryson, J.W.

    1975-06-01

    The third in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: the experimental data provide design information directly applicable to nozzles in cylindrical vessels; and the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 3 had a 10 in. OD and the nozzle had a 1.29 in. OD, giving a d₀/D₀ ratio of 0.129. The OD/thickness ratios for the cylinder and the nozzle were 50 and 7.68, respectively. Thirteen separate loading cases were analyzed. In each, one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for all the loadings were obtained using 158 three-gage strain rosettes located on the inner and outer surfaces. The loading cases were also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  15. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 4

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-06-01

    The last in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models in the series are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: (1) the experimental data provide design information directly applicable to nozzles in cylindrical vessels, and (2) the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 4 had an outside diameter of 10 in., and the nozzle had an outside diameter of 1.29 in., giving a d₀/D₀ ratio of 0.129. The OD/thickness ratios were 50 and 20.2 for the cylinder and nozzle, respectively. Thirteen separate loading cases were analyzed. For each loading condition one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for each of the 13 loadings were obtained using 157 three-gage strain rosettes located on the inner and outer surfaces. Each of the 13 loading cases was also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  16. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    Science.gov (United States)

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from the second until the sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The optimal model was identified by the Bayesian Information Criterion. A model with second-order LP for fixed effects, second-order LP for additive genetic effects and third-order LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environment to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environment to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of the higher heritability, and such a breeding program would have no negative genetic impact on egg production.
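    The Legendre polynomial bases used in such random regression models are easy to construct. The sketch below builds LP design matrices over the standardized age range for the orders named in the abstract; note that animal-breeding software often also applies a normalization constant to each polynomial, which is omitted here.

```python
import numpy as np
from numpy.polynomial import legendre

# weeks 2..6 of egg production, standardized to [-1, 1] as is usual
# before evaluating a Legendre polynomial (LP) basis
weeks = np.arange(2, 7)
t = 2.0 * (weeks - weeks.min()) / (weeks.max() - weeks.min()) - 1.0

def lp_basis(t, order):
    """Columns = Legendre polynomials P_0..P_order evaluated at t."""
    return np.column_stack([
        legendre.legval(t, np.eye(order + 1)[k]) for k in range(order + 1)
    ])

phi2 = lp_basis(t, 2)   # order used for the additive genetic effects
phi3 = lp_basis(t, 3)   # order used for the permanent environmental effects
print(phi2.round(3))
```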

  17. DESCRIPTION OF MODELING ANALYSES IN SUPPORT OF THE 200-ZP-1 REMEDIAL DESIGN/REMEDIAL ACTION

    Energy Technology Data Exchange (ETDEWEB)

    VONGARGEN BH

    2009-11-03

    The Feasibility Study for the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-28) and the Proposed Plan for Remediation of the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-33) describe the use of groundwater pump-and-treat technology for the 200-ZP-1 Groundwater Operable Unit (OU) as part of an expanded groundwater remedy. During fiscal year 2008 (FY08), a groundwater flow and contaminant transport (flow and transport) model was developed to support remedy design decisions at the 200-ZP-1 OU. This model was developed because the size and influence of the proposed 200-ZP-1 groundwater pump-and-treat remedy will have a larger areal extent than the current interim remedy, and modeling is required to provide estimates of influent concentrations and contaminant mass removal rates to support the design of the aboveground treatment train. The 200 West Area Pre-Conceptual Design for Final Extraction/Injection Well Network: Modeling Analyses (DOE/RL-2008-56) documents the development of the first version of the MODFLOW/MT3DMS model of the Hanford Site's Central Plateau, as well as the initial application of that model to simulate a potential well field for the 200-ZP-1 remedy (considering only the contaminants carbon tetrachloride and technetium-99). This document focuses on the use of the flow and transport model to identify suitable extraction and injection well locations as part of the 200 West Area 200-ZP-1 Pump-and-Treat Remedial Design/Remedial Action Work Plan (DOE/RL-2008-78). Currently, the model has been developed to the extent necessary to provide approximate results and to lay a foundation for the design basis concentrations that are required in support of the remedial design/remedial action (RD/RA) work plan. The discussion in this document includes the following: (1) Assignment of flow and transport parameters for the model; (2) Definition of initial conditions for the transport model for each simulated contaminant of concern (COC) (i.e., carbon

  18. Noise exposure during pregnancy, birth outcomes and fetal development: meta-analyses using quality effects model.

    Science.gov (United States)

    Dzhambov, Angel M; Dimitrova, Donka D; Dimitrakova, Elena D

    2014-01-01

    Many women are exposed daily to high levels of occupational and residential noise, so the effect of noise exposure on pregnancy should be considered, because noise affects both the fetus and the mother herself. However, there is controversy in the literature regarding the adverse effects of occupational and residential noise on pregnant women and their fetuses. The aim of this study was to conduct a systematic review of previously analyzed studies, to add information omitted in previous reviews, and to perform meta-analyses of the effects of noise exposure on pregnancy, birth outcomes and fetal development. Previous reviews and meta-analyses on the topic were consulted. Additionally, a systematic search in MEDLINE, EMBASE and the Internet was carried out. Twenty-nine studies were included in the meta-analyses. A quality effects meta-analytical model was applied. Women exposed to high noise levels (in most of the studies ≥ 80 dB) during pregnancy are at a significantly higher risk of having a small-for-gestational-age newborn (RR = 1.19, 95% CI: 1.03, 1.38), gestational hypertension (RR = 1.27, 95% CI: 1.03, 1.58) and an infant with congenital malformations (RR = 1.47, 95% CI: 1.21, 1.79). The effect was not significant for preeclampsia, perinatal death, spontaneous abortion or preterm birth. The results are consistent with previous findings regarding a higher risk for small-for-gestational-age newborns. They also highlight the significance of residential and occupational noise exposure for developing gestational hypertension and especially congenital malformations.
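    As background on how such pooled RRs are formed, the snippet below sketches generic inverse-variance pooling on the log scale from each study's RR and 95% CI. This is a simplified fixed-effect stand-in: the quality effects model actually used in the paper additionally redistributes weight according to study quality scores. All numbers are made up.

```python
import numpy as np

def pool_rr(rr, ci_low, ci_high):
    """Pool risk ratios on the log scale with inverse-variance weights.
    Returns (pooled RR, lower 95% CI, upper 95% CI)."""
    log_rr = np.log(rr)
    # back out each study's SE from its 95% CI: (ln(hi) - ln(lo)) / (2 * 1.96)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2
    pooled = np.sum(w * log_rr) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return np.exp([pooled, lo, hi])

# made-up example: three studies of SGA risk under high noise exposure
print(pool_rr([1.10, 1.25, 1.30], [0.95, 1.02, 1.05], [1.27, 1.53, 1.61]))
```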

  19. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

    Background: Brown algae are multi-cellular plant organisms occupying most of the world's coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results: We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E), protein degradation (ubiquitin, ubiquitin conjugating enzyme) or folding (cyclophilin), proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the Normfinder principles of calculation. Conclusion: Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene, and, in this, are in agreement with previous studies in other organisms.
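    The geNorm principle mentioned above can be sketched compactly: each candidate gene receives a stability measure M, the average standard deviation of its pairwise log-ratios with all other candidates, where lower M means more stable. The snippet below illustrates the principle on simulated expression data (it omits the stepwise exclusion loop of the full geNorm procedure).

```python
import numpy as np

def genorm_m(expr):
    """geNorm-style stability measure M for each candidate reference gene.
    expr: (genes x samples) matrix of expression levels (linear scale)."""
    logs = np.log2(expr)
    n = logs.shape[0]
    m = np.empty(n)
    for j in range(n):
        ratios = logs[j] - logs            # log-ratios against every gene
        sds = ratios.std(axis=1, ddof=1)   # SD of each pairwise log-ratio
        m[j] = np.delete(sds, j).mean()    # average over the other genes
    return m

rng = np.random.default_rng(1)
common = rng.normal(10.0, 0.3, size=50)               # shared, condition-driven signal
stable = 2.0 ** np.vstack([common + rng.normal(0, 0.05, 50) for _ in range(3)])
noisy = 2.0 ** (common + rng.normal(0, 0.8, 50))      # an unstable candidate
m = genorm_m(np.vstack([stable, noisy]))
print(m.round(2))   # the last (noisy) gene should have the largest M
```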

  20. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Science.gov (United States)

    Schmidt, Hanns-Maximilian; Wiens, Marcus; Schultmann, Frank

    2015-04-01

    The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event, direct costs occur instantly. However, the disturbance at a local level (e.g. parts of a city or industries along a river bank) might also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even on an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these indirect impacts, one has to gain insight into the directly hit economic structure before being able to calculate the side effects. In particular, for a model intended for near real-time forensic disaster analyses, any simulation needs to be based on data that is rapidly available or easily computed. Therefore, we investigated commonly used and recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available that can be used, although such data would provide detailed figures concerning economic interrelations between different industry sectors. In the case of highly developed countries, such as Germany, we focus on models for regionalizing the nationwide input-output table, which is usually available from the national statistical office. However, when it comes to developing countries (e.g. in South-East Asia), data quality and availability are usually much poorer. In this case, other sources need to be found for a proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters. Our poster presentation provides a literature review and a summary of potential models that seem useful for this specific task.
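    One widely used family of regionalization methods alluded to above scales national input-output coefficients with location quotients. The sketch below implements the simple location quotient (SLQ) variant on made-up two-sector data; it illustrates the general technique, not the authors' indicator-based model.

```python
import numpy as np

def slq_regionalize(a_national, region_emp, nation_emp):
    """Regionalize a national input-output coefficient matrix with simple
    location quotients (SLQ), a common approach when no official regional
    IO table exists. Purely illustrative numbers.
    a_national: national technical coefficients (sectors x sectors)
    region_emp / nation_emp: sectoral employment, region vs. nation."""
    region_share = region_emp / region_emp.sum()
    nation_share = nation_emp / nation_emp.sum()
    slq = region_share / nation_share
    # scale each supplying sector's row; never scale above the national value
    scale = np.minimum(slq, 1.0)[:, None]
    return a_national * scale

a = np.array([[0.10, 0.20],
              [0.30, 0.05]])          # 2-sector national coefficients
a_region = slq_regionalize(a, region_emp=np.array([100.0, 50.0]),
                           nation_emp=np.array([1000.0, 1000.0]))
print(a_region)
```

Sectors over-represented in the region (SLQ ≥ 1) keep the national coefficients; under-represented ones have their supply rows scaled down, reflecting that part of the demand must be imported from outside the region.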

  1. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Directory of Open Access Journals (Sweden)

    Young-Chan Noh

    2016-07-01

    Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems, and from reanalysis fields of the European Centre for Medium-Range Weather Forecasts (ECMWF), were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January–December 2012. The motivation was to examine the overall performance of the data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated good agreement among the datasets for temperature, while less agreement was found for relative humidity. A comparison of the UM outputs from the UKMO and KMA revealed that they are similar to each other. The introduction of the new version of the UM at the KMA in May 2012 resulted in improved analysis performance, particularly for the moisture field. On the other hand, the ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. The ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of relative humidity for the three models were larger in more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.
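    The bias and RMSD statistics used in such comparisons are straightforward to compute for collocated profiles; a minimal sketch with made-up temperature values:

```python
import numpy as np

def bias_and_rmsd(model, obs):
    """Bias and root mean square difference between collocated model
    and radiosonde profiles (any variable, same vertical levels)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    diff = model - obs
    return diff.mean(), np.sqrt(np.mean(diff ** 2))

# made-up 5-level temperature comparison (K)
bias, rmsd = bias_and_rmsd([287.1, 280.4, 271.9, 262.7, 250.2],
                           [287.0, 280.0, 272.5, 262.0, 250.0])
print(f"bias = {bias:+.2f} K, RMSD = {rmsd:.2f} K")
```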

  2. Integrated optimization analyses of aerodynamic/stealth characteristics of helicopter rotor based on surrogate model

    Directory of Open Access Journals (Sweden)

    Jiang Xiangwen

    2015-06-01

    Based on a computational fluid dynamics (CFD) method, an electromagnetic high-frequency method and surrogate-model optimization techniques, an integrated aerodynamic/stealth design method has been established for helicopter rotors. The method is composed of three modules: integrated grid generation (the moving-embedded grids for the CFD solver and the blade grids for the radar cross section (RCS) solver are generated by solving Poisson equations and by a folding approach), an aerodynamic/stealth solver (the aerodynamic characteristics are simulated by a CFD method based upon the Navier–Stokes equations and the Spalart–Allmaras (S–A) turbulence model, and the stealth characteristics are calculated using a panel edge method combining physical optics (PO), the method of equivalent currents (MEC) and the method of quasi-stationary currents (MQS)), and integrated optimization analysis (based upon surrogate-model optimization with full factorial design (FFD) and radial basis functions (RBF), integrated optimization analyses of the aerodynamic/stealth characteristics of the rotor are conducted). Firstly, the scattering characteristics of the rotor with different blade-tip swept and twist angles were computed, and time–frequency domain grayscale maps with the strong scattering regions of the rotor were given. Meanwhile, the effects of swept-tip and twist angles on the aerodynamic characteristics of the rotor were analysed. Furthermore, by choosing a suitable objective function and constraint conditions, a compromise design for the swept and twist combination of a rotor with high aerodynamic performance and low scattering characteristics was obtained.
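
    The surrogate-model step can be sketched as a Gaussian radial basis function interpolator fitted on a full factorial design of the two design variables, standing in for expensive CFD/RCS evaluations. The design ranges and the objective function below are hypothetical placeholders, not the paper's actual models:

```python
import numpy as np

# Full factorial design over two rotor design variables (hypothetical ranges).
swept = np.linspace(0.0, 30.0, 4)      # blade-tip swept angle, deg
twist = np.linspace(-12.0, 0.0, 4)     # twist angle, deg
X = np.array([(s, t) for s in swept for t in twist])

def expensive_objective(x):
    # Placeholder for a combined aerodynamic/stealth objective (invented).
    s, t = x
    return (s - 18.0) ** 2 / 50.0 + (t + 6.0) ** 2 / 10.0

y = np.array([expensive_objective(x) for x in X])

def rbf_fit(X, y, eps=0.05):
    """Fit Gaussian-RBF weights by solving the kernel interpolation system."""
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-eps * r2)
    return np.linalg.solve(Phi, y), eps

def rbf_predict(Xq, X, w, eps):
    r2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * r2) @ w

w, eps = rbf_fit(X, y)
y_hat = rbf_predict(X, X, w, eps)   # interpolates exactly at design points
```

    The cheap `rbf_predict` can then be queried densely by an optimizer in place of the full CFD/RCS chain.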

  3. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    In this study, we used topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of the topics in the field, the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the Web of Science database as input to the topic modeling approach. The results show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from those of previous studies, the topics generated in this study are consistent with results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field was gaining stability. Both core journals paid broad attention to all of the topics in the field: the Journal of Informetrics put particular emphasis on the construction and application of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
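
    The model-selection criterion used above, perplexity, is the exponential of the negative mean per-token held-out log-likelihood; the model with the smallest perplexity wins. The log-likelihood values below are invented for illustration, not the study's figures:

```python
import math

# Hypothetical held-out per-token log-likelihoods for LDA models fitted
# with different numbers of topics.
loglik_per_token = {5: -7.95, 10: -7.62, 15: -7.70, 20: -7.88}

# perplexity = exp(-mean per-token log-likelihood); lower is better.
perplexity = {k: math.exp(-ll) for k, ll in loglik_per_token.items()}
best_k = min(perplexity, key=perplexity.get)
```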

  4. Application of a weighted spatial probability model in GIS to analyse landslides in Penang Island, Malaysia

    Directory of Open Access Journals (Sweden)

    Samy Ismail Elmahdy

    2016-01-01

    In the current study, Penang Island, one of several mountainous areas in Malaysia that is often subjected to landslide hazard, was chosen for investigation. A multi-criteria evaluation combined with a weighted spatial probability approach and a GIS model builder was applied to map and analyse landslides on Penang Island. A set of automated algorithms was used to construct new essential geological and morphometric thematic maps from remote sensing data. The maps were ranked using the weighted spatial probability model based on their contribution to the landslide hazard. The results showed that sites at an elevation of 100–300 m, with steep slopes of 10°–37° and slope direction (aspect) in the E and SE directions, were areas of very high and high probability of landslide occurrence; the total areas were 21.393 km2 (11.84%) and 58.690 km2 (32.48%), respectively. The obtained map was verified by comparing variogram models of the mapped and the observed landslide locations, and it showed a strong correlation with the locations of occurred landslides, indicating that the proposed method can successfully predict landslide hazard. The method is time- and cost-effective and can be used as a reference by geological and geotechnical engineers.
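
    The weighted spatial overlay at the core of such models can be sketched with toy rasters: each conditioning factor is reclassified to a 0-1 susceptibility score and combined with expert weights. The 3x3 grids and weights below are invented, not the Penang data:

```python
import numpy as np

# Hypothetical 3x3 rasters of reclassified conditioning-factor scores (0-1).
slope_score = np.array([[0.9, 0.7, 0.2],
                        [0.8, 0.5, 0.1],
                        [0.6, 0.3, 0.1]])
aspect_score = np.array([[0.8, 0.6, 0.3],
                         [0.7, 0.5, 0.2],
                         [0.4, 0.2, 0.1]])
elev_score = np.array([[0.7, 0.6, 0.4],
                       [0.9, 0.4, 0.2],
                       [0.5, 0.3, 0.2]])

# Expert-assigned factor weights; they must sum to 1.
weights = {'slope': 0.5, 'aspect': 0.2, 'elevation': 0.3}

hazard = (weights['slope'] * slope_score
          + weights['aspect'] * aspect_score
          + weights['elevation'] * elev_score)

# Cells at or above a chosen threshold form the "high probability" class.
high_risk_cells = int((hazard >= 0.6).sum())
```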

  5. Comparative modeling analyses of Cs-137 fate in the rivers impacted by Chernobyl and Fukushima accidents

    Energy Technology Data Exchange (ETDEWEB)

    Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)

    2014-07-01

    The consequences of the two largest nuclear accidents of recent decades - at the Chernobyl Nuclear Power Plant (ChNPP) (1986) and at the Fukushima Daiichi NPP (FDNPP) (2011) - clearly demonstrated that radioactive contamination of water bodies in the vicinity of an NPP and on the waterways from it (the river-reservoir system after the Chernobyl accident; rivers and coastal marine waters after the Fukushima accident) was in both cases one of the main sources of public concern about the accidents' consequences. The weight of water contamination in public perception of the accidents is higher than the actual fraction of doses received via aquatic pathways relative to other dose components. This psychological phenomenon, confirmed after both accidents, provides a supplementary argument that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accidental radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watershed, together with data from new monitoring studies in Japan of the Abukuma River (the largest in the region - watershed area 5400 km{sup 2}), Kuchibuto River, Uta River, Niita River, Natsui River and Same River, and with studies of the specifics of 'water-sediment' {sup 137}Cs exchange in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and increase the predictive power of the modeling technologies. The results of the modeling studies are applied to more accurately predict water/sediment radionuclide contamination of rivers and reservoirs in Fukushima Prefecture and to comparatively analyse the efficiency of post-accidental measures to diminish the contamination of water bodies.

  6. Continuous spatial modelling to analyse planning and economic consequences of offshore wind energy

    International Nuclear Information System (INIS)

    Moeller, Bernd

    2011-01-01

    Offshore wind resources appear abundant, but technological, economic and planning issues significantly reduce the theoretical potential. While massive investments are anticipated and planners and developers are scouting for viable locations and consider risk and impact, few studies simultaneously address potentials and costs together with the consequences of proposed planning in an analytical and continuous manner and for larger areas at once. Consequences may be investments short of efficiency and equity, and failed planning routines. A spatial resource economic model for the Danish offshore waters is presented, used to analyse area constraints, technological risks, priorities for development and opportunity costs of maintaining competing area uses. The SCREAM-offshore wind model (Spatially Continuous Resource Economic Analysis Model) uses raster-based geographical information systems (GIS) and considers numerous geographical factors, technology and cost data as well as planning information. Novel elements are weighted visibility analysis and geographically recorded shipping movements as variable constraints. A number of scenarios have been described, which include restrictions of using offshore areas, as well as alternative uses such as conservation and tourism. The results comprise maps, tables and cost-supply curves for further resource economic assessment and policy analysis. A discussion of parameter variations exposes uncertainties of technology development, environmental protection as well as competing area uses and illustrates how such models might assist in ameliorating public planning, while procuring decision bases for the political process. The method can be adapted to different research questions, and is largely applicable in other parts of the world. - Research Highlights: → A model for the spatially continuous evaluation of offshore wind resources. → Assessment of spatial constraints, costs and resources for each location. → Planning tool for
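
    The cost-supply curves mentioned above are built by sorting candidate sites by unit cost and accumulating their energy yield. The sketch below uses invented site costs and yields, not the Danish model's outputs:

```python
import numpy as np

# Hypothetical offshore sites: levelized cost and annual yield per site.
cost = np.array([48.0, 61.0, 55.0, 70.0, 52.0])    # EUR/MWh
supply = np.array([1.2, 0.8, 1.5, 0.5, 1.0])       # TWh/yr

# Sort sites by cost; cumulative yield gives the cost-supply curve.
order = np.argsort(cost)
curve_cost = cost[order]
curve_supply = np.cumsum(supply[order])

# Example query: supply available at or below 60 EUR/MWh.
below_60 = curve_supply[curve_cost <= 60.0][-1]
```

    Planning constraints (shipping lanes, visibility, conservation areas) enter by removing or re-costing sites before the sort, which shifts the curve and exposes the opportunity cost of each restriction.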

  7. Development and application of model RAIA uranium on-line analyser

    International Nuclear Information System (INIS)

    Dong Yanwu; Song Yufen; Zhu Yaokun; Cong Peiyuan; Cui Songru

    1999-01-01

    The working principle, structure, adjustment and application of the model RAIA on-line analyser are reported. The performance of this instrument is reliable: for an identical sample, the signal fluctuation in continuous monitoring over four months is less than ±1%. An appropriate sample-cell length is chosen according to the required measurement range. The precision of the measurement process is better than 1% at 100 g/L U, and the detection limit is 50 mg/L. The uranium concentration in the process stream can be displayed automatically and printed at any time, and the analyser outputs a 4-20 mA current signal proportional to the uranium concentration. This is a substantial step towards continuous process control and computer management.

  8. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students' perception of the teachers' activity in terms of the quality of the teaching process, the relationship with the students, and the assistance provided for learning. The present paper aims at creating a combined evaluation model based on statistical data mining methods: starting from the grades teachers assigned to students, and using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students' grades; these subjects were subsequently submitted to an evaluation by the students. The results of these analyses allowed the formulation of measures for enhancing the quality of the evaluation process.

  9. Developing a system dynamics model to analyse environmental problem in construction site

    Science.gov (United States)

    Haron, Fatin Fasehah; Hawari, Nurul Nazihah

    2017-11-01

    This study aims to develop a system dynamics model of a construction site to analyse the impact of environmental problems. Construction sites may cause damage to the environment and interfere with the daily lives of residents. A proper environmental management system must be used to reduce pollution, enhance bio-diversity, conserve water, respect people and their local environment, measure performance, and set targets for the environment and sustainability. This study investigates the damaging impacts that normally occur during the construction stage. Environmental problems cause costly mistakes in project implementation, either because of the environmental damage that is likely to arise during implementation, or because of the modifications that may subsequently be required to make the action environmentally acceptable. The findings from this study have helped significantly reduce the damaging impact on the environment and improve the performance of the environmental management system at the construction site.

  10. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    Science.gov (United States)

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
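
    The losses-avoided and return-on-mitigation calculation that links the two tools can be illustrated with invented figures (these are not the San Francisco Bay scenario's numbers):

```python
# Hypothetical Hazus-style loss estimates for one earthquake scenario,
# with and without a mitigation policy, feeding an LUPM-style return metric.
loss_no_mitigation = 120.0e6    # USD
loss_with_mitigation = 85.0e6   # USD
mitigation_cost = 20.0e6        # USD

losses_avoided = loss_no_mitigation - loss_with_mitigation

# Net return on the mitigation investment (positive means it pays off).
roi = (losses_avoided - mitigation_cost) / mitigation_cost
```

    In practice each loss estimate carries uncertainty, so the LUPM reports distributions of these quantities rather than point values.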

  11. Testing a dual-systems model of adolescent brain development using resting-state connectivity analyses.

    Science.gov (United States)

    van Duijvenvoorde, A C K; Achterberg, M; Braams, B R; Peters, S; Crone, E A

    2016-01-01

    The current study aimed to test a dual-systems model of adolescent brain development by studying changes in intrinsic functional connectivity within and across networks typically associated with cognitive-control and affective-motivational processes. To this end, resting-state and task-related fMRI data were collected from 269 participants (ages 8-25). Resting-state analyses focused on seeds derived from task-related neural activation in the same participants: the dorsolateral prefrontal cortex (dlPFC) from a cognitive rule-learning paradigm and the nucleus accumbens (NAcc) from a reward paradigm. Whole-brain seed-based resting-state analyses showed an age-related increase in dlPFC connectivity with the caudate and thalamus, and an age-related decrease in connectivity with the (pre)motor cortex. NAcc connectivity showed a strengthening of connectivity with the dorsal anterior cingulate cortex (ACC) and subcortical structures such as the hippocampus, and a specific age-related decrease in connectivity with the ventromedial PFC (vmPFC). Behavioral measures from both functional paradigms correlated with resting-state connectivity strength with their respective seeds. That is, age-related change in learning performance was mediated by connectivity between the dlPFC and thalamus, and age-related change in winning pleasure was mediated by connectivity between the NAcc and vmPFC. These patterns indicate (i) strengthening of connectivity between regions that support control and learning, (ii) more independent functioning of regions that support motor and control networks, and (iii) more independent functioning of regions that support motivation and valuation networks with age. These results are interpreted vis-à-vis a dual-systems model of adolescent brain development. Copyright © 2015. Published by Elsevier Inc.

  12. Individual-level space-time analyses of emergency department data using generalized additive modeling

    Directory of Open Access Journals (Sweden)

    Vieira Verónica M

    2012-08-01

    Background: Although daily emergency department (ED) data is a source of information that often includes residence, its potential for space-time analyses at the individual level has not been fully explored. We propose that ED data collected for surveillance purposes can also be used to inform spatial and temporal patterns of disease using generalized additive models (GAMs). This paper describes the methods for adapting GAMs so they can be applied to ED data. Methods: GAMs are an effective approach for modeling spatial and temporal distributions of point-wise data, producing smoothed surfaces of continuous risk while adjusting for confounders. In addition to disease mapping, the method allows for global and pointwise hypothesis testing and selection of the statistically optimum degree of smoothing using standard statistical software. We applied a two-dimensional GAM for location to ED data of overlapping calendar time using a locally-weighted regression smoother. To illustrate our methods, we investigated the association between participants' address and the risk of gastrointestinal illness in Cape Cod, Massachusetts over time. Results: The GAM space-time analyses simultaneously smooth in units of distance and time by using the optimum degree of smoothing to create data frames of overlapping time periods and then spatially analyzing each data frame. When the resulting maps are viewed in series, each data frame contributes a movie frame, allowing us to visualize changes in magnitude, geographic size, and location of elevated risk smoothed over space and time. In our example data, we observed an underlying geographic pattern of gastrointestinal illness with risks consistently higher in the eastern part of our study area over time and intermittent variations of increased risk during brief periods. Conclusions: Spatial-temporal analysis of emergency department data with GAMs can be used to map underlying disease risk at the individual-level and view

  13. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Science.gov (United States)

    Hall, Carlton Raden

    A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection, and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(lambda,m-1), diffuse backscatter b(lambda,m-1), beam attenuation alpha(lambda,m-1), and beam to diffuse conversion c(lambda,m-1 ) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi) sampled on Cape Canaveral Air Force Station and Kennedy Space Center Florida in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high sensitivity narrow bandwidth spectrograph, included above canopy reflectance, internal canopy transmittance and reflectance and bottom reflectance. Leaf samples were returned to laboratory where optical and physical and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index LVCI was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle adjusted leaf

  14. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants, accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary
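
    The Morris elementary-effects screening mentioned above can be sketched as follows. The two-parameter toy model stands in for the HYDRUS colloid-transport module (its form and the parameter names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def morris_mu_star(model, k, r=20, delta=0.25):
    """Mean absolute elementary effect (mu*) per parameter on [0, 1]^k.

    For each of r trajectories, perturb one parameter at a time by delta
    and record the scaled change in the model output.
    """
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        for i in rng.permutation(k):
            x_step = x.copy()
            x_step[i] += delta
            ee[t, i] = (model(x_step) - model(x)) / delta
            x = x_step
    return np.abs(ee).mean(axis=0)

# Toy stand-in: the "attachment" parameter dominates the output,
# the "straining" parameter is nearly inert.
def toy_model(x):
    attachment, straining = x
    return 10.0 * attachment + 0.1 * straining ** 2

mu_star = morris_mu_star(toy_model, k=2)
```

    Parameters with small mu* can be fixed at nominal values before the expensive calibration step; the influential ones go on to a global (e.g. variance-based) analysis.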

  15. Interpreting Hierarchical Linear and Hierarchical Generalized Linear Models with Slopes as Outcomes

    Science.gov (United States)

    Tate, Richard

    2004-01-01

    Current descriptions of results from hierarchical linear models (HLM) and hierarchical generalized linear models (HGLM), usually based only on interpretations of individual model parameters, are incomplete in the presence of statistically significant and practically important "slopes as outcomes" terms in the models. For complete description of…
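
    The "slopes as outcomes" structure referred to above can be written, in standard HLM notation, as a two-level model in which a group-level predictor explains variation in the group-specific slopes (a generic sketch, not the specific models analysed by Tate):

```latex
% Level 1 (within group j):
Y_{ij} = \beta_{0j} + \beta_{1j} X_{ij} + r_{ij}

% Level 2 (intercepts and slopes as outcomes of a group predictor W_j):
\beta_{0j} = \gamma_{00} + \gamma_{01} W_j + u_{0j}
\beta_{1j} = \gamma_{10} + \gamma_{11} W_j + u_{1j}

% Substitution yields the combined model; \gamma_{11} W_j X_{ij} is the
% cross-level interaction whose interpretation the article addresses:
Y_{ij} = \gamma_{00} + \gamma_{01} W_j + \gamma_{10} X_{ij}
       + \gamma_{11} W_j X_{ij} + u_{0j} + u_{1j} X_{ij} + r_{ij}
```

    A nonzero \gamma_{11} means the within-group slope itself varies systematically with W_j, which is why interpreting \gamma_{10} alone is incomplete.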

  16. Modeling and analysing storage systems in agricultural biomass supply chain for cellulosic ethanol production

    International Nuclear Information System (INIS)

    Ebadian, Mahmood; Sowlati, Taraneh; Sokhansanj, Shahab; Townley-Smith, Lawrence; Stumborg, Mark

    2013-01-01

    Highlights: ► Studied the agricultural biomass supply chain for cellulosic ethanol production. ► Evaluated the impact of storage systems on different supply chain actors. ► Developed a combined simulation/optimization model to evaluate storage systems. ► Compared two satellite storage systems with roadside storage in terms of costs and emitted CO2. ► SS would lead to a more cost-efficient supply chain compared to roadside storage. -- Abstract: In this paper, a combined simulation/optimization model is developed to better understand and evaluate the impact of storage systems on the costs incurred by each actor in the agricultural biomass supply chain, including farmers, hauling contractors and the cellulosic ethanol plant. The optimization model prescribes the optimum number and location of farms and storages. It also determines the supply radius, the number of farms required to secure the annual supply of biomass, and the assignment of farms to storage locations. Given the specific design of the supply chain determined by the optimization model, the simulation model determines the number of machines required for each operation, their daily working schedules and utilization rates, along with the capacities of the storages. To evaluate the impact of the storage systems on the delivered costs, three storage systems are modeled and compared: a roadside storage (RS) system and two satellite storage (SS) systems, namely SS with fixed hauling distance (SF) and SS with variable hauling distance (SV). In all storage systems, the loading equipment is assumed to be dedicated to storage locations. The results obtained from a real case study provide detailed cost figures for each storage system, since the developed model analyses the supply chain on an hourly basis and considers the time-dependence and stochasticity of the supply chain. Comparison of the storage systems shows SV would outperform SF and RS by reducing the total delivered cost by 8% and 6%, respectively

  17. CERES model application for increasing preparedness to climate variability in agricultural planning—risk analyses

    Science.gov (United States)

    Popova, Zornitsa; Kercheva, Milena

    The role of soil, crop, climate and crop management in year-to-year variation of yield and groundwater pollution was quantified by simulation analyses with the CERES-maize and CERES-wheat models over a 30-year period for four "soil-crop" combinations. It was established that the "Chromic Luvisol-maize-dry land" combination was associated with the greatest coefficient of variability of yields (Cv = 43%) and drought frequency (22 years with yield losses of more than 20%) over the analysed period. Average yield losses in dry vegetation seasons were 60% of the maize productivity potential under sufficient soil moisture. Traditional and drainage-controlling precise irrigation scheduling mitigated drought consequences by reducing year-to-year variability of yield to Cv = 5.6-11.6% on the risky Chromic Luvisol. Long-term wheat yields were much more stable (Cv = 23-26%) than those of maize on Chromic Luvisol; in this case droughts covered 12 of the studied 30 years, with yield losses of 25-30% on average. Soils of high water holding capacity (such as Vertisol) stored 50-150 mm of additional precipitation for crop evapotranspiration and thus reduced the frequency of drought under both crops to 6-7 cases in 30 years. Agriculture should be more sustainable on this soil, since variability of yield dropped to Cv = 13% for wheat and Cv = 21% for maize. As a result, Vertisol mitigated yield losses during dry vegetation periods by 10-15% for wheat and 22% for maize compared with productivity under sufficient soil water. Thirty-year frequency analyses of seasonal nitrogen (N) leaching proved that ten wheat and only one maize vegetation season were susceptible to significant (10-45 kg N/ha/year) groundwater pollution on Chromic Luvisol. The simulated precise irrigation scenario did not influence drainage in the vegetation period. Other risky situations occurred under maize in the wettest fallow state after an extremely dry vegetation period (in one more of the studied years) when up
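
    The year-to-year variability statistic (Cv) and drought-frequency count used throughout this record can be computed as follows. The ten seasonal yields below are invented, and for simplicity losses are judged against the simulated mean rather than the potential yield the study uses:

```python
import statistics

# Hypothetical 10-season maize yields (t/ha) for a dryland run.
yields = [2.1, 6.8, 3.0, 7.4, 1.5, 6.9, 2.4, 7.1, 3.3, 6.5]

mean_yield = statistics.mean(yields)

# Coefficient of variation: sample standard deviation over the mean, in %.
cv = statistics.stdev(yields) / mean_yield * 100.0

# Seasons with yield losses of more than 20% (relative to the mean here).
drought_years = sum(1 for y in yields if y < 0.8 * mean_yield)
```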

  18. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Directory of Open Access Journals (Sweden)

    Varsha H. Rallapalli

    2016-10-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRenv) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S), and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRenv has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding of speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRenv. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating the feasibility of neural SNRenv computations. The results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence of individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of neural SNRenv in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.

  19. Using an operating cost model to analyse the selection of aircraft type on short-haul routes

    CSIR Research Space (South Africa)

    Ssamula, B

    2006-08-01

    This study applied an operating cost model to analyse suitable aircraft choices for short-haul routes, in terms of cost-related parameters, for aircraft commonly used within Africa. In this paper all the parameters that are crucial in analysing a transport service are addressed...

  20. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used, its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters, which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study, in which we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Boorsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests: it closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
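
    The Hausman statistic compares an efficient estimator (valid only under the model) with a robust one (consistent even under misspecification); a large distance signals misfit. The parameter values and covariances below are invented for illustration:

```python
import numpy as np

# Two estimates of the same item parameters (hypothetical values).
theta_efficient = np.array([0.52, -0.31, 1.10])   # efficient under H0
theta_robust = np.array([0.55, -0.28, 1.04])      # consistent under misfit
V_efficient = np.diag([0.010, 0.012, 0.015])      # covariance matrices
V_robust = np.diag([0.016, 0.018, 0.024])

# H = d' (V_robust - V_efficient)^{-1} d, asymptotically chi^2(df) under H0.
d = theta_robust - theta_efficient
H = float(d @ np.linalg.inv(V_robust - V_efficient) @ d)
df = d.size   # compare H against the chi-square(df) critical value
```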

  1. Molecular approaches for viable bacterial population and transcriptional analyses in a rodent model of dental caries

    Science.gov (United States)

    Klein, Marlise I.; Scott-Anne, Kathleen M.; Gregoire, Stacy; Rosalen, Pedro L.; Koo, Hyun

    2012-01-01

Culturing methods are the primary approach for microbiological analysis of plaque-biofilms in rodent models of dental caries. In this study, we developed strategies for isolating DNA and RNA from plaque-biofilms formed in vivo to analyze the viable bacterial population and gene expression. Plaque-biofilm samples from rats were treated with propidium monoazide to isolate DNA from viable cells, and the purified DNA was used to quantify total bacteria and the S. mutans population via qPCR and specific primers; the same samples were also analyzed by colony forming unit (CFU) counting. In parallel, RNA was isolated from plaque-biofilm samples (from the same animals) and used for transcriptional analyses via RT-qPCR. The viable populations of both S. mutans and total bacteria assessed by qPCR were positively correlated with the CFU data (P0.8). However, the qPCR data showed higher bacterial cell counts, particularly for total bacteria (vs. CFU). Moreover, the S. mutans proportion in the plaque-biofilm determined by qPCR analysis showed a strong correlation with the incidence of smooth-surface caries (P=0.0022, r=0.71). The purified RNAs presented high RNA integrity numbers (>7), which allowed measurement of the expression of genes that are critical for S. mutans virulence (e.g. gtfB and gtfC). Our data show that the viable microbial population and gene expression can be analyzed simultaneously, providing a global assessment of the infectious aspect of dental caries. Our approach could enhance the value of the current rodent model in further understanding the pathophysiology of this disease and facilitating the exploration of novel anti-caries therapies. PMID:22958384
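The agreement between qPCR and CFU counts reported above is a plain correlation between paired measurements. A minimal stdlib sketch; the log10 counts below are hypothetical illustrations, not the study's data:

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation between paired measurements.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical log10 counts: qPCR tends to exceed CFU because it also
# captures viable-but-nonculturable cells.
qpcr = [7.2, 7.8, 8.1, 8.6, 9.0]
cfu = [6.8, 7.5, 7.7, 8.3, 8.5]
```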

  2. VOC composition of current motor vehicle fuels and vapors, and collinearity analyses for receptor modeling.

    Science.gov (United States)

    Chin, Jo-Yu; Batterman, Stuart A

    2012-03-01

The formulation of motor vehicle fuels can alter the magnitude and composition of evaporative and exhaust emissions occurring throughout the fuel cycle. Information regarding the volatile organic compound (VOC) composition of motor fuels other than gasoline is scarce, especially for bioethanol and biodiesel blends. This study examines the liquid and vapor (headspace) composition of four contemporary and commercially available fuels: gasoline, E85, ultra-low sulfur diesel (ULSD), and B20 (20% soy-biodiesel and 80% ULSD). The composition of gasoline and E85 in both neat fuel and headspace vapor was dominated by aromatics and n-heptane. Despite its low gasoline content, E85 vapor contained higher concentrations of several VOCs than those in gasoline vapor, likely due to adjustments in its formulation. Temperature changes produced greater changes in the partial pressures of 17 VOCs in E85 than in gasoline, and large shifts in the VOC composition. B20 and ULSD were dominated by C9 to C16 n-alkanes and low levels of aromatics, and the two fuels had similar headspace vapor compositions and concentrations. While the headspace composition predicted using vapor-liquid equilibrium theory was closely correlated to measurements, E85 vapor concentrations were underpredicted. Based on variance decomposition analyses, the VOC compositions of gasoline and diesel fuels and their vapors were distinct, but B20 and ULSD fuels and vapors were highly collinear. These results can be used to estimate fuel-related emissions and exposures, particularly in receptor models that apportion emission sources, and the collinearity analysis suggests that gasoline- and diesel-related emissions can be distinguished. Copyright © 2011 Elsevier Ltd. All rights reserved.
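Headspace prediction from vapor-liquid equilibrium theory amounts to (modified) Raoult's law. A minimal sketch; the mole fraction, vapor pressure and activity coefficient below are illustrative assumptions, not measured values from the study:

```python
def headspace_partial_pressure(mole_fraction, pure_vapor_pressure_kpa,
                               activity_coeff=1.0):
    # Modified Raoult's law: p_i = gamma_i * x_i * P_i_sat.
    # For an ideal solution gamma_i = 1; non-ideal blends such as
    # ethanol-gasoline need gamma_i > 1 for minor components, which is
    # one reason ideal-solution predictions can underestimate E85 vapor.
    return activity_coeff * mole_fraction * pure_vapor_pressure_kpa

# Illustrative numbers: a component at 3 mol% with a pure-compound
# vapor pressure of 3.8 kPa at 25 degrees C.
p_ideal = headspace_partial_pressure(0.03, 3.8)
p_nonideal = headspace_partial_pressure(0.03, 3.8, activity_coeff=1.5)
```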

  3. Modeling ecological drivers in marine viral communities using comparative metagenomics and network analyses.

    Science.gov (United States)

    Hurwitz, Bonnie L; Westveld, Anton H; Brum, Jennifer R; Sullivan, Matthew B

    2014-07-22

    Long-standing questions in marine viral ecology are centered on understanding how viral assemblages change along gradients in space and time. However, investigating these fundamental ecological questions has been challenging due to incomplete representation of naturally occurring viral diversity in single gene- or morphology-based studies and an inability to identify up to 90% of reads in viral metagenomes (viromes). Although protein clustering techniques provide a significant advance by helping organize this unknown metagenomic sequence space, they typically use only ∼75% of the data and rely on assembly methods not yet tuned for naturally occurring sequence variation. Here, we introduce an annotation- and assembly-free strategy for comparative metagenomics that combines shared k-mer and social network analyses (regression modeling). This robust statistical framework enables visualization of complex sample networks and determination of ecological factors driving community structure. Application to 32 viromes from the Pacific Ocean Virome dataset identified clusters of samples broadly delineated by photic zone and revealed that geographic region, depth, and proximity to shore were significant predictors of community structure. Within subsets of this dataset, depth, season, and oxygen concentration were significant drivers of viral community structure at a single open ocean station, whereas variability along onshore-offshore transects was driven by oxygen concentration in an area with an oxygen minimum zone and not depth or proximity to shore, as might be expected. Together these results demonstrate that this highly scalable approach using complete metagenomic network-based comparisons can both test and generate hypotheses for ecological investigation of viral and microbial communities in nature.
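An annotation- and assembly-free comparison reduces, at its simplest, to similarity between samples' shared k-mer sets. A toy sketch (k=21 is a common choice for genomic k-mers; the short sequences in the test are illustrative only):

```python
def kmer_set(seq, k=21):
    # All overlapping k-mers of a sequence; no annotation or assembly
    # is needed, only the raw reads.
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b):
    # Shared k-mer similarity between two samples' k-mer sets; a matrix
    # of such pairwise similarities can feed a sample network analysis.
    return len(a & b) / len(a | b)
```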

  4. Inverse analyses of effective diffusion parameters relevant for a two-phase moisture model of cementitious materials

    DEFF Research Database (Denmark)

    Addassi, Mouadh; Johannesson, Björn; Wadsö, Lars

    2018-01-01

Here we present an inverse analysis approach to determining the two-phase moisture transport properties relevant to concrete durability modeling. The proposed moisture transport model was based on a continuum approach with two truly separate equations for the liquid and gas phase being connected...

  5. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input

  6. Experimental and modeling analyses for interactions between graphene oxide and quartz sand.

    Science.gov (United States)

    Kang, Jin-Kyu; Park, Jeong-Ann; Yi, In-Geol; Kim, Song-Bae

    2017-03-21

The aim of this study was to quantify the interactions between graphene oxide (GO) and quartz sand by conducting experimental and modeling analyses. The results show that both GO and quartz sand were negatively charged in the presence of 0-50 mM NaCl and 5 mM CaCl2 (GO = -43.10 to -17.60 mV, quartz sand = -40.97 to -8.44 mV). In the Derjaguin-Landau-Verwey-Overbeek (DLVO) energy profiles, the adhesion of GO to quartz sand becomes more favorable with increasing NaCl concentration from 0 to 10 mM because the interaction energy profile was compressed and the primary maximum energy barrier was lowered. At 50 mM NaCl and 5 mM CaCl2, the primary maximum energy barrier even disappeared, resulting in highly favorable conditions for GO retention to quartz sand. In the Maxwell model analysis, the probability of GO adhesion to quartz sand (α_m) increased from 2.46 × 10^-4 to 9.98 × 10^-1 at ionic strengths of 0-10 mM NaCl. In the column experiments (column length = 10 cm, inner diameter = 2.5 cm, flow rate = 0.5 mL min^-1), the mass removal (Mr) of GO in quartz sand increased from 5.4% to 97.8% as the NaCl concentration was increased from 0 to 50 mM, indicating that the mobility of GO was high in low ionic strength solutions and decreased with increasing ionic strength. The Mr value of GO at 5 mM CaCl2 was 100%, demonstrating that Ca^2+ had a much stronger effect than Na^+ on the mobility of GO. In addition, the mobility of GO was lower than that of chloride (Mr = 1.4%) but far higher than that of multi-walled carbon nanotubes (Mr = 87.0%) in deionized water. In aluminum oxide-coated sand, the Mr value of GO was 98.1% at 0 mM NaCl, revealing that the mobility of GO was reduced in the presence of metal oxides. The transport model analysis indicates that the value of the dimensionless attachment rate coefficient (D_a) increased from 0.11 to 4.47 as the NaCl concentration was increased from 0 to 50 mM. In the colloid filtration model analysis, the

  7. A simple beam model to analyse the durability of adhesively bonded tile floorings in presence of shrinkage

    Directory of Open Access Journals (Sweden)

    S. de Miranda

    2014-07-01

Full Text Available A simple beam model for the evaluation of tile debonding due to substrate shrinkage is presented. The tile-adhesive-substrate package is modeled as an Euler-Bernoulli beam lying on a two-layer elastic foundation. An effective discrete model for inter-tile grouting is introduced with the aim of modelling workmanship defects due to partially filled grouting. The model is validated using the results of a 2D FE model. Different defect configurations and adhesive typologies are analysed, focusing attention on the prediction of normal stresses in the adhesive layer under the assumption of Mode I failure of the adhesive.

  8. Environmental regulation impacts on international trade: aggregate and sectoral analyses with a bilateral trade flow model

    NARCIS (Netherlands)

    van Beers, C.; van den Bergh, J.C.J.M.

    2003-01-01

    An important barrier to the implementation of strict environmental regulations is that they are perceived to negatively affect a country's competitiveness, visible through changes in international trade. Whereas theoretical analyses of trade and the environment indicate that relatively strict

  9. A Conceptual Model for Analysing Management Development in the UK Hospitality Industry

    Science.gov (United States)

    Watson, Sandra

    2007-01-01

    This paper presents a conceptual, contingent model of management development. It explains the nature of the UK hospitality industry and its potential influence on MD practices, prior to exploring dimensions and relationships in the model. The embryonic model is presented as a model that can enhance our understanding of the complexities of the…

  10. Mechanical properties of structural materials in HLM

    International Nuclear Information System (INIS)

    Moisa, A. E.; Valeca, S.; Pitigoi, V.

    2016-01-01

The Generation IV nuclear systems are currently in the design stage, which is one reason candidate materials are now being tested. The purpose of this paper is to present tensile tests of candidate materials. The tests considered are: tests at a temperature of 500°C in air on a Walter + Bie mechanical testing machine, using the machine's furnace, and tests in a molten lead environment on an Instron testing machine equipped with an attached lead testing device. The mechanical parameters tensile strength and yield strength will also be determined for 316L steel, a candidate material for the vessel of an LFR-type reactor, and microstructural analysis of the fracture surface will be performed by electron microscopy. The paper presents the main components and operating procedure of the testing system, and the results of tensile tests in molten lead. (authors)
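Tensile strength is read off an engineering stress-strain curve as the peak stress. A minimal sketch of that reduction from raw force-extension data; the helper and the numbers in its test are illustrative, not tied to the Walter + Bie or Instron machine software:

```python
def tensile_properties(forces_n, extensions_mm, area_mm2, gauge_mm):
    # Engineering stress (MPa, i.e. N/mm^2) and engineering strain from
    # raw tensile-test data; the ultimate tensile strength is the peak
    # stress reached before fracture.
    stresses = [f / area_mm2 for f in forces_n]
    strains = [e / gauge_mm for e in extensions_mm]
    return max(stresses), strains
```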

  11. From intermediate to final behavioral endpoints; Modeling cognitions in (cost-)effectiveness analyses in health promotion

    NARCIS (Netherlands)

    Prenger, Hendrikje Cornelia

    2012-01-01

    Cost-effectiveness analyses (CEAs) are considered an increasingly important tool in health promotion and psychology. In health promotion adequate effectiveness data of innovative interventions are often lacking. In case of many promising interventions the available data are inadequate for CEAs due

  12. Developing computational model-based diagnostics to analyse clinical chemistry data

    NARCIS (Netherlands)

    Schalkwijk, D.B. van; Bochove, K. van; Ommen, B. van; Freidig, A.P.; Someren, E.P. van; Greef, J. van der; Graaf, A.A. de

    2010-01-01

This article provides methodological and technical considerations to researchers starting to develop computational model-based diagnostics using clinical chemistry data. These models are of increasing importance, since novel metabolomics and proteomics measuring technologies are able to produce large

  13. Bio-economic farm modelling to analyse agricultural land productivity in Rwanda

    NARCIS (Netherlands)

    Bidogeza, J.C.

    2011-01-01

    Keywords: Rwanda; farm household typology; sustainable technology adoption; multivariate analysis;
    land degradation; food security; bioeconomic model; crop simulation models; organic fertiliser; inorganic fertiliser; policy incentives

    In Rwanda, land degradation contributes to the

  15. Reference model for measuring and analysing costs – particularly in business informatics

    OpenAIRE

    Milos Maryska

    2010-01-01

This paper is devoted to problems of managing the cost efficiency of business informatics with Business Intelligence (BI) assistance. It defines the basic critical points that must be taken into account when creating models for managing the cost efficiency of business informatics. It proposes a new model for managing cost efficiency; this model also includes definitions of dimensions and indicators for measuring that cost efficiency. The model takes into account requirements that pose cl...

  16. Quantifying and Analysing Neighbourhood Characteristics Supporting Urban Land-Use Modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2009-01-01

    Land-use modelling and spatial scenarios have gained increased attention as a means to meet the challenge of reducing uncertainty in the spatial planning and decision-making. Several organisations have developed software for land-use modelling. Many of the recent modelling efforts incorporate cel...

  17. Phenology in Germany in the 20th century : methods, analyses and models

    Science.gov (United States)

    Schaber, Jörg

    2002-07-01

locally combined time series, increasing the available data for model development. Apart from the protocolling errors analyzed, microclimatic site influences, genetic variation and the observers themselves were identified as sources of uncertainty in phenological observational data. It was concluded that 99% of all phenological observations at a certain site will vary within approximately 24 days around the parametric mean. This supports the proposed 30-day rule for detecting outliers. New phenology models that predict local BB from daily temperature time series were developed. These models were based on simple interactions between inhibitory and promotory agents that are assumed to control the developmental status of a plant. Apart from the fact that, in general, the new models fitted and predicted the observations better than classical models, the main modeling results were: - The bias of the classical models, i.e. overestimation of early observations and underestimation of late observations, could be reduced but not completely removed. - The different favored model structures for each species indicated that photoperiod played a more dominant role for the late spring phases than for the early spring phases. - Chilling plays only a subordinate role for spring BB compared to the temperatures directly preceding BB. The length of the growing season plays a central role in the interannual variation of carbon storage in terrestrial ecosystems. Analysis of observational data has shown that the growing season has lengthened in the northern latitudes over recent decades. This phenomenon has often been discussed in connection with global warming, since phenology is influenced by temperature. The analysis of plant phenology in southern Germany in the 20th century showed: - The strong advancement of the spring phases in the decade before 1999 was not a singular event in the 20th century. Similar trends had already occurred in earlier decades.

  18. Pathway models for analysing and managing the introduction of alien plant pests—an overview and categorization

    Science.gov (United States)

    J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and effectiveness of management options....

  19. Economical analyses of build-operate-transfer model in establishing alternative power plants

    International Nuclear Information System (INIS)

    Yumurtaci, Zehra; Erdem, Hasan Hueseyin

    2007-01-01

The most widely employed method to meet the increasing electricity demand is building new power plants. The most important issue in building new power plants is finding financial funds. Various models are employed, especially in developing countries, to overcome this problem and find a financing source. One of these models is the build-operate-transfer (BOT) model. In this model, the investor raises all the funds for mandatory expenses and provides financing, builds the plant and, after a certain plant operation period, transfers the plant to the national power organization. The objective of this model is to reduce the burden of power plants on the state budget. The most important issue in the BOT model is the dependence of the unit electricity cost on the transfer period. In this study, a model giving the unit electricity cost as a function of the transfer period for plants established under the BOT model is discussed. Unit electricity investment cost and unit electricity cost in relation to transfer period have been determined for several plant types. Furthermore, the change in unit electricity cost with load factor, which is one of the parameters affecting annual electricity production, has been determined, and the results have been analyzed. This method can be employed for comparing the production costs of different plants that are planned to be established according to the BOT model, or to determine the appropriateness of the BOT model.
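The dependence of unit electricity cost on the transfer period can be sketched with a standard levelized-cost calculation: the investment is annualized with the capital recovery factor and spread over the annual generation. This is a simplified illustration under assumed inputs, not the paper's exact model:

```python
def unit_electricity_cost(invest_cost, om_cost_per_kwh, capacity_kw,
                          load_factor, discount_rate, transfer_years):
    # Capital recovery factor: CRF = r(1+r)^n / ((1+r)^n - 1).
    # Annualizes the investment over the BOT transfer period of n years.
    growth = (1.0 + discount_rate) ** transfer_years
    crf = discount_rate * growth / (growth - 1.0)
    # Annual generation depends on the load factor (8760 h per year).
    annual_energy_kwh = capacity_kw * load_factor * 8760.0
    return crf * invest_cost / annual_energy_kwh + om_cost_per_kwh
```

A shorter transfer period concentrates the capital recovery into fewer years, so the unit cost rises as the transfer period shrinks, which is the key trade-off the abstract describes.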

  20. A Shell/3D Modeling Technique for the Analyses of Delaminated Composite Laminates

    Science.gov (United States)

    Krueger, Ronald; OBrien, T. Kevin

    2001-01-01

    A shell/3D modeling technique was developed for which a local three-dimensional solid finite element model is used only in the immediate vicinity of the delamination front. The goal was to combine the accuracy of the full three-dimensional solution with the computational efficiency of a plate or shell finite element model. Multi-point constraints provided a kinematically compatible interface between the local three-dimensional model and the global structural model which has been meshed with plate or shell finite elements. Double Cantilever Beam (DCB), End Notched Flexure (ENF), and Single Leg Bending (SLB) specimens were modeled using the shell/3D technique to study the feasibility for pure mode I (DCB), mode II (ENF) and mixed mode I/II (SLB) cases. Mixed mode strain energy release rate distributions were computed across the width of the specimens using the virtual crack closure technique. Specimens with a unidirectional layup and with a multidirectional layup where the delamination is located between two non-zero degree plies were simulated. For a local three-dimensional model, extending to a minimum of about three specimen thicknesses on either side of the delamination front, the results were in good agreement with mixed mode strain energy release rates obtained from computations where the entire specimen had been modeled with solid elements. For large built-up composite structures modeled with plate elements, the shell/3D modeling technique offers a great potential for reducing the model size, since only a relatively small section in the vicinity of the delamination front needs to be modeled with solid elements.
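The mode I contribution of the virtual crack closure technique can be sketched as the standard one-step VCCT formula; this is an illustrative helper with the nodal quantities as inputs, not the authors' finite element code:

```python
def vcct_mode_I(nodal_force, opening_displacement, delta_a, width_b):
    # One-step VCCT at a delamination-front node:
    #   G_I = F * du / (2 * delta_a * b)
    # where F is the out-of-plane nodal force at the front, du the
    # opening displacement of the node pair just behind the front,
    # delta_a the element length, and b the width assigned to the node.
    return nodal_force * opening_displacement / (2.0 * delta_a * width_b)
```

Summing the analogous mode II and III terms gives the total strain energy release rate distribution across the specimen width.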

  1. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care

    Science.gov (United States)

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-01-01

Objectives: To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting: 2 university hospitals in south-eastern Spain from April to October 2013. Design: A correlational descriptive study. Participants: A convenience sample of 406 women participated in the study, 204 from the biomedical model and 202 from the humanised model. Results: The differences in obstetrical results were (biomedical model/humanised model): onset of labour (spontaneous 66/137, augmentation 70/1, p=0.0005), pain relief (epidural 172/132, no pain relief 9/40, p=0.0005), mode of delivery (normal vaginal 140/165, instrumental 48/23, p=0.004), length of labour (0–4 hours 69/93, >4 hours 133/108, p=0.011), condition of perineum (intact perineum or tear 94/178, episiotomy 100/24, p=0.0005). The total questionnaire score (100) gave a mean (M) of 78.33 and SD of 8.46 in the biomedical model of care and an M of 82.01 and SD of 7.97 in the humanised model of care (p=0.0005). In the analysis of the results per item, statistical differences were found in 8 of the 9 subscales. The highest scores were reached in the humanised model of maternity care. Conclusions: The humanised model of maternity care offers better obstetrical outcomes and women's satisfaction scores during the labour, birth and immediate postnatal period than does the biomedical model. PMID:27566632
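The mode-of-delivery comparison (normal vaginal vs instrumental across the two models) can be checked with a Pearson chi-square on the 2x2 counts from the abstract. A stdlib sketch without the Yates continuity correction, so the resulting p-value need not match whatever procedure produced the reported p=0.004:

```python
def chi_square_2x2(table):
    # Pearson chi-square for a 2x2 contingency table [[a, b], [c, d]].
    (a, b), (c, d) = table
    n = a + b + c + d
    obs = [[a, b], [c, d]]
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    return sum((obs[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))

# Mode of delivery from the abstract: normal vaginal vs instrumental,
# biomedical model (140/48) vs humanised model (165/23).
chi2 = chi_square_2x2([[140, 48], [165, 23]])
```

The statistic comfortably exceeds the 3.84 critical value for 1 degree of freedom at the 0.05 level, consistent with the reported significant difference.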

  2. Integrated freight network model : a GIS-based platform for transportation analyses.

    Science.gov (United States)

    2015-01-01

The models currently used to examine the behavior of transportation systems are usually mode-specific. That is, they focus on a single mode (i.e. railways, highways, or waterways). The lack of integration limits the usefulness of models to analyze the...

  3. Evidence to Support the Componential Model of Creativity: Secondary Analyses of Three Studies.

    Science.gov (United States)

    Conti, Regina; And Others

    1996-01-01

    Three studies with overlapping participant populations evaluated Amabile's componential model of creativity, which postulates three major creativity components: (1) skills specific to the task domain, (2) general (cross-domain) creativity-relevant skills, and (3) task motivation. Findings of the three studies support Amabile's model. (DB)

  4. Analysing empowerment-oriented email consultation for parents : development of the Guiding the Empowerment Process model

    NARCIS (Netherlands)

    dr. Christa C.C. Nieuwboer

    2014-01-01

    Online consultation is increasingly offered by parenting practitioners, but it is not clear if it is feasible to provide empowerment-oriented support in a single session email consultation. Based on the empowerment theory, we developed the Guiding the Empowerment Process model (GEP model) to

  5. A laboratory-calibrated model of coho salmon growth with utility for ecological analyses

    Science.gov (United States)

    Manhard, Christopher V.; Som, Nicholas A.; Perry, Russell W.; Plumb, John M.

    2018-01-01

    We conducted a meta-analysis of laboratory- and hatchery-based growth data to estimate broadly applicable parameters of mass- and temperature-dependent growth of juvenile coho salmon (Oncorhynchus kisutch). Following studies of other salmonid species, we incorporated the Ratkowsky growth model into an allometric model and fit this model to growth observations from eight studies spanning ten different populations. To account for changes in growth patterns with food availability, we reparameterized the Ratkowsky model to scale several of its parameters relative to ration. The resulting model was robust across a wide range of ration allocations and experimental conditions, accounting for 99% of the variation in final body mass. We fit this model to growth data from coho salmon inhabiting tributaries and constructed ponds in the Klamath Basin by estimating habitat-specific indices of food availability. The model produced evidence that constructed ponds provided higher food availability than natural tributaries. Because of their simplicity (only mass and temperature are required as inputs) and robustness, ration-varying Ratkowsky models have utility as an ecological tool for capturing growth in freshwater fish populations.
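The Ratkowsky (square-root) model gives the growth rate a temperature optimum between lower and upper limits. A minimal sketch of the standard four-parameter form; the parameter values in the test are illustrative, not the paper's fitted coho salmon values, and the paper's allometric mass and ration scaling is omitted:

```python
import math

def ratkowsky_rate(T, b, T_min, c, T_max):
    # Ratkowsky model: sqrt(rate) = b*(T - T_min)*(1 - exp(c*(T - T_max))),
    # valid for T_min <= T <= T_max; clamp at zero outside the envelope
    # and square to recover the rate itself.
    s = b * (T - T_min) * (1.0 - math.exp(c * (T - T_max)))
    return max(s, 0.0) ** 2
```

With plausible parameters the rate is low near both thermal limits and peaks in between, which is the qualitative shape the growth model relies on.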

  6. Solving scheduling problems by untimed model checking. The clinical chemical analyser case study

    NARCIS (Netherlands)

    Margaria, T.; Wijs, Anton J.; Massink, M.; van de Pol, Jan Cornelis; Bortnik, Elena M.

    2009-01-01

    In this article, we show how scheduling problems can be modelled in untimed process algebra, by using special tick actions. A minimal-cost trace leading to a particular action, is one that minimises the number of tick steps. As a result, we can use any (timed or untimed) model checking tool to find
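Minimising the number of tick steps on a trace is a shortest-path search in which tick actions cost 1 and all other actions cost 0. A toy 0-1 BFS sketch over an explicit labelled transition system; the state and action names are illustrative, and real model checkers explore the state space symbolically or on the fly:

```python
from collections import deque

def min_ticks(start, goal, transitions):
    # 0-1 BFS: tick edges cost 1, all other actions cost 0, so a deque
    # (append for cost-1 edges, appendleft for cost-0 edges) pops states
    # in nondecreasing tick count. `transitions` maps
    # state -> list of (action, next_state).
    best = {start: 0}
    dq = deque([start])
    while dq:
        s = dq.popleft()
        if s == goal:
            return best[s]
        for action, nxt in transitions.get(s, []):
            cost = best[s] + (1 if action == "tick" else 0)
            if cost < best.get(nxt, float("inf")):
                best[nxt] = cost
                if action == "tick":
                    dq.append(nxt)
                else:
                    dq.appendleft(nxt)
    return None
```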

  7. Stochastic Spatio-Temporal Models for Analysing NDVI Distribution of GIMMS NDVI3g Images

    Directory of Open Access Journals (Sweden)

    Ana F. Militino

    2017-01-01

Full Text Available The normalized difference vegetation index (NDVI) is an important indicator for evaluating vegetation change, monitoring land surface fluxes or predicting crop models. Due to the great availability of images provided by different satellites in recent years, much attention has been devoted to testing trend changes with time series of individual NDVI pixels. However, the spatial dependence inherent in these data is usually lost unless global scales are analyzed. In this paper, we propose incorporating both the spatial and the temporal dependence among pixels using a stochastic spatio-temporal model for estimating the NDVI distribution thoroughly. The stochastic model is a state-space model that uses meteorological data of the Climatic Research Unit (CRU) TS3.10 as auxiliary information. The model will be estimated with the Expectation-Maximization (EM) algorithm. The result is a set of smoothed images providing an overall analysis of the NDVI distribution across space and time, where fluctuations generated by atmospheric disturbances, fire events, land-use/cover changes or engineering problems from image capture are treated as random fluctuations. The illustration is carried out with the third generation of NDVI images, termed NDVI3g, of the Global Inventory Modeling and Mapping Studies (GIMMS) in continental Spain. These data are taken in bimonthly periods from January 2011 to December 2013, but the model can be applied to many other variables, countries or regions with different resolutions.
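Treating atmospheric disturbances and capture artifacts as random fluctuations around a smoothly evolving state is the essence of the state-space approach. A minimal per-pixel sketch of a local-level Kalman filter; this is a univariate simplification of the paper's spatio-temporal model, and the noise variances and missing-data handling are illustrative assumptions:

```python
def local_level_filter(y, q, r, m0=0.0, p0=1e6):
    # Kalman filter for a local-level state-space model:
    #   state:       x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
    #   observation: y_t = x_t + v_t,      v_t ~ N(0, r)
    # Observations given as None are skipped, which is how a cloud- or
    # fire-affected pixel value can be carried through the series.
    m, p, out = m0, p0, []
    for obs in y:
        p += q                   # predict: variance grows by q
        if obs is not None:      # update only when a value exists
            k = p / (p + r)      # Kalman gain
            m += k * (obs - m)
            p *= (1.0 - k)
        out.append(m)
    return out
```

In a full treatment the q and r variances would be estimated with the EM algorithm rather than fixed, and a backward smoothing pass would produce the smoothed images.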

  8. Factor analyses of the Hospital Anxiety and Depression Scale: a Bayesian structural equation modeling approach.

    Science.gov (United States)

    Fong, Ted Chun Tat; Ho, Rainbow Tin Hung

    2013-12-01

    The latent structure of the Hospital Anxiety and Depression Scale (HADS) has caused inconsistent results in the literature. The HADS is frequently analyzed via maximum likelihood confirmatory factor analysis (ML-CFA). However, the overly restrictive assumption of exact zero cross-loadings and residual correlations in ML-CFA can lead to poor model fits and distorted factor structures. This study applied Bayesian structural equation modeling (BSEM) to evaluate the latent structure of the HADS. Three a priori models, the two-factor, three-factor, and bifactor models, were investigated in a Chinese community sample (N = 312) and clinical sample (N = 198) using ML-CFA and BSEM. BSEM specified approximate zero cross-loadings and residual correlations through the use of zero-mean, small-variance informative priors. The model comparison was based on the Bayesian information criterion (BIC). Using ML-CFA, none of the three models provided an adequate fit for either sample. The BSEM two-factor model with approximate zero cross-loadings and residual correlations fitted both samples well with the lowest BIC of the three models and displayed a simple and parsimonious factor-loading pattern. The study demonstrated that the two-factor structure fitted the HADS well, suggesting its usefulness in assessing the symptoms of anxiety and depression in clinical practice. BSEM is a sophisticated and flexible statistical technique that better reflects substantive theories and locates the source of model misfit. Future use of BSEM is recommended to evaluate the latent structure of other psychological instruments.
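Model comparison by BIC needs only each model's maximised log-likelihood, parameter count and sample size. A minimal sketch; the log-likelihood values in the test are hypothetical, not the study's results:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    # Bayesian information criterion: k*ln(n) - 2*lnL.
    # Lower values indicate a better trade-off between fit and
    # complexity, which is how the BSEM models were ranked.
    return n_params * math.log(n_obs) - 2.0 * log_likelihood
```

At equal fit, the model with fewer free parameters wins; extra parameters must buy enough log-likelihood to offset the ln(n) penalty each one carries.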

  9. Analyses of precooling parameters for a bottom flooding ECCS rewetting velocity model

    International Nuclear Information System (INIS)

    Chun, M.H.

    1981-01-01

    An extension work of the previous paper on the rewetting velocity model is presented. Application of the rewetting velocity model presented elsewhere requires a priori values of phi. In the absence of phi values, film boiling heat transfer coefficient (hsub(df)) and fog-film length (1) data are needed. To provide these informations, a modified Bromley's correlation is first derived and used to obtain hsub(df) values at higher pressure conditions. In addition, the analysis of the precooling parameters, such as phi and 1 is further extended using much more expensive PWR FLECHT data. Thus, the applicable range of the rewetting velocity model is further expanded in this work. (author)

  10. Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time

    Science.gov (United States)

    Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan

    2012-01-01

Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission-critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way of performing both upstream root cause analysis (RCA) and downstream effect or impact prediction. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
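Upstream root cause analysis over a causal directed graph is an ancestor traversal from the observed event. A generic sketch of the idea; the graph and node names are illustrative, not NASA's ISHM fault models:

```python
def root_causes(graph, event):
    # `graph` maps node -> list of downstream effects. Invert the edges,
    # then walk upstream from the observed event, collecting every
    # ancestor as a candidate root cause.
    parents = {}
    for node, effects in graph.items():
        for e in effects:
            parents.setdefault(e, []).append(node)
    seen, stack = set(), [event]
    while stack:
        n = stack.pop()
        for p in parents.get(n, []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen
```

Downstream impact analysis is the same traversal run in the forward direction over the original edges.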

  11. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section...... transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach...... on differences of statistical measures in section and the same measures in between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling...

  12. Wave modelling for the North Indian Ocean using MSMR analysed winds

    Digital Repository Service at National Institute of Oceanography (India)

    Vethamony, P.; Sudheesh, K.; Rupali, S.P.; Babu, M.T.; Jayakumar, S.; Saran, A.K.; Basu, S.K.; Kumar, R.; Sarkar, A.

    NCMRWF (National Centre for Medium Range Weather Forecast) winds assimilated with MSMR (Multi-channel Scanning Microwave Radiometer) winds are used as input to MIKE21 Offshore Spectral Wave model (OSW) which takes into account wind induced wave...

  13. Reference model for measuring and analysing costs – particularly in business informatics

    Directory of Open Access Journals (Sweden)

    Milos Maryska

    2010-04-01

Full Text Available This paper is devoted to managing the cost efficiency of business informatics with Business Intelligence (BI) assistance. It defines the basic critical points that must be taken into account when creating models for managing the cost efficiency of business informatics. It proposes a new model for managing cost efficiency, including definitions of the dimensions and indicators used to measure it. The model takes into account the requirements that accounting places on cost-efficiency management as well as requirements gathered in consultancy with company managers. It also takes into account the requirements of methodologies for managing business informatics and of methods and processes for evaluating and measuring it. These methodologies, methods and processes are transformed into procedures appropriate for measuring and evaluating business informatics. The model is intended for monitoring the current situation and the evolution of the cost efficiency of business informatics, and it can be used for making decisions about the convenience of outsourcing business informatics. Some examples from the presentation level of the model are given, in the form of tables and dashboards.

  14. Analysing stratified medicine business models and value systems: innovation-regulation interactions.

    Science.gov (United States)

    Mittra, James; Tait, Joyce

    2012-09-15

    Stratified medicine offers both opportunities and challenges to the conventional business models that drive pharmaceutical R&D. Given the increasingly unsustainable blockbuster model of drug development, due in part to maturing product pipelines, alongside increasing demands from regulators, healthcare providers and patients for higher standards of safety, efficacy and cost-effectiveness of new therapies, stratified medicine promises a range of benefits to pharmaceutical and diagnostic firms as well as healthcare providers and patients. However, the transition from 'blockbusters' to what might now be termed 'niche-busters' will require the adoption of new, innovative business models, the identification of different and perhaps novel types of value along the R&D pathway, and a smarter approach to regulation to facilitate innovation in this area. In this paper we apply the Innogen Centre's interdisciplinary ALSIS methodology, which we have developed for the analysis of life science innovation systems in contexts where the value creation process is lengthy, expensive and highly uncertain, to this emerging field of stratified medicine. In doing so, we consider the complex collaboration, timing, coordination and regulatory interactions that shape business models, value chains and value systems relevant to stratified medicine. More specifically, we explore in some depth two convergence models for co-development of a therapy and diagnostic before market authorisation, highlighting the regulatory requirements and policy initiatives within the broader value system environment that have a key role in determining the probable success and sustainability of these models. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes

    Directory of Open Access Journals (Sweden)

    Marc Carreras

    2016-08-01

Full Text Available Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia, Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were adjusted and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals.

  16. Thermo-mechanical analyses and model validation in the HAW test field. Final report

    International Nuclear Information System (INIS)

    Heijdra, J.J.; Broerse, J.; Prij, J.

    1995-01-01

An overview is given of the thermo-mechanical analysis work done for the design of the High Active Waste experiment and for the validation of the models used, through comparison with experiments. A brief treatise is given on the problems of validating models used for the prediction of physical behaviour which cannot be determined with experiments. The analysis work encompasses investigations into the initial state of stress in the field, the constitutive relations, the temperature rise, and the pressure on the liner tubes inserted in the field to guarantee the retrievability of the radioactive sources used for the experiment. The measurements of temperatures, deformations, and stresses are described and an evaluation is given of the comparison of measured and calculated data. An attempt has been made to qualify or even quantify the discrepancies, if any, between measurements and calculations. It was found that the model for the temperature calculations performed adequately. For the stresses the general tendency was good; however, large discrepancies exist, mainly due to inaccuracies in the measurements. For the deformations, again, the general tendency of the model predictions was in accordance with the measurements. However, from the evaluation it appears that, in spite of the efforts to estimate the correct initial rock pressure at the location of the experiment, this pressure has been underestimated. The evaluation has contributed to a considerable increase in confidence in the models and gives no reason to question the constitutive model for rock salt. However, due to the quality of the stress measurements and the relatively short duration of the experiments, no quantitatively firm support for the constitutive model was obtained. Collections of graphs giving the measured and calculated data are attached as appendices. (orig.)

  17. Comprehensive analyses of ventricular myocyte models identify targets exhibiting favorable rate dependence.

    Directory of Open Access Journals (Sweden)

    Megan A Cummins

    2014-03-01

Full Text Available Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of
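The forward/reverse classification used in this analysis can be expressed as a small helper; the function name and the example APD values are assumptions for illustration only:

```python
def rate_dependence(apd_slow, apd_fast, base_slow, base_fast):
    """Classify a perturbation from its action potential duration (APD, ms)
    at a slow (e.g. 0.2 Hz) and a fast (e.g. 2 Hz) pacing rate, relative
    to baseline APD values at the same rates."""
    d_slow = apd_slow - base_slow
    d_fast = apd_fast - base_fast
    if d_fast > d_slow > 0:
        return "forward"   # prolongation grows with rate: the desirable FRD case
    if d_slow > d_fast > 0:
        return "reverse"   # prolongation largest at slow rates: problematic
    return "mixed/none"
```

A drug that adds 30 ms at 0.2 Hz but only 20 ms at 2 Hz is reverse rate dependent; one that adds more at the fast rate is the elusive FRD case the study searches for.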

  18. Scenario sensitivity analyses performed on the PRESTO-EPA LLW risk assessment models

    International Nuclear Information System (INIS)

    Bandrowski, M.S.

    1988-01-01

    The US Environmental Protection Agency (EPA) is currently developing standards for the land disposal of low-level radioactive waste. As part of the standard development, EPA has performed risk assessments using the PRESTO-EPA codes. A program of sensitivity analysis was conducted on the PRESTO-EPA codes, consisting of single parameter sensitivity analysis and scenario sensitivity analysis. The results of the single parameter sensitivity analysis were discussed at the 1987 DOE LLW Management Conference. Specific scenario sensitivity analyses have been completed and evaluated. Scenario assumptions that were analyzed include: site location, disposal method, form of waste, waste volume, analysis time horizon, critical radionuclides, use of buffer zones, and global health effects

  19. Human Atrial Cell Models to Analyse Haemodialysis-Related Effects on Cardiac Electrophysiology: Work in Progress

    Directory of Open Access Journals (Sweden)

    Elisa Passini

    2014-01-01

Full Text Available During haemodialysis (HD) sessions, patients undergo alterations in the extracellular environment, mostly concerning plasma electrolyte concentrations, pH, and volume, together with a modification of sympathovagal balance. All these changes affect cardiac electrophysiology, possibly leading to an increased arrhythmic risk. Computational modeling may help to investigate the impact of HD-related changes on atrial electrophysiology. However, many different human atrial action potential (AP) models are currently available, all validated only with the standard electrolyte concentrations used in experiments. Therefore, they may respond in different ways to the same environmental changes. After an overview of how the computational approach has been used in the past to investigate the effect of HD therapy on cardiac electrophysiology, the aim of this work has been to assess the current state of the art in human atrial AP models with respect to the HD context. All the published human atrial AP models have been considered and tested for electrolyte and volume changes and for different acetylcholine concentrations. Most of them proved to be reliable for single modifications, but all of them showed some drawbacks. Therefore, there is room for a new human atrial AP model, hopefully able to physiologically reproduce all the HD-related effects. At the moment, work is still in progress in this specific field.

  20. Modelling and simulation of complex sociotechnical systems: envisioning and analysing work environments

    Science.gov (United States)

    Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter

    2015-01-01

    Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227

  1. The Computational Fluid Dynamics Analyses on Hemodynamic Characteristics in Stenosed Arterial Models

    Directory of Open Access Journals (Sweden)

    Yue Zhou

    2018-01-01

Full Text Available Arterial stenosis plays an important role in the progression of thrombosis and stroke. In the present study, a standard axisymmetric tube model of the stenotic artery is introduced and the degree of stenosis η is evaluated by the area ratio of the blockage to the normal vessel. A normal case (η=0) and four stenotic cases of η=0.25, 0.5, 0.625, and 0.75 with a constant Reynolds number of 300 are simulated by computational fluid dynamics (CFD), respectively, with the Newtonian and Carreau models for comparison. Results show that for both models, the poststenotic separation vortex length increases exponentially with the growth of stenosis degree. However, the vortex length of the Carreau model is shorter than that of the Newtonian model. The artery narrowing accelerates blood flow, which causes high blood pressure and wall shear stress (WSS). The pressure drop of the η=0.75 case is nearly 8 times that of the normal value, while the WSS peak at the stenosis region of the η=0.75 case even reaches up to 15 times that of the normal value. The present conclusions are of generality and contribute to the understanding of the dynamic mechanisms of artery stenosis diseases.
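The shear-thinning behaviour contrasted with the Newtonian model above follows the Carreau law. The default parameter values below are commonly quoted fits for blood and are used here as illustrative assumptions, not the paper's exact settings:

```python
def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Carreau viscosity (Pa*s) at a given shear rate (1/s): blood behaves
    like a thick fluid at low shear and thins towards mu_inf at high shear."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)
```

At zero shear the model returns the plateau mu0; at very high shear rates it approaches mu_inf, which is why the Carreau and Newtonian predictions diverge most in the low-shear recirculation zone behind the stenosis.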

  2. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  3. Latent Variable Modelling and Item Response Theory Analyses in Marketing Research

    Directory of Open Access Journals (Sweden)

    Brzezińska Justyna

    2016-12-01

Full Text Available Item Response Theory (IRT) is a modern statistical method using latent variables designed to model the interaction between a subject's ability and the item-level stimuli (difficulty, guessing). Item responses are treated as the outcome (dependent) variables, and the examinee's ability and the items' characteristics are the latent predictor (independent) variables. IRT models the relationship between a respondent's trait (ability, attitude) and the pattern of item responses. Thus, the estimation of individual latent traits can differ even for two individuals with the same total scores. IRT scores can yield additional benefits, and this will be discussed in detail. In this paper, theory and application with R software, using packages designed for modelling IRT, will be presented.
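The item-level model described here can be written down directly. The three-parameter logistic form below, with illustrative parameter values, shows how difficulty, discrimination and guessing enter:

```python
import math

def irt_3pl(theta, a, b, c):
    """Three-parameter logistic IRT model: probability of a correct response
    given latent ability theta, discrimination a, difficulty b and
    guessing parameter c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

At theta = b the probability is halfway between the guessing floor c and 1, and two response patterns with the same total score can still yield different theta estimates because each item weighs in with its own a, b and c.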

  4. Analysing improvements to on-street public transport systems: a mesoscopic model approach

    DEFF Research Database (Denmark)

    Ingvardson, Jesper Bláfoss; Kornerup Jensen, Jonas; Nielsen, Otto Anker

    2017-01-01

    Light rail transit and bus rapid transit have shown to be efficient and cost-effective in improving public transport systems in cities around the world. As these systems comprise various elements, which can be tailored to any given setting, e.g. pre-board fare-collection, holding strategies...... a mesoscopic model which makes it possible to evaluate public transport operations in details, including dwell times, intelligent traffic signal timings and holding strategies while modelling impacts from other traffic using statistical distributional data thereby ensuring simplicity in use and fast...... and other advanced public transport systems (APTS), the attractiveness of such systems depends heavily on their implementation. In the early planning stage it is advantageous to deploy simple and transparent models to evaluate possible ways of implementation. For this purpose, the present study develops...

  5. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    Science.gov (United States)

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.
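The distinction drawn above between the random-effects mean and the predictive distribution can be made concrete with a standard DerSimonian-Laird sketch (a simplification of the network meta-regression models the paper uses; the data passed in below are hypothetical):

```python
import math

def random_effects_summary(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis: returns the pooled
    mean, the between-study variance tau^2, and an approximate 95%
    predictive interval for the effect in a new setting."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se_mu = math.sqrt(1.0 / sum(w_star))
    pred_sd = math.sqrt(tau2 + se_mu ** 2)              # predictive uncertainty
    return mu, tau2, (mu - 1.96 * pred_sd, mu + 1.96 * pred_sd)
```

The predictive interval combines tau^2 with the uncertainty in the mean, so under heterogeneity it is wider than a confidence interval for the random-effects mean; which summary feeds the CEA model depends on how the decision setting relates to the included studies.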

  6. Features and analyses of W7-X cryostat system FE model

    Energy Technology Data Exchange (ETDEWEB)

    Eeten, Paul van, E-mail: paul.van.eeten@ipp.mpg.de; Bräuer, Torsten; Bykov, Victor; Carls, Andre; Fellinger, Joris; Kallmeyer, J.P.

    2015-10-15

The Wendelstein 7-X stellarator is presently under construction at the Max-Planck-Institute for Plasma Physics in Greifswald with the goal to verify that a stellarator magnetic confinement concept is a viable option for a fusion power plant. The main components of the W7-X cryostat system are the plasma vessel (PV), outer vessel (OV), ports, thermal insulation, vessel supports and the machine base (MB). The main task of the cryostat system is to provide an insulating vacuum for the cryogenic magnet system while allowing external access to the PV through ports for diagnostic, supply and heating systems. The cryostat is subjected to different types of loads during assembly, maintenance and operation. This ranges from basic weight loads from all installed components to mechanical, vacuum and thermal loads. To predict the behavior of the cryostat in terms of deformations, stresses and support load distribution a finite element (FE) global model has been created called the Global Model of the Cryostat System (GMCS). A complete refurbishment of the GMCS has been done in the last 2 years to prepare the model for future applications. This involved a complete mesh update of the model, an improvement of many model features, an update of the applied operational loads and boundary conditions as well as the creation of automatic post processing procedures. Currently the GMCS is used to support several significant assembly and commissioning steps of W7-X that involve the cryostat system, e.g. the removal of temporary supports beneath the MB, transfer of the PV from temporary to the final supports and evacuation of the cryostat. In the upcoming months the model will be used for further support of the commissioning of W7-X which includes the first evacuation of the PV.

  7. Features and analyses of W7-X cryostat system FE model

    International Nuclear Information System (INIS)

    Eeten, Paul van; Bräuer, Torsten; Bykov, Victor; Carls, Andre; Fellinger, Joris; Kallmeyer, J.P.

    2015-01-01

The Wendelstein 7-X stellarator is presently under construction at the Max-Planck-Institute for Plasma Physics in Greifswald with the goal to verify that a stellarator magnetic confinement concept is a viable option for a fusion power plant. The main components of the W7-X cryostat system are the plasma vessel (PV), outer vessel (OV), ports, thermal insulation, vessel supports and the machine base (MB). The main task of the cryostat system is to provide an insulating vacuum for the cryogenic magnet system while allowing external access to the PV through ports for diagnostic, supply and heating systems. The cryostat is subjected to different types of loads during assembly, maintenance and operation. This ranges from basic weight loads from all installed components to mechanical, vacuum and thermal loads. To predict the behavior of the cryostat in terms of deformations, stresses and support load distribution a finite element (FE) global model has been created called the Global Model of the Cryostat System (GMCS). A complete refurbishment of the GMCS has been done in the last 2 years to prepare the model for future applications. This involved a complete mesh update of the model, an improvement of many model features, an update of the applied operational loads and boundary conditions as well as the creation of automatic post processing procedures. Currently the GMCS is used to support several significant assembly and commissioning steps of W7-X that involve the cryostat system, e.g. the removal of temporary supports beneath the MB, transfer of the PV from temporary to the final supports and evacuation of the cryostat. In the upcoming months the model will be used for further support of the commissioning of W7-X which includes the first evacuation of the PV.

  8. Modeling human papillomavirus and cervical cancer in the United States for analyses of screening and vaccination

    Directory of Open Access Journals (Sweden)

    Ortendahl Jesse

    2007-10-01

Full Text Available Abstract Background To provide quantitative insight into current U.S. policy choices for cervical cancer prevention, we developed a model of human papillomavirus (HPV) and cervical cancer, explicitly incorporating uncertainty about the natural history of disease. Methods We developed a stochastic microsimulation of cervical cancer that distinguishes different HPV types by their incidence, clearance, persistence, and progression. Input parameter sets were sampled randomly from uniform distributions, and simulations undertaken with each set. Through systematic reviews and formal data synthesis, we established multiple epidemiologic targets for model calibration, including age-specific prevalence of HPV by type, age-specific prevalence of cervical intraepithelial neoplasia (CIN), HPV type distribution within CIN and cancer, and age-specific cancer incidence. For each set of sampled input parameters, likelihood-based goodness-of-fit (GOF) scores were computed based on comparisons between model-predicted outcomes and calibration targets. Using 50 randomly resampled, good-fitting parameter sets, we assessed the external consistency and face validity of the model, comparing predicted screening outcomes to independent data. To illustrate the advantage of this approach in reflecting parameter uncertainty, we used the 50 sets to project the distribution of health outcomes in U.S. women under different cervical cancer prevention strategies. Results Approximately 200 good-fitting parameter sets were identified from 1,000,000 simulated sets. Modeled screening outcomes were externally consistent with results from multiple independent data sources. Based on 50 good-fitting parameter sets, the expected reductions in lifetime risk of cancer with annual or biennial screening were 76% (range across 50 sets: 69–82%) and 69% (60–77%), respectively. The reduction from vaccination alone was 75%, although it ranged from 60% to 88%, reflecting considerable parameter
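The sample-score-keep calibration loop described in the Methods can be sketched generically. The parameter names, prior bounds and the toy target below are placeholders, not the study's actual inputs:

```python
import random

def calibrate(simulate, gof, target, n_draws=1000, keep=50, seed=1):
    """Sample candidate parameter sets from uniform priors, score each
    against the calibration target with a goodness-of-fit function
    (lower = better fit), and keep the best-fitting sets."""
    rng = random.Random(seed)
    scored = []
    for _ in range(n_draws):
        params = {"progression": rng.uniform(0.0, 1.0),
                  "clearance": rng.uniform(0.0, 1.0)}
        scored.append((gof(simulate(params), target), params))
    scored.sort(key=lambda item: item[0])
    return [p for _, p in scored[:keep]]

# Toy stand-ins: the "model" just reports its progression parameter and
# the GOF score is the absolute distance from an observed target of 0.3.
good_sets = calibrate(lambda p: p["progression"], lambda out, t: abs(out - t), 0.3)
```

Projecting outcomes with each retained set, rather than a single best fit, is what lets the study report ranges such as 69-82% alongside point estimates.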

  9. GSEVM v.2: MCMC software to analyse genetically structured environmental variance models

    DEFF Research Database (Denmark)

    Ibáñez-Escriche, N; Garcia, M; Sorensen, D

    2010-01-01

    This note provides a description of software that allows to fit Bayesian genetically structured variance models using Markov chain Monte Carlo (MCMC). The gsevm v.2 program was written in Fortran 90. The DOS and Unix executable programs, the user's guide, and some example files are freely available...... for research purposes at http://www.bdporc.irta.es/estudis.jsp. The main feature of the program is to compute Monte Carlo estimates of marginal posterior distributions of parameters of interest. The program is quite flexible, allowing the user to fit a variety of linear models at the level of the mean...
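The core MCMC machinery such a program relies on can be illustrated with a minimal random-walk Metropolis sampler (a generic sketch, not the gsevm implementation):

```python
import math
import random

def metropolis(log_post, init, n_iter=5000, step=0.5, seed=7):
    """Random-walk Metropolis: draw from a target distribution given only
    its unnormalised log-posterior. Proposals are Gaussian perturbations;
    each is accepted with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    x, lp = init, log_post(init)
    chain = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop  # accept the move
        chain.append(x)
    return chain
```

Run against a standard-normal log-posterior, the chain's long-run mean and variance approach 0 and 1; Monte Carlo estimates of marginal posteriors, as computed by the program, are just summaries of such chains.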

  10. Studies of the Earth Energy Budget and Water Cycle Using Satellite Observations and Model Analyses

    Science.gov (United States)

    Campbell, G. G.; VonderHarr, T. H.; Randel, D. L.; Kidder, S. Q.

    1997-01-01

    During this research period we have utilized the ERBE data set in comparisons to surface properties and water vapor observations in the atmosphere. A relationship between cloudiness and surface temperature anomalies was found. This same relationship was found in a general circulation model, verifying the model. The attempt to construct a homogeneous time series from Nimbus 6, Nimbus 7 and ERBE data is not complete because we are still waiting for the ERBE reanalysis to be completed. It will be difficult to merge the Nimbus 6 data in because its observations occurred when the average weather was different than the other periods, so regression adjustments are not effective.

  11. Empirical analyses of a choice model that captures ordering among attribute values

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2017-01-01

    an alternative additionally because it has the highest price. In this paper, we specify a discrete choice model that takes into account the ordering of attribute values across alternatives. This model is used to investigate the effect of attribute value ordering in three case studies related to alternative-fuel...... vehicles, mode choice, and route choice. In our application to choices among alternative-fuel vehicles, we see that especially the price coefficient is sensitive to changes in ordering. The ordering effect is also found in the applications to mode and route choice data where both travel time and cost...

  12. Survival data analyses in ecotoxicology: critical effect concentrations, methods and models. What should we use?

    Science.gov (United States)

    Forfait-Dubuc, Carole; Charles, Sandrine; Billoir, Elise; Delignette-Muller, Marie Laure

    2012-05-01

In ecotoxicology, critical effect concentrations are the most common indicators to quantitatively assess risks for species exposed to contaminants. Three types of critical effect concentrations are classically used: lowest/no observed effect concentration (LOEC/NOEC), LC(x) (x% lethal concentration) and NEC (no effect concentration). In this article, for each of these three types of critical effect concentration, we compared the methods or models used for their estimation and proposed one as the most appropriate. We then compared these critical effect concentrations to each other. For that, we used nine survival data sets corresponding to D. magna exposure to nine different contaminants, for which the time-course of the response was monitored. Our results showed that: (i) LOEC/NOEC values at day 21 were method-dependent, and the Cochran-Armitage test with a step-down procedure appeared to be the most protective for the environment; (ii) all the concentration-response models we compared gave close values of LC50 at day 21; nevertheless, the Weibull model had the lowest global mean deviance; (iii) a simple threshold NEC model, both concentration and time dependent, more completely described the whole data set (i.e. all time points) and enabled a precise estimation of the NEC. We then compared the three critical effect concentrations and argued that the use of the NEC might be a good option for environmental risk assessment.
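The LC(x) family of critical effect concentrations can be illustrated with a two-parameter log-logistic concentration-response curve; this is an illustrative stand-in, since the paper compares several such models (of which Weibull fitted best):

```python
def log_logistic_survival(conc, lc50, slope):
    """Fraction of individuals surviving at concentration `conc` under a
    two-parameter log-logistic concentration-response model."""
    if conc <= 0.0:
        return 1.0
    return 1.0 / (1.0 + (conc / lc50) ** slope)

def lcx(x_percent, lc50, slope):
    """Invert the curve: the concentration lethal to x% of individuals."""
    p = x_percent / 100.0
    return lc50 * (p / (1.0 - p)) ** (1.0 / slope)
```

By construction lcx(50, ...) returns the LC50 itself, and survival evaluated at lcx(90, ...) is 0.1; a NEC model differs by adding an explicit no-effect threshold below which survival is unaffected.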

  13. Cyclodextrin–piroxicam inclusion complexes: analyses by mass spectrometry and molecular modelling

    Science.gov (United States)

    Gallagher, Richard T.; Ball, Christopher P.; Gatehouse, Deborah R.; Gates, Paul J.; Lobell, Mario; Derrick, Peter J.

    1997-11-01

    Mass spectrometry has been used to investigate the natures of non-covalent complexes formed between the anti-inflammatory drug piroxicam and [alpha]-, [beta]- and [gamma]-cyclodextrins. Energies of these complexes have been calculated by means of molecular modelling. There is a correlation between peak intensities in the mass spectra and the calculated energies.

  14. Quantitative analyses and modelling to support achievement of the 2020 goals for nine neglected tropical diseases

    NARCIS (Netherlands)

    T.D. Hollingsworth (T. Déirdre); E.R. Adams (Emily R.); R.M. Anderson (Roy); K. Atkins (Katherine); S. Bartsch (Sarah); M-G. Basáñez (María-Gloria); M. Behrend (Matthew); D.J. Blok (David); L.A.C. Chapman (Lloyd A. C.); L.E. Coffeng (Luc); O. Courtenay (Orin); R.E. Crump (Ron E.); S.J. de Vlas (Sake); A.P. Dobson (Andrew); L. Dyson (Louise); H. Farkas (Hajnal); A.P. Galvani (Alison P.); M. Gambhir (Manoj); D. Gurarie (David); M.A. Irvine (Michael A.); S. Jervis (Sarah); M.J. Keeling (Matt J.); L. Kelly-Hope (Louise); C. King (Charles); B.Y. Lee (Bruce Y.); E.A. le Rutte (Epke); T.M. Lietman (Thomas M.); M. Ndeffo-Mbah (Martial); G.F. Medley (Graham F.); E. Michael (Edwin); A. Pandey (Abhishek); J.K. Peterson (Jennifer K.); A. Pinsent (Amy); T.C. Porco (Travis C.); J.H. Richardus (Jan Hendrik); L. Reimer (Lisa); K.S. Rock (Kat S.); B.K. Singh (Brajendra K.); W.A. Stolk (Wilma); S. Swaminathan (Subramanian); S.J. Torr (Steve J.); J. Townsend (Jeffrey); J. Truscott (James); M. Walker (Martin); A. Zoueva (Alexandra)

    2015-01-01

    textabstractQuantitative analysis and mathematical models are useful tools in informing strategies to control or eliminate disease. Currently, there is an urgent need to develop these tools to inform policy to achieve the 2020 goals for neglected tropical diseases (NTDs). In this paper we give an

  15. Analyses of gust fronts by means of limited area NWP model outputs

    Czech Academy of Sciences Publication Activity Database

    Kašpar, Marek

    67-68, - (2003), s. 559-572 ISSN 0169-8095 R&D Projects: GA ČR GA205/00/1451 Institutional research plan: CEZ:AV0Z3042911 Keywords : gust front * limited area NWP model * output Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.012, year: 2003

  16. Using Latent Trait Measurement Models to Analyse Attitudinal Data: A Synthesis of Viewpoints.

    Science.gov (United States)

    Andrich, David

    A Rasch model for ordered response categories is derived and it is shown that it retains the key features of both the Thurstone and Likert approaches to studying attitude. Key features of the latter approaches are reviewed. Characteristics in common with the Thurstone approach are: statements are scaled with respect to their affective values;…
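
    The Rasch model for ordered response categories can be made concrete with a small sketch. This is a generic rating-scale (Rasch-family) formulation, not code from the article; the ability, item location, and threshold values below are illustrative.

```python
import math

def rating_scale_probs(theta, delta, taus):
    """Rating-scale model: probabilities of responses 0..m for a person
    with ability theta on an item with location delta and ordered
    threshold parameters taus (tau_1..tau_m)."""
    # Cumulative log-scores per category; category 0 has score 0.
    scores = [0.0]
    for tau in taus:
        scores.append(scores[-1] + (theta - delta - tau))
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Four ordered categories (0..3) with three illustrative thresholds.
probs = rating_scale_probs(theta=0.5, delta=0.0, taus=[-1.0, 0.0, 1.0])
```

    As ability grows relative to the item location, probability mass shifts toward the top category, which is the Likert-style behaviour the model formalizes.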

  17. Lightning NOx Statistics Derived by NASA Lightning Nitrogen Oxides Model (LNOM) Data Analyses

    Science.gov (United States)

    Koshak, William; Peterson, Harold

    2013-01-01

    What is the LNOM? The NASA Marshall Space Flight Center (MSFC) Lightning Nitrogen Oxides Model (LNOM) [Koshak et al., 2009, 2010, 2011; Koshak and Peterson 2011, 2013] analyzes VHF Lightning Mapping Array (LMA) and National Lightning Detection Network™ (NLDN) data to estimate the lightning nitrogen oxides (LNOx) produced by individual flashes. Figure 1 provides an overview of LNOM functionality. Benefits of LNOM: (1) Does away with unrealistic "vertical stick" lightning channel models for estimating LNOx; (2) Uses ground-based VHF data that maps out the true channel in space and time to < 100 m accuracy; (3) Therefore, the true channel segment height (ambient air density) is used to compute LNOx; (4) The true channel length is used! (typically tens of kilometers, since the channel has many branches and "wiggles"); (5) A distinction between ground and cloud flashes is made; (6) For ground flashes, the actual peak current from the NLDN is used to compute NOx from the lightning return stroke; (7) NOx is computed for several other lightning discharge processes (based on the theory of Cooray et al., 2009): (a) the hot core of stepped leaders and dart leaders, (b) the corona sheath of the stepped leader, (c) K-changes, (d) continuing currents, and (e) M-components; and (8) LNOM statistics (see later) can be used to parameterize LNOx production for regional air quality models (like CMAQ) and for global chemical transport models (like GEOS-Chem).

  18. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Science.gov (United States)

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…
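
    The regression approach to mediation can be sketched with the classic product-of-coefficients estimate: regress M on X to get a, regress Y on M controlling for X to get b, and take a*b as the indirect effect. The sketch below (an illustration with toy noise-free data, not the article's analysis) uses a small pure-Python least-squares solver.

```python
def ols(X_cols, y):
    """Least-squares coefficients for y on the columns in X_cols (plus an
    intercept), solved from the normal equations by Gaussian elimination."""
    cols = [[1.0] * len(y)] + X_cols            # prepend intercept column
    k = len(cols)
    A = [[sum(cols[i][n] * cols[j][n] for n in range(len(y)))
          for j in range(k)] for i in range(k)]
    b = [sum(cols[i][n] * y[n] for n in range(len(y))) for i in range(k)]
    for i in range(k):                          # elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):              # back substitution
        beta[i] = (b[i] - sum(A[i][c] * beta[c]
                              for c in range(i + 1, k))) / A[i][i]
    return beta

# Toy noise-free data built so X -> M -> Y with a = 2 and b = 3; the small
# perturbation d keeps M and X from being perfectly collinear.
X = [float(v) for v in range(1, 9)]
d = [0.5, -0.5, -0.5, 0.5, 0.5, -0.5, -0.5, 0.5]
M = [2.0 * x + e for x, e in zip(X, d)]         # a = 2
Y = [3.0 * m + 1.0 * x for m, x in zip(M, X)]   # b = 3, direct effect = 1
a = ols([X], M)[1]                              # slope of M on X
b_coef = ols([M, X], Y)[1]                      # slope of Y on M, given X
indirect = a * b_coef                           # mediated (indirect) effect
```

    The SEM approach estimates the same paths simultaneously; with noise-free data both recover the indirect effect a*b exactly.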

  19. Quantitative structure-activity relationship models of chemical transformations from matched pairs analyses.

    Science.gov (United States)

    Beck, Jeremy M; Springer, Clayton

    2014-04-28

    The concepts of activity cliffs and matched molecular pairs (MMP) are recent paradigms for analysis of data sets to identify structural changes that may be used to modify the potency of lead molecules in drug discovery projects. Analysis of MMPs was recently demonstrated as a feasible technique for quantitative structure-activity relationship (QSAR) modeling of prospective compounds. Within a small data set, however, the lack of matched pairs and the lack of knowledge about specific chemical transformations limit prospective applications. Here we present an alternative technique that determines pairwise descriptors for each matched pair and then uses a QSAR model to estimate the activity change associated with a chemical transformation. The descriptors effectively group similar transformations and incorporate information about the transformation and its local environment. Use of a transformation QSAR model allows one to estimate the activity change for novel transformations and therefore returns predictions for a larger fraction of test-set compounds. Application of the proposed methodology to four public data sets results in increased model performance over a benchmark random forest and over direct application of chemical transformations using QSAR-by-matched molecular pairs analysis (QSAR-by-MMPA).

  20. Neural Network-Based Model for Landslide Susceptibility and Soil Longitudinal Profile Analyses

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Barari, Amin; Choobbasti, A. J.

    2011-01-01

    trained with geotechnical data obtained from an investigation of the study area. The quality of the modeling was improved further by the application of some controlling techniques involved in ANN. The observed >90% overall accuracy produced by the ANN technique in both cases is promising for future...

  1. Analysing outsourcing policies in an asset management context : A six-stage model

    NARCIS (Netherlands)

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

    Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, confusing the discussion on the degree of outsourcing. This paper presents a six-stage model that facilitates

  2. Supporting custom quality models to analyse and compare open-source software

    NARCIS (Netherlands)

    D. Di Ruscio (Davide); D.S. Kolovos (Dimitrios); I. Korkontzelos (Ioannis); N. Matragkas (Nicholas); J.J. Vinju (Jurgen)

    2017-01-01

    textabstractThe analysis and comparison of open source software can be improved by means of quality models supporting the evaluation of the software systems being compared and the final decision about which of them has to be adopted. Since software quality can mean different things in different

  3. Ultrasonic vocalizations in Shank mouse models for autism spectrum disorders: detailed spectrographic analyses and developmental profiles.

    Science.gov (United States)

    Wöhr, Markus

    2014-06-01

    Autism spectrum disorders (ASD) are a class of neurodevelopmental disorders characterized by persistent deficits in social behavior and communication across multiple contexts, together with repetitive patterns of behavior, interests, or activities. The high concordance rate between monozygotic twins supports a strong genetic component. Among the most promising candidate genes for ASD is the SHANK gene family, including SHANK1, SHANK2 (ProSAP1), and SHANK3 (ProSAP2). SHANK genes are therefore important candidates for modeling ASD in mice and various genetic models were generated within the last few years. As the diagnostic criteria for ASD are purely behaviorally defined, the validity of mouse models for ASD strongly depends on their behavioral phenotype. Behavioral phenotyping is therefore a key component of the current translational approach and requires sensitive behavioral test paradigms with high relevance to each diagnostic symptom category. While behavioral phenotyping assays for social deficits and repetitive patterns of behavior, interests, or activities are well-established, the development of sensitive behavioral test paradigms to assess communication deficits in mice is a daunting challenge. Measuring ultrasonic vocalizations (USV) appears to be a promising strategy. In the first part of the review, an overview on the different types of mouse USV and their communicative functions will be provided. The second part is devoted to studies on the emission of USV in Shank mouse models for ASD. Evidence for communication deficits was obtained in Shank1, Shank2, and Shank3 genetic mouse models for ASD, often paralleled by behavioral phenotypes relevant to social deficits seen in ASD. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Animal models of bone cancer pain: systematic review and meta-analyses.

    Science.gov (United States)

    Currie, Gillian L; Delaney, Ada; Bennett, Michael I; Dickenson, Anthony H; Egan, Kieren J; Vesterinen, Hanna M; Sena, Emily S; Macleod, Malcolm R; Colvin, Lesley A; Fallon, Marie T

    2013-06-01

    Pain can significantly decrease the quality of life of patients with advanced cancer. Current treatment strategies often provide inadequate analgesia and unacceptable side effects. Animal models of bone cancer pain are used in the development of novel pharmacological approaches. Here we conducted a systematic review and meta-analysis of publications describing in vivo modelling of bone cancer pain in which behavioural, general health, macroscopic, histological, biochemical, or electrophysiological outcomes were reported and compared to appropriate controls. In all, 150 publications met our inclusion criteria, describing 38 different models of bone cancer pain. Reported methodological quality was low; only 31% of publications reported blinded assessment of outcome, and 11% reported random allocation to group. No publication reported a sample size calculation. Studies that reported measures to reduce bias reported smaller differences in behavioural outcomes between tumour-bearing and control animals, and studies that presented a statement regarding a conflict of interest reported larger differences in behavioural outcomes. Larger differences in behavioural outcomes were reported in female animals, when cancer cells were injected into either the tibia or femur, and when MatLyLu prostate or Lewis lung cancer cells were used. Mechanical-evoked pain behaviours were most commonly reported; however, the largest difference was observed in spontaneous pain behaviours. In the spinal cord, astrocyte activation and increased levels of substance P receptor internalisation, c-Fos, dynorphin, tumour necrosis factor-α and interleukin-1β have been reported in bone cancer pain models, suggesting several potential therapeutic targets. However, the translational impact of animal models on clinical pain research could be enhanced by improving methodological quality. Copyright © 2013. Published by Elsevier B.V.
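
    Pooling effect sizes across such heterogeneous models is typically done with a random-effects meta-analysis. The sketch below implements the standard DerSimonian-Laird estimator (a generic method sketch; the review's own software and effect-size data are not reproduced here, and the numbers are illustrative).

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird): estimate the
    between-study variance tau^2 from Cochran's Q, then pool effects with
    inverse-variance weights 1 / (v_i + tau^2)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)               # truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

# Three illustrative study effects (standardised mean differences) with
# their sampling variances.
pooled, tau2 = dersimonian_laird([0.8, 1.2, 1.0], [0.04, 0.05, 0.02])
```

    When observed heterogeneity is no larger than expected by chance (Q ≤ df), tau² truncates to zero and the estimate coincides with the fixed-effect pooled value.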

  5. Analyses of Spring Barley Evapotranspiration Rates Based on Gradient Measurements and Dual Crop Coefficient Model

    Directory of Open Access Journals (Sweden)

    Gabriela Pozníková

    2014-01-01

    The yield of agricultural crops depends to a great extent on water availability. According to some projections, the likelihood of drought stress will increase in the future climate expected for Central Europe. Therefore, in order to manage agro-ecosystems properly, it is necessary to know the water demand of particular crops as precisely as possible. Evapotranspiration (ET) is the main component of the water balance that removes water from agro-ecosystems. ET consists of evaporation from the soil (E) and transpiration (T) through the stomata of plants. In this study, we investigated the ET of a 1-ha spring barley field (Domanínek, Czech Republic) measured by the Bowen ratio/energy balance method during the 2013 growing period (May 8 to July 31). Special focus was dedicated to comparing barley ET with the reference grass ETo calculated according to the FAO-56 model, i.e. to determining the barley crop coefficient (Kc). This crop coefficient was subsequently separated into a soil evaporation (Ke) and a transpiration (Kcb) fraction by adjusting the soil and phenological parameters of the dual crop coefficient model to minimize the root mean square error between measured and modelled ET. The resulting Kcb of barley was 0.98 during the mid-season period and 0.05 during the initial and end periods. According to FAO-56, typical values are 1.10 and 0.15 for Kcb mid and Kcb end, respectively. Modelled and measured ET show satisfactory agreement, with a root mean square error equal to 0.41 mm. Based on the sums of ET and E for the whole growing season of the spring barley, ET partitioning by the FAO-56 dual crop coefficient model resulted in an E/ET ratio of 0.24.
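
    The dual crop coefficient splits ET as ET = (Kcb + Ke) * ETo, and the fitting criterion above is the root mean square error between measured and modelled ET. A minimal sketch of both pieces (illustrative daily values, not the study's data):

```python
import math

def dual_kc_et(eto, kcb, ke):
    """FAO-56 dual crop coefficient: ET = (Kcb + Ke) * ETo, splitting crop
    ET into a transpiration (Kcb) and a soil evaporation (Ke) part."""
    return (kcb + ke) * eto

def rmse(observed, modelled):
    """Root mean square error between two equal-length series."""
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(observed, modelled))
                     / len(observed))

# Toy daily series (mm/day), using the fitted mid-season Kcb = 0.98.
eto = [4.0, 5.0, 3.5]
ke = [0.10, 0.05, 0.20]
modelled = [dual_kc_et(e, 0.98, k) for e, k in zip(eto, ke)]
```

    In the calibration described above, the soil and phenological parameters behind Ke and Kcb are adjusted until `rmse(measured, modelled)` is minimized.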

  6. A CAD Approach to Developing Mass Distribution and Composition Models for Spaceflight Radiation Risk Analyses

    Science.gov (United States)

    Zapp, E.; Shelfer, T.; Semones, E.; Johnson, A.; Weyland, M.; Golightly, M.; Smith, G.; Dardano, C.

    For roughly the past three decades, combinatorial geometries have been the predominant mode for the development of mass distribution models associated with the estimation of radiological risk for manned space flight. Examples of these are the MEVDP (Modified Elemental Volume Dose Program) vehicle representation of Liley and Hamilton, and the quadratic functional representation of the CAM/CAF (Computerized Anatomical Male/Female) human body models as modified by Billings and Yucker. These geometries have the advantageous characteristics of being simple for a familiarized user to maintain, and because of the relative lack of any operating system or run-time library dependence, they are also easy to transfer from one computing platform to another. Unfortunately, they are also limited in the amount of modeling detail possible, owing to the abstract geometric representation. In addition, combinatorial representations are known to be error-prone in practice, since there is no convenient method for error identification (e.g. overlaps), and extensive calculation and/or manual comparison is often necessary to demonstrate that the geometry is adequately represented. We present an alternate approach linking materials-specific, CAD-based mass models directly to geometric analysis tools, requiring no approximation with respect to materials, nor any meshing (i.e. tessellation) of the representative geometry. A new approach to ray tracing is presented which makes use of the fundamentals of the CAD representation to perform geometric analysis directly on the NURBS (Non-Uniform Rational B-Spline) surfaces themselves. In this way we achieve a framework for the rapid, precise development and analysis of materials-specific mass distribution models.
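
    The core quantity such ray tracing delivers is the material thickness (areal density, g/cm²) along each ray through the mass model. As a much-simplified stand-in for tracing NURBS surfaces (this sketch uses analytic spheres, not the paper's CAD method), the idea can be shown with a standard ray-sphere chord calculation:

```python
import math

def chord_length(origin, direction, center, radius):
    """Length of the intersection of a ray with a solid sphere (0 if the
    ray misses), from the standard quadratic ray-sphere test."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / n, dy / n, dz / n         # normalize the direction
    b = ox * dx + oy * dy + oz * dz
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - c
    if disc <= 0.0:
        return 0.0
    t1 = max(-b - math.sqrt(disc), 0.0)         # clip part behind origin
    t2 = max(-b + math.sqrt(disc), 0.0)
    return t2 - t1

def areal_density(origin, direction, spheres):
    """Shielding mass per unit area (g/cm^2) along a ray through a set of
    non-overlapping spheres given as (center, radius, density)."""
    return sum(rho * chord_length(origin, direction, ctr, r)
               for ctr, r, rho in spheres)

# Ray through the centre of a radius-5 cm sphere of density 1 g/cm^3
# gives a chord of 10 cm, hence 10 g/cm^2.
ad = areal_density((-10, 0, 0), (1, 0, 0), [((0, 0, 0), 5.0, 1.0)])
```

    Tracing NURBS surfaces directly replaces the closed-form intersection with a numerical surface-intersection solve, but the downstream use of the ray (accumulating density times path length per material) is the same.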

  7. Using species abundance distribution models and diversity indices for biogeographical analyses

    Science.gov (United States)

    Fattorini, Simone; Rigal, François; Cardoso, Pedro; Borges, Paulo A. V.

    2016-01-01

    We examine whether Species Abundance Distribution models (SADs) and diversity indices can describe how species colonization status influences species community assembly on oceanic islands. Our hypothesis is that, because of the lack of source-sink dynamics at the archipelago scale, Single Island Endemics (SIEs), i.e. endemic species restricted to only one island, should be represented by few rare species and consequently have abundance patterns that differ from those of more widespread species. To test our hypothesis, we used arthropod data from the Azorean archipelago (North Atlantic). We divided the species into three colonization categories: SIEs, archipelagic endemics (AZEs, present in at least two islands) and native non-endemics (NATs). For each category, we modelled rank-abundance plots using both the geometric series and the Gambin model, a measure of distributional amplitude. We also calculated Shannon entropy and Buzas and Gibson's evenness. We show that the slopes of the regression lines modelling SADs were significantly higher for SIEs, which indicates a relative predominance of a few highly abundant species and a lack of rare species, which also depresses diversity indices. This may be a consequence of two factors: (i) some forest specialist SIEs may be at an advantage over other, less adapted species; (ii) the entire populations of SIEs are by definition concentrated on a single island, without the possibility of inter-island source-sink dynamics; hence all populations must have a minimum number of individuals to survive natural, often unpredictable, fluctuations. These findings are supported by higher values of the α parameter of the Gambin model for SIEs. In contrast, AZEs and NATs had lower regression slopes, lower α but higher diversity indices, resulting from their widespread distribution over several islands. We conclude that these differences in the SAD models and diversity indices demonstrate that the study of these metrics is useful for
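
    The "slope of the regression line modelling the SAD" refers to fitting log-abundance against rank; under an exact geometric series with parameter k, that slope equals log(1 - k). A minimal sketch (synthetic abundances, not the Azorean data):

```python
import math

def geometric_series_slope(abundances):
    """Slope of log(abundance) against rank in a rank-abundance plot;
    under a geometric series with parameter k the slope is log(1 - k)."""
    ranked = sorted(abundances, reverse=True)
    xs = list(range(1, len(ranked) + 1))
    ys = [math.log(a) for a in ranked]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    return sxy / sxx

# Exact geometric series with k = 0.4: n_i proportional to (1 - k)^(i-1),
# so the fitted slope recovers log(0.6).
k = 0.4
abund = [1000.0 * (1 - k) ** (i - 1) for i in range(1, 11)]
slope = geometric_series_slope(abund)
```

    Steeper (more negative) slopes correspond to assemblages dominated by a few abundant species with few rare ones, the pattern reported above for SIEs.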

  8. Evaluation of a dentoalveolar model for testing mouthguards: stress and strain analyses.

    Science.gov (United States)

    Verissimo, Crisnicaw; Costa, Paulo Victor Moura; Santos-Filho, Paulo César Freitas; Fernandes-Neto, Alfredo Júlio; Tantbirojn, Daranee; Versluis, Antheunis; Soares, Carlos José

    2016-02-01

    Custom-fitted mouthguards are devices used to decrease the likelihood of dental trauma. The aim of this study was to develop an experimental bovine dentoalveolar model with periodontal ligament to evaluate mouthguard shock absorption, and impact strain and stress behavior. A pendulum impact device was developed to perform the impact tests with two different impact materials (steel ball and baseball). Five bovine jaws were selected with standard age and dimensions. Six-mm mouthguards were made for the impact tests. The jaws were fixed in a pendulum device and impacts were performed from 90, 60, and 45° angles, with and without mouthguard. Strain gauges were attached at the palatal surface of the impacted tooth. The strain and shock absorption of the mouthguards was calculated and data were analyzed with 3-way anova and Tukey's test (α = 0.05). Two-dimensional finite element models were created based on the cross-section of the bovine dentoalveolar model used in the experiment. A nonlinear dynamic impact analysis was performed to evaluate the strain and stress distributions. Without mouthguards, the increase in impact angulation significantly increased strains and stresses. Mouthguards reduced strain and stress values. Impact velocity, impact object (steel ball or baseball), and mouthguard presence affected the impact stresses and strains in a bovine dentoalveolar model. Experimental strain measurements and finite element models predicted similar behavior; therefore, both methodologies are suitable for evaluating the biomechanical performance of mouthguards. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Evaluation et analyse de sensibilite du modele CERES-Maize en conditions alsaciennes

    OpenAIRE

    Plantureux, Sylvain; Girardin, Philippe; Fouquet, D.; Chapot, Jean Yves

    1991-01-01

    CERES-Maize is a model simulating the growth and development of maize, developed and validated in the United States. In order to assess the possibilities of transferring the model to European conditions, simulations were carried out for two maize varieties (LG11 and DEA) grown in Alsace in two consecutive years. For each variety, the model was calibrated on one year and validated on the following one. The sensitivity analysis of the variety- and soil-related parameters shows that the...

  10. Analysing the strength of friction stir welded dissimilar aluminium alloys using Sugeno Fuzzy model

    Science.gov (United States)

    Barath, V. R.; Vaira Vignesh, R.; Padmanaban, R.

    2018-02-01

    Friction stir welding (FSW) is a promising solid-state joining technique for aluminium alloys. In this study, FSW trials were conducted on two dissimilar plates of aluminium alloys AA2024 and AA7075 by varying the tool rotation speed (TRS) and welding speed (WS). The tensile strength (TS) of the joints was measured and a Sugeno fuzzy model was developed to relate the FSW process parameters to the tensile strength. From the developed model, it was observed that the optimum heat generation at a WS of 15 mm.min-1 and a TRS of 1050 rpm resulted in dynamic recovery and dynamic recrystallization of the material. This refined the grains in the FSW zone and resulted in the peak tensile strength among the tested specimens. An inverted-parabola (crest) trend was observed in tensile strength as TRS varied from 900 rpm to 1200 rpm and WS from 10 mm.min-1 to 20 mm.min-1.
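
    A Sugeno fuzzy model maps crisp inputs (here TRS and WS) to a crisp output as a firing-strength-weighted average of rule consequents. The sketch below is a generic zero-order Sugeno system with Gaussian memberships; the rule base and its consequent values (MPa) are hypothetical, chosen only to reproduce the peak-near-the-middle behaviour described above, not the article's fitted model.

```python
import math

def gauss(x, mean, sigma):
    """Gaussian membership degree of x in a fuzzy set."""
    return math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

def sugeno_ts(trs, ws, rules):
    """Zero-order Sugeno inference: each rule is ((trs_mean, trs_sigma),
    (ws_mean, ws_sigma), consequent_MPa); the firing strength is the
    product of the two membership degrees and the output is the
    strength-weighted mean of the consequents."""
    num = den = 0.0
    for (tm, tsig), (wm, wsig), out in rules:
        w = gauss(trs, tm, tsig) * gauss(ws, wm, wsig)
        num += w * out
        den += w
    return num / den

# Hypothetical rule base peaking near 1050 rpm / 15 mm per min.
rules = [
    ((900.0, 100.0), (10.0, 3.0), 310.0),
    ((1050.0, 100.0), (15.0, 3.0), 420.0),
    ((1200.0, 100.0), (20.0, 3.0), 330.0),
]
ts_peak = sugeno_ts(1050.0, 15.0, rules)
```

    Because the output is a convex combination of the consequents, predictions stay within the range of the rule outputs, and the surface peaks where the strongest rule dominates.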

  11. Evaluation and Improvement of Cloud and Convective Parameterizations from Analyses of ARM Observations and Models

    Energy Technology Data Exchange (ETDEWEB)

    Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)

    2016-03-11

    Over this period the PI and his group performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included: cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; and insights from long-term ARM records at Manus and Nauru.

  12. Modelling and Analysing Access Control Policies in XACML 3.0

    DEFF Research Database (Denmark)

    Ramli, Carroline Dewi Puspa Kencana

    and verification of properties of XACML policies. Overall, we focus on two different areas. The first part focuses on the access control language; more specifically, our focus is on understanding XACML 3.0. The second part focuses on how we use Logic Programming (LP) to model access control policies. We show...... semantics is described normatively using natural language. The use of English text in standardisation leads to the risk of misinterpretation and ambiguity. In order to avoid this drawback, we define an abstract syntax of XACML 3.0 and a formal XACML semantics. Second, we propose a logic-based XACML analysis...... framework using Answer Set Programming (ASP). With ASP we model an XACML PDP that loads XACML policies and evaluates XACML requests against these policies. The expressivity of ASP and the existence of efficient implementations of the answer set semantics provide the means for declarative specification
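
    To illustrate what a PDP evaluation computes, the sketch below implements a much-simplified XACML-style policy with a deny-overrides combining algorithm in Python rather than ASP. It is a toy: the real XACML 3.0 algorithm also handles Indeterminate sub-cases, which are omitted here, and the policy itself is hypothetical.

```python
def deny_overrides(decisions):
    """Simplified deny-overrides combining: any Deny wins; otherwise a
    Permit wins; otherwise the result is NotApplicable. (Ignores the
    Indeterminate sub-cases of the full XACML 3.0 algorithm.)"""
    if "Deny" in decisions:
        return "Deny"
    if "Permit" in decisions:
        return "Permit"
    return "NotApplicable"

def evaluate(policy, request):
    """Evaluate a toy policy: rules are (effect, predicate) pairs and the
    effects of applicable rules are combined with deny-overrides."""
    decisions = [effect for effect, applies in policy if applies(request)]
    return deny_overrides(decisions)

# Toy policy: staff may read, but nobody may write to /secret.
policy = [
    ("Permit", lambda r: r["role"] == "staff" and r["action"] == "read"),
    ("Deny",   lambda r: r["resource"] == "/secret"
                         and r["action"] == "write"),
]
d1 = evaluate(policy, {"role": "staff", "action": "read",
                       "resource": "/doc"})
d2 = evaluate(policy, {"role": "staff", "action": "write",
                       "resource": "/secret"})
```

    The attraction of the ASP encoding described above is that the same evaluation logic becomes declarative rules, so properties of whole policies (e.g. "no request is ever both Permit and Deny") can be checked by an answer-set solver instead of by testing individual requests.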

  13. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universe hierarchy, while the SERPENT model is based on stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers; here it was created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both the MCNP and SERPENT models share the same geometry specifications, which describe the facility details without any material homogenization. Three different configurations have been studied, using 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.
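
    The "within a few tens of pcm" agreement refers to the difference between the two codes' multiplication-factor estimates, where 1 pcm = 10⁻⁵ in reactivity (Δk/k). A small sketch of that conversion (illustrative k-eff values, not the paper's results):

```python
def reactivity_diff_pcm(k1, k2):
    """Difference between two k-eff estimates expressed in pcm of
    reactivity, with rho = (k - 1) / k and 1 pcm = 1e-5."""
    rho1 = (k1 - 1.0) / k1
    rho2 = (k2 - 1.0) / k2
    return (rho1 - rho2) * 1.0e5

# Two hypothetical subcritical k-eff estimates differing by 5e-4.
diff = reactivity_diff_pcm(0.97500, 0.97450)
```

    For k near unity this is essentially (k1 - k2) × 10⁵, so a 0.0005 spread in k-eff is roughly 50 pcm.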

  14. Modeling Freight Ocean Rail and Truck Transportation Flows to Support Policy Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Gearhart, Jared Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wang, Hao [Cornell Univ., Ithaca, NY (United States); Nozick, Linda Karen [Cornell Univ., Ithaca, NY (United States); Xu, Ningxiong [Cornell Univ., Ithaca, NY (United States)

    2017-11-01

    Freight transportation represents about 9.5% of GDP, is responsible for about 8% of greenhouse gas emissions, and supports the import and export of about $3.6 trillion in international trade; hence it is important that our national freight transportation system is designed and operated efficiently and embodies user fees and other policies that balance costs and environmental consequences. This paper therefore develops a mathematical model to estimate international and domestic freight flows across ocean, rail, and truck modes, which can be used to study the impacts of changes in our infrastructure as well as the imposition of new user fees and changes in operating policies. The model is applied to two case studies: (1) a disruption of the maritime ports at Los Angeles/Long Beach similar to the impacts that would be felt in an earthquake; and (2) implementation of new user fees at the California ports.

  15. Integrate urban‐scale seismic hazard analyses with the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Luco, Nicolas; Frankel, Arthur; Petersen, Mark D.; Aagaard, Brad T.; Baltay, Annemarie S.; Blanpied, Michael; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.; Graves, Robert; Hartzell, Stephen; Rezaeian, Sanaz; Stephenson, William J.; Wald, David J.; Williams, Robert A.; Withers, Kyle

    2018-01-01

    For more than 20 yrs, damage patterns and instrumental recordings have highlighted the influence of the local 3D geologic structure on earthquake ground motions (e.g., M 6.7 Northridge, California, Gao et al., 1996; M 6.9 Kobe, Japan, Kawase, 1996; M 6.8 Nisqually, Washington, Frankel, Carver, and Williams, 2002). Although this and other local-scale features are critical to improving seismic hazard forecasts, historically they have not been explicitly incorporated into the U.S. National Seismic Hazard Model (NSHM, national model and maps), primarily because the necessary basin maps and methodologies were not available at the national scale. Instead,...

  16. Analyses of the energy-dependent single separable potential models for the NN scattering

    International Nuclear Information System (INIS)

    Ahmad, S.S.; Beghi, L.

    1981-08-01

    Starting from a systematic study of the salient features regarding the quantum-mechanical two-particle scattering off an energy-dependent (ED) single separable potential and its connection with the rank-2 energy-independent (EI) separable potential in the T- (K-) amplitude formulation, the present status of the ED single separable potential models due to Tabakin (M1), Garcilazo (M2) and Ahmad (M3) is discussed. It turned out that the incorporation of a self-consistent optimization procedure considerably improves the results for the ¹S₀ and ³S₁ scattering phase shifts for the models (M2) and (M3) up to the CM wave number q = 2.5 fm⁻¹, although the extrapolation of the results up to q = 10 fm⁻¹ reveals that the two models follow the typical behaviour of the well-known super-soft-core potentials. It has been found that a variant of (M3), i.e. (M4), involving one more parameter, gives phase-shift results which are generally in excellent agreement with the data up to q = 2.5 fm⁻¹, and the extrapolation of the results for the ¹S₀ case in the higher wave-number range not only follows the corresponding data qualitatively but also reflects a behaviour similar to the Reid soft-core and Hamada-Johnston potentials, together with a good agreement with the recent [4/3] Padé fits. A brief discussion regarding the features resulting from variations in the ED parts of all four models under consideration and their correlations with the inverse scattering theory methodology concludes the paper. (author)

  17. Analyses of Spring Barley Evapotranspiration Rates Based on Gradient Measurements and Dual Crop Coefficient Model

    Czech Academy of Sciences Publication Activity Database

    Pozníková, Gabriela; Fischer, Milan; Pohanková, Eva; Trnka, Miroslav

    2014-01-01

    Roč. 62, č. 5 (2014), s. 1079-1086 ISSN 1211-8516 R&D Projects: GA MŠk LH12037; GA MŠk(CZ) EE2.3.20.0248 Institutional support: RVO:67179843 Keywords : evapotranspiration * dual crop coefficient model * Bowen ratio/energy balance method * transpiration * soil evaporation * spring barley Subject RIV: EH - Ecology, Behaviour OBOR OECD: Environmental sciences (social aspects to be 5.7)

  18. Modeling and analyses of postulated UF6 release accidents in gaseous diffusion plant

    International Nuclear Information System (INIS)

    Kim, S.H.; Taleyarkhan, R.P.; Keith, K.D.; Schmidt, R.W.; Carter, J.C.; Dyer, R.H.

    1995-10-01

    Computer models have been developed to simulate the transient behavior of aerosols and vapors resulting from a postulated accident involving the release of uranium hexafluoride (UF₆) into the process building of a gaseous diffusion plant. UF₆ undergoes an exothermic chemical reaction with moisture (H₂O) in the air to form hydrogen fluoride (HF) and radioactive uranyl fluoride (UO₂F₂). As part of a facility-wide safety evaluation, this study evaluated source terms consisting of UO₂F₂ as well as HF during a postulated UF₆ release accident in a process building. In the postulated accident scenario, ∼7900 kg (17,500 lb) of hot UF₆ vapor is released over a 5-min period from the process piping into the atmosphere of a large process building. UO₂F₂ mainly remains as airborne solid particles (aerosols), and HF is in vapor form. Some UO₂F₂ aerosols are removed from the air flow by gravitational settling. The HF and the remaining UO₂F₂ are mixed with air and exhausted through the building ventilation system. The MELCOR computer code was selected for simulating aerosol and vapor transport in the process building. A MELCOR model was first used to develop a single-volume representation of a process building, and its results were compared with those from past lumped-parameter models specifically developed for studying UF₆ release accidents. Preliminary results indicate that MELCOR-predicted results (using a lumped formulation) are comparable with those from previously developed models.
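
    The hydrolysis reaction underlying the source term is UF₆ + 2 H₂O → UO₂F₂ + 4 HF. Assuming complete reaction of the postulated 7900 kg release (an illustrative stoichiometric bound, not the MELCOR transient result), the product masses follow directly from the molar masses:

```python
# Approximate molar masses in g/mol (assuming natural uranium).
M_UF6, M_H2O, M_UO2F2, M_HF = 352.02, 18.02, 308.02, 20.01

def hydrolysis_products(uf6_kg):
    """Product masses from UF6 + 2 H2O -> UO2F2 + 4 HF for a given release
    mass, assuming complete reaction with ambient moisture."""
    mol = uf6_kg * 1000.0 / M_UF6          # moles of UF6 released
    uo2f2_kg = mol * M_UO2F2 / 1000.0
    hf_kg = 4.0 * mol * M_HF / 1000.0
    h2o_kg = 2.0 * mol * M_H2O / 1000.0    # water consumed from the air
    return uo2f2_kg, hf_kg, h2o_kg

uo2f2, hf, h2o = hydrolysis_products(7900.0)   # the postulated release
```

    This gives roughly 6900 kg of UO₂F₂ aerosol and 1800 kg of HF vapor as upper-bound source terms; the transient codes then determine how much of each actually leaves through settling and ventilation.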

  19. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 2

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-10-01

    Model 2 in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. Both the cylinder and the nozzle of model 2 had outside diameters of 10 in., giving a d0/D0 ratio of 1.0, and both had outside-diameter/thickness ratios of 100. Sixteen separate loading cases in which one end of the cylinder was rigidly held were analyzed. An internal pressure loading, three mutually perpendicular force components, and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. In addition to these 13 loadings, 3 additional loads were applied to the nozzle (in-plane bending moment, out-of-plane bending moment, and axial force) with the free end of the cylinder restrained. The experimental stress distributions for each of the 16 loadings were obtained using 152 three-gage strain rosettes located on the inner and outer surfaces. All 16 loading cases were also analyzed theoretically using a finite-element shell analysis. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good general agreement, and it is felt that the analysis would be satisfactory for most engineering purposes. (auth)

  20. Consequence modeling for nuclear weapons probabilistic cost/benefit analyses of safety retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Harvey, T.F.; Peters, L.; Serduke, F.J.D.; Hall, C.; Stephens, D.R.

    1998-01-01

    The consequence models used in former studies of costs and benefits of enhanced safety retrofits are considered for (1) fuel fires; (2) non-nuclear detonations; and (3) unintended nuclear detonations. Estimates of consequences were made using a representative accident location, i.e., an assumed mixed suburban-rural site. We have explicitly quantified land-use impacts and human-health effects (e.g., prompt fatalities, prompt injuries, latent cancer fatalities, low levels of radiation exposure, and clean-up areas). Uncertainty in the wind direction is quantified and used in a Monte Carlo calculation to estimate a range of results for a fuel fire with uncertain respirable amounts of released Pu. We define a nuclear source term and discuss damage levels of concern. Ranges of damages are estimated by quantifying health impacts and property damages. We discuss our dispersal and prompt effects models in some detail. The models used to loft the Pu and fission products, and their particle sizes, are emphasized.

  1. Analysing the origin of long-range interactions in proteins using lattice models

    Directory of Open Access Journals (Sweden)

    Unger Ron

    2009-01-01

    Full Text Available Abstract Background Long-range communication is very common in proteins but the physical basis of this phenomenon remains unclear. In order to gain insight into this problem, we decided to explore whether long-range interactions exist in lattice models of proteins. Lattice models of proteins have proven to capture some of the basic properties of real proteins and, thus, can be used for elucidating general principles of protein stability and folding. Results Using a computational version of double-mutant cycle analysis, we show that long-range interactions emerge in lattice models even though they are not an input feature of them. The coupling energy of both short- and long-range pairwise interactions is found to become more positive (destabilizing) in a linear fashion with increasing 'contact-frequency', an entropic term that corresponds to the fraction of states in the conformational ensemble of the sequence in which the pair of residues is in contact. A mathematical derivation of the linear dependence of the coupling energy on 'contact-frequency' is provided. Conclusion Our work shows how 'contact-frequency' should be taken into account in attempts to stabilize proteins by introducing (or stabilizing) contacts in the native state and/or through 'negative design' of non-native contacts.
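
    The double-mutant cycle referred to in the Results can be sketched as simple bookkeeping over four stability values. A toy illustration (our own function; the free energies are hypothetical, in arbitrary units):

```python
def coupling_energy(dG_wt, dG_m1, dG_m2, dG_m12):
    """Double-mutant-cycle coupling energy: the non-additivity of the two
    single-mutation effects. Zero means the two sites act independently."""
    return (dG_m12 - dG_m1) - (dG_m2 - dG_wt)

# Perfectly additive mutations give zero coupling
additive = coupling_energy(-10.0, -8.0, -9.0, -7.0)
# A residual interaction between the sites shows up as nonzero coupling
coupled = coupling_energy(-10.0, -8.0, -9.0, -8.5)
```

    Applied over many residue pairs of a lattice-model sequence, this is the quantity whose linear dependence on contact-frequency the paper derives.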

  2. Drying of mint leaves in a solar dryer and under open sun: Modelling, performance analyses

    International Nuclear Information System (INIS)

    Akpinar, E. Kavak

    2010-01-01

    This study investigated the thin-layer drying characteristics of mint leaves in a solar dryer with forced convection and under open sun with natural convection, and performed energy and exergy analyses of the solar drying process of mint leaves. An indirect forced-convection solar dryer consisting of a solar air collector and a drying cabinet was used in the experiments. The drying data were fitted to ten different mathematical models. Among the models, the Wang and Singh model was found to best explain the thin-layer drying behaviour of mint leaves for both forced solar drying and natural sun drying. Using the first law of thermodynamics, the energy analysis of the solar drying process was estimated, while the exergy analysis was determined by applying the second law of thermodynamics. Energy utilization ratio (EUR) values of the drying cabinet varied between 7.826% and 46.285%. The values of exergetic efficiency were found to be in the range of 34.760-87.717%. The values of improvement potential varied between 0 and 0.017 kJ s^-1. The energy utilization ratio and improvement potential decreased with increasing drying time and ambient temperature, while exergetic efficiency increased.
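
    The Wang and Singh model named above is the quadratic thin-layer form MR = 1 + a·t + b·t^2, which is linear in its coefficients and can therefore be fitted by ordinary least squares. A sketch on synthetic data (the coefficients are hypothetical, not the paper's fitted values):

```python
import numpy as np

def wang_singh(t, a, b):
    """Wang and Singh thin-layer drying model: MR = 1 + a*t + b*t**2."""
    return 1.0 + a * t + b * t ** 2

# Synthetic drying curve (hypothetical coefficients, noise-free)
t = np.linspace(0, 300, 31)             # drying time, min
mr = wang_singh(t, a=-5e-3, b=6e-6)     # moisture ratio

# Least-squares fit of (a, b): solve [t, t^2] @ x = MR - 1
A = np.column_stack([t, t ** 2])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, mr - 1.0, rcond=None)
```

    The same design-matrix fit works for measured moisture-ratio data, after which goodness-of-fit statistics can be compared across candidate models.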

  3. Bag-model analyses of proton-antiproton scattering and atomic bound states

    International Nuclear Information System (INIS)

    Alberg, M.A.; Freedman, R.A.; Henley, E.M.; Hwang, W.P.; Seckel, D.; Wilets, L.

    1983-01-01

    We study proton-antiproton (pp-bar) scattering using the static real potential of Bryan and Phillips outside a cutoff radius r_0 and two different shapes for the imaginary potential inside a radius R*. These forms, motivated by bag models, are a one-gluon-annihilation potential and a simple geometric-overlap form. In both cases there are three adjustable parameters: the effective bag radius R*, the effective strong coupling constant α_s*, and r_0. There is also a choice for the form of the real potential inside the cutoff radius r_0. Analysis of the pp-bar scattering data in the laboratory-momentum region 0.4--0.7 GeV/c yields an effective nucleon bag radius R* in the range 0.6--1.1 fm, with the best fit obtained for R* = 0.86 fm. Arguments are presented that the deduced value of R* is likely to be an upper bound on the isolated-nucleon bag radius. The present results are consistent with the range of bag radii in current bag models. We have also used the resultant optical potential to calculate the shifts and widths of the ^3S_1 and ^1S_0 atomic bound states of the pp-bar system. For both states we find upward (repulsive) shifts and widths of about 1 keV. We find no evidence for narrow, strongly bound pp-bar states in our potential model.

  4. Promoting Social Inclusion through Sport for Refugee-Background Youth in Australia: Analysing Different Participation Models

    Directory of Open Access Journals (Sweden)

    Karen Block

    2017-06-01

    Full Text Available Sports participation can confer a range of physical and psychosocial benefits and, for refugee and migrant youth, may even act as a critical mediator for achieving positive settlement and engaging meaningfully in Australian society. This group has low participation rates however, with identified barriers including costs; discrimination and a lack of cultural sensitivity in sporting environments; lack of knowledge of mainstream sports services on the part of refugee-background settlers; inadequate access to transport; culturally determined gender norms; and family attitudes. Organisations in various sectors have devised programs and strategies for addressing these participation barriers. In many cases however, these responses appear to be ad hoc and under-theorised. This article reports findings from a qualitative exploratory study conducted in a range of settings to examine the benefits, challenges and shortcomings associated with different participation models. Interview participants were drawn from non-government organisations, local governments, schools, and sports clubs. Three distinct models of participation were identified, including short term programs for refugee-background children; ongoing programs for refugee-background children and youth; and integration into mainstream clubs. These models are discussed in terms of their relative challenges and benefits and their capacity to promote sustainable engagement and social inclusion for this population group.

  5. Reproduction of the Yucca Mountain Project TSPA-LA Uncertainty and Sensitivity Analyses and Preliminary Upgrade of Models

    Energy Technology Data Exchange (ETDEWEB)

    Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis; Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis

    2016-09-01

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim, with Versions 9.60.300, 10.5 and 11.1.6, was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling-case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  6. Towards an Industrial Application of Statistical Uncertainty Analysis Methods to Multi-physical Modelling and Safety Analyses

    International Nuclear Information System (INIS)

    Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe

    2013-01-01

    Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analysis capability, based on a code package consisting of the best-estimate 3D neutronics (PANTHER), system thermal-hydraulics (RELAP5), core sub-channel thermal-hydraulics (COBRA-3C), and fuel thermal-mechanics (FRAPCON/FRAPTRAN) codes. A series of methodologies have been developed to perform and license reactor safety analysis and core reload design, based on the deterministic bounding approach. Following recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)

  7. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Stoyan Stoyanov

    2009-08-01

    Full Text Available A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for searching for a global optimum, based on various advanced strategies - heuristic, stochastic, genetic and combined - is presented. Methods based on sensitivity theory, and on stochastic and mixed strategies for optimization with partial knowledge of kinetic, technical and economic parameters in optimization problems, are discussed. Several approaches to multi-criteria optimization tasks are analyzed. Problems concerning optimal control of biotechnological systems are also discussed.

  8. Using Rasch Modeling to Re-Evaluate Rapid Malaria Diagnosis Test Analyses

    Directory of Open Access Journals (Sweden)

    Dawit G. Ayele

    2014-06-01

    Full Text Available The objective of this study was to demonstrate the use of the Rasch model by assessing the appropriateness of the demographic, socio-economic and geographic factors in providing a total score in malaria RDT in accordance with the model's expectations. The baseline malaria indicator survey was conducted in the Amhara, Oromiya and Southern Nation Nationalities and People (SNNP) regions of Ethiopia by The Carter Center in 2007. The result shows high reliability and little disordering of thresholds, with no evidence of differential item functioning.
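
    For context, the dichotomous Rasch model underlying such an analysis scores the probability of a positive response from the gap between person ability and item difficulty. A minimal sketch (not the survey's actual estimation code):

```python
import math

def rasch_prob(theta, delta):
    """Dichotomous Rasch model: P(response = 1) for person ability theta
    and item difficulty delta, both on the logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))
```

    When ability equals difficulty the probability is exactly 0.5; fit statistics and threshold-ordering checks compare such expected probabilities against the observed responses.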

  9. Fish farming and valuation: an analysis of earnings management and models for identifying such activity

    OpenAIRE

    Aaker, Harald

    2005-01-01

    Accounting information should be relevant and reliable, but valuation will always involve judgment. Unjustified judgment is referred to as earnings management or accounting manipulation. Earnings management research faces major methodological problems, since the active adjustment is largely hidden. In recent years, various models for estimating abnormal accruals ("discretionary accruals") have dominated. The problem lies in estimating the normal accruals...

  10. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Science.gov (United States)

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
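
    Regression calibration, the first of the methods listed, rescales the naive slope by the reliability ratio of the error-prone exposure. A simulation sketch of classical measurement error, the resulting attenuation, and its correction (here the reliability ratio is computed from the simulated truth; in a real study it must be estimated from validation data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(0.0, 1.0, n)           # true exposure (unobserved in practice)
w = x + rng.normal(0.0, 1.0, n)       # error-prone modeled exposure
y = 2.0 * x + rng.normal(0.0, 1.0, n) # health outcome, true slope = 2

# Naive slope regressing y on w is attenuated toward zero
beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Reliability ratio lambda = var(X) / var(W); correction divides by it
lam = np.var(x, ddof=1) / np.var(w, ddof=1)
beta_rc = beta_naive / lam
```

    With equal true-exposure and error variances the naive slope is attenuated by about half, and dividing by the reliability ratio recovers the true effect on average.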

  11. ANALYSING POST-SEISMIC DEFORMATION OF IZMIT EARTHQUAKE WITH INSAR, GNSS AND COULOMB STRESS MODELLING

    Directory of Open Access Journals (Sweden)

    R. A. Barut

    2016-06-01

    Full Text Available On August 17th 1999, a Mw 7.4 earthquake struck the city of Izmit in the north-west of Turkey. This event was one of the most devastating earthquakes of the twentieth century. The epicentre of the Izmit earthquake was on the North Anatolian Fault (NAF), which is one of the most active right-lateral strike-slip faults on Earth. This earthquake offers an opportunity to study how strain is accommodated in an inter-segment region of a large strike-slip fault. In order to determine the post-seismic effects of the Izmit earthquake, the authors modelled Coulomb stress changes of the aftershocks, as well as using the deformation measurement techniques of Interferometric Synthetic Aperture Radar (InSAR) and the Global Navigation Satellite System (GNSS). The authors have shown that InSAR and GNSS observations over a period of three months after the earthquake, combined with Coulomb stress change modelling, can explain the fault zone expansion, as well as the deformation of the northern region of the NAF. It was also found that there is strong agreement between the InSAR and GNSS results for the post-seismic phases of investigation, with differences of less than 2 mm and a standard deviation of the differences below 1 mm.

  12. Subchannel and Computational Fluid Dynamic Analyses of a Model Pin Bundle

    Energy Technology Data Exchange (ETDEWEB)

    Gairola, A.; Arif, M.; Suh, K. Y. [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    The current study showed that the simplistic approach of the subchannel analysis code MATRA was not good at capturing the physical behavior of the coolant inside the rod bundle. With the incorporation of a more detailed geometry of the grid spacer in the CFX code, it was possible to approach the experimental values. However, it is vital to incorporate more advanced turbulence mixing models to simulate more realistically the behavior of the liquid-metal coolant inside the model pin bundle, in parallel with the incorporation of the bottom and top grid structures. In the framework of the 11th international meeting of the International Association for Hydraulic Research and Engineering (IAHR) working group on advanced reactor thermal hydraulics, a standard problem was conducted. The essence of the problem was to check the hydraulics and heat transfer in a novel pin bundle with different pitch-to-rod-diameter ratios and heat fluxes, cooled by liquid metal. The standard problem stems from the field of nuclear safety research, with the idea of validating and checking the performance of computer codes against experimental results. Comprehensive checks between the two will help improve the dependability and exactness of the codes used for accident simulations.

  13. Integration of 3d Models and Diagnostic Analyses Through a Conservation-Oriented Information System

    Science.gov (United States)

    Mandelli, A.; Achille, C.; Tommasi, C.; Fassi, F.

    2017-08-01

    In recent years, mature technologies for producing high-quality virtual 3D replicas of Cultural Heritage (CH) artefacts have grown thanks to the progress of Information Technology (IT) tools. These methods are an efficient way to present digital models that can be used for several purposes: heritage management, support to conservation, virtual restoration, reconstruction and colouring, art cataloguing and visual communication. The work presented is an emblematic case study oriented to preventive conservation through monitoring activities, using different acquisition methods and instruments. It was developed inside a project funded by the Lombardy Region, Italy, called "Smart Culture", which aimed to realise a platform giving users easy access to CH artefacts, using a very famous statue as an example. The final product is a 3D reality-based model that contains a great deal of information and can be consulted through a common web browser. In the end, it was possible to define general strategies oriented to the maintenance and valorisation of CH artefacts, which, in this specific case, must consider the integration of different techniques and competencies to obtain a complete, accurate and continuative monitoring of the statue.

  14. INTEGRATION OF 3D MODELS AND DIAGNOSTIC ANALYSES THROUGH A CONSERVATION-ORIENTED INFORMATION SYSTEM

    Directory of Open Access Journals (Sweden)

    A. Mandelli

    2017-08-01

    Full Text Available In recent years, mature technologies for producing high-quality virtual 3D replicas of Cultural Heritage (CH) artefacts have grown thanks to the progress of Information Technology (IT) tools. These methods are an efficient way to present digital models that can be used for several purposes: heritage management, support to conservation, virtual restoration, reconstruction and colouring, art cataloguing and visual communication. The work presented is an emblematic case study oriented to preventive conservation through monitoring activities, using different acquisition methods and instruments. It was developed inside a project funded by the Lombardy Region, Italy, called “Smart Culture”, which aimed to realise a platform giving users easy access to CH artefacts, using a very famous statue as an example. The final product is a 3D reality-based model that contains a great deal of information and can be consulted through a common web browser. In the end, it was possible to define general strategies oriented to the maintenance and valorisation of CH artefacts, which, in this specific case, must consider the integration of different techniques and competencies to obtain a complete, accurate and continuative monitoring of the statue.

  15. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST: northern extratropical response

    Directory of Open Access Journals (Sweden)

    K. Maynard

    2001-06-01

    Full Text Available The ECHAM 3.2 (T21), ECHAM 4 (T30) and LMD (version 6, grid-point resolution with 96 longitudes × 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993, forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model starting from different initial conditions. The mid-latitude circulation pattern which maximises the covariance between the simulation and the observations, i.e. the most skilful mode, and the one which maximises the covariance amongst the runs, i.e. the most reproducible mode, is calculated as the leading mode of a Singular Value Decomposition (SVD) analysis of observed and simulated Sea Level Pressure (SLP) and geopotential height at 500 hPa (Z500) seasonal anomalies. A common response amongst the different models, having different resolutions and parametrizations, should be considered a more robust atmospheric response to SST than the same response obtained with only one model. A robust skilful mode is found mainly in December-February (DJF) and in June-August (JJA). In DJF, this mode is close to the SST-forced pattern found by Straus and Shukla (2000) over the North Pacific and North America, with a wavy out-of-phase relationship between the NE Pacific and the SE US on the one hand and NE North America on the other. This pattern evolves into a NAO-like pattern over the North Atlantic and Europe (SLP) and into a more N-S tripole over the Atlantic and European sector, with an out-of-phase relationship between middle Europe on the one hand and the northern and southern parts on the other (Z500). There are almost no spatial shifts between the two fields around North America (just a slight eastward shift of the highest absolute heterogeneous correlations for SLP relative to the Z500 ones). The time evolution of the SST-forced mode is moderately to strongly related to the ENSO/LNSO events, but the spread amongst the ensemble of runs is not systematically related
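
    The SVD analysis described above amounts to diagonalising the cross-covariance between two anomaly fields (often called maximum covariance analysis). A toy sketch on synthetic data sharing one coupled mode (all dimensions and amplitudes are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nx, ny = 40, 60, 50                 # time steps, grid points of two fields

# Two anomaly fields sharing one coupled mode plus noise
t_series = rng.normal(size=nt)
px, py = rng.normal(size=nx), rng.normal(size=ny)
X = np.outer(t_series, px) + 0.3 * rng.normal(size=(nt, nx))
Y = np.outer(t_series, py) + 0.3 * rng.normal(size=(nt, ny))

# SVD (maximum covariance) analysis of the cross-covariance matrix
C = X.T @ Y / (nt - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)
frac = s[0] ** 2 / np.sum(s ** 2)       # squared covariance fraction of mode 1
```

    The leading singular vectors play the role of the most reproducible (or most skilful) patterns, and projecting the fields onto them yields the expansion coefficients whose time evolution is compared with ENSO indices.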

  16. Innovative three-dimensional neutronics analyses directly coupled with cad models of geometrically complex fusion systems

    International Nuclear Information System (INIS)

    Sawan, M.; Wilson, P.; El-Guebaly, L.; Henderson, D.; Sviatoslavsky, G.; Bohm, T.; Kiedrowski, B.; Ibrahim, A.; Smith, B.; Slaybaugh, R.; Tautges, T.

    2007-01-01

    Fusion systems are, in general, geometrically complex, requiring detailed three-dimensional (3-D) nuclear analysis. This analysis is required to address tritium self-sufficiency, nuclear heating, radiation damage, shielding, and radiation streaming issues. To facilitate such calculations, we developed an innovative computational tool that is based on the continuous-energy Monte Carlo code MCNP and permits the direct use of CAD-based solid models in the ray-tracing. This allows performing the neutronics calculations in a model that preserves the geometrical details without any simplification, eliminates possible human error in modeling the geometry for MCNP, and allows faster design iterations. In addition to improving the workflow for simulating complex 3-D geometries, it allows a richer representation of the geometry compared to the standard 2nd-order polynomial representation. This newly developed tool has been successfully tested for a detailed 40-degree sector benchmark of the International Thermonuclear Experimental Reactor (ITER). The calculations included determining the poloidal variation of the neutron wall loading, flux and nuclear heating in the divertor components, nuclear heating in toroidal field coils, and radiation streaming in the mid-plane port. The tool has been applied to perform 3-D nuclear analysis for several fusion designs including the ARIES Compact Stellarator (ARIES-CS), the High Average Power Laser (HAPL) inertial fusion power plant, and ITER first wall/shield (FWS) modules. The ARIES-CS stellarator has a first wall shape and a plasma profile that vary toroidally within each field period, compared to the uniform toroidal shape in tokamaks. Such variation cannot be modeled analytically in the standard MCNP code. The impact of the complex helical geometry and the non-uniform blanket and divertor on the overall tritium breeding ratio and total nuclear heating was determined. In addition, we calculated the neutron wall loading variation in

  17. A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses

    Science.gov (United States)

    Yondo, Raul; Andrés, Esther; Valero, Eusebio

    2018-01-01

    Full-scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models, and flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems, and restrained computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic those full-order models at different values of the design variables, recent progress has witnessed the introduction, in real-time and many-query analyses, of surrogate-based approaches as rapid and cheaper-to-simulate models. In this paper, a comprehensive and state-of-the-art survey of common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraints handling, and infill and stopping criteria. Benefits, drawbacks and comparative discussions in applying those methods are described. Furthermore, the paper familiarizes the readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field and discussions on advanced sampling methodologies are presented, to give a glance at the various efficient possibilities to a priori sample the parameter space. Closing remarks focus on future perspectives, challenges and shortcomings associated with the use of surrogate models by aircraft industrial
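
    Among the sampling strategies such reviews revisit, Latin Hypercube sampling is the most common way to seed a surrogate model. A minimal self-contained sketch (not tied to any specific implementation discussed in the paper):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin Hypercube sample on the unit cube: each of the n_samples
    equal-probability strata along every dimension holds exactly one point."""
    # One uniform draw per stratum, then an independent shuffle per dimension
    strata = (np.arange(n_samples)[:, None]
              + rng.random((n_samples, n_dims))) / n_samples
    for j in range(n_dims):
        rng.shuffle(strata[:, j])
    return strata

pts = latin_hypercube(10, 3, np.random.default_rng(2))
```

    Unlike plain Monte Carlo sampling, every one-dimensional projection is guaranteed to cover all strata, which is why the design is popular for a priori sampling of a parameter space before fitting a surrogate.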

  18. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in forested ecosystems

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-12-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. Here we explore the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants using a new, simple model of ecosystem C-N cycling and interactions (ACONITE). ACONITE builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C:N, N fixation, and plant C use efficiency) based on the optimization of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state and transient ecosystem stocks and fluxes in three different forest ecosystems types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C:N differed among the three ecosystem types (temperate deciduous plant traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C:N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C:N, while a more recently reported non-linear relationship simulated leaf C:N that compared better to the global trait database than the linear relationship. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C:N can help address challenges simulating these properties in ecosystem and Earth System models. 
Furthermore, the simple approach with emergent properties based on coupled C-N dynamics has

  19. Use of model analysis to analyse Thai students’ attitudes and approaches to physics problem solving

    Science.gov (United States)

    Rakkapao, S.; Prasitpong, S.

    2018-03-01

    This study applies the model analysis technique to explore the distribution of Thai students’ attitudes and approaches to physics problem solving and how those attitudes and approaches change as a result of different experiences in physics learning. We administered the Attitudes and Approaches to Problem Solving (AAPS) survey to over 700 Thai university students from five different levels, namely students entering science, first-year science students, and second-, third- and fourth-year physics students. We found that their inferred mental states were generally mixed. The largest gap between physics experts and all levels of the students was about the role of equations and formulas in physics problem solving, and in views towards difficult problems. Most participants of all levels believed that being able to handle the mathematics is the most important part of physics problem solving. Most students’ views did not change even though they gained experiences in physics learning.

  20. Statistical Analyses and Modeling of the Implementation of Agile Manufacturing Tactics in Industrial Firms

    Directory of Open Access Journals (Sweden)

    Mohammad D. AL-Tahat

    2012-01-01

    Full Text Available This paper provides a review of and introduction to agile manufacturing. Tactics of agile manufacturing are mapped into different production areas (eight latent constructs: manufacturing equipment and technology, processes technology and know-how, quality and productivity improvement, production planning and control, shop floor management, product design and development, supplier relationship management, and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed. Hypotheses are formulated. Feedback from 456 firms is collected using a five-point Likert-scale questionnaire. Statistical analysis is carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA analysis, and relationships between agile components are tested. The results of this study show that agile manufacturing tactics have a positive effect on the overall agility level. This conclusion can be used by manufacturing firms to manage challenges when trying to be agile.

  1. Use of CFD modelling for analysing air parameters in auditorium halls

    Science.gov (United States)

    Cichowicz, Robert

    2017-11-01

    Modelling with numerical methods is currently the most popular way of solving scientific and engineering problems. Computer methods make it possible, for example, to comprehensively describe the conditions in a given room and to determine thermal comfort, which is a complex issue that includes the subjective sensations of the persons in the room. The article presents the results of measurements and numerical computations that enabled the assessment of environmental parameters, taking into consideration microclimate, thermal comfort, air speeds in the occupied zone, and dustiness in auditorium halls. For this purpose, measurements of temperature, relative humidity and dustiness were made with a digital microclimate meter and a laser dust-particle counter. Numerical computations were then performed with the DesignBuilder application, and the results enabled determination of the PMV comfort indicator in selected rooms.

  2. Statistical modelling of measurement errors in gas chromatographic analyses of blood alcohol content.

    Science.gov (United States)

    Moroni, Rossana; Blomstedt, Paul; Wilhelm, Lars; Reinikainen, Tapani; Sippola, Erkki; Corander, Jukka

    2010-10-10

    Headspace gas chromatographic measurements of ethanol content in blood specimens from suspect drunk drivers are routinely carried out in forensic laboratories. In the widely established standard statistical framework, measurement errors in such data are represented by Gaussian distributions for the population of blood specimens at any given level of ethanol content. It is known that the variance of measurement errors increases as a function of the level of ethanol content, and the standard statistical approach addresses this issue by replacing the unknown population variances with estimates derived from a large sample using a linear regression model. Appropriate statistical analysis of the systematic and random components in the measurement errors is necessary in order to guarantee legally sound security corrections reported to the police authority. Here we address this issue by developing a novel statistical approach that takes into account any potential non-linearity in the relationship between the level of ethanol content and the variability of measurement errors. Our method is based on standard non-parametric kernel techniques for density estimation, using a large database of laboratory measurements for blood specimens. Furthermore, we also address the issue of systematic errors in the measurement process with a statistical model that incorporates the sign of the error term in the security correction calculations. Analysis of a set of certified reference material (CRM) blood samples demonstrates the importance of explicitly handling the direction of the systematic errors in establishing the statistical uncertainty about the true level of ethanol content. Use of our statistical framework to aid quality control in the laboratory is also discussed. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
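    As an illustration of the non-parametric idea, the sketch below estimates the error variance as a smooth function of ethanol level with a Gaussian-kernel (Nadaraya-Watson) smoother. The synthetic data-generating law and the bandwidth are assumptions of this example, not the laboratory database used in the study.

```python
import math
import random

def kernel_variance(x_ref, levels, errors, bandwidth=0.1):
    """Nadaraya-Watson estimate of the measurement-error variance at
    ethanol level x_ref, from paired (level, error) observations."""
    num = den = 0.0
    for x, e in zip(levels, errors):
        w = math.exp(-0.5 * ((x - x_ref) / bandwidth) ** 2)  # Gaussian kernel weight
        num += w * e * e
        den += w
    return num / den

# Synthetic data: error SD grows non-linearly with ethanol level (illustration only).
random.seed(1)
levels = [random.uniform(0.2, 3.0) for _ in range(5000)]
errors = [random.gauss(0.0, 0.01 + 0.02 * x ** 1.5) for x in levels]

v_low = kernel_variance(0.5, levels, errors)
v_high = kernel_variance(2.5, levels, errors)  # larger: variance rises with level
```

    Unlike the linear-regression variance estimate, the kernel smoother recovers the non-linear growth of the variance without assuming its functional form.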

  3. Comparative ultrastructural analyses of platelets and fibrin networks using the murine model of asthma.

    Science.gov (United States)

    Pretorius, E; Ekpo, O E; Smit, E

    2007-10-01

    The murine Balb/c asthma model has been used successfully for a number of in vivo immunological applications and for testing novel therapeutics, and it is a reliable, clinically relevant facsimile of the human disease. Here we investigate whether this model can be used to study other components of the human body, e.g. ultrastructure. In particular, we investigate the effect of the phytomedicine Euphorbia hirta (used to treat asthma), on the ultrastructure of fibrin as well as platelets, cellular structures that both play an important role in the coagulation process. Hydrocortisone is used as positive control. Ultrastructure of the fibrin networks and platelets of control mice were compared to mice that were asthmatic, treated with two concentrations of hydrocortisone and one concentration of the plant material. Results indicate control mice possess major, thick fibers and minor thin fibers as well as tight round platelet aggregates with typical pseudopodia formation. Minor fibers of asthmatic mice have a netlike appearance covering the major fibers, while the platelets seem to form loosely connected, granular aggregates. Both concentrations of hydrocortisone make the fibrin more fragile and that platelet morphology changes form a tight platelet aggregate to a more granular aggregate not closely fused to each other. We conclude that E. hirta does not impact on the fragility of the fibrin and that it prevents the minor fibers to form the dense netlike layer over the major fibers, as is seen in untreated asthmatic mice. This ultrastructural morphology might give us better insight into asthma and the possible new treatment regimes.

  4. On Deriving Requirements for the Surface Mass Balance forcing of a Greenland Ice Sheet Model using Uncertainty Analyses

    Science.gov (United States)

    Schlegel, N.; Larour, E. Y.; Box, J. E.

    2015-12-01

    During July of 2012, the percentage of the Greenland surface exposed to melt was the largest in recorded history. And, even though evidence of increased melt rates had been captured by remote sensing observations throughout the last decade, this particular event took the community by surprise. How Greenland ice flow will respond to such an event or to increased frequencies of extreme melt events in the future is unclear, as it requires detailed comprehension of Greenland surface climate and the ice sheet's sensitivity to associated uncertainties. With established uncertainty quantification (UQ) tools embedded within the Ice Sheet System Model (ISSM), we conduct decadal-scale forward modeling experiments to 1) quantify the spatial resolution needed to effectively force surface mass balance (SMB) in various regions of the ice sheet and 2) determine the dynamic response of Greenland outlet glaciers to variations in SMB. First, we perform sensitivity analyses to determine how perturbations in SMB affect model output; results allow us to investigate the locations where variations most significantly affect ice flow, and on what spatial scales. Next, we apply Monte-Carlo style sampling analyses to determine how errors in SMB propagate through the model as uncertainties in estimates of Greenland ice discharge and regional mass balance. This work is performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Cryosphere Program.
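    The Monte-Carlo-style error propagation described above can be sketched on a toy linear response. The regional sensitivity weights and the SMB error magnitude below are hypothetical placeholders, not ISSM quantities.

```python
import random
import statistics

def toy_discharge(smb_anomaly, sensitivity):
    """Toy linear response: ice-discharge anomaly as a weighted sum of
    regional SMB anomalies (weights are hypothetical)."""
    return sum(s * smb_anomaly[r] for r, s in sensitivity.items())

sensitivity = {"northwest": 0.8, "southeast": 1.2, "interior": 0.3}

random.seed(0)
samples = []
for _ in range(2000):
    # Sample an SMB error field: independent N(0, 0.1) perturbation per region.
    perturbed = {r: random.gauss(0.0, 0.1) for r in sensitivity}
    samples.append(toy_discharge(perturbed, sensitivity))

# Spread of the output distribution = uncertainty induced by SMB errors.
spread = statistics.stdev(samples)
```

    In the real experiments the forward model is nonlinear and the samples are full ISSM runs, but the sampling-then-spread logic is the same.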

  5. Analyses and optimization of Lee propagation model for LoRa 868 MHz network deployments in urban areas

    Directory of Open Access Journals (Sweden)

    Dobrilović Dalibor

    2017-01-01

    Full Text Available In recent years, fast ICT expansion and the rapid appearance of new technologies have raised the importance of fast and accurate planning and deployment of emerging communication technologies, especially wireless ones. This paper analyses the possible use of the Lee propagation model for the planning, design and management of networks based on LoRa 868 MHz technology. LoRa is a wireless technology that can be deployed in various Internet of Things and Smart City scenarios in urban areas. The analyses are based on comparisons of field measurements with model calculations. Besides analysing the usability of the Lee propagation model, possible optimization of the model is discussed as well. The research results can be used for accurate design, planning and preparation of high-performance wireless resource management for various Internet of Things and Smart City applications in urban areas based on LoRa or similar wireless technologies. The equipment used for the measurements is based on open-source hardware.
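    At its core the Lee model is a log-distance law with an environment-specific intercept and slope plus correction factors. The sketch below uses illustrative values for the 1 km intercept L0 and the slope gamma; they are assumptions for the example, not coefficients calibrated against the paper's urban measurements.

```python
import math

def lee_path_loss(d_km, L0=89.0, gamma=43.5, f0_db=0.0):
    """Median path loss (dB) at distance d_km in the Lee log-distance form:
    L = L0 + gamma * log10(d / d0) - F0, with reference distance d0 = 1 km.
    L0 (intercept at 1 km), gamma (slope in dB/decade) and the correction
    factor F0 are environment-specific; the defaults here are illustrative."""
    return L0 + gamma * math.log10(d_km) - f0_db

loss_1km = lee_path_loss(1.0)    # at the reference distance, loss equals L0
loss_10km = lee_path_loss(10.0)  # one decade further adds gamma dB
```

    Optimizing the model against field measurements, as the paper does, amounts to fitting L0, gamma and the correction factors to the measured received-power samples.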

  6. Parametric analyses of DEMO Divertor using two dimensional transient thermal hydraulic modelling

    Science.gov (United States)

    Domalapally, Phani; Di Caro, Marco

    2017-11-01

    Among the options considered for cooling the plasma-facing components of the DEMO reactor, water cooling is a conservative option because of its high heat-removal capability. In this work a two-dimensional transient thermal hydraulic code is developed to support the design of the divertor for the projected DEMO reactor with water as a coolant. The mathematical model accounts for transient 2D heat conduction in the divertor section. Temperature-dependent properties are used for more accurate analysis. Correlations for single-phase forced convection, partially developed subcooled nucleate boiling, fully developed subcooled nucleate boiling and film boiling are used to calculate the heat transfer coefficients on the channel side considering the swirl flow, wherein different correlations found in the literature are compared against each other. A correlation for the critical heat flux is used to estimate its limit for given flow conditions. This paper then investigates the results of the parametric analysis performed, whereby flow velocity, diameter of the coolant channel, thickness of the coolant pipe, thickness of the armor material, inlet temperature and operating pressure affect the behavior of the divertor under steady or transient heat fluxes. This code will help in understanding the effect of these basic parameters on the behavior of the divertor, to achieve a better design from a thermal hydraulic point of view.
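    For the single-phase forced-convection regime, one of the standard correlations that such codes compare is the Dittus-Boelter form, Nu = 0.023 Re^0.8 Pr^n. A minimal sketch follows; the flow conditions are illustrative water-like values, not DEMO design parameters.

```python
def dittus_boelter_h(re, pr, k_fluid, d_hyd, heating=True):
    """Single-phase forced-convection heat transfer coefficient h (W/m^2-K)
    from the Dittus-Boelter correlation: Nu = 0.023 Re^0.8 Pr^n,
    with n = 0.4 for heating and n = 0.3 for cooling."""
    n = 0.4 if heating else 0.3
    nu = 0.023 * re ** 0.8 * pr ** n  # Nusselt number
    return nu * k_fluid / d_hyd       # h = Nu * k / D

# Illustrative conditions: Re = 1e5, Pr = 1.5, k = 0.6 W/m-K, D = 8 mm.
h = dittus_boelter_h(re=1.0e5, pr=1.5, k_fluid=0.6, d_hyd=0.008)
```

    Subcooled-boiling and film-boiling regimes require separate correlations, which is exactly why the code switches between them based on wall superheat and heat flux.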

  7. Modelling and Analysing Deadlock in Flexible Manufacturing System using Timed Petri Net

    Directory of Open Access Journals (Sweden)

    Assem Hatem Taha

    2017-03-01

    Full Text Available A flexible manufacturing system (FMS) has several advantages compared to conventional systems, such as higher machine utilization, higher efficiency, lower inventory, and shorter production time. On the other hand, an FMS is expensive and complicated. One of the main problems that may arise is deadlock. A deadlock occurs when one or more operations are unable to complete their tasks because they are waiting for resources held by other processes. This may be caused by inappropriate sharing of resources or improper resource-allocation logic, given the complexity of assigning shared resources to different tasks efficiently. One of the most effective tools to model and detect deadlocks is the Petri net. In this research, Matlab has been used to detect deadlock in two parallel lines with one shared machine. The analysis shows that deadlock exists at the transition with high utilization and the place with high waiting time.
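    A minimal reachability-based deadlock check of the kind such tools perform can be sketched in a few lines. This is a generic illustration on a classic two-jobs/two-resources net, not the authors' Matlab analysis or their timed Petri net.

```python
def enabled(marking, pre):
    """Transitions whose input places all hold enough tokens."""
    return [t for t, need in enumerate(pre)
            if all(marking[p] >= need[p] for p in range(len(marking)))]

def fire(marking, t, pre, post):
    """Fire transition t: consume pre[t] tokens, produce post[t] tokens."""
    return tuple(m - pre[t][p] + post[t][p] for p, m in enumerate(marking))

def find_deadlocks(m0, pre, post, limit=10000):
    """Explore the reachability graph from marking m0; a deadlock is a
    reachable marking that enables no transition."""
    seen, frontier, deadlocks = {m0}, [m0], []
    while frontier and len(seen) < limit:
        m = frontier.pop()
        ts = enabled(m, pre)
        if not ts:
            deadlocks.append(m)
        for t in ts:
            m2 = fire(m, t, pre, post)
            if m2 not in seen:
                seen.add(m2)
                frontier.append(m2)
    return deadlocks

# Two jobs sharing two resources.
# Places: (r1 free, r2 free, A idle, A holds r1, B idle, B holds r2)
pre  = [(1, 0, 1, 0, 0, 0),   # A acquires r1
        (0, 1, 0, 1, 0, 0),   # A acquires r2, finishes, releases both
        (0, 1, 0, 0, 1, 0),   # B acquires r2
        (1, 0, 0, 0, 0, 1)]   # B acquires r1, finishes, releases both
post = [(0, 0, 0, 1, 0, 0),
        (1, 1, 1, 0, 0, 0),
        (0, 0, 0, 0, 0, 1),
        (1, 1, 0, 0, 1, 0)]
m0 = (1, 1, 1, 0, 1, 0)

deadlocks = find_deadlocks(m0, pre, post)
# The only deadlock: each job holds one resource and waits for the other.
```

    Real FMS nets are far larger, so practical tools prune the state space or use structural (siphon-based) conditions, but the definition of deadlock being checked is the same.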

  8. Alpins and Thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the Thibos APVratio and the Alpins percentage of success (%Success) was found (Spearman's ρ=-0.93); the linear regression is given by the following equation: %Success = (1 - APVratio) x 100. CONCLUSION: The linear regression found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
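    The reported regression can be illustrated numerically. The J0/J45 cross-cylinder components follow the standard Thibos definitions; the choice of sqrt(J0^2 + J45^2) as the astigmatic power-vector magnitude and the sample refractions are assumptions of this sketch, not data from the study.

```python
import math

def astig_power_vector(cyl, axis_deg):
    """Thibos cross-cylinder components of a refraction:
    J0 = -(C/2) * cos(2*axis), J45 = -(C/2) * sin(2*axis).
    Returns sqrt(J0^2 + J45^2) as the astigmatic magnitude (one common choice)."""
    a = math.radians(axis_deg)
    j0 = -(cyl / 2.0) * math.cos(2 * a)
    j45 = -(cyl / 2.0) * math.sin(2 * a)
    return math.hypot(j0, j45)

def percent_success(apv_pre, apv_post):
    """Alpins-style %Success inferred from the regression reported in the
    abstract: %Success = (1 - APVratio) * 100."""
    return (1.0 - apv_post / apv_pre) * 100.0

pre = astig_power_vector(cyl=-2.0, axis_deg=180)   # 1.00 D astigmatic magnitude
post = astig_power_vector(cyl=-0.5, axis_deg=180)  # 0.25 D residual
success = percent_success(pre, post)               # 75% by the fitted regression
```

    The appeal of the regression is exactly this: a single ratio of pre- and postoperative power vectors maps linearly onto the Alpins success index.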

  9. Modelling and optimization of combined cycle power plant based on exergoeconomic and environmental analyses

    International Nuclear Information System (INIS)

    Ganjehkaviri, A.; Mohd Jaafar, M.N.; Ahmadi, P.; Barzegaravval, H.

    2014-01-01

    This research paper presents a comprehensive thermodynamic modelling study of a combined cycle power plant (CCPP). The effects of economic strategies and design parameters on plant optimization are also studied. An exergoeconomic analysis is conducted in order to determine the cost of electricity and the cost of exergy destruction. In addition, a comprehensive optimization study is performed to determine the optimal design parameters of the power plant. Next, the effects of variations in economic parameters on the sustainability, carbon dioxide emissions and fuel consumption of the plant are investigated and presented for a typical combined cycle power plant. Changes in economic parameters shift the balance between the cash flows and fixed costs of the plant at the optimum point. Moreover, economic strategies strongly limit the maximum achievable reduction in carbon emissions and fuel consumption. The results showed that using the optimum values increases the exergy efficiency by about 6%, while CO2 emissions decrease by 5.63%. However, the variation in cost was less than 1%, because a cost constraint was imposed. In addition, a sensitivity analysis of the optimization results with respect to two important parameters is presented and discussed.

  10. Microarray and bioinformatic analyses suggest models for carbon metabolism in the autotroph Acidithiobacillus ferrooxidans

    Energy Technology Data Exchange (ETDEWEB)

    C. Appia-ayme; R. Quatrini; Y. Denis; F. Denizot; S. Silver; F. Roberto; F. Veloso; J. Valdes; J. P. Cardenas; M. Esparza; O. Orellana; E. Jedlicki; V. Bonnefoy; D. Holmes

    2006-09-01

    Acidithiobacillus ferrooxidans is a chemolithoautotrophic bacterium that uses iron or sulfur as an energy and electron source. Bioinformatic analysis was used to identify putative genes and potential metabolic pathways involved in CO2 fixation, 2P-glycolate detoxification, carboxysome formation and glycogen utilization in At. ferrooxidans. Microarray transcript profiling was carried out to compare the relative expression of the predicted genes of these pathways when the microorganism was grown in the presence of iron versus sulfur. Several gene expression patterns were confirmed by real-time PCR. Genes for each of the predicted pathways were found to be organized into discrete clusters. Clusters exhibited differential gene expression depending on the presence of iron or sulfur in the medium. Concordance of gene expression within each cluster suggested that they are operons. Most notably, clusters of genes predicted to be involved in CO2 fixation, carboxysome formation, 2P-glycolate detoxification and glycogen biosynthesis were up-regulated in sulfur medium, whereas genes involved in glycogen utilization were preferentially expressed in iron medium. These results can be explained in terms of models of gene regulation that suggest how At. ferrooxidans can adjust its central carbon management to respond to changing environmental conditions.

  11. Analysing hydro-mechanical behaviour of reinforced slopes through centrifuge modelling

    Science.gov (United States)

    Veenhof, Rick; Wu, Wei

    2017-04-01

    Every year, slope instability causes casualties and damage to property and the environment. The behaviour of slopes during and after these kinds of events is complex and depends on meteorological conditions, slope geometry, hydro-mechanical soil properties, boundary conditions and the initial state of the soils. This study describes the effects of adding reinforcement, consisting of randomly distributed polyolefin monofilament fibres or Ryegrass (Lolium), on the behaviour of medium-fine sand in loose and medium dense conditions. Direct shear tests were performed on sand specimens with different void ratios, water contents and fibre or root densities, respectively. To simulate the stress state of real-scale field situations, centrifuge model tests were conducted on sand specimens with different slope angles, thicknesses of the reinforced layer, fibre densities, void ratios and water contents. An increase in peak shear strength is observed in all reinforced cases. Centrifuge tests show that reinforcement extends the period until slope failure. The location of shear band formation and the patch displacement behaviour indicate that the design of slope reinforcement has a significant effect on the failure behaviour. Future research will focus on the effect of plant water uptake on soil cohesion.

  12. Fungal-Induced Deterioration of Mural Paintings: In Situ and Mock-Model Microscopy Analyses.

    Science.gov (United States)

    Unković, Nikola; Grbić, Milica Ljaljević; Stupar, Miloš; Savković, Željko; Jelikić, Aleksa; Stanojević, Dragan; Vukojević, Jelena

    2016-04-01

    Fungal deterioration of frescoes was studied in situ on a selected Serbian church, and on a laboratory model, utilizing standard and newly implemented microscopy techniques. Scanning electron microscopy (SEM) with energy-dispersive X-ray analysis confirmed the limestone components of the plaster. The pigments used were identified as carbon black, green earth, iron oxide, ocher, and an ocher/cinnabar mixture. In situ microscopy, applied via a portable ShuttlePix P-400R microscope, proved very useful for detection of invisible micro-impairments and hidden, symptomless microbial growth. SEM and optical microscopy established that the observed deterioration symptoms, predominantly discoloration and pulverization of painted layers, were due to bacterial filaments and fungal hyphal penetration, and to the formation of a wide range of fungal structures (i.e., melanized hyphae, chlamydospores, microcolonial clusters, Cladosporium-like conidia, and Chaetomium perithecia and ascospores). Year-round monitoring of spontaneous and induced fungal colonization of a "mock painting" under controlled laboratory conditions confirmed the decisive role of the humidity level (70.18±6.91% RH) in efficient colonization of painted surfaces, and demonstrated the increased bioreceptivity of painted surfaces to fungal colonization when plant-based adhesives (ilinocopie, murdent), rather than adhesives of animal origin (bone glue, egg white), are used for pigment sizing.

  13. Modeling and Analysing of Air Filter in Air Intake System in Automobile Engine

    Directory of Open Access Journals (Sweden)

    R. Manikantan

    2013-01-01

    Full Text Available As legislation on the emissions and performance of automobiles becomes more stringent, the expected performance of every subsystem of an internal combustion engine becomes crucial. Nowadays engines are downsized and their power increased, so the demands on the air intake system have grown phenomenally. Hence, an analysis was carried out on a typical air filter fitted into the intake system to determine its flow characteristics. In the present investigation, a CAD model of an existing air filter was created, and CFD analyses were performed for various operating regimes of an internal combustion engine. The numerical results were validated with experimental data. From the postprocessed results, we can see that there is a deficiency in the design of the present filter, as the bottom portion of the filter prevents the upward movement of air. Hence, the intake passage can be rearranged to provide an upward tangential motion, which can enhance the removal of larger dust and soot particles effectively by the inertial action of the air alone.

  14. Analysing movements in investor’s risk aversion using the Heston volatility model

    Directory of Open Access Journals (Sweden)

    Alexie ALUPOAIEI

    2013-03-01

    Full Text Available In this paper we intend to identify and analyse, if present, an "epidemiological" relationship between the forecasts of professional investors and short-term developments in the EUR/RON exchange rate. Even though we do not use a typical epidemiological model of the kind applied in biological research, we investigated the hypothesis that, after the Lehman Brothers crash and the consequent onset of the current financial crisis, the forecasts of professional investors have significant explanatory power over subsequent short-run movements of EUR/RON. How does this mechanism work? First, professional forecasters assess the current macroeconomic, financial and political state, and then elaborate forecasts. Second, based on those forecasts, they take positions in the Romanian exchange market for hedging and/or speculation purposes, positions that incorporate different degrees of uncertainty. In parallel, part of their anticipations are disseminated to the public via media channels. When important movements are observed in the macroeconomic, financial or political fields, the positions of professional investors in the FX derivative market are activated. The current study represents a first step in this direction of analysis for the Romanian case. For the objectives formulated above, different measures of EUR/RON rate volatility are estimated and compared with implied volatilities. In a second stage, co-integration and dynamic-correlation tools are used to investigate the relationship between implied volatility and daily returns of the EUR/RON exchange rate.
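    The Heston stochastic-volatility model named in the title can be sketched with a full-truncation Euler discretization: dS = mu S dt + sqrt(v) S dW1, dv = kappa (theta - v) dt + xi sqrt(v) dW2, with corr(dW1, dW2) = rho. All parameter values below are illustrative, not EUR/RON estimates from the paper.

```python
import math
import random

def heston_paths(s0=4.5, v0=0.02, kappa=2.0, theta=0.02, xi=0.3, rho=-0.5,
                 mu=0.0, dt=1/252, n_steps=252, n_paths=2000, seed=7):
    """Full-truncation Euler simulation of the Heston model; returns the
    terminal values of the simulated price paths."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        s, v = s0, v0
        for _ in range(n_steps):
            z1 = rng.gauss(0, 1)
            z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)  # correlated shock
            vp = max(v, 0.0)  # full truncation: variance floored at zero in the drift/diffusion
            s *= math.exp((mu - 0.5 * vp) * dt + math.sqrt(vp * dt) * z1)
            v += kappa * (theta - vp) * dt + xi * math.sqrt(vp * dt) * z2
        finals.append(s)
    return finals

finals = heston_paths()
mean_final = sum(finals) / len(finals)  # with mu = 0, terminal mean stays near s0
```

    Fitting kappa, theta, xi and rho to exchange-rate data is what yields the model-implied volatilities that the paper compares against realized measures.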

  15. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2014-01-01

    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM. New to the second edition: a new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models; power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...

  16. Computational Modeling of Oxygen Transport in the Microcirculation: From an Experiment-Based Model to Theoretical Analyses

    OpenAIRE

    Lücker, Adrien

    2017-01-01

    Oxygen supply to cells by the cardiovascular system involves multiple physical and chemical processes that aim to satisfy fluctuating metabolic demand. Regulation mechanisms range from increased heart rate to minute adaptations in the microvasculature. The challenges and limitations of experimental studies in vivo make computational models an invaluable complement. In this thesis, oxygen transport from capillaries to tissue is investigated using a new numerical model that is tailored for vali...

  17. Pathophysiologic and transcriptomic analyses of viscerotropic yellow fever in a rhesus macaque model.

    Science.gov (United States)

    Engelmann, Flora; Josset, Laurence; Girke, Thomas; Park, Byung; Barron, Alex; Dewane, Jesse; Hammarlund, Erika; Lewis, Anne; Axthelm, Michael K; Slifka, Mark K; Messaoudi, Ilhem

    2014-01-01

    Infection with yellow fever virus (YFV), an explosively replicating flavivirus, results in viral hemorrhagic disease characterized by cardiovascular shock and multi-organ failure. Unvaccinated populations experience 20 to 50% fatality. Few studies have examined the pathophysiological changes that occur in humans during YFV infection due to the sporadic nature and remote locations of outbreaks. Rhesus macaques are highly susceptible to YFV infection, providing a robust animal model to investigate host-pathogen interactions. In this study, we characterized disease progression as well as alterations in immune system homeostasis, cytokine production and gene expression in rhesus macaques infected with the virulent YFV strain DakH1279 (YFV-DakH1279). Following infection, YFV-DakH1279 replicated to high titers resulting in viscerotropic disease with ∼72% mortality. Data presented in this manuscript demonstrate for the first time that lethal YFV infection results in profound lymphopenia that precedes the hallmark changes in liver enzymes and that although tissue damage was noted in liver, kidneys, and lymphoid tissues, viral antigen was only detected in the liver. These observations suggest that additional tissue damage could be due to indirect effects of viral replication. Indeed, circulating levels of several cytokines peaked shortly before euthanasia. Our study also includes the first description of YFV-DakH1279-induced changes in gene expression within peripheral blood mononuclear cells 3 days post-infection prior to any clinical signs. These data show that infection with wild type YFV-DakH1279 or live-attenuated vaccine strain YFV-17D, resulted in 765 and 46 differentially expressed genes (DEGs), respectively. DEGs detected after YFV-17D infection were mostly associated with innate immunity, whereas YFV-DakH1279 infection resulted in dysregulation of genes associated with the development of immune response, ion metabolism, and apoptosis. 
Therefore, WT-YFV infection

  18. Model-based performance and energy analyses of reverse osmosis to reuse wastewater in a PVC production site.

    Science.gov (United States)

    Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie

    2017-11-10

    A pilot-scale reverse osmosis (RO) unit downstream of a membrane bioreactor (MBR) was developed for desalination to reuse wastewater at a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe the rejection of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na+ and Cl-) and several trace ions (Ca2+, Mg2+, K+ and SO42-). A universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. The membrane performance was then evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model that accounts for the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
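    The SDFM combination can be sketched directly: solution diffusion gives the real rejection Rr = Jv / (Jv + B), and film theory maps it to the observed rejection via (1 - Ro)/Ro = ((1 - Rr)/Rr) * exp(Jv/K). The coefficient values below are illustrative orders of magnitude, not the fitted pilot-plant parameters.

```python
import math

def observed_rejection(jv, b, k):
    """Observed ion rejection from the solution-diffusion-film model.
    jv: permeate (water) flux, b: ion permeability coefficient,
    k: mass transfer coefficient in the concentration-polarization film
    (all in consistent units, e.g. m/s)."""
    rr = jv / (jv + b)                         # real rejection (solution diffusion)
    ratio = (1.0 - rr) / rr * math.exp(jv / k)  # film-theory polarization correction
    return 1.0 / (1.0 + ratio)

# Illustrative values: Jv = 1e-5 m/s, B = 1e-7 m/s, K = 2e-5 m/s.
ro_na = observed_rejection(jv=1.0e-5, b=1.0e-7, k=2.0e-5)
```

    Because exp(Jv/K) > 1, concentration polarization always makes the observed rejection lower than the real rejection, which is why K must be estimated alongside B.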

  19. An assessment of the wind re-analyses in the modelling of an extreme sea state in the Black Sea

    Science.gov (United States)

    Akpinar, Adem; Ponce de León, S.

    2016-03-01

    This study aims at an assessment of wind re-analyses for modelling storms in the Black Sea. A wind-wave modelling system (Simulating WAves Nearshore, SWAN) is applied to the Black Sea basin and calibrated with buoy data for three recent re-analysis wind sources, namely the European Centre for Medium-Range Weather Forecasts Reanalysis-Interim (ERA-Interim), the Climate Forecast System Reanalysis (CFSR), and the Modern Era Retrospective Analysis for Research and Applications (MERRA), during an extreme wave event that occurred in the north-eastern part of the Black Sea. The SWAN model simulations are carried out with default and tuned settings for the deep-water source terms, especially whitecapping. The performance of the best model configurations based on calibration with buoy data is discussed using data from the JASON2, TOPEX-Poseidon, ENVISAT and GFO satellites. The SWAN model calibration shows that the best configuration is obtained with the Janssen and Komen formulations, with a whitecapping coefficient (Cds) equal to 1.8e-5 for wave generation by wind and whitecapping dissipation, using ERA-Interim. In addition, from collocating SWAN results against the satellite records, the best configuration is determined to be SWAN using the CFSR winds. The numerical results thus show that the accuracy of a wave forecast depends on the quality of the wind field and on the ability of the SWAN model to simulate waves under extreme wind conditions in fetch-limited situations.

  20. Gamma-ray pulsar physics: gap-model populations and light-curve analyses in the Fermi era

    International Nuclear Information System (INIS)

    Pierbattista, M.

    2010-01-01

    This thesis research focuses on the study of the young and energetic isolated ordinary pulsar population detected by the Fermi gamma-ray space telescope. We compared the expectations of four emission models with the LAT data. We found that all the models fail to reproduce the LAT detections, in particular the large number of high-Ė objects observed. This inconsistency is not model dependent. A discrepancy between the observed and predicted ratios of radio-loud to radio-quiet objects was also found. The L_γ ∝ Ė^0.5 relation is robustly confirmed by all the assumed models, with particular agreement in the slot gap (SG) case. On luminosity grounds, the intermediate-altitude emission of the two-pole caustic SG model is favoured. The beaming factor f_Ω shows an Ė dependency that is slightly visible in the SG case. Estimates of the pulsar orientations have been obtained to explain the simultaneous gamma-ray and radio light curves. By analysing the solutions we found a relation between the observed energy cutoff and the width of the emission slot gap. This relation had been theoretically predicted. A possible alignment of the magnetic obliquity α with time is rejected, for all the models, on timescales of the order of 10^6 years. The light-curve morphology study shows that outer magnetosphere gap (OG) emission is favoured to explain the observed radio-gamma lag. The light-curve moment studies (symmetry and sharpness), on the contrary, favour two-pole caustic SG emission. All the model predictions suggest a different magnetic field layout, with hybrid two-pole caustic and intermediate-altitude emission, to explain both the pulsar luminosity and the light-curve morphology. The low-magnetosphere emission mechanism of the polar cap model is systematically rejected by all the tests done. (author) [fr

  1. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MRI have become more and more useful. The finite element method (FEM) is a widely used technique to estimate the mechanical behavior of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to the development of both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by some authors. In this study, the geometries (used in modeling) of a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complexes. Except for the fused spaces between the adjacent surfaces of the phalanges, the metatarsals, cuneiforms, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were developed independently to form the foot and ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other.
There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones, in

  2. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    Science.gov (United States)

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    A low falling number and price discounting when grain is downgraded in class are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer quality bread products. To effectively breed for low LMAA, it is necessary to understand what genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach and stability analysis was presented using an AMMI bi-plot in R. All estimated variance components and their proportions to the total phenotypic variance were highly significant for both sets of genotypes, which was validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD adapted cultivars (53%) compared to that in IC (49%). Significant genetic effects and stability analyses showed some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from IC, and 'Alsen', 'Traverse' and 'Forefront' from SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while in contrast, 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.
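Broad-sense heritability of the kind reported above (53% vs. 49%) is computed from estimated variance components. The sketch below uses a common entry-mean form of the estimator with made-up variance components; the study's exact mixed-model estimator may differ:

```python
def broad_sense_h2(var_g, var_ge, var_e, n_env, n_rep):
    """Broad-sense heritability on an entry-mean basis:
    H2 = Vg / (Vg + Vge/E + Ve/(E*R)) -- a common textbook form,
    not necessarily the exact estimator used in the study."""
    return var_g / (var_g + var_ge / n_env + var_e / (n_env * n_rep))

# Hypothetical variance components over 5 environments, 2 replicates:
h2 = broad_sense_h2(var_g=10.0, var_ge=6.0, var_e=12.0, n_env=5, n_rep=2)
```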

  3. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    Full Text Available A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 - 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skills. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical counter current. A review and comparison with other models in the literature of (i) are also given.

  4. A regional tidal/subtidal circulation model of the southeastern Bering Sea: development, sensitivity analyses and hindcasting

    Science.gov (United States)

    Hermann, Albert J.; Stabeno, Phyllis J.; Haidvogel, Dale B.; Musgrave, David L.

    2002-12-01

    A regional eddy-resolving primitive equation circulation model was used to simulate circulation on the southeastern Bering Sea (SEBS) shelf and basin. This model resolves the dominant observed mean currents, eddies and meanders in the region, and simultaneously includes both tidal and subtidal dynamics. Circulation, temperature, and salinity fields for years 1995 and 1997 were hindcast, using daily wind and buoyancy flux estimates, and tidal forcing derived from a global model. This paper describes the development of the regional model, a comparison of model results with available Eulerian and Lagrangian data, a comparison of results between the two hindcast years, and a sensitivity analysis. Based on these hindcasts and sensitivity analyses, we suggest the following: (1) The Bering Slope Current is a primary source of large ( ˜100 km diameter) eddies in the SEBS basin. Smaller meanders are also formed along the 100 m isobath on the southeastern shelf, and along the 200-m isobath near the shelf break. (2) There is substantial interannual variability in the statistics of eddies within the basin, driven by variability in the strength of the ANSC. (3) The mean flow on the shelf is not strongly sensitive to changes in the imposed strength of the ANSC; rather, it is strongly sensitive to the local wind forcing. (4) Vertical mixing in the SEBS is strongly affected by both tidal and subtidal dynamics. Strongest mixing in the SEBS may in fact occur between the 100- and 400-m isobaths, near the Pribilof Islands, and in Unimak Pass.

  5. One size does not fit all: On how Markov model order dictates performance of genomic sequence analyses

    Science.gov (United States)

    Narlikar, Leelavati; Mehta, Nidhi; Galande, Sanjeev; Arjunwadkar, Mihir

    2013-01-01

    The structural simplicity and ability to capture serial correlations make Markov models a popular modeling choice in several genomic analyses, such as identification of motifs, genes and regulatory elements. A critical, yet relatively unexplored, issue is the determination of the order of the Markov model. Most biological applications use a predetermined order for all data sets indiscriminately. Here, we show the vast variation in the performance of such applications with the order. To identify the ‘optimal’ order, we investigated two model selection criteria: Akaike information criterion and Bayesian information criterion (BIC). The BIC optimal order delivers the best performance for mammalian phylogeny reconstruction and motif discovery. Importantly, this order is different from orders typically used by many tools, suggesting that a simple additional step determining this order can significantly improve results. Further, we describe a novel classification approach based on BIC optimal Markov models to predict functionality of tissue-specific promoters. Our classifier discriminates between promoters active across 12 different tissues with remarkable accuracy, yielding 3 times the precision expected by chance. Application to the metagenomics problem of identifying the taxon from a short DNA fragment yields accuracies at least as high as the more complex mainstream methodologies, while retaining conceptual and computational simplicity. PMID:23267010
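The BIC-based order selection described above can be sketched directly: fit each candidate order by maximum likelihood and keep the order minimizing BIC = -2 ln L + p ln n. A toy illustration, not the authors' implementation:

```python
import math
from collections import defaultdict

def bic_markov(seq, k, alphabet="ACGT"):
    """BIC of a k-th order Markov model fitted to seq by maximum likelihood."""
    ctx_counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(seq)):
        ctx_counts[seq[i - k:i]][seq[i]] += 1
    loglik = 0.0
    for nxt in ctx_counts.values():
        total = sum(nxt.values())
        for c in nxt.values():
            loglik += c * math.log(c / total)
    n_params = (len(alphabet) ** k) * (len(alphabet) - 1)  # free probabilities
    n = len(seq) - k                                       # number of transitions
    return -2.0 * loglik + n_params * math.log(n)

def best_order(seq, max_k=3):
    """Order with the minimum BIC among 0..max_k."""
    return min(range(max_k + 1), key=lambda k: bic_markov(seq, k))

# A strictly periodic sequence is perfectly explained by an order-1 model:
order = best_order("ACGT" * 100)
```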

  6. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    Directory of Open Access Journals (Sweden)

    Ilona Naujokaitis-Lewis

    2016-07-01

    Full Text Available Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust.
Our results underscore the importance of considering habitat
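A global sensitivity analysis in the spirit of GRIP 2.0 varies all inputs simultaneously and attributes output variation to each. The sketch below uses a crude correlation-based score on a toy extinction-risk surrogate; GRIP 2.0 itself is far more elaborate (spatially explicit habitat maps, interaction effects), so treat this only as an illustration of the idea:

```python
import random

def global_sensitivity(model, bounds, n=2000, seed=1):
    """Crude global sensitivity: sample all parameters at once from uniform
    ranges and score each by the absolute correlation between its samples
    and the model output. A simplification of variance-based methods."""
    rng = random.Random(seed)
    names = list(bounds)
    samples = {p: [rng.uniform(*bounds[p]) for _ in range(n)] for p in names}
    ys = [model({p: samples[p][i] for p in names}) for i in range(n)]
    my = sum(ys) / n

    def corr(xs):
        mx = sum(xs) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)

    return {p: abs(corr(samples[p])) for p in names}

# Toy extinction-risk surrogate in which habitat amount dominates survival:
sens = global_sensitivity(
    lambda p: 10 * p["habitat"] + p["survival"],
    {"habitat": (0, 1), "survival": (0, 1)})
```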

  7. Data Assimilation Tools for CO2 Reservoir Model Development – A Review of Key Data Types, Analyses, and Selected Software

    Energy Technology Data Exchange (ETDEWEB)

    Rockhold, Mark L.; Sullivan, E. C.; Murray, Christopher J.; Last, George V.; Black, Gary D.

    2009-09-30

    Pacific Northwest National Laboratory (PNNL) has embarked on an initiative to develop world-class capabilities for performing experimental and computational analyses associated with geologic sequestration of carbon dioxide. The ultimate goal of this initiative is to provide science-based solutions for helping to mitigate the adverse effects of greenhouse gas emissions. This Laboratory-Directed Research and Development (LDRD) initiative currently has two primary focus areas—advanced experimental methods and computational analysis. The experimental methods focus area involves the development of new experimental capabilities, supported in part by the U.S. Department of Energy’s (DOE) Environmental Molecular Science Laboratory (EMSL) housed at PNNL, for quantifying mineral reaction kinetics with CO2 under high temperature and pressure (supercritical) conditions. The computational analysis focus area involves numerical simulation of coupled, multi-scale processes associated with CO2 sequestration in geologic media, and the development of software to facilitate building and parameterizing conceptual and numerical models of subsurface reservoirs that represent geologic repositories for injected CO2. This report describes work in support of the computational analysis focus area. The computational analysis focus area currently consists of several collaborative research projects. These are all geared towards the development and application of conceptual and numerical models for geologic sequestration of CO2. The software being developed for this focus area is referred to as the Geologic Sequestration Software Suite or GS3. A wiki-based software framework is being developed to support GS3. This report summarizes work performed in FY09 on one of the LDRD projects in the computational analysis focus area. The title of this project is Data Assimilation Tools for CO2 Reservoir Model Development. Some key objectives of this project in FY09 were to assess the current state

  8. A systematic review of approaches to modelling lower limb muscle forces during gait: Applicability to clinical gait analyses.

    Science.gov (United States)

    Trinler, Ursula; Hollands, Kristen; Jones, Richard; Baker, Richard

    2018-03-01

    Computational methods to estimate muscle forces during walking are becoming more common in biomechanical research but not yet in clinical gait analysis. This systematic review aims to identify the current state-of-the-art, examine the differences between approaches, and consider the applicability of the current approaches in clinical gait analysis. A systematic database search identified studies including estimated muscle force profiles of the lower limb during healthy walking. These were rated for quality and the muscle force profiles digitised for comparison. From 13,449 identified studies, 22 were finally included which used four modelling approaches: static optimisation, enhanced static optimisation, forward dynamics and EMG-driven. These used a range of different musculoskeletal models, muscle-tendon characteristics and cost functions. There is visually broad agreement between and within approaches about when muscles are active throughout the gait cycle. There remain considerable differences (CV 7%-151%, range of timing of peak forces in gait cycle 1%-31%) in patterns and magnitudes of force between and within modelling approaches. The main source of this variability is not clear. Different musculoskeletal models, experimental protocols, and modelling approaches will clearly have an effect, as will the variability of joint kinetics between healthy individuals. Limited validation of modelling approaches, particularly at the level of individual participants, makes it difficult to conclude if any of the approaches give consistently better estimates than others. While muscle force modelling has clear potential to enhance clinical gait analyses, future research is needed to improve validation, accuracy and feasibility of implementation in clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.
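The between-approach variability quoted above (CV 7%-151%) is a coefficient of variation of peak muscle force estimates across modelling approaches. A minimal sketch with hypothetical peak forces:

```python
def coefficient_of_variation(values):
    """CV (%) = 100 * std / mean, used to compare peak muscle force
    estimates across modelling approaches (population standard deviation)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return 100.0 * var ** 0.5 / mean

# Hypothetical peak soleus forces (N) from four modelling approaches:
cv = coefficient_of_variation([2900.0, 3400.0, 2600.0, 4100.0])
```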

  9. Using plant growth modeling to analyse C source-sink relations under drought: inter and intra specific comparison

    Directory of Open Access Journals (Sweden)

    Benoit ePallas

    2013-11-01

    Full Text Available The ability to assimilate C and allocate non-structural carbohydrates (NSC) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink than source limited, as organ expansion and appearance rates are affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSC are stored rather than used for growth due to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help in analyzing their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice and oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as well as the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends both on drought type and species and that modeling is a great opportunity to analyse such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed.

  10. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Directory of Open Access Journals (Sweden)

    Catherine L Worth

    Full Text Available BACKGROUND: Up until recently the only available experimental (high-resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and, without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. METHODOLOGY: We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. CONCLUSIONS: The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying
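One ingredient of such a template-selection workflow is target-template sequence comparison. A toy sketch ranking candidate templates by percent identity over an alignment; the sequences and template names below are invented, and the paper's workflow weighs many more structural features than identity alone:

```python
def percent_identity(a, b):
    """Percent identity of two pre-aligned, equal-length sequences
    ('-' marks gaps); columns with a gap in either sequence are skipped."""
    matches = cols = 0
    for x, y in zip(a, b):
        if x == "-" or y == "-":
            continue
        cols += 1
        matches += x == y
    return 100.0 * matches / cols

def best_template(target, templates):
    """Pick the crystal-structure template with the highest identity to the
    target -- a minimal illustration of one selection criterion."""
    return max(templates, key=lambda name: percent_identity(target, templates[name]))

# Hypothetical aligned transmembrane-helix fragments:
tmpl = best_template("LACADLVM",
                     {"rhodopsin": "LACADFIM", "beta2AR": "IACADLVM"})
```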

  11. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Science.gov (United States)

    Worth, Catherine L; Kleinau, Gunnar; Krause, Gerd

    2009-09-16

    Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying suitable templates for GPCR homology modelling that will

  12. Static and free-vibration analyses of dental prosthesis and atherosclerotic human artery by refined finite element models.

    Science.gov (United States)

    Carrera, E; Guarnera, D; Pagani, A

    2018-04-01

    Static and modal responses of representative biomechanical structures are investigated in this paper by employing higher-order theories of structures and finite element approximations. Refined models are implemented in the domain of the Carrera unified formulation (CUF), according to which low- to high-order kinematics can be postulated as arbitrary and, eventually, hierarchical expansions of the generalized displacement unknowns. By using CUF along with the principle of virtual work, the governing equations are expressed in terms of fundamental nuclei of finite element arrays. The fundamental nuclei are invariant of the theory approximation order and can be opportunely employed to implement variable kinematics theories of bio-structures. In this work, static and free-vibration analyses of an atherosclerotic plaque of a human artery and a dental prosthesis are discussed. The results from the proposed methodologies highlight a number of advantages of CUF models with respect to already established theories and commercial software tools. Namely, (i) CUF models can represent correctly the higher-order phenomena related to complex stress/strain field distributions and coupled mode shapes; (ii) bio-structures can be modeled in a component-wise sense by only employing the physical boundaries of the problem domain and without making any geometrical simplification. This latter aspect, in particular, can be currently accomplished only by using three-dimensional analysis, which may be computationally unbearable as complex bio-systems are considered.

  13. A multinomial logit model-Bayesian network hybrid approach for driver injury severity analyses in rear-end crashes.

    Science.gov (United States)

    Chen, Cong; Zhang, Guohui; Tarefder, Rafiqul; Ma, Jianming; Wei, Heng; Guan, Hongzhi

    2015-07-01

    Rear-end crash is one of the most common types of traffic crashes in the U.S. A good understanding of its characteristics and contributing factors is of practical importance. Previously, both multinomial logit models and Bayesian network methods have been used in crash modeling and analysis, although each has its own application restrictions and limitations. In this study, a hybrid approach is developed to combine multinomial logit models and Bayesian network methods for comprehensively analyzing driver injury severities in rear-end crashes based on state-wide crash data collected in New Mexico from 2010 to 2011. A multinomial logit model is developed to investigate and identify significant contributing factors for rear-end crash driver injury severities classified into three categories: no injury, injury, and fatality. Then, the identified significant factors are utilized to establish a Bayesian network to explicitly formulate statistical associations between injury severity outcomes and explanatory attributes, including driver behavior, demographic features, vehicle factors, geometric and environmental characteristics, etc. The test results demonstrate that the proposed hybrid approach performs reasonably well. The Bayesian network inference analyses indicate that factors including truck involvement, inferior lighting conditions, windy weather conditions, the number of vehicles involved, etc. could significantly increase driver injury severities in rear-end crashes. The developed methodology and estimation results provide insights for developing effective countermeasures to reduce rear-end crash injury severities and improve traffic system safety performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
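The multinomial logit component assigns each crash a probability over the three severity classes via a softmax of linear utilities. A minimal sketch with invented coefficients, not the fitted New Mexico estimates:

```python
import math

def mnl_probabilities(x, betas):
    """Multinomial logit probabilities P(y=j|x) = exp(x.b_j) / sum_k exp(x.b_k).
    Coefficients below are illustrative only."""
    utils = {j: sum(xi * bi for xi, bi in zip(x, b)) for j, b in betas.items()}
    m = max(utils.values())                  # shift for numerical stability
    exps = {j: math.exp(u - m) for j, u in utils.items()}
    z = sum(exps.values())
    return {j: e / z for j, e in exps.items()}

# x = [intercept, truck_involved, dark_lighting]; hypothetical coefficients:
p = mnl_probabilities([1.0, 1.0, 1.0],
                      {"no_injury": [0.0, 0.0, 0.0],
                       "injury":    [-1.0, 0.8, 0.5],
                       "fatality":  [-4.0, 1.2, 0.9]})
```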

  14. Comparative analyses reveal potential uses of Brachypodium distachyon as a model for cold stress responses in temperate grasses

    Directory of Open Access Journals (Sweden)

    Li Chuan

    2012-05-01

    Full Text Available Abstract Background Little is known about the potential of Brachypodium distachyon as a model for low temperature stress responses in Pooideae. The ice recrystallization inhibition protein (IRIP) genes, fructosyltransferase (FST) genes, and many C-repeat binding factor (CBF) genes are Pooideae specific and important in low temperature responses. Here we used comparative analyses to study conservation and evolution of these gene families in B. distachyon to better understand its potential as a model species for agriculturally important temperate grasses. Results Brachypodium distachyon contains cold-responsive IRIP genes which have evolved through Brachypodium-specific gene family expansions. A large cold-responsive CBF3 subfamily was identified in B. distachyon, while CBF4 homologs are absent from the genome. No B. distachyon FST gene homologs encode typical core Pooideae FST motifs, and low temperature induced fructan accumulation was dramatically different in B. distachyon compared to core Pooideae species. Conclusions We conclude that B. distachyon can serve as an interesting model for specific molecular mechanisms involved in low temperature responses in core Pooideae species. However, the evolutionary history of key genes involved in low temperature responses has been different in Brachypodium and core Pooideae species. These differences limit the use of B. distachyon as a model for holistic studies relevant for agricultural core Pooideae species.

  15. Assessing models of speciation under different biogeographic scenarios; An empirical study using multi-locus and RNA-seq analyses

    Science.gov (United States)

    Edwards, Taylor; Tollis, Marc; Hsieh, PingHsun; Gutenkunst, Ryan N.; Liu, Zhen; Kusumi, Kenro; Culver, Melanie; Murphy, Robert W.

    2016-01-01

    Evolutionary biology often seeks to decipher the drivers of speciation, and much debate persists over the relative importance of isolation and gene flow in the formation of new species. Genetic studies of closely related species can assess if gene flow was present during speciation, because signatures of past introgression often persist in the genome. We test hypotheses on which mechanisms of speciation drove diversity among three distinct lineages of desert tortoise in the genus Gopherus. These lineages offer a powerful system to study speciation, because different biogeographic patterns (physical vs. ecological segregation) are observed at opposing ends of their distributions. We use 82 samples collected from 38 sites, representing the entire species' distribution and generate sequence data for mtDNA and four nuclear loci. A multilocus phylogenetic analysis in *BEAST estimates the species tree. RNA-seq data yield 20,126 synonymous variants from 7665 contigs from two individuals of each of the three lineages. Analyses of these data using the demographic inference package ∂a∂i serve to test the null hypothesis of no gene flow during divergence. The best-fit demographic model for the three taxa is concordant with the *BEAST species tree, and the ∂a∂i analysis does not indicate gene flow among any of the three lineages during their divergence. These analyses suggest that divergence among the lineages occurred in the absence of gene flow and in this scenario the genetic signature of ecological isolation (parapatric model) cannot be differentiated from geographic isolation (allopatric model).

  16. Systems genetics of obesity in an F2 pig model by genome-wide association, genetic network and pathway analyses

    Directory of Open Access Journals (Sweden)

    Lisette J. A. Kogelman

    2014-07-01

    Obesity is a complex condition with exponentially rising prevalence rates worldwide, linked with severe diseases such as type 2 diabetes. Its economic and welfare consequences have led to a raised interest in a better understanding of its biological and genetic background. To date, whole-genome investigations focusing on single genetic variants have achieved limited success, and the importance of including genetic interactions is becoming evident. Here, the aim was to perform an integrative genomic analysis in an F2 pig resource population that was constructed to maximize genetic variation of obesity-related phenotypes and genotyped using the 60K SNP chip. First, a genome-wide association (GWA) analysis was performed on the Obesity Index to locate candidate genomic regions, which were further validated using combined linkage disequilibrium linkage analysis and investigated by evaluation of haplotype blocks. We built Weighted Interaction SNP Hub (WISH) and differentially wired (DW) networks using genotypic correlations amongst obesity-associated SNPs resulting from the GWA analysis. GWA results and SNP modules detected by the WISH and DW analyses were further investigated by functional enrichment analyses. The functional annotation of SNPs revealed several genes associated with obesity, e.g. NPC2 and OR4D10. Moreover, gene enrichment analyses identified several significantly associated pathways, over and above the GWA study results, that may influence obesity and obesity-related diseases, e.g. metabolic processes. WISH networks based on genotypic correlations allowed further identification of various gene ontology terms and pathways related to obesity and related traits that were not identified by the GWA study. In conclusion, this is the first study to develop a (genetic) obesity index and employ systems genetics in a porcine model to provide important insights into the complex genetic architecture associated with obesity and many biological pathways.
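    The core of a single-SNP GWA scan is an additive-model regression of the phenotype on allele counts, tested SNP by SNP. A minimal sketch with simulated genotypes and an invented effect size (not the pig data or the WISH machinery):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 200
    geno = rng.integers(0, 3, size=n).astype(float)    # allele counts 0/1/2
    pheno = 0.5 * geno + rng.normal(0.0, 1.0, size=n)  # trait with a true SNP effect

    def snp_assoc(g, y):
        """Additive single-SNP test: OLS of trait on allele count, t-test on slope."""
        X = np.column_stack([np.ones_like(g), g])
        beta, rss, _, _ = np.linalg.lstsq(X, y, rcond=None)
        df = len(y) - 2
        se = np.sqrt((rss[0] / df) * np.linalg.inv(X.T @ X)[1, 1])
        t = beta[1] / se
        return beta[1], 2 * stats.t.sf(abs(t), df)

    effect, pval = snp_assoc(geno, pheno)
    ```

    In a real scan this test runs once per SNP and the p-values are corrected for the multiple testing burden (e.g., Bonferroni or FDR).
    
    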

  17. Clustering structures of large proteins using multifractal analyses based on a 6-letter model and hydrophobicity scale of amino acids

    International Nuclear Information System (INIS)

    Yang Jianyi; Yu Zuguo; Anh, Vo

    2009-01-01

    The Schneider and Wrede hydrophobicity scale of amino acids and the 6-letter model of protein are proposed to study the relationship between the primary structure and the secondary structural classification of proteins. Two kinds of multifractal analyses are performed on the two measures obtained from these two kinds of data on large proteins. Nine parameters from the multifractal analyses are considered to construct the parameter spaces. Each protein is represented by one point in these spaces. A procedure is proposed to separate large proteins in the α, β, α + β and α/β structural classes in these parameter spaces. Fisher's linear discriminant algorithm is used to assess our clustering accuracy on the 49 selected large proteins. Numerical results indicate that the discriminant accuracies are satisfactory. In particular, they reach 100.00% and 84.21% in separating the α proteins from the {β, α + β, α/β} proteins in a parameter space; 92.86% and 86.96% in separating the β proteins from the {α + β, α/β} proteins in another parameter space; 91.67% and 83.33% in separating the α/β proteins from the α + β proteins in the last parameter space.
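    The multifractal parameters mentioned above derive from the partition function Z_q(eps) = sum_i mu_i^q ~ eps^tau(q) and the generalized dimensions D_q = tau(q)/(q-1). A self-contained sketch on a binomial multiplicative cascade, a standard exactly-multifractal test measure (not the protein-derived measures of the paper):

    ```python
    import numpy as np

    def binomial_cascade(p, levels):
        """Multiplicative binomial cascade: a standard exactly-multifractal measure."""
        mu = np.array([1.0])
        for _ in range(levels):
            mu = np.concatenate([mu * p, mu * (1.0 - p)])
        return mu

    def generalized_dimension(mu, q, levels):
        """D_q from the scaling of the partition function Z_q(eps) = sum(mu_i^q)."""
        log_eps, log_Z = [], []
        m = mu.copy()
        for lev in range(levels, 0, -1):
            log_eps.append(lev * np.log(0.5))        # box size eps = 2^-lev
            log_Z.append(np.log(np.sum(m ** q)))
            m = m.reshape(-1, 2).sum(axis=1)         # coarse-grain boxes pairwise
        tau = np.polyfit(log_eps, log_Z, 1)[0]       # slope: Z_q ~ eps^tau
        return tau / (q - 1.0)

    mu = binomial_cascade(0.7, 12)
    d2 = generalized_dimension(mu, 2.0, 12)   # analytic value: -log2(0.58)
    ```

    For this cascade the estimate matches the closed-form D_2 = -log2(p^2 + (1-p)^2), which is a convenient check before applying the same estimator to an empirical measure.
    
    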

  18. Nonlinear Modeling and Dynamic Simulation Using Bifurcation and Stability Analyses of Regenerative Chatter of Ball-End Milling Process

    Directory of Open Access Journals (Sweden)

    Jeehyun Jung

    2016-01-01

    A dynamic model for a ball-end milling process that includes the consideration of cutting force nonlinearities and regenerative chatter effects is presented. The nonlinear cutting force is approximated using a Fourier series and then expanded into a Taylor series up to the third order. A series of nonlinear analyses was performed to investigate the nonlinear dynamic behavior of a ball-end milling system, and the differences between the nonlinear analysis approach and its linear counterpart were examined. A bifurcation analysis of points near the critical equilibrium points was performed using the method of multiple scales (MMS) and the method of harmonic balance (MHB) to analyse the local chatter behaviors of the system. The bifurcation analysis was conducted at two subcritical Hopf bifurcation points. It was also found that a ball-end milling system with nonlinear cutting forces near its critical equilibrium points is conditionally stable. The analysis and simulation results were compared with experimental data reported in the literature, and the physical significance of the results is discussed.
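    The defining event in such an analysis is a Hopf bifurcation: a complex-conjugate eigenvalue pair of the Jacobian crossing the imaginary axis as a parameter varies. The milling model itself involves delay terms, so as a stand-in the detection idea can be sketched on the Hopf normal form (an assumed toy system, not the paper's equations):

    ```python
    import numpy as np

    def jac_origin(mu, omega=2.0):
        """Jacobian at the equilibrium of the Hopf normal form
           x' = mu*x - omega*y - x*(x**2 + y**2)
           y' = omega*x + mu*y - y*(x**2 + y**2)."""
        return np.array([[mu, -omega], [omega, mu]])

    def find_hopf(mus):
        """Sweep the parameter and report where the leading real part of the
        eigenvalues crosses zero: the loss of stability that seeds a
        chatter-like limit-cycle oscillation."""
        prev = None
        for mu in mus:
            re_max = np.linalg.eigvals(jac_origin(mu)).real.max()
            if prev is not None and prev < 0.0 <= re_max:
                return float(mu)
            prev = re_max
        return None

    mu_c = find_hopf(np.linspace(-0.5, 0.5, 101))   # crossing expected near 0
    ```

    Whether the resulting limit cycle is subcritical (as in the record) or supercritical is then decided by the cubic terms, which is what MMS/MHB expansions extract.
    
    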

  19. Civil engineering: EDF needs for concrete modelling; Genie civil: analyse des besoins EDF en modelisation du comportement des betons

    Energy Technology Data Exchange (ETDEWEB)

    Didry, O.; Gerard, B.; Bui, D. [Electricite de France (EDF), Direction des Etudes et Recherches, 92 - Clamart (France)

    1997-12-31

    Concrete structures which are encountered at EDF, like all civil engineering structures, age. In order to adapt the maintenance conditions of these structures, particularly to extend their service life, and also to prepare the construction of future structures, tools for predicting the behaviour of these structures in their environment should be available. For EDF the technical risks are high and consequently very appropriate R and D actions are required. In this context the Direction des Etudes et Recherches (DER) has developed a methodology for analysing concrete structure behaviour modelling. This approach has several aims: - making a distinction between the problems which refer to existing models and those which require R and D; - displaying disciplinary links between different problems encountered on EDF structures (non-linear mechanics, chemical - hydraulic - mechanical coupling, etc.); - listing the existing tools and positioning the DER 'Aster' finite element code among them. This document is a state of the art of scientific knowledge intended to shed light on the fields in which, on the one hand, structure operators have a strong requirement and, on the other, the present tools do not allow this requirement to be satisfactorily met. The analysis has been done on 12 scientific subjects: 1) Hydration of concrete at early ages: exothermicity, hardening, autogenous shrinkage; 2) Drying and drying shrinkage; 3) Alkali-silica reaction and bulky stage formation; 4) Long term deterioration by leaching; 5) Ionic diffusion and associated attacks: the chlorides case; 6) Permeability / tightness of concrete; 7) Concretes - nonlinear behaviour and cracking (I): contribution of the plasticity models; 8) Concretes - nonlinear behaviour and cracking (II): contribution of the damage models; 9) Concretes - nonlinear behaviour and cracking (III): the contribution of the probabilistic analysis model; 10) Delayed behaviour of

  20. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
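    The separation of systematic bias (location) from random error (scale) can be illustrated with plain permutation tests on simulated device readings. All numbers below are hypothetical, and this is only the permutation idea in isolation; the paper's actual procedure embeds the permutations in a boosted GAMLSS fit:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Hypothetical readings from two devices measuring the same quantity:
    # device B has a systematic bias (+0.3) and larger random error (sd 1.5 vs 1.0)
    a = rng.normal(10.0, 1.0, size=150)
    b = rng.normal(10.3, 1.5, size=150)

    def perm_test(x, y, stat, n_perm=2000):
        """Two-sided permutation test: shuffle group labels and count how often
        the shuffled statistic is at least as extreme as the observed one."""
        obs = abs(stat(x, y))
        pooled = np.concatenate([x, y])
        hits = 0
        for _ in range(n_perm):
            p = rng.permutation(pooled)
            if abs(stat(p[:len(x)], p[len(x):])) >= obs:
                hits += 1
        return (hits + 1) / (n_perm + 1)

    p_location = perm_test(a, b, lambda x, y: x.mean() - y.mean())     # systematic bias
    p_scale = perm_test(a, b, lambda x, y: np.log(x.std() / y.std()))  # random error
    ```

    Using two different statistics on the same permutation scheme is what lets location and scale effects be reported separately.
    
    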

  1. Using niche-modelling and species-specific cost analyses to determine a multispecies corridor in a fragmented landscape

    Science.gov (United States)

    Zurano, Juan Pablo; Selleski, Nicole; Schneider, Rosio G.

    2017-01-01

    types independent of the degree of legal protection. These data, used with multifocal GIS analyses, balance the varying degrees of overlap and the unique properties among them, allowing comprehensive conservation strategies to be developed relatively rapidly. Our comprehensive approach serves as a model for other regions faced with habitat loss and a lack of data. The five carnivores focused on in our study have wide ranges, so the results from this study can be expanded and combined with those of surrounding countries, with analyses at the species or community level. PMID:28841692

  2. Spatially quantitative models for vulnerability analyses and resilience measures in flood risk management: Case study Rafina, Greece

    Science.gov (United States)

    Karagiorgos, Konstantinos; Chiari, Michael; Hübl, Johannes; Maris, Fotis; Thaler, Thomas; Fuchs, Sven

    2013-04-01

    We will address spatially quantitative models for vulnerability analyses in flood risk management in the catchment of Rafina, 25 km east of Athens, Greece, and potential measures to reduce damage costs. The evaluation of flood damage losses is relatively advanced. Nevertheless, major problems arise since there are no market prices available for the evaluation process. Moreover, there is a particular gap in quantifying the damages and the expenditures necessary for the implementation of mitigation measures with respect to flash floods. The key issue is to develop prototypes for assessing flood losses and the impact of mitigation measures on flood resilience by adjusting a vulnerability model, and to further develop the method in a Mediterranean region influenced by both mountain and coastal characteristics of land development. The objective of this study is to create a spatial and temporal analysis of the vulnerability factors based on a method combining spatially explicit loss data, data on the value of exposed elements at risk, and data on flood intensities. In this contribution, a methodology for the development of a flood damage assessment as a function of the process intensity and the degree of loss is presented. It is shown that (1) such relationships for defined object categories depend on site-specific and process-specific characteristics, but there is a correlation between process types that have similar characteristics; (2) existing semi-quantitative approaches to vulnerability assessment for elements at risk can be improved based on the proposed quantitative method; and (3) the concept of risk can be enhanced with respect to a standardised and comprehensive implementation by applying the vulnerability functions to be developed within the proposed research. Therefore, loss data were collected from the responsible administrative bodies and analysed on an object level. The model used is based on a basin-scale approach as well as data on elements at risk exposed

  3. Computational modeling and statistical analyses on individual contact rate and exposure to disease in complex and confined transportation hubs

    Science.gov (United States)

    Wang, W. L.; Tsui, K. L.; Lo, S. M.; Liu, S. B.

    2018-01-01

    Crowded transportation hubs such as metro stations are considered ideal places for the development and spread of epidemics. However, owing to their complex spatial layouts and confined environments with large numbers of highly mobile individuals, it is difficult to quantify human contacts in such environments, and disease-spreading dynamics in these settings were less explored in previous studies. Because of the heterogeneity and dynamic nature of human interactions, a growing number of studies have demonstrated the importance of contact distance and length of contact in transmission probabilities. In this study, we show how detailed information on contact and exposure patterns can be obtained by statistical analyses of microscopic crowd-simulation data. Specifically, a pedestrian simulation model, CityFlow, was employed to reproduce individuals' movements in a metro station based on site survey data, and the values and distributions of individual contact rate and exposure in different simulation cases were obtained and analyzed. Interestingly, the Weibull distribution fitted the histogram values of individual-based exposure in each case very well. Moreover, we found that both individual contact rate and exposure had a linear relationship with the average crowd density of the environment. The results obtained in this paper can provide a reference for epidemic studies in complex and confined transportation hubs and refine existing disease-spreading models.
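    Fitting a Weibull distribution to per-individual exposure values, as described above, can be sketched with scipy. The data here are synthetic with invented parameters, standing in for the simulation output:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    # Hypothetical per-individual exposure values from a crowd simulation
    # (arbitrary units; true shape 1.8 and scale 5.0 are invented numbers)
    exposure = stats.weibull_min.rvs(1.8, scale=5.0, size=1000, random_state=rng)

    # Two-parameter Weibull fit: location fixed at 0 because exposure >= 0
    shape, loc, scale = stats.weibull_min.fit(exposure, floc=0)

    # Kolmogorov-Smirnov check of the fitted distribution against the sample
    ks = stats.kstest(exposure, "weibull_min", args=(shape, loc, scale))
    ```

    A high KS p-value here merely indicates no evidence against the fitted Weibull; on real simulation output one would compare it against competing candidates (lognormal, gamma) as well.
    
    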

  4. Control volume analyses of glottal flow using a fully-coupled numerical fluid-structure interaction model

    Science.gov (United States)

    Yang, Jubiao; Krane, Michael; Zhang, Lucy

    2013-11-01

    Vocal fold vibrations and the glottal jet are successfully simulated using the modified Immersed Finite Element Method (mIFEM), a fully coupled dynamics approach to modeling fluid-structure interactions. A self-sustained and steady vocal fold vibration is captured given a constant pressure input at the glottal entrance. The flow rates at different axial locations in the glottis are calculated, showing small variations among them due to the vocal fold motion and deformation. To further facilitate the understanding of the phonation process, two control volume analyses, specifically with Bernoulli's equation and Newton's second law, are carried out for the glottal flow based on the simulation results. A generalized Bernoulli's equation is derived to interpret the correlations between velocity and pressure, temporally and spatially, along the center line, which is a streamline in a half-space model with a symmetry boundary condition. A specialized Newton's second law equation is developed and divided into terms to help understand the driving mechanism of the glottal flow.

  5. Bayesian salamanders: analysing the demography of an underground population of the European plethodontid Speleomantes strinatii with state-space modelling

    Directory of Open Access Journals (Sweden)

    Salvidio Sebastiano

    2010-02-01

    Background: It has been suggested that plethodontid salamanders are excellent candidates for indicating ecosystem health. However, detailed, long-term data sets on their populations are rare, limiting our understanding of the demographic processes underlying their population fluctuations. Here we present a demographic analysis based on a 1996-2008 data set on an underground population of Speleomantes strinatii (Aellen) in NW Italy. We used a Bayesian state-space approach that allowed us to parameterise a stage-structured Lefkovitch model. We used all the available population data from annual temporary removal experiments to provide the baseline numbers of juveniles, subadults, and adult males and females present at any given time. Results: Sampling the posterior chains of the converged state-space model gives the likelihood distributions of the stage-specific demographic rates and the associated uncertainty of these estimates. Analysing the resulting parameterised Lefkovitch matrices shows that the population growth rate is very close to 1, and that at population equilibrium we expect half of the individuals present to be adults of reproductive age, which is what we also observe in the data. Elasticity analysis shows that adult survival is the key determinant of population growth. Conclusion: This analysis demonstrates how an understanding of population demography can be gained from structured population data even in a case where following marked individuals over their whole lifespan is not practical.
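    The quantities reported above follow from standard matrix-model calculations on a Lefkovitch matrix: the dominant eigenvalue gives the growth rate lambda, and elasticities are built from the right (stable stage distribution) and left (reproductive value) eigenvectors. A sketch with a hypothetical 3-stage matrix, not the fitted salamander rates:

    ```python
    import numpy as np

    # Hypothetical juvenile/subadult/adult Lefkovitch matrix (columns = current stage)
    A = np.array([
        [0.00, 0.00, 0.43],   # fecundity: juveniles produced per adult
        [0.55, 0.30, 0.00],   # juvenile->subadult growth; subadults remaining
        [0.00, 0.45, 0.85],   # subadult->adult maturation; adult survival
    ])

    # Dominant eigenvalue = asymptotic population growth rate lambda
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    lam = vals[k].real
    w = np.abs(vecs[:, k].real); w /= w.sum()   # stable stage distribution

    valsT, vecsT = np.linalg.eig(A.T)
    kT = np.argmax(valsT.real)
    v = np.abs(vecsT[:, kT].real)               # reproductive values

    # Sensitivities and elasticities (elasticities sum to 1)
    S = np.outer(v, w) / (v @ w)
    E = A * S / lam
    ```

    With these illustrative rates, lambda sits very close to 1 and the adult-survival entry E[2, 2] dominates the elasticity matrix, the same qualitative pattern the record reports.
    
    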

  6. Time Headway Modelling of Motorcycle-Dominated Traffic to Analyse Traffic Safety Performance and Road Link Capacity of Single Carriageways

    Directory of Open Access Journals (Sweden)

    D. M. Priyantha Wedagama

    2017-04-01

    This study aims to develop time headway distribution models to analyse traffic safety performance and road link capacities for motorcycle-dominated traffic in Denpasar, Bali. Three road links were selected as the case study: Jl. Hayam Wuruk, Jl. Hang Tuah, and Jl. Padma. Data analysis showed that 55%-80% of motorists in Denpasar during morning and evening peak hours paid little attention to keeping a safe distance from the vehicles in front. The study found that lognormal distribution models best fit the time headway data during morning peak hours, while either the Weibull (3P) or the Pearson III distribution fits best for evening peak hours. Road link capacities for mixed traffic predominantly composed of motorcycles are apparently affected by the behaviour of motorists in keeping a safe distance from the vehicles in front. Theoretical road link capacities for Jl. Hayam Wuruk, Jl. Hang Tuah and Jl. Padma are 3,186 vehicles/hour, 3,077 vehicles/hour and 1,935 vehicles/hour, respectively.
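    The link from headway distributions to capacity is direct: with headways h (in seconds) described by a fitted distribution, theoretical capacity is 3600 / E[h] vehicles per hour. A hedged sketch with synthetic lognormal headways (the parameters are invented, not the Denpasar measurements):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Synthetic headways in seconds (median 1.0 s, log-sd 0.6: invented values)
    headways = stats.lognorm.rvs(s=0.6, scale=1.0, size=800, random_state=rng)

    # Fit a lognormal with location fixed at 0 (headways are positive)
    shape, loc, scale = stats.lognorm.fit(headways, floc=0)

    # Theoretical capacity from the mean fitted headway: veh/h = 3600 / E[h]
    mean_headway = scale * np.exp(shape**2 / 2.0)   # lognormal mean
    capacity = 3600.0 / mean_headway
    ```

    The same computation with a Weibull or Pearson III fit only changes the mean-headway formula, which is why the distribution choice matters for the capacity estimate.
    
    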

  7. Round-robin pretest analyses of a 1:6-scale reinforced concrete containment model subject to static internal pressurization

    International Nuclear Information System (INIS)

    Clauss, D.B.

    1987-05-01

    Analyses of a 1:6-scale reinforced concrete containment model that will be tested to failure at Sandia National Laboratories in the spring of 1987 were conducted by the following organizations in the United States and Europe: Sandia National Laboratories (USA), Argonne National Laboratory (USA), Electric Power Research Institute (USA), Commissariat a L'Energie Atomique (France), HM Nuclear Installations Inspectorate (UK), Comitato Nazionale per la ricerca e per lo sviluppo dell'Energia Nucleare e delle Energie Alternative (Italy), UK Atomic Energy Authority, Safety and Reliability Directorate (UK), Gesellschaft fuer Reaktorsicherheit (FRG), Brookhaven National Laboratory (USA), and Central Electricity Generating Board (UK). Each organization was supplied with a standard information package, which included construction drawings and actual material properties for most of the materials used in the model. Each organization worked independently using their own analytical methods. This report includes descriptions of the various analytical approaches and pretest predictions submitted by each organization. Significant milestones that occur with increasing pressure, such as damage to the concrete (cracking and crushing) and yielding of the steel components, and the failure pressure (capacity) and failure mechanism are described. Analytical predictions for pressure histories of strain in the liner and rebar and displacements are compared at locations where experimental results will be available after the test. Thus, these predictions can be compared to one another and to experimental results after the test

  8. On groundwater flow modelling in safety analyses of spent fuel disposal. A comparative study with emphasis on boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Jussila, P

    1999-11-01

    Modelling groundwater flow is an essential part of the safety assessment of spent fuel disposal because moving groundwater provides a physical connection between a geological repository and the biosphere. Common approaches to modelling groundwater flow in bedrock include the equivalent porous continuum (EC), the stochastic continuum, and various fracture network concepts. The actual flow system is complex and measurement data are limited. Multiple distinct approaches and models, alternative scenarios, and calibration and sensitivity analyses are used to build confidence in the results of the calculations. The correctness and orders of magnitude of the results of such complex research can be assessed by comparing them to the results of simplified and robust approaches. The first part of this study is a survey of the objects, contents and methods of the groundwater flow modelling performed in the safety assessments of spent fuel disposal in Finland and Sweden. The most apparent difference of the Swedish studies compared to the Finnish ones is the use of more different models, which is enabled by the greater resources available in Sweden. The results of the more comprehensive approaches provided by international co-operation are very useful for giving perspective to the results obtained in Finland. In the second part of this study, the influence of boundary conditions on the flow fields of a simple 2D model is examined. The assumptions and simplifications in this approach include the following: (1) the EC model is used, in which the 2-dimensional domain is considered a continuum of equivalent properties without fractures present; (2) the calculations are done for stationary fields, without sources or sinks present in the domain and with a constant groundwater density; (3) the repository is represented by an isotropic plate, the hydraulic conductivity of which is given fictitious values; (4) the hydraulic conductivity of rock is supposed to have an exponential
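    A minimal version of such a 2D equivalent-continuum calculation is a steady-state head equation with fixed heads on two sides and no-flow on the others; under homogeneous conductivity it reduces to Laplace's equation. A sketch in that spirit (the grid size, boundary heads, and conductivity are fictitious, echoing the study's own use of fictitious values):

    ```python
    import numpy as np

    nx, ny = 40, 20
    h = np.zeros((ny, nx))
    h[:, 0], h[:, -1] = 10.0, 0.0   # fixed-head (Dirichlet) boundaries, metres

    # Jacobi iteration for Laplace's equation; top/bottom are treated as
    # no-flow (Neumann) boundaries by mirroring via edge padding
    for _ in range(8000):
        hp = np.pad(h, ((1, 1), (0, 0)), mode="edge")
        h[:, 1:-1] = 0.25 * (hp[1:-1, :-2] + hp[1:-1, 2:]
                             + hp[:-2, 1:-1] + hp[2:, 1:-1])

    # Darcy flux from the resulting head gradient (fictitious conductivity)
    K = 1e-7                                   # hydraulic conductivity, m/s
    dh_dx = (h[:, -1] - h[:, 0]) / (nx - 1)    # head change per cell
    q = -K * dh_dx                             # flux, positive in +x direction
    ```

    With homogeneous conductivity the converged head field is linear in x, which gives a convenient analytic check; inserting a low-conductivity "repository plate" would then perturb this baseline field.
    
    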

  9. Water flow experiments and analyses on the cross-flow type mercury target model with the flow guide plates

    CERN Document Server

    Haga, K; Kaminaga, M; Hino, R

    2001-01-01

    A mercury target is used in the spallation neutron source driven by a high-intensity proton accelerator. In this study, the effectiveness of the cross-flow type mercury target structure was evaluated experimentally and analytically. Prior to the experiment, the mercury flow field and the temperature distribution in the target container were analyzed assuming a proton beam energy and power of 1.5 GeV and 5 MW, respectively, and the feasibility of the cross-flow type target was evaluated. Then the average water flow velocity field in the target mock-up model, which was fabricated from Plexiglass for a water experiment, was measured at room temperature using the PIV technique. Water flow analyses were conducted and the analytical results were compared with the experimental results. The experimental results showed that the cross-flow could be realized in most of the proton beam path area and the analytical result of the water flow velocity field showed good correspondence to the experimental results in the case w...

  10. Modeling Acequia Irrigation Systems Using System Dynamics: Model Development, Evaluation, and Sensitivity Analyses to Investigate Effects of Socio-Economic and Biophysical Feedbacks

    Directory of Open Access Journals (Sweden)

    Benjamin L. Turner

    2016-10-01

    Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., the shared responsibility of local residents for maintaining traditional irrigation policies and upholding cultural and spiritual observances embedded within the community structure) influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a Dynamic Hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced the trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, the cumulative income effect on time in agriculture, land use preference due to time allocation, the community demographic effect, the effect of employment on participation, and the farm size effect as key determinants of system behavior and response. Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per
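    The stock-and-flow idea at the core of such a model can be illustrated with a single balancing feedback loop integrated by Euler's method. Every number below is hypothetical, not a calibrated acequia parameter:

    ```python
    # One-stock sketch: acreage in agriculture adjusts toward a desired level
    # that saturates with farm income -- a balancing (goal-seeking) loop.
    dt, horizon, tau = 0.25, 60.0, 5.0   # step (yr), run length (yr), adjustment time (yr)
    land = 100.0                          # stock: acres in agriculture
    history = []
    for _ in range(int(horizon / dt)):
        income = 1.0 * land                            # income scales with acreage
        desired = 120.0 * income / (income + 50.0)     # saturating desired acreage
        land += dt * (desired - land) / tau            # goal-seeking net flow
        history.append(land)
    # The loop settles at the fixed point where desired == land (here, 70 acres)
    ```

    Full system dynamics models chain many such stocks and feedbacks together, and sensitivity analysis then varies parameters like the adjustment time to see which ones dominate the trajectory.
    
    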

  11. Epidemiology of HPV 16 and cervical cancer in Finland and the potential impact of vaccination: mathematical modelling analyses.

    Directory of Open Access Journals (Sweden)

    Ruanne V Barnabas

    2006-05-01

    BACKGROUND: Candidate human papillomavirus (HPV) vaccines have demonstrated almost 90%-100% efficacy in preventing persistent, type-specific HPV infection over 18 mo in clinical trials. If these vaccines go on to demonstrate prevention of precancerous lesions in phase III clinical trials, they will be licensed for public use in the near future. How these vaccines will be used in countries with national cervical cancer screening programmes is an important question. METHODS AND FINDINGS: We developed a transmission model of HPV 16 infection and progression to cervical cancer and calibrated it to Finnish HPV 16 seroprevalence over time. The model was used to estimate the transmission probability of the virus, to look at the effect of changes in patterns of sexual behaviour and smoking on age-specific trends in cancer incidence, and to explore the impact of HPV 16 vaccination. We estimated a high per-partnership transmission probability of HPV 16, of 0.6. The modelling analyses showed that changes in sexual behaviour and smoking accounted, in part, for the increase seen in cervical cancer incidence in 35- to 39-y-old women from 1990 to 1999. At both low (10%, in opportunistic immunisation) and high (90%, in a national immunisation programme) coverage of the adolescent population, vaccinating women and men had little benefit over vaccinating women alone. We estimate that vaccinating 90% of young women before sexual debut has the potential to decrease HPV type-specific (e.g., type 16) cervical cancer incidence by 91%. If older women are more likely to have persistent infections and progress to cancer, then vaccination with a duration of protection of less than 15 y could result in an older susceptible cohort and no decrease in cancer incidence. While vaccination has the potential to significantly reduce type-specific cancer incidence, its combination with screening further improves cancer prevention. CONCLUSIONS: HPV vaccination has the potential to

  12. Intercomparison and analyses of the climatology of the West African monsoon in the West African monsoon modeling and evaluation project (WAMME) first model intercomparison experiment

    Energy Technology Data Exchange (ETDEWEB)

    Xue, Yongkang; Sales, Fernando De [University of California, Los Angeles, CA (United States); Lau, W.K.M.; Schubert, Siegfried D.; Wu, Man-Li C. [NASA, Goddard Space Flight Center, Greenbelt, MD (United States); Boone, Aaron [Centre National de Recherches Meteorologiques, Meteo-France Toulouse, Toulouse (France); Feng, Jinming [University of California, Los Angeles, CA (United States); Chinese Academy of Sciences, Institute of Atmospheric Physics, Beijing (China); Dirmeyer, Paul; Guo, Zhichang [Center for Ocean-Land-Atmosphere Interactions, Calverton, MD (United States); Kim, Kyu-Myong [University of Maryland Baltimore County, Baltimore, MD (United States); Kitoh, Akio [Meteorological Research Institute, Tsukuba (Japan); Kumar, Vadlamani [National Center for Environmental Prediction, Camp Springs, MD (United States); Wyle Information Systems, Gaithersburg, MD (United States); Poccard-Leclercq, Isabelle [Universite de Bourgogne, Centre de Recherches de Climatologie UMR5210 CNRS, Dijon (France); Mahowald, Natalie [Cornell University, Ithaca, NY (United States); Moufouma-Okia, Wilfran; Rowell, David P. [Met Office Hadley Centre, Exeter (United Kingdom); Pegion, Phillip [NASA, Goddard Space Flight Center, Greenbelt, MD (United States); National Center for Environmental Prediction, Camp Springs, MD (United States); Schemm, Jae; Thiaw, Wassila M. [National Center for Environmental Prediction, Camp Springs, MD (United States); Sealy, Andrea [The Caribbean Institute for Meteorology and Hydrology, St. James (Barbados); Vintzileos, Augustin [National Center for Environmental Prediction, Camp Springs, MD (United States); Science Applications International Corporation, Camp Springs, MD (United States); Williams, Steven F. [National Center for Atmospheric Research, Boulder, CO (United States)

    2010-07-15

    This paper briefly presents the West African monsoon (WAM) modeling and evaluation project (WAMME) and evaluates the performance of the WAMME general circulation models (GCMs) in simulating the variability of WAM precipitation, surface temperature, and major circulation features at seasonal and intraseasonal scales in the first WAMME experiment. The analyses indicate that models with specified sea surface temperature generally produce reasonable simulations of the spatial distribution of WAM seasonal mean precipitation and surface temperature, as well as of the averaged zonal wind in latitude-height cross-section and the low-level circulation. But there are large differences among models in simulated spatial correlation, intensity, and variance of precipitation compared with observations. Furthermore, the majority of models fail to produce proper intensities of the African Easterly Jet (AEJ) and the tropical easterly jet. AMMA Land Surface Model Intercomparison Project (ALMIP) data are used to analyze the association between simulated surface processes and the WAM and to investigate the WAM mechanism. It has been identified that the spatial distributions of surface sensible heat flux, surface temperature, and moisture convergence are closely associated with the simulated spatial distribution of precipitation, while surface latent heat flux is closely associated with the AEJ and contributes to divergence in AEJ simulation. Common empirical orthogonal function (CEOF) analysis is applied to characterize the WAM precipitation evolution and has identified a major WAM precipitation mode and two temperature modes (a Sahara mode and a Sahel mode). Results indicate that the WAMME models produce reasonable temporal evolutions of the major CEOF modes but have deficiencies/uncertainties in the variances explained by those modes. Furthermore, the CEOF analysis shows that WAM precipitation evolution is closely related to the enhanced Sahara mode and the weakened Sahel mode, supporting
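    The (C)EOF analysis used here is, at bottom, a singular value decomposition of the time-by-space anomaly matrix: right singular vectors give spatial patterns, left singular vectors (scaled) give principal-component time series, and squared singular values give the variance explained. A self-contained sketch on a synthetic field with one planted mode (nothing below is WAMME data):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic monthly field on a small grid: one fixed spatial pattern whose
    # amplitude oscillates annually, plus noise
    nt, ny, nx = 120, 10, 15
    pattern = np.outer(np.sin(np.linspace(0, np.pi, ny)),
                       np.cos(np.linspace(0, np.pi, nx))).ravel()
    amplitude = np.sin(2.0 * np.pi * np.arange(nt) / 12.0)
    field = np.outer(amplitude, pattern) + 0.1 * rng.normal(size=(nt, ny * nx))

    # EOF analysis = SVD of the (time x space) anomaly matrix
    anom = field - field.mean(axis=0)
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    var_explained = s**2 / np.sum(s**2)
    pc1 = U[:, 0] * s[0]   # principal-component time series of mode 1
    eof1 = Vt[0]           # spatial pattern of mode 1
    ```

    The leading mode recovers the planted pattern and its annual cycle (up to an arbitrary sign), which is the same mechanics by which the WAMME precipitation and temperature modes are extracted.
    
    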

  13. A second-generation device for automated training and quantitative behavior analyses of molecularly-tractable model organisms.

    Directory of Open Access Journals (Sweden)

    Douglas Blackiston

    2010-12-01

    A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time), which is necessary for operant conditioning assays. The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior-analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine-vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and to aid other laboratories that do not have the facilities to undertake complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science.

  14. Transcriptomics and proteomics analyses of the PACAP38 influenced ischemic brain in permanent middle cerebral artery occlusion model mice

    Directory of Open Access Journals (Sweden)

    Hori Motohide

    2012-11-01

    Introduction: The neuropeptide pituitary adenylate cyclase-activating polypeptide (PACAP) is considered to be a potential therapeutic agent for prevention of cerebral ischemia. Ischemia is among the most common causes of death, after heart attack and cancer, causing major negative social and economic consequences. This study was designed to investigate the effect of PACAP38 injected intracerebroventricularly in a mouse model of permanent middle cerebral artery occlusion (PMCAO), along with a corresponding SHAM control that used 0.9% saline injection. Methods: Ischemic and non-ischemic brain tissues were sampled at 6 and 24 hours post-treatment. Following behavioral analyses to confirm whether ischemia had occurred, we investigated the genome-wide changes in gene and protein expression using a DNA microarray chip (4x44K, Agilent) and two-dimensional gel electrophoresis (2-DGE) coupled with matrix-assisted laser desorption/ionization-time of flight-mass spectrometry (MALDI-TOF-MS), respectively. Western blotting and immunofluorescent staining were also used to further examine the identified protein factor. Results: Our results revealed numerous changes in the transcriptome of the ischemic hemisphere (ipsilateral) treated with PACAP38 compared to the saline-injected SHAM control hemisphere (contralateral). Previously known (such as the interleukin family) and novel (Gabra6, Crtam) genes were identified under PACAP influence. In parallel, 2-DGE analysis revealed a highly expressed protein spot in the ischemic hemisphere that was identified as dihydropyrimidinase-related protein 2 (DPYL2). DPYL2, also known as Crmp2, is a marker for axonal growth and nerve development. Interestingly, PACAP treatment slightly increased its abundance (by 2-DGE and immunostaining) at 6 h but not at 24 h in the ischemic hemisphere, suggesting that PACAP activates a neuronal defense mechanism early on.
Conclusions: This study provides a detailed inventory of PACAP-influenced gene expressions

  15. Experimental model to evaluate in vivo and in vitro cartilage MR imaging by means of histological analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bittersohl, B. [Department of Orthopedic Surgery, University of Berne, Inselspital, 3010 Bern (Switzerland); Mamisch, T.C. [Department of Orthopedic Surgery, University of Berne, Inselspital, 3010 Bern (Switzerland)], E-mail: mamisch@bwh.harvard.edu; Welsch, G.H. [Department of Trauma Surgery, University of Erlangen (Germany); Stratmann, J.; Forst, R. [Department of Orthopedic Surgery, University of Erlangen (Germany); Swoboda, B. [Department of Orthopedic Rheumatology, University of Erlangen (Germany); Bautz, W. [Department of Diagnostic Radiology, University of Erlangen (Germany); Rechenberg, B. von [Musculoskeletal Research Unit (MSRU), University of Zurich (Switzerland); Cavallaro, A. [Department of Diagnostic Radiology, University of Erlangen (Germany)

    2009-06-15

    Objectives: Implementation of an experimental model to compare cartilage MR imaging by means of histological analyses. Material and methods: MRI was obtained from 4 patients expecting total knee replacement at 1.5 and/or 3 T prior to surgery. The timeframe between pre-op MRI and knee replacement was within two days. Resected cartilage-bone samples were tagged with Ethi-pins to reproduce the histological cutting course. Pre-operative scanning at 1.5 T included the following parameters for fast low angle shot (FLASH: TR/TE/FA = 33 ms/6 ms/30 deg., BW = 110 kHz, 120 mm x 120 mm FOV, 256 x 256 matrix, 0.65 mm slice thickness) and double echo steady state (DESS: TR/TE/FA = 23.7 ms/6.9 ms/40 deg., BW = 130 kHz, 120 x 120 mm FOV, 256 x 256 matrix, 0.65 mm slice thickness). At 3 T, scan parameters were: FLASH (TR/TE/FA = 12.2 ms/5.1 ms/10 deg., BW = 130 kHz, 170 x 170 mm FOV, 320 x 320 matrix, 0.5 mm slice thickness) and DESS (TR/TE/FA = 15.6 ms/4.5 ms/25 deg., BW = 200 kHz, 135 mm x 150 mm FOV, 288 x 320 matrix, 0.5 mm slice thickness). Imaging of the specimens was done the same day at 1.5 T. MRI (Noyes) and histological (Mankin) score scales were correlated using the paired t-test. Sensitivity and specificity for the detection of different grades of cartilage degeneration were assessed. Inter-reader and intra-reader reliability was determined using Kappa analysis. Results: Low correlation (sensitivity, specificity) was found for both sequences in normal to mild Mankin grades. Only moderate to severe changes were diagnosed with higher significance and specificity. The use of higher field strengths was advantageous for both protocols, with sensitivity values ranging from 13.6% to 93.3% (FLASH) and 20.5% to 96.2% (DESS). Kappa values ranged from 0.488 to 0.944. Conclusions: Correlating MR images with continuous histological slices was feasible by using three-dimensional imaging, multi-planar reformat and marker pins.
The capability of diagnosing early cartilage changes with high accuracy
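The record above reports inter- and intra-reader reliability as Kappa values (0.488 to 0.944). Cohen's kappa is a simple function of observed and chance agreement; the sketch below computes it on hypothetical cartilage grade ratings from two readers (the data are illustrative, not from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters grading the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the two raters' marginal category frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical Noyes-style grades assigned by two readers to 8 specimens
reader_1 = [0, 1, 2, 2, 3, 1, 0, 2]
reader_2 = [0, 1, 2, 1, 3, 1, 0, 3]
kappa = cohens_kappa(reader_1, reader_2)
print(round(kappa, 3))
```

Values around 0.6-0.8 are conventionally read as substantial agreement, which is the range the record reports for most comparisons.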

  16. Modelling of the spallation reaction: analysis and testing of nuclear models; Simulation de la spallation: analyse et test des modeles nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Toccoli, C

    2000-04-03

    The spallation reaction is considered as a two-step process. First, a very quick stage (10^-22, 10^-29 s) corresponding to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. Second, a slower stage (10^-18, 10^-19 s) during which the nucleus is expected to de-excite completely. This de-excitation proceeds by evaporation of light particles (n, p, d, t, ^3He, ^4He) and/or fission and/or fragmentation. The HETC code has been designed to simulate spallation reactions; this simulation is based on the two-step process and on several models of intranuclear cascades (Bertini model, Cugnon model, Helder Duarte model), while the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for comparing relevant experimental data with results of calculation is presented, and a preliminary estimation of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact on determining the excitation level of the nucleus at the end of the first step, and indirectly on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high-energy light particles is under-estimated. (A.C.)

  17. Generic Linking of Finite Element Models for non-linear static and global dynamic analyses of aircraft structures

    NARCIS (Netherlands)

    de Wit, A.J.; Akcay-Perdahcioglu, Didem; van den Brink, W.M.; de Boer, Andries

    2012-01-01

    Depending on the type of analysis, Finite Element (FE) models of different fidelity are necessary. Creating these models manually is a labor intensive task. This paper discusses a generic approach for generating FE models of different fidelity from a single reference FE model. These different

  18. Generic linking of finite element models for non-linear static and global dynamic analyses for aircraft structures

    NARCIS (Netherlands)

    de Wit, A.J.; Akcay-Perdahcioglu, Didem; van den Brink, W.M.; de Boer, Andries; Rolfes, R.; Jansen, E.L.

    2011-01-01

    Depending on the type of analysis, Finite Element (FE) models of different fidelity are necessary. Creating these models manually is a labor intensive task. This paper discusses a generic approach for generating FE models of different fidelity from a single reference FE model. These different

  19. Item Response Theory Modeling and Categorical Regression Analyses of the Five-Factor Model Rating Form: A Study on Italian Community-Dwelling Adolescent Participants and Adult Participants.

    Science.gov (United States)

    Fossati, Andrea; Widiger, Thomas A; Borroni, Serena; Maffei, Cesare; Somma, Antonella

    2017-06-01

    To extend the evidence on the reliability and construct validity of the Five-Factor Model Rating Form (FFMRF) in its self-report version, two independent samples of Italian participants, which were composed of 510 adolescent high school students and 457 community-dwelling adults, respectively, were administered the FFMRF in its Italian translation. Adolescent participants were also administered the Italian translation of the Borderline Personality Features Scale for Children-11 (BPFSC-11), whereas adult participants were administered the Italian translation of the Triarchic Psychopathy Measure (TriPM). Cronbach α values were consistent with previous findings; in both samples, average interitem r values indicated acceptable internal consistency for all FFMRF scales. A multidimensional graded item response theory model indicated that the majority of FFMRF items had adequate discrimination parameters; information indices supported the reliability of the FFMRF scales. Both categorical (i.e., item-level) and scale-level regression analyses suggested that the FFMRF scores may predict a nonnegligible amount of variance in the BPFSC-11 total score in adolescent participants, and in the TriPM scale scores in adult participants.
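Cronbach's α, reported above for the FFMRF scales, is a simple function of the item variances and the variance of the total score. A minimal pure-Python sketch on hypothetical item scores (invented for illustration, not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha.

    items: list of per-item score lists, all measured on the same respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item scale answered by 4 respondents
alpha = cronbach_alpha([[2, 4, 3, 5], [3, 4, 2, 5], [2, 5, 3, 4]])
print(round(alpha, 3))
```

Either the population or the sample variance can be used, as long as the choice is consistent across the numerator and denominator.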

  20. Overview of fuel behaviour and core degradation, based on modelling analyses. Overview of fuel behaviour and core degradation, on the basis of modelling results

    International Nuclear Information System (INIS)

    Massara, Simone

    2013-01-01

    Since the very first hours after the accident at Fukushima-Daiichi, numerical simulations by means of severe accident codes have been carried out, aiming at highlighting the key physical phenomena allowing a correct understanding of the sequence of events, and - on a long enough timeline - improving models and methods, in order to reduce the discrepancy between calculated and measured data. A last long-term objective is to support the future decommissioning phase. The presentation summarises some of the available elements on the role of the fuel/cladding-water interaction, which became available only through modelling because of the absence of measured data directly related to the cladding-steam interaction. This presentation also aims at drawing some conclusions on the status of the modelling capabilities of current tools, particularly for the purpose of the foreseen application to ATF fuels: - analyses with MELCOR, MAAP, THALES2 and RELAP5 are presented; - input data are taken from BWR Mark-I Fukushima-Daiichi Units 1, 2 and 3, completed with operational data published by TEPCO. In the case of missing or incomplete data or hypotheses, these are adjusted to reduce the calculation/measurement discrepancy. The behaviour of the accident is well understood on a qualitative level (major trends on RPV pressure and water level, dry-wet and PCV pressure are well represented), allowing a certain level of confidence in the results of the analysis of the zirconium-steam reaction - which is accessible only through numerical simulations. These show an extremely fast sequence of events (here for Unit 1): - the top of fuel is uncovered in 3 hours (after the tsunami); - the steam line breaks at 6.5 hours. Vessel dries at 10 hours, with a heat-up rate in a first moment driven by the decay heat only (∼7 K/min) and afterwards by the chemical heat from Zr-oxidation (over 30 K/min), associated with massive hydrogen production. It appears that the level of uncertainty increases with

  1. Analysing conflicts around small-scale gold mining in the Amazon : The contribution of a multi-temporal model

    NARCIS (Netherlands)

    Salman, Ton; de Theije, Marjo

    Conflict is small-scale gold mining's middle name. In only a very few situations do mining operations take place without some sort of conflict accompanying the activity, and often various conflicting stakeholders struggle for their interests simultaneously. Analyses of such conflicts are typically

  2. Sensitivity of the direct stop pair production analyses in phenomenological MSSM simplified models with the ATLAS detectors

    CERN Document Server

    Snyder, Ian Michael; The ATLAS collaboration

    2018-01-01

    The sensitivity of the searches for the direct pair production of stops has often been evaluated in simple SUSY scenarios, where only a limited set of supersymmetric particles take part in the stop decay. In this talk, the interpretations of the analyses requiring zero, one or two leptons in the final state in terms of simple but well-motivated MSSM scenarios will be discussed.

  3. State of the art in establishing computed models of adsorption processes to serve as a basis of radionuclide migration assessment for safety analyses

    International Nuclear Information System (INIS)

    Koss, V.

    1991-01-01

    An important point in the safety analysis of an underground repository is adsorption of radionuclides in the overlying cover. Adsorption may be judged according to experimental results or to model calculations. Because of the reliability required in safety analyses, it is necessary to strengthen experimental results by theoretical calculations. At present, there is no single thermodynamic model of adsorption to be agreed on. Therefore, this work reviews existing equilibrium models of adsorption. Limitations of the K_d concept and of the adsorption isotherms according to Freundlich and Langmuir are mentioned. The surface ionisation and complexation electrical double layer (EDL) model is explained in full, as is the criticism of this model. The application of simple surface complexation models to adsorption experiments in natural systems is stressed, as is experimental and modelling work on systems from Gorleben. Hints are given on how to deal with modelling of adsorption related to Gorleben systems in the future. (orig.)
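The Kd concept and the Freundlich and Langmuir isotherms mentioned in this record can be stated compactly. A sketch with illustrative parameter values (not Gorleben-specific data):

```python
def kd_sorbed(c_aq, kd):
    """Linear (Kd) isotherm: sorbed concentration proportional to aqueous."""
    return kd * c_aq

def freundlich(c_aq, k_f, n):
    """Freundlich isotherm: S = K_f * C^n (nonlinear, no saturation limit)."""
    return k_f * c_aq ** n

def langmuir(c_aq, q_max, k_l):
    """Langmuir isotherm: S = Q_max*K_l*C / (1 + K_l*C), saturating at Q_max."""
    return q_max * k_l * c_aq / (1 + k_l * c_aq)

# Illustrative comparison at one aqueous concentration
c = 4.0
print(kd_sorbed(c, 10.0), freundlich(c, 1.5, 0.5), langmuir(c, 2.0, 0.5))
```

The Kd model is the special case that stays linear at all concentrations, which is exactly the limitation the record alludes to: at high loading the Langmuir form saturates while the Kd form does not.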

  4. Structured modelling and nonlinear analysis of PEM fuel cells; Strukturierte Modellierung und nichtlineare Analyse von PEM-Brennstoffzellen

    Energy Technology Data Exchange (ETDEWEB)

    Hanke-Rauschenbach, R.

    2007-10-26

    In the first part of this work a model structuring concept for electrochemical systems is presented. The application of such a concept for the structuring of a process model allows it to combine different fuel cell models to form a whole model family, regardless of their level of detail. Beyond this the concept offers the opportunity to flexibly exchange model entities on different model levels. The second part of the work deals with the nonlinear behaviour of PEM fuel cells. With the help of a simple, spatially lumped and isothermal model, bistable current-voltage characteristics of PEM fuel cells operated with low humidified feed gases are predicted and discussed in detail. The cell is found to exhibit current-voltage curves with pronounced local extrema in a parameter range that is of practical interest when operated at constant feed gas flow rates. (orig.)

  5. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    Science.gov (United States)

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust

  6. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation.

    Science.gov (United States)

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-07-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
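The uncertainty-analysis (UA) half of the GSA/UA framework described above amounts to propagating input distributions through the HSI model by Monte Carlo sampling and summarizing the distribution of model outputs. A minimal sketch with a hypothetical two-factor HSI; the suitability functions and input distributions are invented for illustration and are not those of the SAV models:

```python
import random
import statistics

def hsi(depth, salinity):
    """Hypothetical HSI: product of two suitability indices in [0, 1]."""
    si_depth = max(0.0, 1.0 - abs(depth - 1.0))  # optimum at 1 m depth
    si_sal = max(0.0, 1.0 - salinity / 35.0)     # declines with salinity
    return si_depth * si_sal

random.seed(42)
# Uncertain inputs: depth ~ N(1.0, 0.2) m, salinity ~ U(5, 15) ppt
scores = [hsi(random.gauss(1.0, 0.2), random.uniform(5, 15))
          for _ in range(10_000)]
print(statistics.mean(scores), statistics.stdev(scores))
```

The spread of `scores` (not just its mean) is the point of the exercise: it is what lets sites with genuinely good conditions be distinguished from poor ones despite input uncertainty.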

  7. From global economic modelling to household level analyses of food security and sustainability: how big is the gap and can we bridge it?

    NARCIS (Netherlands)

    Wijk, van M.T.

    2014-01-01

    Policy and decision makers have to make difficult choices to improve the food security of local people against the background of drastic global and local changes. Ex-ante impact assessment using integrated models can help them with these decisions. This review analyses the state of affairs of the

  8. Understanding N2O formation mechanisms through sensitivity analyses using a plant-wide benchmark simulation model

    DEFF Research Database (Denmark)

    Boiocchi, Riccardo; Gernaey, Krist; Sin, Gürkan

    2017-01-01

    In the present work, sensitivity analyses are performed on a plant-wide model incorporating the typical treatment unit of a full-scale wastewater treatment plant and N2O production and emission dynamics. The influence of operating temperature is investigated. The results are exploited to identify...

  9. Global sensitivity analysis of thermomechanical models in modelling of welding; Analyse de sensibilite globale de modeles thermomecanique de simulation numerique du soudage

    Energy Technology Data Exchange (ETDEWEB)

    Petelet, M

    2008-07-01

    The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range. This way of proceeding neglects the influence of the uncertainty of input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis represents a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and led to reducing the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)

  10. Global sensitivity analysis of thermo-mechanical models in numerical weld modelling; Analyse de sensibilite globale de modeles thermomecaniques de simulation numerique du soudage

    Energy Technology Data Exchange (ETDEWEB)

    Petelet, M

    2007-10-15

    The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range! This way of proceeding neglects the influence of the uncertainty of input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis represents a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and led to reducing the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)
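The variance-based global sensitivity analysis described in these two thesis records ranks inputs by their first-order Sobol indices. A sketch of the pick-freeze estimator on a toy two-input model; the "weld response" function is invented for illustration, not taken from the thesis:

```python
import random

def first_order_sobol(model, n=20_000, seed=1):
    """Pick-freeze estimate of first-order Sobol indices for a model of two
    independent U(0, 1) inputs."""
    rng = random.Random(seed)
    a = [(rng.random(), rng.random()) for _ in range(n)]
    b = [(rng.random(), rng.random()) for _ in range(n)]
    ya = [model(x1, x2) for x1, x2 in a]
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n
    indices = []
    for i in range(2):
        # Keep input i from sample A, resample the other input from B
        mixed = [model(a[j][0] if i == 0 else b[j][0],
                       a[j][1] if i == 1 else b[j][1]) for j in range(n)]
        cov = sum((ya[j] - mean) * (mixed[j] - mean) for j in range(n)) / n
        indices.append(cov / var)
    return indices

# Toy "weld response": output dominated by the first material property,
# so S1 should come out near 0.9 and S2 near 0.1 (analytic values).
s1, s2 = first_order_sobol(lambda x1, x2: 3.0 * x1 + 1.0 * x2)
print(round(s1, 2), round(s2, 2))
```

For this additive model the indices are exactly 9/10 and 1/10, so the estimator can be checked against a known answer before being trusted on an expensive simulator.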

  11. Analysis and modelling of the energy requirements of batch processes; Analyse und Modellierung des Energiebedarfes in Batch-Prozessen

    Energy Technology Data Exchange (ETDEWEB)

    Bieler, P.S.

    2002-07-01

    This intermediate report for the Swiss Federal Office of Energy (SFOE) presents the results of a project aiming to model the energy consumption of multi-product, multi-purpose batch production plants. The utilities investigated were electricity, brine and steam. Both top-down and bottom-up approaches are described, whereby top-down was used for the buildings where the batch process apparatus was installed. Modelling showed that for batch-plants at the building level, the product mix can be too variable and the diversity of products and processes too great for simple modelling. Further results obtained by comparing six different production plants that could be modelled are discussed. The several models developed are described and their wider applicability is discussed. Also, the results of comparisons made between modelled and actual values are presented. Recommendations for further work are made.

  12. A growth curve model with fractional polynomials for analysing incomplete time-course data in microarray gene expression studies

    DEFF Research Database (Denmark)

    Tan, Qihua; Thomassen, Mads; Hjelmborg, Jacob V B

    2011-01-01

    -course pattern in a gene by gene manner. We introduce a growth curve model with fractional polynomials to automatically capture the various time-dependent expression patterns and meanwhile efficiently handle missing values due to incomplete observations. For each gene, our procedure compares the performances...... among fractional polynomial models with power terms from a set of fixed values that offer a wide range of curve shapes and suggests a best fitting model. After a limited simulation study, the model has been applied to our human in vivo irritated epidermis data with missing observations to investigate...... time-dependent transcriptional responses to a chemical irritant. Our method was able to identify the various nonlinear time-course expression trajectories. The integration of growth curves with fractional polynomials provides a flexible way to model different time-course patterns together with model...
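A first-degree fractional polynomial model of the kind described above fits y = b0 + b1*t^p over a fixed set of candidate powers (with t^0 conventionally read as log t) and keeps the best-fitting power per gene. A sketch on hypothetical time-course data with a missing observation; the data are invented, not from the epidermis study:

```python
import math

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # conventional FP1 power set

def fp_term(t, p):
    return math.log(t) if p == 0 else t ** p

def fit_fp1(times, values):
    """Fit y = b0 + b1 * t^p for each candidate power p by least squares;
    return the best (p, b0, b1, rss). Missing observations (None) are dropped."""
    data = [(t, y) for t, y in zip(times, values) if y is not None]
    best = None
    for p in POWERS:
        xs = [fp_term(t, p) for t, _ in data]
        ys = [y for _, y in data]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        b0 = my - b1 * mx
        rss = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
        if best is None or rss < best[3]:
            best = (p, b0, b1, rss)
    return best

# Hypothetical expression values at hours 1..8, roughly sqrt-shaped,
# with one missing observation
times = [1, 2, 3, 4, 5, 6, 7, 8]
values = [1.0, 1.42, None, 1.98, 2.22, 2.45, 2.63, 2.84]
p, b0, b1, rss = fit_fp1(times, values)
print(p, round(rss, 4))
```

In a full analysis the per-power fits would be compared by a deviance or information criterion rather than raw RSS, but the search-over-powers structure is the same.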

  13. SPECIFICS OF THE APPLICATIONS OF MULTIPLE REGRESSION MODEL IN THE ANALYSES OF THE EFFECTS OF GLOBAL FINANCIAL CRISES

    Directory of Open Access Journals (Sweden)

    Željko V. Račić

    2010-12-01

    This paper aims to present the specifics of the application of a multiple linear regression model. The economic (financial) crisis is analyzed in terms of gross domestic product, which is modelled as a function of the foreign trade balance on the one hand and of credit cards, i.e. indebtedness of the population on this basis, on the other, in the USA (from 1999 to 2008). We used the extended application model, which shows how the analyst should run the whole development process of a regression model. This process began with simple statistical features and the application of regression procedures, and ended with residual analysis, intended for the study of the compatibility of data and model settings. This paper also analyzes the values of some standard statistics used in the selection of an appropriate regression model. Testing of the model is carried out with the use of the Statistics PASW 17 program.
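The model in this record (GDP as a function of the foreign trade balance and household credit-card indebtedness) is an ordinary least-squares fit with two predictors. A self-contained sketch via the normal equations, on invented data rather than the paper's series; the data are constructed from known coefficients so the fit can be verified exactly:

```python
def ols_two_predictors(y, x1, x2):
    """Fit y = b0 + b1*x1 + b2*x2 by solving the normal equations X'Xb = X'y."""
    n = len(y)
    X = [[1.0, a, b] for a, b in zip(x1, x2)]
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(3)]
           for i in range(3)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting on the 3x3 system
    M = [row + [v] for row, v in zip(xtx, xty)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Invented predictors; response built from known coefficients (exact fit)
x1 = [1, 2, 3, 4, 5, 6]
x2 = [2, 1, 4, 3, 6, 5]
y = [10 + 2 * a - 1.5 * b for a, b in zip(x1, x2)]
b0, b1, b2 = ols_two_predictors(y, x1, x2)
print(b0, b1, b2)
```

The residual analysis the paper emphasizes would then examine `y[i] - (b0 + b1*x1[i] + b2*x2[i])` for patterns that contradict the model assumptions.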

  14. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created by using meteorological data based on hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC) as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
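The logistic models in this record predict a binary outcome (persistent contrail occurrence) from meteorological predictors. A one-feature sketch fitted by plain gradient descent; the predictor (an ice-supersaturation-like anomaly) and labels are invented for illustration:

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """One-feature logistic regression fitted by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x      # gradient of the negative log-likelihood
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Hypothetical predictor and contrail observations (1 = persistent contrail)
xs = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
ys = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
w, b = fit_logistic(xs, ys)
preds = [1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0 for x in xs]
accuracy = sum(p == y for p, y in zip(preds, ys)) / len(ys)
print(round(accuracy, 2))
```

Scoring the fitted probabilities against held-out observations, as done here in-sample for brevity, is how the record's 75%+ accuracies would be obtained.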

  15. QTL analyses on genotype-specific component traits in a crop simulation model for capsicum annuum L.

    NARCIS (Netherlands)

    Wubs, A.M.; Heuvelink, E.; Dieleman, J.A.; Magan, J.J.; Palloix, A.; Eeuwijk, van F.A.

    2012-01-01

    Abstract: QTL for a complex trait like yield tend to be unstable across environments and show QTL by environment interaction. Direct improvement of complex traits by selecting on QTL is therefore difficult. For improvement of complex traits, crop growth models can be useful, as such models can

  16. A CFBPN Artificial Neural Network Model for Educational Qualitative Data Analyses: Example of Students' Attitudes Based on Kellerts' Typologies

    Science.gov (United States)

    Yorek, Nurettin; Ugulu, Ilker

    2015-01-01

    In this study, artificial neural networks are suggested as a model that can be "trained" to yield qualitative results out of a huge amount of categorical data. It can be said that this is a new approach applied in educational qualitative data analysis. In this direction, a cascade-forward back-propagation neural network (CFBPN) model was…

  17. Analyses of freshwater stress with a couple ground and surface water model in the Pra Basin, Ghana

    Science.gov (United States)

    Owusu, George; Owusu, Alex B.; Amankwaa, Ebenezer Forkuo; Eshun, Fatima

    2017-03-01

    The optimal management of water resources requires that the collected hydrogeological, meteorological, and spatial data be simulated and analyzed with appropriate models. In this study, a catchment-scale distributed hydrological modeling approach is applied to simulate water stress for the years 2000 and 2050 in the data-scarce Pra Basin, Ghana. The model is divided into three parts: the first computes surface and groundwater availability as well as shallow and deep groundwater residence times by using the POLFLOW model; the second extends the POLFLOW model with a water demand (domestic, industrial and agricultural) model; and the third part involves modeling water stress indices, from the ratio of water demand to water availability, for every part of the basin. On water availability, the model estimated the long-term annual Pra river discharge at the outflow point of the basin, Deboase, to be 198 m3/s against a long-term average measurement of 197 m3/s. Moreover, the relationship between simulated and measured discharge at 9 substations in the basin scored a Nash-Sutcliffe model efficiency coefficient of 0.98, which indicates that the model estimation is in agreement with the long-term measured discharge. The estimated total water demand increases significantly from 959,049,096 m3/year in 2000 to 3,749,559,019 m3/year in 2050 (p < 0.05). The number of districts experiencing water stress increases significantly (p = 0.00044) from 8 in 2000 to 21 out of 35 by the year 2050. This study will, among other things, help stakeholders in water resources management to identify and manage water stress areas in the basin.
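Two quantities in this record are simple formulas: the Nash-Sutcliffe efficiency used to validate the simulated discharge, and the water stress index as the ratio of demand to availability. A sketch with hypothetical discharge series (the numbers are invented; that they happen to give an NSE close to the reported 0.98 is by construction of the example):

```python
def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance of observations.
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((s - o) ** 2 for s, o in zip(simulated, observed))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

def water_stress_index(demand_m3, availability_m3):
    """Demand/availability ratio; values above ~0.4 are commonly read as stress."""
    return demand_m3 / availability_m3

# Hypothetical monthly discharges (m3/s) at one gauging station
obs = [120, 180, 260, 310, 240, 150]
sim = [115, 190, 250, 300, 245, 160]
nse = nash_sutcliffe(sim, obs)
print(round(nse, 4))
```

Applying `water_stress_index` per district to modeled demand and availability, and counting districts above a stress threshold, reproduces the structure of the record's 8-versus-21-districts comparison.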

  18. A Review On Accuracy and Uncertainty of Spatial Data and Analyses with special reference to Urban and Hydrological Modelling

    Science.gov (United States)

    Devendran, A. A.; Lakshmanan, G.

    2014-11-01

    Data quality for GIS processing and analysis is becoming an increasing concern due to the accelerated application of GIS technology in problem solving and decision making. Uncertainty in the geographic representation of the real world arises because these representations are incomplete. Identifying the sources of these uncertainties and the ways in which they operate in GIS-based representations is crucial in any spatial data representation and in any geospatial analysis. This paper reviews articles on the various components of spatial data quality and the uncertainties inherent in them, with special focus on two fields of application: urban simulation and hydrological modelling. Urban growth is a complicated process involving spatio-temporal changes of all socio-economic and physical components at different scales. The Cellular Automata (CA) model is one such simulation model: it randomly selects potential cells for urbanisation, and transition rules evaluate the properties of each cell and its neighbours. Uncertainty arising from CA modelling is assessed mainly using sensitivity analysis, including the Monte Carlo simulation method. Likewise, the importance of hydrological uncertainty analysis has been emphasized in recent years, and there is an urgent need to incorporate uncertainty estimation into water resources assessment procedures. The Soil and Water Assessment Tool (SWAT) is a continuous-time watershed model for evaluating various impacts of land use management and climate on hydrology and water quality. Hydrological model uncertainties in SWAT are addressed primarily by the Generalized Likelihood Uncertainty Estimation (GLUE) method.
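
The GLUE method named at the end of the abstract above can be sketched in a few lines: sample parameters by Monte Carlo, score each run with an informal likelihood measure, and keep only the 'behavioural' parameter sets above a threshold. The toy runoff model, likelihood choice, threshold, and data below are illustrative assumptions, not SWAT or the reviewed studies' setups.

```python
import random

random.seed(42)

def toy_model(rain, k):
    # placeholder runoff model: runoff = k * rainfall
    return [k * r for r in rain]

def likelihood(observed, simulated):
    # Nash-Sutcliffe efficiency used as an informal likelihood measure
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

rain = [10.0, 25.0, 5.0, 40.0]
obs = [4.8, 12.6, 2.4, 20.1]          # roughly consistent with k = 0.5

# Monte Carlo sampling of the parameter, behavioural threshold NSE > 0.9
samples = [random.uniform(0.0, 1.0) for _ in range(1000)]
behavioural = []
for k in samples:
    score = likelihood(obs, toy_model(rain, k))
    if score > 0.9:
        behavioural.append((k, score))

print(len(behavioural) > 0)                          # some sets survive
print(all(abs(k - 0.5) < 0.2 for k, _ in behavioural))
```

The behavioural sets (and their likelihood weights) would then be propagated to form prediction bounds, which is the uncertainty-estimation step GLUE contributes.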

  19. Analysing Possible Applications for Available Mathematical Models of Tracked Vehicle Movement Over the Rough Terrain to Examine Tracked Chain Dynamic Processes

    Directory of Open Access Journals (Sweden)

    M. E. Lupyan

    2014-01-01

    Full Text Available The article provides a survey of methods for studying tracked vehicle movement over unpaved ground and obstacles using various software systems. A relevant task is to optimise the chassis elements of a tracked vehicle at the design stage. The challenges engineers face when using different methods to study tracked vehicle elements are given, and the advantages of using simulation to study the loaded state of various chassis components are described. Besides, an important and relevant issue is raised, namely modelling vehicle movement in real time. In writing the article, different methods of modelling the interaction between a tracked vehicle chassis and the underlying subgrade, used in both domestic and foreign practice, were analysed. The analytical assumptions applied in creating these models and their basic elements are described. The ways of specifying the interaction between the track and the road wheels, the crawler-belt specification, and the interaction between its elements are also analysed in detail. Special attention is paid to the various ways of specifying the subgrade, both in planar models and in models that allow all chassis elements of a tracked vehicle to be studied as a whole. In addition to the classical simulation of tracked vehicle movement, used to analyse the ride qualities of a tracked vehicle and the loaded state of various chassis elements, a model for simulating movement in real time is offered. The article presents the advantages and disadvantages of the different movement models for the engineering analysis of tracked vehicle elements. The task of developing a simulation model of tracked vehicle movement is set, and requirements for such a model when used in the engineering analysis of chassis elements are defined. The lack of a single technique for the engineering analysis of a tracked vehicle chassis is noted.

  20. Combining hydraulic model, hydrogeomorphological observations and chemical analyses of surface waters to improve knowledge on karst flash floods genesis

    Directory of Open Access Journals (Sweden)

    F. Raynaud

    2015-06-01

    Full Text Available During a flood event over a karst watershed, the connections between surface and ground waters appear to be complex. The karst may attenuate surface floods by absorbing water, or contribute to the surface flood by direct contribution of karst waters to the rivers (perennial and overflowing springs) and by diffuse resurgence along the hillslopes. While it is possible to monitor each known outlet of a karst system, the diffuse contribution is difficult to assess. Furthermore, all these connections vary over time according to several factors, such as the water content of the soil and underground, the rainfall characteristics, and the runoff pathways. Therefore, the contribution of each compartment is generally difficult to assess, and flood dynamics are not fully understood. To address these difficulties, we analysed surface waters during six recent flood events in the Lirou watershed (a karst tributary of the Lez, in the south of France). Because of the specific chemical signature of karst waters, chemical analyses can supply information about water pathways and flood dynamics. We then used the dilution law to combine chemical results, flow data and field observations to assess the dynamics of the karst component of the flood. Finally, we discuss the surface or karst origin of the waters responsible for the rise in the apparent runoff coefficient during flash karst floods.
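
The dilution law mentioned above is, in its simplest form, a two-component mixing calculation on a conservative tracer: Q_total * C_total = Q_karst * C_karst + Q_surface * C_surface. A minimal sketch follows; the choice of tracer and all concentration values are made-up assumptions, not the study's measurements.

```python
def karst_fraction(c_total, c_surface, c_karst):
    """Fraction of total discharge attributable to karst water,
    from mass balance of a conservative tracer in a two-component mix."""
    return (c_total - c_surface) / (c_karst - c_surface)

# e.g. a dissolved-ion concentration (mg/L) distinguishing the end-members
c_surface = 2.0    # surface runoff end-member
c_karst = 10.0     # karst spring end-member
c_total = 6.0      # measured in the river during the flood

f = karst_fraction(c_total, c_surface, c_karst)
q_total = 24.0                     # measured discharge, m3/s
print(f)                           # 0.5
print(f * q_total)                 # 12.0 m3/s of karst origin
```

The separation is only as good as the assumption that the two end-member concentrations are distinct and stable during the event, which is why the study combines it with flow data and field observations.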

  1. Mechanical analyses on the digital behaviour of the Tokay gecko (Gekko gecko) based on a multi-level directional adhesion model

    OpenAIRE

    Wu, Xuan; Wang, Xiaojie; Mei, Tao; Sun, Shaoming

    2015-01-01

    This paper proposes a multi-level hierarchical model for the Tokay gecko (Gekko gecko) adhesive system and analyses the digital behaviour of the G. gecko under macro/meso-level scale. The model describes the structures of G. gecko's adhesive system from the nano-level spatulae to the sub-millimetre-level lamella. The G. gecko's seta is modelled using inextensible fibril based on Euler's elastica theorem. Considering the side contact of the spatular pads of the seta on the flat and rigid subst...

  2. Analysing the accuracy of pavement performance models in the short and long terms: GMDH and ANFIS methods

    NARCIS (Netherlands)

    Ziari, H.; Sobhani, J.; Ayoubinejad, J.; Hartmann, Timo

    2016-01-01

    The accuracy of pavement performance prediction is a critical part of pavement management and directly influences maintenance and rehabilitation strategies. Many models with various specifications have been proposed by researchers and used by agencies. This study presents nine variables affecting

  3. Landscaping analyses of the ROC predictions of discrete-slots and signal-detection models of visual working memory.

    Science.gov (United States)

    Donkin, Chris; Tran, Sophia Chi; Nosofsky, Robert

    2014-10-01

    A fundamental issue concerning visual working memory is whether its capacity limits are better characterized in terms of a limited number of discrete slots (DSs) or a limited amount of a shared continuous resource. Rouder et al. (2008) found that a mixed-attention, fixed-capacity, DS model provided the best explanation of behavior in a change detection task, outperforming alternative continuous signal detection theory (SDT) models. Here, we extend their analysis in two ways: first, with experiments aimed at better distinguishing between the predictions of the DS and SDT models, and second, using a model-based analysis technique called landscaping, in which the functional-form complexity of the models is taken into account. We find that the balance of evidence supports a DS account of behavior in change detection tasks but that the SDT model is best when the visual displays always consist of the same number of items. In our General Discussion section, we outline, but ultimately reject, a number of potential explanations for the observed pattern of results. We finish by describing future research that is needed to pinpoint the basis for this observed pattern of results.
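
The contrast that the landscaping analysis above exploits can be seen in a small sketch: under common textbook assumptions, a fixed-capacity discrete-slots model traces a linear ROC as the guessing rate varies, whereas an equal-variance signal detection model traces a curved one. The parameterisation below (d = min(k/N, 1), guessing rate g, and the d' and criterion values) is a standard simplification, not the authors' exact models.

```python
import math

def ds_roc(k, n_items, guess_rates):
    """Discrete slots: an item is in a slot with prob d = min(k/N, 1);
    otherwise the observer guesses 'change' with rate g."""
    d = min(k / n_items, 1.0)
    return [(g, d + (1.0 - d) * g) for g in guess_rates]   # (FA, hit)

def sdt_roc(dprime, criteria):
    """Equal-variance SDT: FA = Phi(-c), hit = Phi(d' - c)."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return [(phi(-c), phi(dprime - c)) for c in criteria]

gs = [0.1, 0.5, 0.9]
ds = ds_roc(k=3, n_items=6, guess_rates=gs)
s1 = (ds[1][1] - ds[0][1]) / (ds[1][0] - ds[0][0])
s2 = (ds[2][1] - ds[1][1]) / (ds[2][0] - ds[1][0])
print(abs(s1 - s2) < 1e-9)   # True: the DS ROC is a straight line

sdt = sdt_roc(1.5, [-0.5, 0.75, 2.0])
t1 = (sdt[1][1] - sdt[0][1]) / (sdt[1][0] - sdt[0][0])
t2 = (sdt[2][1] - sdt[1][1]) / (sdt[2][0] - sdt[1][0])
print(abs(t1 - t2) > 0.01)   # True: the SDT ROC is curved
```

Landscaping then asks which of these functional forms better accounts for data generated under each model, so the models' differing flexibility is priced in rather than ignored.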

  4. Assessment of the primary rotational stability of uncemented hip stems using an analytical model: comparison with finite element analyses.

    Science.gov (United States)

    Zeman, Maria E; Sauwen, Nicolas; Labey, Luc; Mulier, Michiel; Van der Perre, Georges; Jaecques, Siegfried V N

    2008-09-25

    Sufficient primary stability is a prerequisite for the clinical success of cementless implants. Therefore, it is important to have an estimation of the primary stability that can be achieved with new stem designs in a pre-clinical trial. Fast assessment of the primary stability is also useful in the preoperative planning of total hip replacements, and to an even larger extent in intraoperatively custom-made prosthesis systems, which result in a wide variety of stem geometries. An analytical model is proposed to numerically predict the relative primary stability of cementless hip stems. This analytical approach is based upon the principle of virtual work and a straightforward mechanical model. For five custom-made implant designs, the resistance against axial rotation was assessed through the analytical model as well as through finite element modelling (FEM). The analytical approach can be considered as a first attempt to theoretically evaluate the primary stability of hip stems without using FEM, which makes it fast and inexpensive compared to other methods. A reasonable agreement was found in the stability ranking of the stems obtained with both methods. However, due to the simplifying assumptions underlying the analytical model it predicts very rigid stability behaviour: estimated stem rotation was two to three orders of magnitude smaller, compared with the FEM results. Based on the results of this study, the analytical model might be useful as a comparative tool for the assessment of the primary stability of cementless hip stems.

  5. Assessment of the primary rotational stability of uncemented hip stems using an analytical model: Comparison with finite element analyses

    Directory of Open Access Journals (Sweden)

    Van der Perre Georges

    2008-09-01

    Full Text Available Abstract Background Sufficient primary stability is a prerequisite for the clinical success of cementless implants. Therefore, it is important to have an estimation of the primary stability that can be achieved with new stem designs in a pre-clinical trial. Fast assessment of the primary stability is also useful in the preoperative planning of total hip replacements, and to an even larger extent in intraoperatively custom-made prosthesis systems, which result in a wide variety of stem geometries. Methods An analytical model is proposed to numerically predict the relative primary stability of cementless hip stems. This analytical approach is based upon the principle of virtual work and a straightforward mechanical model. For five custom-made implant designs, the resistance against axial rotation was assessed through the analytical model as well as through finite element modelling (FEM). Results The analytical approach can be considered as a first attempt to theoretically evaluate the primary stability of hip stems without using FEM, which makes it fast and inexpensive compared to other methods. A reasonable agreement was found in the stability ranking of the stems obtained with both methods. However, due to the simplifying assumptions underlying the analytical model it predicts very rigid stability behaviour: estimated stem rotation was two to three orders of magnitude smaller, compared with the FEM results. Conclusion Based on the results of this study, the analytical model might be useful as a comparative tool for the assessment of the primary stability of cementless hip stems.

  6. A Growth Curve Model with Fractional Polynomials for Analysing Incomplete Time-Course Data in Microarray Gene Expression Studies

    Science.gov (United States)

    Tan, Qihua; Thomassen, Mads; Hjelmborg, Jacob v. B.; Clemmensen, Anders; Andersen, Klaus Ejner; Petersen, Thomas K.; McGue, Matthew; Christensen, Kaare; Kruse, Torben A.

    2011-01-01

    Identifying the various gene expression response patterns is a challenging issue in expression microarray time-course experiments. Due to heterogeneity in the regulatory reaction among thousands of genes tested, it is impossible to manually characterize a parametric form for each of the time-course pattern in a gene by gene manner. We introduce a growth curve model with fractional polynomials to automatically capture the various time-dependent expression patterns and meanwhile efficiently handle missing values due to incomplete observations. For each gene, our procedure compares the performances among fractional polynomial models with power terms from a set of fixed values that offer a wide range of curve shapes and suggests a best fitting model. After a limited simulation study, the model has been applied to our human in vivo irritated epidermis data with missing observations to investigate time-dependent transcriptional responses to a chemical irritant. Our method was able to identify the various nonlinear time-course expression trajectories. The integration of growth curves with fractional polynomials provides a flexible way to model different time-course patterns together with model selection and significant gene identification strategies that can be applied in microarray-based time-course gene expression experiments with missing observations. PMID:21966290
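
The fractional-polynomial selection step described above can be sketched as follows: fit a first-degree model y = b0 + b1 * t^p for each power p in the conventional FP set (p = 0 taken as log t, per FP convention) and keep the best-fitting power. The fitting routine and data below are illustrative, not the study's procedure or expression measurements.

```python
import math

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]   # conventional FP power set

def transform(t, p):
    return math.log(t) if p == 0 else t ** p

def fit_simple(x, y):
    """Ordinary least squares for y = b0 + b1*x; returns (b0, b1, rss)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    rss = sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))
    return b0, b1, rss

t = [1.0, 2.0, 4.0, 8.0, 16.0]             # observation times
y = [0.02, 0.70, 1.37, 2.10, 2.75]          # synthetic log-shaped trajectory

best_p, best_rss = None, float("inf")
for p in POWERS:
    _, _, rss = fit_simple([transform(ti, p) for ti in t], y)
    if rss < best_rss:
        best_p, best_rss = p, rss

print(best_p)    # 0 -> a logarithmic time course fits this gene best
```

Because each gene's times enter only through the transformed predictor, genes with missing time points simply contribute fewer (x, y) pairs, which is how the approach tolerates incomplete observations.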

  7. Modeling the potential risk factors of bovine viral diarrhea prevalence in Egypt using univariable and multivariable logistic regression analyses

    Directory of Open Access Journals (Sweden)

    Abdelfattah M. Selim

    2018-03-01

    Full Text Available Aim: The present cross-sectional study was conducted to determine the seroprevalence and potential risk factors associated with Bovine viral diarrhea virus (BVDV) disease in cattle and buffaloes in Egypt, to model the potential risk factors associated with the disease using logistic regression (LR) models, and to fit the best predictive model for the current data. Materials and Methods: A total of 740 blood samples were collected between November 2012 and March 2013 from animals aged between 6 months and 3 years. The potential risk factors studied were species, age, sex, and herd location. All serum samples were examined with an indirect ELISA test for antibody detection. Data were analyzed with different statistical approaches such as the Chi-square test, odds ratios (OR), and univariable and multivariable LR models. Results: Results revealed a non-significant association between seropositivity for BVDV and all risk factors except species. Seroprevalence percentages were 40% and 23% for cattle and buffaloes, respectively. ORs for all categories were close to one, with the highest OR for cattle relative to buffaloes, at 2.237. Likelihood ratio tests showed a significant drop in the -2LL from univariable to multivariable LR models. Conclusion: There was evidence of high seroprevalence of BVDV among cattle as compared with buffaloes, with the possibility of infection in different age groups of animals. In addition, the multivariable LR model was proved to provide more information for association and prediction purposes relative to univariable LR models and Chi-square tests when more than one predictor is available.
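
As a rough illustration of the odds-ratio arithmetic reported above, one can back-calculate a plausible 2x2 table from the stated seroprevalences (40% of cattle, 23% of buffaloes). The counts below are hypothetical, assuming an even 370/370 split of the 740 samples, which the abstract does not state.

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table laid out as
    [[exposed positive, exposed negative],
     [unexposed positive, unexposed negative]]."""
    return (a * d) / (b * c)

# hypothetical counts: ~40% of 370 cattle and ~23% of 370 buffaloes seropositive
cattle_pos, cattle_neg = 148, 222
buff_pos, buff_neg = 85, 285

or_species = odds_ratio(cattle_pos, cattle_neg, buff_pos, buff_neg)
print(round(or_species, 3))   # ~2.24, close to the reported OR of 2.237
```

A univariable LR of serostatus on species would return the same OR as exp(coefficient); the multivariable model then adjusts it for age, sex, and herd location simultaneously, which is why the likelihood ratio test favours it.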

  8. Systems genetics of obesity in an F2 pig model by genome-wide association, genetic network and pathway analyses

    DEFF Research Database (Denmark)

    Kogelman, Lisette; Pant, Sameer Dinkar; Fredholm, Merete

    2014-01-01

    Obesity is a complex condition with world-wide exponentially rising prevalence rates, linked with severe diseases like Type 2 Diabetes. Economic and welfare consequences have led to a raised interest in a better understanding of the biological and genetic background. To date, whole genome...... of obesity-related phenotypes and genotyped using the 60K SNP chip. Firstly, Genome Wide Association (GWA) analysis was performed on the Obesity Index to locate candidate genomic regions that were further validated using combined Linkage Disequilibrium Linkage Analysis and investigated by evaluation...... of haplotype blocks. We built Weighted Interaction SNP Hub (WISH) and differentially wired (DW) networks using genotypic correlations amongst obesity-associated SNPs resulting from GWA analysis. GWA results and SNP modules detected by WISH and DW analyses were further investigated by functional enrichment...

  9. Complex patterns of divergence among green-sensitive (RH2a) African cichlid opsins revealed by Clade model analyses

    Directory of Open Access Journals (Sweden)

    Weadick Cameron J

    2012-10-01

    Full Text Available Abstract Background Gene duplications play an important role in the evolution of functional protein diversity. Some models of duplicate gene evolution predict complex forms of paralog divergence; orthologous proteins may diverge as well, further complicating patterns of divergence among and within gene families. Consequently, studying the link between protein sequence evolution and duplication requires the use of flexible substitution models that can accommodate multiple shifts in selection across a phylogeny. Here, we employed a variety of codon substitution models, primarily Clade models, to explore how selective constraint evolved following the duplication of a green-sensitive (RH2a) visual pigment protein (opsin) in African cichlids. Past studies have linked opsin divergence to ecological and sexual divergence within the African cichlid adaptive radiation. Furthermore, biochemical and regulatory differences between the RH2aα and RH2aβ paralogs have been documented. It thus seems likely that selection varies in complex ways throughout this gene family. Results Clade model analysis of African cichlid RH2a opsins revealed a large increase in the nonsynonymous-to-synonymous substitution rate ratio (ω) following the duplication, as well as an even larger increase, one consistent with positive selection, for Lake Tanganyikan cichlid RH2aβ opsins. Analysis using the popular Branch-site models, by contrast, revealed no such alteration of constraint. Several amino acid sites known to influence spectral and non-spectral aspects of opsin biochemistry were found to be evolving divergently, suggesting that orthologous RH2a opsins may vary in terms of spectral sensitivity and response kinetics. Divergence appears to be occurring despite intronic gene conversion among the tandemly-arranged duplicates. Conclusions Our findings indicate that variation in selective constraint is associated with both gene duplication and divergence among orthologs in African

  10. Review of models used in economic analyses of new oral treatments for type 2 diabetes mellitus.

    Science.gov (United States)

    Asche, Carl V; Hippler, Stephen E; Eurich, Dean T

    2014-01-01

    Economic models are considered to be important, as they help evaluate the long-term impact of diabetes treatment. To date, it appears that no article has reviewed and critically appraised the cost-effectiveness models developed to evaluate new oral treatments [glucagon-like peptide-1 (GLP-1) receptor agonists and dipeptidyl peptidase-4 (DPP-4) inhibitors] for type 2 diabetes mellitus (T2DM). This study aimed to provide insight into the utilization of cost-effectiveness modelling methods. Our study focused on the applicability of these models, particularly the major assumptions related to the clinical parameters (glycated haemoglobin [A1c], systolic blood pressure [SBP], lipids and weight) used in the models, and subsequent clinical outcomes. MEDLINE and EMBASE were searched from 1 January 2004 to 14 February 2013 in order to identify published cost-effectiveness evaluations for the treatment of T2DM by new oral treatments (GLP-1 receptor agonists and DPP-4 inhibitors). Once identified, the articles were reviewed and grouped together according to the type of model. The following data were captured for each study: comparators; country; evaluation and key cost drivers; time horizon; perspective; discounting rates; currency/year; cost-effectiveness threshold; sensitivity analysis; and cost-effectiveness analysis curves. A total of 15 studies were identified in our review. Nearly all of the models utilized a health care payer perspective and provided a lifetime horizon. The CORE Diabetes Model, UK Prospective Diabetes Study (UKPDS) Outcomes Model, Cardiff Diabetes Model, Centers for Disease Control and Prevention (CDC) Diabetes Cost-Effectiveness Group Model and Diabetes Mellitus Model were cited. With the exception of two studies, all of the studies made significant assumptions surrounding the impact of GLP-1 receptor agonists or DPP-4 inhibitors on clinical parameters and subsequent short- and long-term outcomes. Moreover, often the differences

  11. Analysis and modelling of the fuels european market; Analyse et modelisation des prix des produits petroliers combustibles en europe

    Energy Technology Data Exchange (ETDEWEB)

    Simon, V

    1999-04-01

    The research focuses on European fuel prices, covering the Rotterdam and Genoa spot markets as well as the German, Italian and French domestic markets. The thesis also tries to explain the impact of the London IPE futures market on spot prices. Mainstream research has demonstrated that co-integration is the best theoretical approach for investigating long-run equilibrium relations. Particular attention is devoted to structural change in the econometric modelling of these equilibria. A deep analysis of the main European petroleum product markets permits a better model specification for each of these markets. Further, we test whether any relations between spot and domestic prices can be confirmed. Finally, alternative scenarios are depicted to forecast prices in the petroleum product markets. The objective is to observe the model's reaction to changes in crude oil prices. (author)

  12. Qualitative and quantitative analyses of the echolocation strategies of bats on the basis of mathematical modelling and laboratory experiments.

    Science.gov (United States)

    Aihara, Ikkyu; Fujioka, Emyo; Hiryu, Shizuko

    2013-01-01

    Prey pursuit by an echolocating bat was studied theoretically and experimentally. First, a mathematical model was proposed to describe the flight dynamics of a bat and a single prey. In this model, the flight angle of the bat was affected by [Formula: see text] angles related to the flight path of the single moving prey, that is, the angle from the bat to the prey and the flight angle of the prey. Numerical simulation showed that the success rate of prey capture was high, when the bat mainly used the angle to the prey to minimize the distance to the prey, and also used the flight angle of the prey to minimize the difference in flight directions of itself and the prey. Second, parameters in the model were estimated according to experimental data obtained from video recordings taken while a Japanese horseshoe bat (Rhinolophus ferrumequinum nippon) pursued a moving moth (Goniocraspidum pryeri) in a flight chamber. One of the estimated parameter values, which represents the ratio in the use of the [Formula: see text] angles, was consistent with the optimal value of the numerical simulation. This agreement between the numerical simulation and parameter estimation suggests that a bat chooses an effective flight path for successful prey capture by using the [Formula: see text] angles. Finally, the mathematical model was extended to include a bat and [Formula: see text] prey. Parameter estimation of the extended model based on laboratory experiments revealed the existence of the bat's dynamical attention towards [Formula: see text] prey, that is, simultaneous pursuit of [Formula: see text] prey and selective pursuit of respective prey. Thus, our mathematical model contributes not only to quantitative analysis of effective foraging, but also to qualitative evaluation of a bat's dynamical flight strategy during multiple prey pursuit.
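
The two-angle pursuit rule described above can be sketched as a toy simulation: the bat's steering blends the bearing to the prey with the prey's own flight angle, weighted by a ratio parameter. The weight, turn gain, speeds, and geometry below are our illustrative assumptions, not the authors' estimated parameters.

```python
import math

def step(bat, prey, w, speed_bat=5.0, speed_prey=3.0, dt=0.05):
    """Advance one time step; each agent is (x, y, heading in radians)."""
    bx, by, bh = bat
    px, py, ph = prey
    bearing = math.atan2(py - by, px - bx)      # angle from bat to prey
    # blend the two angles: w weights the bearing to the prey,
    # (1 - w) weights the prey's own flight angle
    target = w * bearing + (1.0 - w) * ph
    bh += 0.5 * (target - bh)                   # smooth turn toward target
    return ((bx + speed_bat * dt * math.cos(bh),
             by + speed_bat * dt * math.sin(bh), bh),
            (px + speed_prey * dt * math.cos(ph),
             py + speed_prey * dt * math.sin(ph), ph))

bat = (0.0, 0.0, 0.0)                 # bat starts at origin, heading east
prey = (2.0, 1.0, math.pi / 2)        # prey flies straight north
best = float("inf")                   # closest approach over the chase
for _ in range(200):
    bat, prey = step(bat, prey, w=0.8)
    best = min(best, math.hypot(bat[0] - prey[0], bat[1] - prey[1]))

print(best < 0.5)   # mostly-bearing pursuit brings the bat close to the prey
```

Sweeping w between 0 and 1 in such a simulation is one way to see why an intermediate ratio (mostly bearing, with some heading matching) can outperform either cue alone, which is the flavour of the optimum reported in the abstract.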

  13. Qualitative and quantitative analyses of the echolocation strategies of bats on the basis of mathematical modelling and laboratory experiments.

    Directory of Open Access Journals (Sweden)

    Ikkyu Aihara

    Full Text Available Prey pursuit by an echolocating bat was studied theoretically and experimentally. First, a mathematical model was proposed to describe the flight dynamics of a bat and a single prey. In this model, the flight angle of the bat was affected by [Formula: see text] angles related to the flight path of the single moving prey, that is, the angle from the bat to the prey and the flight angle of the prey. Numerical simulation showed that the success rate of prey capture was high, when the bat mainly used the angle to the prey to minimize the distance to the prey, and also used the flight angle of the prey to minimize the difference in flight directions of itself and the prey. Second, parameters in the model were estimated according to experimental data obtained from video recordings taken while a Japanese horseshoe bat (Rhinolophus ferrumequinum nippon) pursued a moving moth (Goniocraspidum pryeri) in a flight chamber. One of the estimated parameter values, which represents the ratio in the use of the [Formula: see text] angles, was consistent with the optimal value of the numerical simulation. This agreement between the numerical simulation and parameter estimation suggests that a bat chooses an effective flight path for successful prey capture by using the [Formula: see text] angles. Finally, the mathematical model was extended to include a bat and [Formula: see text] prey. Parameter estimation of the extended model based on laboratory experiments revealed the existence of the bat's dynamical attention towards [Formula: see text] prey, that is, simultaneous pursuit of [Formula: see text] prey and selective pursuit of respective prey. Thus, our mathematical model contributes not only to quantitative analysis of effective foraging, but also to qualitative evaluation of a bat's dynamical flight strategy during multiple prey pursuit.

  14. Methods and theory in bone modeling drift: comparing spatial analyses of primary bone distributions in the human humerus.

    Science.gov (United States)

    Maggiano, Corey M; Maggiano, Isabel S; Tiesler, Vera G; Chi-Keb, Julio R; Stout, Sam D

    2016-01-01

    This study compares two novel methods quantifying bone shaft tissue distributions, and relates observations on human humeral growth patterns for applications in anthropological and anatomical research. Microstructural variation in compact bone occurs due to developmental and mechanically adaptive circumstances that are 'recorded' by forming bone and are important for interpretations of growth, health, physical activity, adaptation, and identity in the past and present. Those interpretations hinge on a detailed understanding of the modeling process by which bones achieve their diametric shape, diaphyseal curvature, and general position relative to other elements. Bone modeling is a complex aspect of growth, potentially causing the shaft to drift transversely through formation and resorption on opposing cortices. Unfortunately, the specifics of modeling drift are largely unknown for most skeletal elements. Moreover, bone modeling has seen little quantitative methodological development compared with secondary bone processes, such as intracortical remodeling. The techniques proposed here, starburst point-count and 45° cross-polarization hand-drawn histomorphometry, permit the statistical and populational analysis of human primary tissue distributions and provide similar results despite being suitable for different applications. This analysis of a pooled archaeological and modern skeletal sample confirms the importance of extreme asymmetry in bone modeling as a major determinant of microstructural variation in diaphyses. Specifically, humeral drift is posteromedial in the human humerus, accompanied by a significant rotational trend. In general, results encourage the usage of endocortical primary bone distributions as an indicator and summary of bone modeling drift, enabling quantitative analysis by direction and proportion in other elements and populations. © 2015 Anatomical Society.

  15. Mind the gaps: a state-space model for analysing the dynamics of North Sea herring spawning components

    DEFF Research Database (Denmark)

    Payne, Mark

    2010-01-01

    , the sum of the fitted abundance indices across all components proves an excellent proxy for the biomass of the total stock, even though the model utilizes information at the individual-component level. The Orkney–Shetland component appears to have recovered faster from historic depletion events than...... the other components, whereas the Downs component has been the slowest. These differences give rise to changes in stock composition, which are shown to vary widely within a relatively short time. The modelling framework provides a valuable tool for studying and monitoring the dynamics of the individual...

  16. Greenhouse gas network design using backward Lagrangian particle dispersion modelling – Part 2: Sensitivity analyses and South African test case

    CSIR Research Space (South Africa)

    Nickless, A

    2014-05-01

    Full Text Available This is the second part of a two-part paper considering network design based on a Lagrangian stochastic particle dispersion model (LPDM), aimed at reducing the uncertainty of the flux estimates achievable for the region of interest by the continuous...

  17. The mental health care model in Brazil: analyses of the funding, governance processes, and mechanisms of assessment

    Directory of Open Access Journals (Sweden)

    Thiago Lavras Trapé

    Full Text Available ABSTRACT OBJECTIVE This study aims to analyze the current status of the mental health care model of the Brazilian Unified Health System, according to its funding, governance processes, and mechanisms of assessment. METHODS We have carried out a documentary analysis of the ordinances, technical reports, conference reports, normative resolutions, and decrees from 2009 to 2014. RESULTS This is a time of consolidation of the psychosocial model, with expansion of the health care network and inversion of the funding for community services with a strong emphasis on the area of crack cocaine and other drugs. Mental health is an underfunded area within the chronically underfunded Brazilian Unified Health System. The governance model constrains the progress of essential services, which creates the need for the incorporation of a process of regionalization of the management. The mechanisms of assessment are not incorporated into the health policy in the bureaucratic field. CONCLUSIONS There is a need to expand the global funding of the area of health, specifically mental health, which has been shown to be a successful policy. The current focus of the policy seems to be archaic in relation to the precepts of the psychosocial model. Mechanisms of assessment need to be expanded.

  18. Occupant-level injury severity analyses for taxis in Hong Kong: A Bayesian space-time logistic model.

    Science.gov (United States)

    Meng, Fanyu; Xu, Pengpeng; Wong, S C; Huang, Helai; Li, Y C

    2017-11-01

    This study aimed to identify the factors affecting the crash-related severity level of injuries in taxis and quantify the associations between these factors and taxi occupant injury severity. Casualties resulting from taxi crashes from 2004 to 2013 in Hong Kong were divided into four categories: taxi drivers, taxi passengers, private car drivers and private car passengers. To avoid any biased interpretation caused by unobserved spatial and temporal effects, a Bayesian hierarchical logistic modeling approach with conditional autoregressive priors was applied, and four different model forms were tested. For taxi drivers and passengers, the model with space-time interaction was proven to most properly address the unobserved heterogeneity effects. The results indicated that time of week, number of vehicles involved, weather, point of impact and driver age were closely associated with taxi drivers' injury severity level in a crash. For taxi passengers' injury severity an additional factor, taxi service area, was influential. To investigate the differences between taxis and other traffic, similar models were established for private car drivers and passengers. The results revealed that although location in the network and driver gender significantly influenced private car drivers' injury severity, they did not influence taxi drivers' injury severity. Compared with taxi passengers, the injury severity of private car passengers was more sensitive to average speed and whether seat belts were worn. Older drivers, urban taxis and fatigued driving were identified as factors that increased taxi occupant injury severity in Hong Kong. Copyright © 2017 Elsevier Ltd. All rights reserved.
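
The hierarchical logistic structure described in this abstract can be sketched minimally. The covariates, coefficients and random-effect magnitudes below are hypothetical placeholders, not estimates from the Hong Kong data; the sketch only shows how fixed effects combine with spatial, temporal and space-time interaction terms in the linear predictor:

```python
import math

def severity_logit(beta, x, phi_s, delta_t, theta_st):
    """Linear predictor of a hierarchical logistic model: fixed effects
    beta.x plus a spatial random effect phi_s, a temporal random effect
    delta_t and a space-time interaction theta_st; returns P(severe)."""
    eta = sum(b * xi for b, xi in zip(beta, x)) + phi_s + delta_t + theta_st
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical covariates: [intercept, night-time flag, vehicles involved, driver age/10]
beta = [-1.2, 0.4, 0.3, 0.15]
p_night = severity_logit(beta, [1, 1, 2, 5.5], phi_s=0.1, delta_t=-0.05, theta_st=0.02)
p_day = severity_logit(beta, [1, 0, 2, 5.5], phi_s=0.1, delta_t=-0.05, theta_st=0.02)
```

In the full Bayesian treatment the random effects carry conditional autoregressive priors; here they are fixed numbers purely to show where they enter the model.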

  19. Radiation transport analyses for IFMIF design by the Attila software using a Monte-Carlo source model

    International Nuclear Information System (INIS)

    Arter, W.; Loughlin, M.J.

    2009-01-01

    Accurate calculation of the neutron transport through the shielding of the IFMIF test cell, defined by CAD, is a difficult task for several reasons. The ability of the powerful deterministic radiation transport code Attila to do this rapidly and reliably has been studied. Three models of increasing geometrical complexity were produced from the CAD using the CADfix software. A fourth model was produced to represent transport within the cell. The work also involved the conversion of the Vitenea-IEF database for high energy neutrons into a format usable by Attila, and the conversion of a particle source specified in MCNP WSSA format to a form usable by Attila. The final model encompassed the entire test cell environment, with only minor modifications. On a state-of-the-art PC, Attila took approximately 3 h to perform the calculations, as a consequence of a careful mesh 'layering'. The results strongly suggest that Attila will be a valuable tool for modelling radiation transport in IFMIF, and for similar problems.

  20. A mathematical high bar-human body model for analysing and interpreting mechanical-energetic processes on the high bar.

    Science.gov (United States)

    Arampatzis, A; Brüggemann, G P

    1998-12-01

    The aims of this study were: 1. To study the transfer of energy between the high bar and the gymnast. 2. To develop criteria from the utilisation of high bar elasticity and the utilisation of muscle capacity to assess the effectiveness of a movement solution. 3. To study the influence of varying segment movement upon release parameters. For these purposes a model of the human body attached to the high bar (high bar-human body model) was developed. The human body was modelled using a 15-segment body system. The joint-beam element method (superelement) was employed for modelling the high bar. A superelement consists of four rigid segments connected by joints (two Cardan joints and one rotational-translational joint) and springs (seven rotation springs and one tension-compression spring). The high bar was modelled using three superelements. The input data required for the high bar human body model were collected with video-kinematographic (50 Hz) and dynamometric (500 Hz) techniques. Masses and moments of inertia of the 15 segments were calculated using the data from the Zatsiorsky et al. (1984) model. There are two major phases characteristic of the giant swing prior to dismounts from the high bar. In the first phase the gymnast attempts to supply energy to the high bar-humanbody system through muscle activity and to store this energy in the high bar. The difference between the energy transferred to the high bar and the reduction in the total energy of the body could be adopted as a criterion for the utilisation of high bar elasticity. The energy previously transferred into the high bar is returned to the body during the second phase. An advantageous increase in total body energy at the end of the exercise could only be obtained through muscle energy supply. An index characterising the utilisation of muscle capacity was developed out of the difference between the increase in total body energy and the energy returned from the high bar. 
A delayed and initially slow but

  1. Effects of wintertime atmospheric river landfalls on surface air temperatures in the Western US: Analyses and model evaluation

    Science.gov (United States)

    Kim, J.; Guan, B.; Waliser, D. E.; Ferraro, R.

    2016-12-01

    Landfalling atmospheric rivers (ARs) affect the wintertime surface air temperatures as shown in earlier studies. The AR-related surface air temperatures can exert significant influence on the hydrology in the US Pacific coast region especially through rainfall-snowfall partitioning and the snowpack in high elevation watersheds as they are directly related with the freezing-level altitudes. These effects of temperature perturbations can in turn affect hydrologic events of various time scales such as flash flooding by the combined effects of rainfall and snowmelt, and the warm season runoff from melting snowpack, especially in conjunction with the AR effects on winter precipitation and rain-on-snow events in WUS. Thus, understanding the effects of AR landfalls on the surface temperatures and examining the capability of climate models in simulating these effects are an important practical concern for WUS. This study aims to understand the effects of AR landfalls on the characteristics of surface air temperatures in WUS, especially seasonal means and PDFs and to evaluate the fidelity of model data produced in the NASA downscaling experiment for the 10 winters from Nov. 1999 to Mar. 2010 using an AR-landfall chronology based on the vertically-integrated water vapor flux calculated from the MERRA2 reanalysis. Model skill is measured using metrics including regional means, a skill score based on correlations and mean-square errors, the similarity between two PDF shapes, and Taylor diagrams. Results show that the AR landfalls are related with higher surface air temperatures in WUS, especially in inland regions. The AR landfalls also reduce the range of surface air temperature PDF, largely by reducing the events in the lower temperature range. The shift in the surface air temperature PDF is consistent with the positive anomalies in the winter-mean temperature. Model data from the NASA downscaling experiment reproduce the AR effects on the temperature PDF, at least

  2. Updated model for radionuclide transport in the near-surface till at Forsmark - Implementation of decay chains and sensitivity analyses

    Energy Technology Data Exchange (ETDEWEB)

    Pique, Angels; Pekala, Marek; Molinero, Jorge; Duro, Lara; Trinchero, Paolo; Vries, Luis Manuel de [Amphos 21 Consulting S.L., Barcelona (Spain)

    2013-02-15

    The Forsmark area has been proposed for potential siting of a deep underground (geological) repository for radioactive waste in Sweden. Safety assessment of the repository requires radionuclide transport from the disposal depth to recipients at the surface to be studied quantitatively. The near-surface Quaternary deposits at Forsmark are considered a pathway for potential discharge of radioactivity from the underground facility to the biosphere, thus radionuclide transport in this system has been extensively investigated in recent years. The most recent work of Pique and co-workers (reported in SKB report R-10-30) demonstrated that in the case of a release of radioactivity the near-surface sedimentary system at Forsmark would act as an important geochemical barrier, retarding the transport of reactive radionuclides through a combination of retention processes. In this report the conceptual model of radionuclide transport in the Quaternary till at Forsmark has been updated, by considering recent revisions regarding the near-surface lithology. In addition, the impact of important conceptual assumptions made in the model has been evaluated through a series of deterministic and probabilistic (Monte Carlo) sensitivity calculations. The sensitivity study focused on the following effects: 1. Radioactive decay of {sup 135}Cs, {sup 59}Ni, {sup 230}Th and {sup 226}Ra and effects on their transport. 2. Variability in key geochemical parameters, such as the composition of the deep groundwater, availability of sorbing materials in the till, and mineral equilibria. 3. Variability in hydraulic parameters, such as the definition of hydraulic boundaries, and values of hydraulic conductivity, dispersivity and the deep groundwater inflow rate.
The overarching conclusion from this study is that the current implementation of the model is robust (the model is largely insensitive to variations in the parameters within the studied ranges) and conservative (the Base Case calculations have a
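
A probabilistic sensitivity calculation of the kind described can be sketched with the standard linear-sorption retardation relation. All parameter values and distributions below are hypothetical illustrations, not the report's site data:

```python
import math
import random
import statistics

def retardation(rho_b, theta, kd):
    """Linear-sorption retardation factor: R = 1 + (rho_b / theta) * Kd."""
    return 1.0 + (rho_b / theta) * kd

def travel_time(length_m, velocity_m_per_yr, r):
    """Advective travel time (years) of a sorbing solute through the till."""
    return length_m * r / velocity_m_per_yr

random.seed(42)
rho_b, theta = 2000.0, 0.25          # bulk density (kg/m3) and porosity, assumed
times = []
for _ in range(5000):                # Monte Carlo over the sorption coefficient Kd
    kd = random.lognormvariate(math.log(1e-3), 0.7)   # m3/kg, hypothetical spread
    times.append(travel_time(10.0, 1.0, retardation(rho_b, theta, kd)))

median_t = statistics.median(times)  # typical retarded travel time through the till
```

Because R > 1 for any positive Kd, every sampled travel time exceeds the unretarded 10-year advection time, which is the geochemical-barrier effect the report quantifies.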

  3. Entropic potential field formed for a linear-motor protein near a filament: Statistical-mechanical analyses using simple models

    Science.gov (United States)

    Amano, Ken-ichi; Yoshidome, Takashi; Iwaki, Mitsuhiro; Suzuki, Makoto; Kinoshita, Masahiro

    2010-07-01

    We report a new progress in elucidating the mechanism of the unidirectional movement of a linear-motor protein (e.g., myosin) along a filament (e.g., F-actin). The basic concept emphasized here is that a potential field is entropically formed for the protein on the filament immersed in solvent due to the effect of the translational displacement of solvent molecules. The entropic potential field is strongly dependent on geometric features of the protein and the filament, their overall shapes as well as details of the polyatomic structures. The features and the corresponding field are judiciously adjusted by the binding of adenosine triphosphate (ATP) to the protein, hydrolysis of ATP into adenosine diphosphate (ADP)+Pi, and release of Pi and ADP. As the first step, we propose the following physical picture: The potential field formed along the filament for the protein without the binding of ATP or ADP+Pi to it is largely different from that for the protein with the binding, and the directed movement is realized by repeated switches from one of the fields to the other. To illustrate the picture, we analyze the spatial distribution of the entropic potential between a large solute and a large body using the three-dimensional integral equation theory. The solute is modeled as a large hard sphere. Two model filaments are considered as the body: model 1 is a set of one-dimensionally connected large hard spheres and model 2 is a double helical structure formed by two sets of connected large hard spheres. The solute and the filament are immersed in small hard spheres forming the solvent. The major findings are as follows. The solute is strongly confined within a narrow space in contact with the filament. Within the space there are locations with sharply deep local potential minima along the filament, and the distance between two adjacent locations is equal to the diameter of the large spheres constituting the filament. 
The potential minima form a ringlike domain in model 1

  4. The generic MESSy submodel TENDENCY (v1.0 for process-based analyses in Earth system models

    Directory of Open Access Journals (Sweden)

    R. Eichinger

    2014-07-01

    Full Text Available The tendencies of prognostic variables in Earth system models are usually only accessible, e.g. for output, as a sum over all physical, dynamical and chemical processes at the end of one time integration step. Information about the contribution of individual processes to the total tendency is lost, if no special precautions are implemented. The knowledge on individual contributions, however, can be of importance to track down specific mechanisms in the model system. We present the new MESSy (Modular Earth Submodel System) infrastructure submodel TENDENCY and use it exemplarily within the EMAC (ECHAM/MESSy Atmospheric Chemistry) model to trace process-based tendencies of prognostic variables. The main idea is the outsourcing of the tendency accounting for the state variables from the process operators (submodels) to the TENDENCY submodel itself. In this way, a record of the tendencies of all process-prognostic variable pairs can be stored. The selection of these pairs can be specified by the user, tailor-made for the desired application, in order to minimise memory requirements. Moreover, a standard interface allows the access to the individual process tendencies by other submodels, e.g. for on-line diagnostics or for additional parameterisations, which depend on individual process tendencies. An optional closure test assures the correct treatment of tendency accounting in all submodels and thus serves to reduce the model's susceptibility to errors. TENDENCY is independent of the time integration scheme and therefore the concept is applicable to other model systems as well. Test simulations with TENDENCY show an increase of computing time for the EMAC model (in a setup without atmospheric chemistry) of 1.8 ± 1% due to the additional subroutine calls when using TENDENCY. Exemplary results reveal the dissolving mechanisms of the stratospheric tape recorder signal in height over time. The separation of the tendency of the specific humidity into the respective
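
The tendency-accounting idea, including the optional closure test, can be sketched in a few lines. This is a toy illustration of the concept, not MESSy/TENDENCY code; the process names and numbers are invented:

```python
class TendencyTracker:
    """Record per-process tendencies of one prognostic variable and check
    closure: the recorded tendencies must sum to the total change applied."""

    def __init__(self):
        self.records = {}

    def add(self, process, tendency):
        self.records[process] = self.records.get(process, 0.0) + tendency

    def total(self):
        return sum(self.records.values())

    def closure_ok(self, applied_total, tol=1e-12):
        return abs(self.total() - applied_total) <= tol

# One invented integration step for specific humidity q (kg/kg), dt in seconds.
tracker = TendencyTracker()
q, dt = 8.0e-3, 600.0
for process, dq_dt in [("advection", 1.0e-8),
                       ("convection", -4.0e-9),
                       ("cloud_microphysics", -2.0e-9)]:
    tracker.add(process, dq_dt)   # bookkeeping outsourced from the operator
    q += dq_dt * dt               # the operator itself updates the state

applied = 1.0e-8 - 4.0e-9 - 2.0e-9
```

The closure test compares the sum of recorded per-process tendencies against the total change actually applied to the state, which is how inconsistent bookkeeping in a submodel would be caught.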

  5. Strategic Marketing for Indonesian Plywood Industry: An Analyse by using Porter Five Forces Model and Generic Strategy Framework

    OpenAIRE

    Makkarennu; Nakayasu, A.; Osozawa, K.; Ichikawa, M.

    2014-01-01

    The target for a marketing strategy is to find a way of achieving a sustainable competitive advantage over the other competing products and firms in a market. Good strategy serves as a road map for effective action. Porter's five forces model and three generic strategies were used to evaluate the structure and the strategy for positioning of the plywood industry in South Sulawesi, Indonesia. Qualitative research was carried out by using the in-depth interview method. Having expressed either agree...

  6. Defining the Transfer Functions of the PCAD Model in North Atlantic Right Whales (Eubalaena glacialis) - Retrospective Analyses of Existing Data

    Science.gov (United States)

    2012-09-30

    ... (health assessments), Peter Corkeron, National Marine Fisheries Service (statistics), Kathleen Hunt (endocrinology) ... each adult female right whale transitioning between three possible states – Pregnancy, Lactation and Resting. Only female whales with at least 20 ...

  7. HCV kinetic and modeling analyses indicate similar time to cure among sofosbuvir combination regimens with daclatasvir, simeprevir or ledipasvir.

    Science.gov (United States)

    Dahari, Harel; Canini, Laetitia; Graw, Frederik; Uprichard, Susan L; Araújo, Evaldo S A; Penaranda, Guillaume; Coquet, Emilie; Chiche, Laurent; Riso, Aurelie; Renou, Christophe; Bourliere, Marc; Cotler, Scott J; Halfon, Philippe

    2016-06-01

    Recent clinical trials of direct-acting-antiviral agents (DAAs) against hepatitis C virus (HCV) achieved >90% sustained virological response (SVR) rates, suggesting that cure often took place before the end of treatment (EOT). We sought to evaluate retrospectively whether early response kinetics can provide the basis to individualize therapy to achieve optimal results while reducing duration and cost. 58 chronic HCV patients were treated with 12-week sofosbuvir+simeprevir (n=19), sofosbuvir+daclatasvir (n=19), or sofosbuvir+ledipasvir in three French referral centers. HCV was measured at baseline, day 2, every other week, EOT and 12 weeks post EOT. Mathematical modeling was used to predict the time to cure, i.e., <1 virus copy in the entire extracellular body fluid. All but one patient who relapsed achieved SVR. Mean age was 60±11 years, 53% were male, 86% HCV genotype-1, 9% HIV coinfected, 43% had advanced fibrosis (F3), and 57% had cirrhosis. At weeks 2, 4 and 6, respectively, 48%, 88% and 100% of patients had HCV <15 IU/ml, with 27%, 74% and 91% of observations having target not detected. Modeling predicted that 23 (43%), 16 (30%), 7 (13%), 5 (9%) and 3 (5%) subjects would reach cure within 6, 8, 10, 12 and 13 weeks of therapy, respectively. The modeling suggested that the patient who relapsed would have benefitted from an additional week of sofosbuvir+ledipasvir. Adjusting duration of treatment according to the modeling predicts reduced medication costs of 43-45% and 17-30% in subjects who had HCV <15 IU/ml at weeks 2 and 4, respectively. The use of early viral kinetic analysis has the potential to individualize duration of DAA therapy with a projected average cost saving of 16-20% per 100-treated persons. Copyright © 2016 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
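
The time-to-cure calculation can be illustrated with the standard biphasic viral-decline model used in HCV kinetics. The patient parameters and the 15 L extracellular fluid volume below are hypothetical stand-ins, not the study's fitted values; only the <1-virion cure criterion comes from the abstract:

```python
import math

def viral_load(t_days, v0, a, lam1, lam2):
    """Biphasic HCV decline under therapy:
    V(t) = V0 * (A*exp(-lam1*t) + (1-A)*exp(-lam2*t)), in IU/ml."""
    return v0 * (a * math.exp(-lam1 * t_days) + (1.0 - a) * math.exp(-lam2 * t_days))

def time_to_cure(v0, a, lam1, lam2, fluid_ml=15000.0, step=0.1):
    """First time (days) at which fewer than one virion remains in the
    whole extracellular fluid volume (assumed here to be 15 litres)."""
    t = 0.0
    while viral_load(t, v0, a, lam1, lam2) * fluid_ml >= 1.0:
        t += step
    return t

# Hypothetical patient: fast first-phase decline, slower second phase.
t_cure = time_to_cure(v0=1.0e6, a=0.99, lam1=5.0, lam2=0.3)
```

With these invented slopes the predicted cure falls well before the 12-week EOT, which is the kind of result that motivates shortening therapy for rapid responders.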

  8. Modelling software failures of digital I and C in probabilistic safety analyses based on the TELEPERM® XS operating experience

    International Nuclear Information System (INIS)

    Jockenhoevel-Barttfeld, Mariana; Taurines Andre; Baeckstroem, Ola; Holmberg, Jan-Erik; Porthin, Markus; Tyrvaeinen, Tero

    2015-01-01

    Digital instrumentation and control (I and C) systems appear as upgrades in existing nuclear power plants (NPPs) and in new plant designs. In order to assess the impact of digital system failures, quantifiable reliability models are needed, along with data for digital systems, that are compatible with existing probabilistic safety assessments (PSA). The paper focuses on the modelling of software failures of digital I and C systems in probabilistic assessments. An analysis of software faults, failures and effects is presented to derive relevant failure modes of system and application software for the PSA. The estimations of software failure probabilities are based on an analysis of the operating experience of TELEPERM® XS (TXS). For the assessment of application software failures, the analysis combines the use of the TXS operating experience at an application function level with conservative engineering judgments. Failure probabilities to actuate on demand and of spurious actuation of a typical reactor protection application are estimated. Moreover, the paper gives guidelines for the modelling of software failures in the PSA. The strategy presented in this paper is generic and can be applied to different software platforms and their applications.

  9. Bridging Research and Policy in Energy Transition. Contributing to shape energy and climate policies through economic modelling and analyses

    International Nuclear Information System (INIS)

    Paugam, Anne; Giraud, Gael; Thauvin, Eric

    2015-11-01

    The growth model of the 20th century relied heavily on the exploitation of fossil energy and natural resources extracted at low cost. Yet, the depletion of these resources, the upward trend of their prices over the long term and the consequences of their use for the environment and climate are now challenging the sustainability of this model. The notion of energy transition is directed at rethinking the use of energy resources and natural capital to reach an economic growth that mitigates negative environmental effects, without sacrificing the well-being of populations. Turning this idea into action is a challenging task. AFD has designed and funded research and technical cooperation projects in order to inform decisions on the short-term cost and long-term impact of measures designed to accelerate the transition to low-carbon energy regimes. Using tools for empirical economic analysis (particularly 'economy-energy' models), these projects have been carried out in several intervention settings, including South Africa, China and Mexico, which are discussed in this paper.

  10. A finite-volume model of a parabolic trough photovoltaic/thermal collector: Energetic and exergetic analyses

    International Nuclear Information System (INIS)

    Calise, Francesco; Palombo, Adolfo; Vanoli, Laura

    2012-01-01

    This paper presents a detailed finite-volume model of a concentrating photovoltaic/thermal (PVT) solar collector. The PVT solar collector consists of a parabolic trough concentrator and a linear triangular receiver. The bottom surfaces of the triangular receiver are equipped with triple-junction cells whereas the top surface is covered by an absorbing surface. The cooling fluid (water) flows inside a channel along the longitudinal direction of the PVT collector. The system was discretized along its axis and, for each slice of the discretized computational domain, mass and energy balances were considered. The model allows one to evaluate both thermodynamic and electrical parameters along the axis of the PVT collector. Then, for each slice of the computational domain, exergy balances were also considered in order to evaluate the corresponding exergy destruction rate and exergetic efficiency. Therefore, the model also calculates the magnitude of the irreversibilities inside the collector and it allows one to detect where these irreversibilities occur. A sensitivity analysis is also performed with the aim of evaluating the effect of the variation of the main design/environmental parameters on the energetic and exergetic performance of the PVT collector. -- Highlights: ► The paper investigates an innovative concentrating photovoltaic thermal solar collector. ► The collector is equipped with triple-junction photovoltaic layers. ► A local exergetic analysis is performed in order to detect sources of irreversibilities. ► Irreversibilities are mainly due to the heat transfer between sun and PVT collector.
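
The slice-by-slice marching scheme described above can be sketched for the fluid side alone. A single energy balance per slice is assumed; the flow rate, heat input and slice count are illustrative round numbers, not the paper's values:

```python
def pvt_outlet_temperature(t_in, m_dot, cp, q_abs_per_slice, n_slices):
    """March along the collector axis: each finite-volume slice transfers
    its absorbed heat to the fluid, T_next = T + Q_slice / (m_dot * cp)."""
    t = t_in
    profile = [t]
    for _ in range(n_slices):
        t += q_abs_per_slice / (m_dot * cp)
        profile.append(t)           # axial fluid temperature distribution
    return t, profile

# Illustrative numbers: water at 0.02 kg/s, 50 slices, 40 W absorbed per slice.
t_out, profile = pvt_outlet_temperature(t_in=25.0, m_dot=0.02, cp=4186.0,
                                        q_abs_per_slice=40.0, n_slices=50)
```

The axial profile is what makes per-slice exergy balances possible in the full model: each slice's local temperature fixes its local exergy destruction.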

  11. Respiratory system model for quasistatic pulmonary pressure-volume (P-V) curve: inflation-deflation loop analyses.

    Science.gov (United States)

    Amini, R; Narusawa, U

    2008-06-01

    A respiratory system model (RSM) is developed for the deflation process of a quasistatic pressure-volume (P-V) curve, following the model for the inflation process reported earlier. In the RSM of both the inflation and the deflation limb, a respiratory system consists of a large population of basic alveolar elements, each consisting of a piston-spring-cylinder subsystem. A normal distribution of the basic elements is derived from a Boltzmann statistical model with the alveolar closing (opening) pressure as the distribution parameter for the deflation (inflation) process. An error minimization by the method of least squares applied to existing P-V loop data from two different data sources confirms that a simultaneous inflation-deflation analysis is required for an accurate determination of RSM parameters. Commonly used terms such as lower inflection point, upper inflection point, and compliance are examined based on the P-V equations, on the distribution function, as well as on the geometric and physical properties of the basic alveolar element.
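
The core of such a model, a normal distribution of opening pressures generating a sigmoidal P-V relation, can be sketched as follows. The volume limits and distribution parameters are invented for illustration and are not the fitted RSM parameters:

```python
import math

def fraction_open(p, mu, sigma):
    """Fraction of alveolar elements open at pressure p when the opening
    pressures are normally distributed with mean mu and s.d. sigma."""
    return 0.5 * (1.0 + math.erf((p - mu) / (sigma * math.sqrt(2.0))))

def lung_volume(p, v_min, v_max, mu, sigma):
    """Sigmoidal quasistatic P-V relation: recruited volume scales with
    the fraction of open elements."""
    return v_min + (v_max - v_min) * fraction_open(p, mu, sigma)

# At p = mu exactly half of the elements are open, giving the mid-volume.
v_mid = lung_volume(15.0, v_min=0.2, v_max=1.0, mu=15.0, sigma=4.0)
```

The lower and upper inflection points of the clinical P-V curve then correspond to the shoulders of this sigmoid, i.e. to the tails of the underlying pressure distribution.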

  12. Model analyses of atmospheric mercury: present air quality and effects of transpacific transport on the United States

    Science.gov (United States)

    Lei, H.; Liang, X.-Z.; Wuebbles, D. J.; Tao, Z.

    2013-11-01

    Atmospheric mercury is a toxic air and water pollutant that is of significant concern because of its effects on human health and ecosystems. A mechanistic representation of the atmospheric mercury cycle is developed for the state-of-the-art global climate-chemistry model, CAM-Chem (Community Atmospheric Model with Chemistry). The model simulates the emission, transport, transformation and deposition of atmospheric mercury (Hg) in three forms: elemental mercury (Hg(0)), reactive mercury (Hg(II)), and particulate mercury (PHg). Emissions of mercury include those from human, land, ocean, biomass burning and volcano related sources. Land emissions are calculated based on surface solar radiation flux and skin temperature. A simplified air-sea mercury exchange scheme is used to calculate emissions from the oceans. The chemistry mechanism includes the oxidation of Hg(0) in the gaseous phase by ozone with temperature dependence, OH, H2O2 and chlorine. Aqueous chemistry includes both oxidation and reduction of Hg(0). Transport and deposition of mercury species are calculated by adapting the original formulations in CAM-Chem. The CAM-Chem model with mercury is driven by present meteorology to simulate present mercury air quality during the 1999-2001 period. The resulting surface concentrations of total gaseous mercury (TGM) are then compared with observations from worldwide sites. Simulated wet deposition of mercury over the continental United States is compared to observations from 26 Mercury Deposition Network stations to test the wet deposition simulations. The evaluations of gaseous concentrations and wet deposition confirm a strong capability of the CAM-Chem mercury mechanism to simulate the atmospheric mercury cycle. The general reproduction of global TGM concentrations and the overestimation over South Africa indicate that model simulations of TGM are strongly affected by emissions.
The comparison to wet deposition indicates that wet deposition patterns

  13. Using hierarchical linear models to test differences in Swedish results from OECD’s PISA 2003: Integrated and subject-specific science education

    Directory of Open Access Journals (Sweden)

    Maria Åström

    2012-06-01

    Full Text Available The possible effects of different organisations of the science curriculum in schools participating in PISA 2003 are tested with a two-level hierarchical linear model (HLM). The analysis is based on science results. Swedish schools are free to choose how they organise the science curriculum. They may choose to work subject-specifically (with Biology, Chemistry and Physics), integrated (with Science), or to mix these two. In this study, all three ways of organising science classes in compulsory school are present to some degree. None of the different ways of organising science education displayed statistically significantly better student results in scientific literacy as measured in PISA 2003. The HLM model used variables of gender, country of birth, home language, preschool attendance, an economic, social and cultural index, as well as the teaching organisation.
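
The motivation for a two-level HLM, namely that part of the score variance sits at the school level, can be illustrated by simulating nested data and computing the intraclass correlation. The effect sizes below are invented, not PISA values:

```python
import random
import statistics

random.seed(0)
# Simulate students nested in schools; all effect sizes are hypothetical.
school_effects = [random.gauss(0.0, 5.0) for _ in range(40)]   # level-2 spread
scores = {s: [500.0 + eff + random.gauss(0.0, 20.0) for _ in range(30)]
          for s, eff in enumerate(school_effects)}

school_means = [statistics.mean(v) for v in scores.values()]
within_var = statistics.mean(statistics.pvariance(v) for v in scores.values())
between_var = statistics.pvariance(school_means)

# Intraclass correlation: the share of variance at the school level, the
# quantity whose non-zero value motivates an HLM instead of plain OLS.
icc = between_var / (between_var + within_var)
```

A full HLM would additionally regress the level-1 scores on student covariates and the level-2 intercepts on school covariates such as curriculum organisation; this sketch only shows the variance partition that justifies the two-level structure.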

  14. Generelle aspekter ved mediereception? – Et bud på en multidimensional model for analyse af kvalitative receptionsinterviews

    Directory of Open Access Journals (Sweden)

    Kim Schrøder

    2003-09-01

    Full Text Available Are there general aspects of the reception of media products that it can be analytically fruitful to orient oneself by, and that should always be examined when analysing qualitative reception data – and perhaps also already when planning the empirical fieldwork of a reception study? This article proceeds from the premise that this question can be answered in the affirmative, and proposes what a multidimensional model for qualitative reception analysis might look like.

  15. Hydrogeochemical Processes of Groundwater Using Multivariate Statistical Analyses and Inverse Geochemical Modeling in Samrak Park of Nakdong River Basin, Korea

    Science.gov (United States)

    Chung, Sang Yong

    2015-04-01

    Multivariate statistical methods and inverse geochemical modelling were used to assess the hydrogeochemical processes of groundwater in the Nakdong River basin. The study area is located in a part of the Nakdong River basin, in the Busan Metropolitan City, Korea. Quaternary deposits form the Samrak Park region and are underlain by intrusive rocks of the Bulkuksa group and sedimentary rocks of the Yucheon group of the Cretaceous Period. The Samrak Park region contains two aquifer systems: an unconfined aquifer and a confined aquifer. The unconfined aquifer consists of upper sand, and the confined aquifer comprises clay, lower sand, gravel and weathered rock. Porosity and hydraulic conductivity of the area are 37 to 59% and 1.7 to 200 m/day, respectively. Depth of the wells ranges from 9 to 77 m. In Piper's trilinear diagram, the CaCl2 type was characteristic of the unconfined aquifer and the NaCl type was dominant in the confined aquifer. By hierarchical cluster analysis (HCA), Group 1 and Group 2 are composed entirely of unconfined-aquifer and confined-aquifer samples, respectively. In factor analysis (FA), Factor 1 is described by strong loadings of EC, Na, K, Ca, Mg, Cl, HCO3, SO4 and Si, and Factor 2 by strong loadings of pH and Al. Based on the Gibbs diagram, the unconfined and confined aquifer samples scatter discretely across the rock-dominance and evaporation areas. The principal hydrogeochemical processes occurring in the confined and unconfined aquifers are ion exchange due to freshening under natural recharge and water-rock interactions, followed by evaporation and dissolution. Saturation indices show that minerals such as Ca-montmorillonite, dolomite and calcite are oversaturated, whereas albite, gypsum and halite are undersaturated. Inverse geochemical modeling using the PHREEQC code demonstrated that relatively few phases were required to derive the differences in groundwater chemistry along the flow path in the area. It also suggested that dissolution of carbonate and ion exchange

  16. A model to analyse the flow of an incompressible Newtonian fluid through a rigid, homogeneous, isotropic and infinite porous medium

    International Nuclear Information System (INIS)

    Gama, R.M.S. da; Sampaio, R.

    1985-01-01

    The flow of an incompressible Newtonian fluid through a rigid, homogeneous, isotropic and infinite porous medium, with a given initial distribution of the fluid, is analyzed. A model is proposed that assumes the motion is caused by the concentration gradient, but does not consider friction between the porous medium and the fluid. A one-dimensional case is solved in which the mathematical problem reduces to the solution of a non-linear hyperbolic system of differential equations, subject to an initial condition given by a step function, the so-called 'Riemann problem'. (Author) [pt

  17. A Simple Object-Oriented and Open Source Model for Scientific and Policy Analyses of the Global Carbon Cycle-Hector

    Science.gov (United States)

    Hartin, C.; Bond-Lamberty, B. P.; Patel, P.; Link, R. P.

    2014-12-01

    Simple climate models play an integral role in the policy and scientific communities. They are used in climate mitigation scenarios within integrated assessment models, for complex climate model emulation, and in uncertainty analyses. Here we describe Hector, an open-source, object-oriented, simple global climate carbon-cycle model. The model runs essentially instantaneously while still representing the most critical global-scale earth system processes, e.g., carbon fluxes between the ocean and atmosphere, and respiration and primary production on land. Hector has three main carbon pools: atmosphere, land, and ocean. The terrestrial carbon cycle is represented by a simple design with respiration and primary production, accommodating arbitrary geographic divisions into, e.g., ecological biomes or political units. The ocean carbon cycle actively solves the inorganic carbon system in the surface ocean, directly calculating air-sea fluxes of carbon and ocean pH. Hector reproduces the large-scale global trends found in historical data of atmospheric [CO2] and surface temperature and simulates all four Representative Concentration Pathways. Hector's results compare well with current observations of critical climate variables, with MAGICC (a well-known simple climate model), and with model output from the Coupled Model Intercomparison Project Phase 5. Hector has the ability to be a key analytical tool across many scientific and policy communities due to its modern software architecture, open-source licensing, and object-oriented structure. In particular, Hector can be used to emulate larger complex models to help fill gaps in scenario coverage for future scenario processes.
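
A carbon box model of the kind Hector implements can be sketched with three pools and linear exchange fluxes. The rate constants and pool sizes below are illustrative round numbers, not Hector's calibrated parameterisation:

```python
def step_carbon(pools, emissions, dt=1.0):
    """One Euler step of a three-pool carbon cycle (PgC, PgC/yr):
    photosynthesis and respiration couple atmosphere and land, a linear
    exchange couples atmosphere and ocean, emissions enter the atmosphere."""
    atm, land, ocean = pools
    npp = 0.020 * atm          # primary production: atmosphere -> land
    resp = 0.018 * land        # respiration: land -> atmosphere
    air_to_sea = 0.015 * atm
    sea_to_air = 0.014 * ocean
    atm += dt * (emissions + resp + sea_to_air - npp - air_to_sea)
    land += dt * (npp - resp)
    ocean += dt * (air_to_sea - sea_to_air)
    return atm, land, ocean

pools = (600.0, 550.0, 900.0)      # illustrative pool sizes in PgC
total0, emitted = sum(pools), 0.0
for _ in range(100):               # a century at a constant 8 PgC/yr
    pools = step_carbon(pools, emissions=8.0)
    emitted += 8.0
```

Because every internal flux appears once as a source and once as a sink, total carbon grows exactly by the cumulative emissions, the basic conservation property any such box model must satisfy.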

  18. A methodology for eliciting, representing, and analysing stakeholder knowledge for decision making on complex socio-ecological systems: from cognitive maps to agent-based models.

    Science.gov (United States)

    Elsawah, Sondoss; Guillaume, Joseph H A; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J

    2015-03-15

    This paper aims to contribute to developing better ways of incorporating essential human elements in decision-making processes for the modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating the perceptions of stakeholders (qualitative) into formal simulation models (quantitative), with the ultimate goal of improving understanding and communication about decision making in complex socio-ecological systems. The methodology integrates cognitive mapping and agent-based modelling. It cascades through a sequence of qualitative/soft and numerical methods comprising: (1) interviews to elicit mental models; (2) cognitive maps to represent and analyse individual and group mental models; (3) time-sequence diagrams to chronologically structure the decision-making process; (4) an all-encompassing conceptual model of decision making; and (5) a computational (in this case agent-based) model. We apply the proposed methodology (labelled ICTAM) in a case study of viticulture irrigation in South Australia. Finally, we use a strengths-weaknesses-opportunities-threats (SWOT) analysis to reflect on the methodology. Results show that the methodology leverages the use of cognitive mapping to capture the richness of decision making and mental models, and provides a combination of divergent and convergent analysis methods leading to the construction of an agent-based model. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. 4Cin: A computational pipeline for 3D genome modeling and virtual Hi-C analyses from 4C data.

    Science.gov (United States)

    Irastorza-Azcarate, Ibai; Acemel, Rafael D; Tena, Juan J; Maeso, Ignacio; Gómez-Skarmeta, José Luis; Devos, Damien P

    2018-03-01

    The use of 3C-based methods has revealed the importance of the 3D organization of the chromatin for key aspects of genome biology. However, the different caveats of the variants of 3C techniques have limited their scope and the range of scientific fields that could benefit from these approaches. To address these limitations, we present 4Cin, a method to generate 3D models and derive virtual Hi-C (vHi-C) heat maps of genomic loci based on 4C-seq or any kind of 4C-seq-like data, such as those derived from NG Capture-C. 3D genome organization is determined by integrative consideration of the spatial distances derived from as few as four 4C-seq experiments. The 3D models obtained from 4C-seq data, together with their associated vHi-C maps, allow the inference of all chromosomal contacts within a given genomic region, facilitating the identification of Topological Associating Domains (TAD) boundaries. Thus, 4Cin offers a much cheaper, accessible and versatile alternative to other available techniques while providing a comprehensive 3D topological profiling. By studying TAD modifications in genomic structural variants associated to disease phenotypes and performing cross-species evolutionary comparisons of 3D chromatin structures in a quantitative manner, we demonstrate the broad potential and novel range of applications of our method.
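
The idea of deriving a virtual Hi-C-like contact map from modelled 3D positions can be sketched as follows; this is a hedged illustration assuming a simple inverse power-law relation between distance and contact frequency, with an illustrative exponent, not 4Cin's calibrated conversion:

```python
# Convert pairwise 3D distances between modelled loci into a symmetric
# "virtual contact" matrix. Coordinates, the decay exponent, and the
# scaling are illustrative assumptions, not values from 4Cin.
import numpy as np

coords = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [2.0, 0.0, 0.0],
                   [4.0, 0.0, 0.0]])          # four loci along a model fibre

diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))           # pairwise Euclidean distances

alpha = 1.0                                   # assumed contact-decay exponent
with np.errstate(divide="ignore"):
    # Closer loci get higher contact values; the diagonal is set to zero.
    contacts = np.where(dist > 0, dist ** -alpha, 0.0)
```

The resulting matrix is symmetric by construction, which is the property downstream TAD-boundary detection relies on.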

  20. Whole Genome and Global Gene Expression Analyses of the Model Mushroom Flammulina velutipes Reveal a High Capacity for Lignocellulose Degradation

    Science.gov (United States)

    Park, Young-Jin; Baek, Jeong Hun; Lee, Seonwook; Kim, Changhoon; Rhee, Hwanseok; Kim, Hyungtae; Seo, Jeong-Sun; Park, Hae-Ran; Yoon, Dae-Eun; Nam, Jae-Young; Kim, Hong-Il; Kim, Jong-Guk; Yoon, Hyeokjun; Kang, Hee-Wan; Cho, Jae-Yong; Song, Eun-Sung; Sung, Gi-Ho; Yoo, Young-Bok; Lee, Chang-Soo; Lee, Byoung-Moo; Kong, Won-Sik

    2014-01-01

    Flammulina velutipes is a fungus with health and medicinal benefits that has been used for consumption and cultivation in East Asia. F. velutipes is also known to degrade lignocellulose and produce ethanol. The overlapping interests of mushroom production and wood bioconversion make F. velutipes an attractive new model for fungal wood related studies. Here, we present the complete sequence of the F. velutipes genome. This is the first sequenced genome for a commercially produced edible mushroom that also degrades wood. The 35.6-Mb genome contained 12,218 predicted protein-encoding genes and 287 tRNA genes assembled into 11 scaffolds corresponding with the 11 chromosomes of strain KACC42780. The 88.4-kb mitochondrial genome contained 35 genes. Well-developed wood degrading machinery with strong potential for lignin degradation (69 auxiliary activities, formerly FOLymes) and carbohydrate degradation (392 CAZymes), along with 58 alcohol dehydrogenase genes were highly expressed in the mycelium, demonstrating the potential application of this organism to bioethanol production. Thus, the newly uncovered wood degrading capacity and sequential nature of this process in F. velutipes, offer interesting possibilities for more detailed studies on either lignin or (hemi-) cellulose degradation in complex wood substrates. The mutual interest in wood degradation by the mushroom industry and (ligno-)cellulose biomass related industries further increase the significance of F. velutipes as a new model. PMID:24714189

  1. Comparative analyses of hydrological responses of two adjacent watersheds to climate variability and change using the SWAT model

    Directory of Open Access Journals (Sweden)

    S. Lee

    2018-01-01

    Full Text Available Water quality problems in the Chesapeake Bay Watershed (CBW) are expected to be exacerbated by climate variability and change. However, climate impacts on agricultural lands and resultant nutrient loads into surface water resources are largely unknown. This study evaluated the impacts of climate variability and change on two adjacent watersheds in the Coastal Plain of the CBW, using the Soil and Water Assessment Tool (SWAT) model. We prepared six climate sensitivity scenarios to assess the individual impacts of variations in CO2 concentration (590 and 850 ppm), precipitation increase (11 and 21 %), and temperature increase (2.9 and 5.0 °C), based on regional general circulation model (GCM) projections. Further, we considered the ensemble of five GCM projections (2085–2098) under the Representative Concentration Pathway (RCP) 8.5 scenario to evaluate simultaneous changes in CO2, precipitation, and temperature. Using SWAT model simulations from 2001 to 2014 as a baseline scenario, predicted hydrologic outputs (water and nitrate budgets) and crop growth were analyzed. Compared to the baseline scenario, a precipitation increase of 21 % and elevated CO2 concentration of 850 ppm significantly increased streamflow and nitrate loads by 50 and 52 %, respectively, while a temperature increase of 5.0 °C reduced streamflow and nitrate loads by 12 and 13 %, respectively. Crop biomass increased with elevated CO2 concentrations due to enhanced radiation- and water-use efficiency, while it decreased with precipitation and temperature increases. Over the GCM ensemble mean, annual streamflow and nitrate loads showed an increase of ∼ 70 % relative to the baseline scenario, due to elevated CO2 concentrations and precipitation increase. Different hydrological responses to climate change were observed from the two watersheds, due to contrasting land use and soil characteristics. The watershed with a larger percent of croplands demonstrated a

  2. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in terrestrial ecosystems (ACONITE Version 1)

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-04-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. However, there is little understanding of the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants. Here we describe a new, simple model of ecosystem C-N cycling and interactions (ACONITE) that builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C : N, N fixation, and plant C use efficiency) using emergent constraints provided by marginal returns on investment for C and/or N allocation. We simulated and evaluated steady-state ecosystem stocks and fluxes in three different forest ecosystem types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C : N differed among the three ecosystem types (temperate deciduous plant traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. Simulated N fixation at steady-state, calculated based on relative demand for N and the marginal return on C investment to acquire N, was an order of magnitude higher in the tropical forest than in the temperate forests, consistent with observations. A sensitivity analysis revealed that the parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C : N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C : N, while a more recently reported non-linear relationship performed better. A parameter governing how photosynthesis scales with day length had the largest influence on total vegetation C, GPP, and NPP

  3. Comparative analyses of hydrological responses of two adjacent watersheds to climate variability and change using the SWAT model

    Science.gov (United States)

    Lee, Sangchul; Yeo, In-Young; Sadeghi, Ali M.; McCarty, Gregory W.; Hively, Wells D.; Lang, Megan W.; Sharifi, Amir

    2018-01-01

    Water quality problems in the Chesapeake Bay Watershed (CBW) are expected to be exacerbated by climate variability and change. However, climate impacts on agricultural lands and resultant nutrient loads into surface water resources are largely unknown. This study evaluated the impacts of climate variability and change on two adjacent watersheds in the Coastal Plain of the CBW, using the Soil and Water Assessment Tool (SWAT) model. We prepared six climate sensitivity scenarios to assess the individual impacts of variations in CO2 concentration (590 and 850 ppm), precipitation increase (11 and 21 %), and temperature increase (2.9 and 5.0 °C), based on regional general circulation model (GCM) projections. Further, we considered the ensemble of five GCM projections (2085-2098) under the Representative Concentration Pathway (RCP) 8.5 scenario to evaluate simultaneous changes in CO2, precipitation, and temperature. Using SWAT model simulations from 2001 to 2014 as a baseline scenario, predicted hydrologic outputs (water and nitrate budgets) and crop growth were analyzed. Compared to the baseline scenario, a precipitation increase of 21 % and elevated CO2 concentration of 850 ppm significantly increased streamflow and nitrate loads by 50 and 52 %, respectively, while a temperature increase of 5.0 °C reduced streamflow and nitrate loads by 12 and 13 %, respectively. Crop biomass increased with elevated CO2 concentrations due to enhanced radiation- and water-use efficiency, while it decreased with precipitation and temperature increases. Over the GCM ensemble mean, annual streamflow and nitrate loads showed an increase of ˜ 70 % relative to the baseline scenario, due to elevated CO2 concentrations and precipitation increase. Different hydrological responses to climate change were observed from the two watersheds, due to contrasting land use and soil characteristics. The watershed with a larger percent of croplands demonstrated a greater increased rate of 5.2 kg N ha-1 in

  4. Improved Analyses and Forecasts of Snowpack, Runoff and Drought through Remote Sensing and Land Surface Modeling in Southeastern Europe

    Science.gov (United States)

    Matthews, D.; Brilly, M.; Gregoric, G.; Polajnar, J.; Kobold, M.; Zagar, M.; Knoblauch, H.; Staudinger, M.; Mecklenburg, S.; Lehning, M.; Schweizer, J.; Balint, G.; Cacic, I.; Houser, P.; Pozzi, W.

    2008-12-01

    European hydrometeorological services and research centers are faced with increasing challenges from extremes of weather and climate that require significant investments in new technology and better utilization of existing human and natural resources to provide improved forecasts. Major advances in remote sensing, observation networks, data assimilation, numerical modeling, and communications continue to improve our ability to disseminate information to decision-makers and stakeholders. This paper identifies gaps in current technologies, key research and decision-maker teams, and recommends means for moving forward through focused applied research and integration of results into decision support tools. This paper reports on the WaterNet - NASA Water Cycle Solutions Network contacts in Europe and summarizes progress in improving water cycle related decision-making using NASA research results. Products from the Hydrologic Sciences Branch, Goddard Space Flight Center, NASA, the Land Information System's (LIS) Land Surface Models (LSM), SPoRT, CREW, the European Space Agency (ESA), the Joint Research Centre's (JRC) natural hazards products, the Swiss Federal Institute for Snow and Avalanche Research (SLF), and others are discussed. They will be used in collaboration with the ESA and the European Commission to provide solutions for improved prediction of water supplies and stream flow, droughts and floods, and snow avalanches in the major river basins serviced by EARS, ZAMG, SLF, Vituki Consult, and other European forecast centers. This region of Europe includes the Alps and Carpathian Mountains and is an area of extreme topography with abrupt 2000 m mountains adjacent to the Adriatic Sea. These extremes result in the highest precipitation in Europe (> 5000 mm) in Montenegro and low precipitation of 300-400 mm at the mouth of the Danube during droughts. The current flood and drought forecasting systems have a spatial resolution of 9 km, which is currently being

  5. Analysing recent socioeconomic trends in coronary heart disease mortality in England, 2000-2007: a population modelling study.

    Directory of Open Access Journals (Sweden)

    Madhavi Bajekal

    Full Text Available Coronary heart disease (CHD) mortality in England fell by approximately 6% every year between 2000 and 2007. However, rates fell differentially between social groups, with inequalities actually widening. We sought to describe the extent to which this reduction in CHD mortality was attributable to changes in either levels of risk factors or treatment uptake, both across and within socioeconomic groups. A widely used and replicated epidemiological model was used to synthesise estimates stratified by age, gender, and area deprivation quintiles for the English population aged 25 and older between 2000 and 2007. Mortality rates fell, with approximately 38,000 fewer CHD deaths in 2007. The model explained about 86% (95% uncertainty interval: 65%-107%) of this mortality fall. Decreases in major cardiovascular risk factors contributed approximately 34% (21%-47%) to the overall decline in CHD mortality: ranging from about 44% (31%-61%) in the most deprived to 29% (16%-42%) in the most affluent quintile. The biggest contribution came from a substantial fall in systolic blood pressure in the population not on hypertension medication (29%; 18%-40%); more so in deprived (37%) than in affluent (25%) areas. Other risk factor contributions were relatively modest across all social groups: total cholesterol (6%), smoking (3%), and physical activity (2%). Furthermore, these benefits were partly negated by mortality increases attributable to rises in body mass index and diabetes (-9%; -17% to -3%), particularly in more deprived quintiles. Treatments accounted for approximately 52% (40%-70%) of the mortality decline, equitably distributed across all social groups. Lipid reduction (14%), chronic angina treatment (13%), and secondary prevention (11%) made the largest medical contributions. The model suggests that approximately half the recent CHD mortality fall in England was attributable to improved treatment uptake. This benefit occurred evenly across all social groups. However

  6. Comparative analyses of hydrological responses of two adjacent watersheds to climate variability and change using the SWAT model

    Science.gov (United States)

    Lee, Sangchul; Yeo, In-Young; Sadeghi, Ali M.; McCarty, Gregory W.; Hively, Wells; Lang, Megan W.; Sharifi, Amir

    2018-01-01

    Water quality problems in the Chesapeake Bay Watershed (CBW) are expected to be exacerbated by climate variability and change. However, climate impacts on agricultural lands and resultant nutrient loads into surface water resources are largely unknown. This study evaluated the impacts of climate variability and change on two adjacent watersheds in the Coastal Plain of the CBW, using the Soil and Water Assessment Tool (SWAT) model. We prepared six climate sensitivity scenarios to assess the individual impacts of variations in CO2 concentration (590 and 850 ppm), precipitation increase (11 and 21 %), and temperature increase (2.9 and 5.0 °C), based on regional general circulation model (GCM) projections. Further, we considered the ensemble of five GCM projections (2085–2098) under the Representative Concentration Pathway (RCP) 8.5 scenario to evaluate simultaneous changes in CO2, precipitation, and temperature. Using SWAT model simulations from 2001 to 2014 as a baseline scenario, predicted hydrologic outputs (water and nitrate budgets) and crop growth were analyzed. Compared to the baseline scenario, a precipitation increase of 21 % and elevated CO2 concentration of 850 ppm significantly increased streamflow and nitrate loads by 50 and 52 %, respectively, while a temperature increase of 5.0 °C reduced streamflow and nitrate loads by 12 and 13 %, respectively. Crop biomass increased with elevated CO2 concentrations due to enhanced radiation- and water-use efficiency, while it decreased with precipitation and temperature increases. Over the GCM ensemble mean, annual streamflow and nitrate loads showed an increase of ∼ 70 % relative to the baseline scenario, due to elevated CO2 concentrations and precipitation increase. Different hydrological responses to climate change were observed from the two watersheds, due to contrasting land use and soil characteristics. The watershed with a larger percent of croplands demonstrated a greater

  7. The mental health care model in Brazil: analyses of the funding, governance processes, and mechanisms of assessment.

    Science.gov (United States)

    Trapé, Thiago Lavras; Campos, Rosana Onocko

    2017-03-23

    This study aims to analyze the current status of the mental health care model of the Brazilian Unified Health System, according to its funding, governance processes, and mechanisms of assessment. We have carried out a documentary analysis of the ordinances, technical reports, conference reports, normative resolutions, and decrees from 2009 to 2014. This is a time of consolidation of the psychosocial model, with expansion of the health care network and inversion of the funding for community services with a strong emphasis on the area of crack cocaine and other drugs. Mental health is an underfunded area within the chronically underfunded Brazilian Unified Health System. The governance model constrains the progress of essential services, which creates the need for the incorporation of a process of regionalization of the management. The mechanisms of assessment are not incorporated into the health policy in the bureaucratic field. There is a need to expand the global funding of the area of health, specifically mental health, which has been shown to be a successful policy. The current focus of the policy seems to be archaic in relation to the precepts of the psychosocial model. Mechanisms of assessment need to be expanded.

  8. Studies and analyses of the space shuttle main engine. Failure information propagation model data base and software

    Science.gov (United States)

    Tischer, A. E.

    1987-01-01

    The failure information propagation model (FIPM) data base was developed to store and manipulate the large amount of information anticipated for the various Space Shuttle Main Engine (SSME) FIPMs. The organization and structure of the FIPM data base is described, including a summary of the data fields and key attributes associated with each FIPM data file. The menu-driven software developed to facilitate and control the entry, modification, and listing of data base records is also discussed. The transfer of the FIPM data base and software to the NASA Marshall Space Flight Center is described. Complete listings of all of the data base definition commands and software procedures are included in the appendixes.

  9. Construct validation of health-relevant personality traits: interpersonal circumplex and five-factor model analyses of the Aggression Questionnaire.

    Science.gov (United States)

    Gallo, L C; Smith, T W

    1998-01-01

    The general literature on personality traits as risk factors for physical illness--as well as the specific literature on health consequences of anger, hostility, and aggressive behavior--often suffers from incomplete or inconsistent construct validation of personality measures. This study illustrates the utility of two conceptual tools in this regard--the five-factor model and the interpersonal circumplex. The similarities and differences among anger, hostility, verbal aggressiveness, and physical aggressiveness as measured by the Buss and Perry (1992) Aggression Questionnaire were identified. Results support the interpretation of anger and hostility as primarily reflecting neurotic hostility and antagonistic hostility to a lesser extent. In contrast, verbal and physical aggressiveness can be seen as primarily reflecting antagonistic hostility, and to a lesser extent neurotic hostility. Further, verbal aggressiveness was associated with hostile dominance, whereas hostility was associated with hostile submissiveness. These findings identify potentially important distinctions among these related constructs and illustrate the potential integrative value of standard validation procedures.

  10. Analysing the influence of FSP process parameters on IGC susceptibility of AA5083 using Sugeno – Fuzzy model

    Science.gov (United States)

    Jayakarthick, C.; Povendhan, A. P.; Vaira Vignesh, R.; Padmanaban, R.

    2018-02-01

    Aluminium alloy AA5083 was friction stir processed to improve the intergranular corrosion (IGC) resistance. FSP trials were performed by varying the process parameters as per Taguchi’s L18 orthogonal array. IGC resistance of the friction stir processed specimens was determined by immersing them in concentrated nitric acid and measuring the mass loss per unit area. Results indicate that dispersion and partial dissolution of the secondary phase increased the IGC resistance of the friction stir processed specimens. A Sugeno fuzzy model was developed to study the effect of FSP process parameters on the IGC susceptibility of friction stir processed specimens. Tool rotation speed, tool traverse speed and shoulder diameter have a significant effect on the IGC susceptibility of the friction stir processed specimens.
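
The inference step of a zero-order Sugeno (Takagi-Sugeno-Kang) fuzzy model can be sketched as below; the membership functions, rule consequents, and rotation-speed values are illustrative assumptions, not the fitted AA5083 model from the study:

```python
# Zero-order Sugeno fuzzy inference with one input (tool rotation speed)
# and two rules. Rule outputs are constants; the crisp prediction is the
# firing-strength-weighted average. All numbers are illustrative.

def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno_mass_loss(rpm):
    """Predict mass loss (illustrative units) from rotation speed (rpm)."""
    # Firing strengths of the 'low speed' and 'high speed' rules.
    w_low = tri(rpm, 600.0, 900.0, 1200.0)
    w_high = tri(rpm, 900.0, 1200.0, 1500.0)
    # Constant (zero-order) consequents for each rule.
    z_low, z_high = 5.0, 2.0
    # Weighted-average defuzzification.
    return (w_low * z_low + w_high * z_high) / (w_low + w_high)

pred = sugeno_mass_loss(1050.0)   # halfway between the two rule peaks
```

At 1050 rpm both rules fire with strength 0.5, so the prediction is the midpoint of the two consequents.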

  11. Study of interactions between metal ions and protein model compounds by energy decomposition analyses and the AMOEBA force field

    Science.gov (United States)

    Jing, Zhifeng; Qi, Rui; Liu, Chengwen; Ren, Pengyu

    2017-10-01

    The interactions between metal ions and proteins are ubiquitous in biology. The selective binding of metal ions has a variety of regulatory functions. Therefore, there is a need to understand the mechanism of protein-ion binding. The interactions involving metal ions are complicated in nature, where short-range charge-penetration, charge transfer, polarization, and many-body effects all contribute significantly, and a quantitative description of all these interactions is lacking. In addition, it is unclear how well current polarizable force fields can capture these energy terms and whether these polarization models are good enough to describe the many-body effects. In this work, two energy decomposition methods, absolutely localized molecular orbitals and symmetry-adapted perturbation theory, were utilized to study the interactions between Mg2+/Ca2+ and model compounds for amino acids. Comparison of individual interaction components revealed that while there are significant charge-penetration and charge-transfer effects in Ca complexes, these effects can be captured by the van der Waals (vdW) term in the AMOEBA force field. The electrostatic interaction in Mg complexes is well described by AMOEBA since the charge penetration is small, but the distance-dependent polarization energy is problematic. Many-body effects were shown to be important for protein-ion binding. In the absence of many-body effects, highly charged binding pockets will be over-stabilized, and the pockets will always favor Mg and thus lose selectivity. Therefore, many-body effects must be incorporated in the force field in order to predict the structure and energetics of metalloproteins. Also, the many-body effects of charge transfer in Ca complexes were found to be non-negligible. The absorption of charge-transfer energy into the additive vdW term was a main source of error for the AMOEBA many-body interaction energies.

  12. Expression Analyses of ABCDE Model Genes and Changes in Levels of Endogenous Hormones in Chinese Cabbage Exhibiting Petal-Loss

    Directory of Open Access Journals (Sweden)

    Chuan MENG

    2017-07-01

    Full Text Available Abnormal formation of floral organs affects plant reproduction and can directly interfere with the progress of breeding programs. Using PCR amplification, the ABCDE model genes BraAP2, BraAP3, BraPI, BraAG, BraSHP, and BraSEP were isolated from Chinese cabbage (Brassica rapa L. ssp. pekinensis). We examined six development stages of floral buds collected from Chinese cabbage and compared a line demonstrating normal flowering (A-8) with two mutant lines whose plants exhibited petal loss (A-16 and A-17). The expression of the ABCDE model genes was analyzed by qRT-PCR. Compared with flower buds of normal plants, in petal-loss plants the expression of the A-class gene BraAP2 was significantly decreased during the first to fourth stages, the C-class gene BraAG was significantly decreased during the first to fifth stages, and the D-class gene BraSHP was significantly decreased during the first to third stages. Furthermore, expression of the B-class genes BraAP3 and BraPI and the E-class gene BraSEP was significantly decreased during all six stages in petal-loss plants compared with normal plants. Enzyme-linked immunosorbent assays detected nine endogenous phytohormones during all stages examined here. Except for the second-stage and third-stage buds, levels of the auxin IAA and the cytokinin dhZR were always higher in the petal-loss plants than in the normal plants at corresponding time points. Meanwhile, concentrations of GA1+3 at the first, fourth, and fifth stages were higher in the petal-loss plants than in the normal plants. Our results provide a theoretical basis for future exploration of the molecular mechanism that determines petal loss and the effects that hormones have on such development in Chinese cabbage plants.
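
Relative expression from qRT-PCR is commonly computed with the 2^-ddCt method; the sketch below shows that calculation with made-up Ct values (the study's exact quantification scheme is not stated in this record):

```python
# The 2^-ddCt method for relative qRT-PCR expression: normalize the target
# gene to a reference gene in each sample (dCt), then compare the sample of
# interest against the control sample (ddCt). Ct values below are made up.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of target gene vs. control, normalized to a reference gene."""
    d_ct_sample = ct_target - ct_ref              # dCt in the sample of interest
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # dCt in the control sample
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: after normalization the target crosses threshold 2 cycles later
# in petal-loss buds than in normal buds, i.e. roughly 4-fold lower expression.
fold = relative_expression(ct_target=26.0, ct_ref=18.0,
                           ct_target_ctrl=24.0, ct_ref_ctrl=18.0)
```

Each extra threshold cycle corresponds to a two-fold difference in template, which is why the result is a power of two.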

  13. The Impacts of the Quality of the Environment and Neighbourhood Affluence on Housing Prices: A Three-Level Hierarchical Linear Model Approach

    OpenAIRE

    Chun-Chang Lee; Hui-Yu Lin

    2014-01-01

    This paper employs a three-level hierarchical linear model (HLM) to examine the impacts that the quality of the environment and neighbourhood affluence have on housing prices. The empirical results suggest that there are significant variations in the average housing price for different neighbourhoods and administrative districts. The impact of building characteristics on housing prices is subject to the moderating effects of the characteristic variables of different levels. The quality of the...
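
In generic notation, a standard three-level random-intercept specification (not necessarily the study's exact model) for house i in neighbourhood j within administrative district k can be written as:

```latex
\begin{aligned}
\text{Level 1 (house):} \quad & Y_{ijk} = \beta_{0jk} + \beta_{1jk} X_{ijk} + e_{ijk} \\
\text{Level 2 (neighbourhood):} \quad & \beta_{0jk} = \gamma_{00k} + \gamma_{01} W_{jk} + u_{0jk} \\
\text{Level 3 (district):} \quad & \gamma_{00k} = \delta_{000} + \delta_{001} Z_{k} + v_{00k}
\end{aligned}
```

where $Y_{ijk}$ is (log) housing price, $X$ collects building characteristics, $W$ neighbourhood variables such as affluence, $Z$ district-level variables, and $e$, $u$, $v$ are residuals at each level; cross-level moderation enters when level-2 or level-3 variables predict the slope $\beta_{1jk}$.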

  14. On the Structure of Personality Disorder Traits: Conjoint Analyses of the CAT-PD, PID-5, and NEO-PI-3 Trait Models

    Science.gov (United States)

    Wright, Aidan G.C.; Simms, Leonard J.

    2014-01-01

    The current study examines the relations among contemporary models of pathological and normal range personality traits. Specifically, we report on (a) conjoint exploratory factor analyses of the Computerized Adaptive Test of Personality Disorder static form (CAT-PD-SF) with the Personality Inventory for the DSM-5 (PID-5; Krueger et al., 2012) and NEO Personality Inventory-3 First Half (NEO-PI-3FH; McCrae & Costa, 2007), and (b) unfolding hierarchical analyses of the three measures in a large general psychiatric outpatient sample (N = 628; 64% Female). A five-factor solution provided conceptually coherent alignment among the CAT-PD-SF, PID-5, and NEO-PI-3FH scales. Hierarchical solutions suggested that higher-order factors bear strong resemblance to dimensions that emerge from structural models of psychopathology (e.g., Internalizing and Externalizing spectra). These results demonstrate that the CAT-PD-SF adheres to the consensual structure of broad trait domains at the five-factor level. Additionally, patterns of scale loadings further inform questions of structure and bipolarity of facet and domain level constructs. Finally, hierarchical analyses strengthen the argument for using broad dimensions that span normative and pathological functioning to scaffold a quantitatively derived phenotypic structure of psychopathology to orient future research on explanatory, etiological, and maintenance mechanisms. PMID:24588061
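
The mechanics of extracting a small number of common factors from item scores can be sketched with a principal-component approximation on synthetic data; this is a hedged illustration only, using made-up data and no rotation, not the CAT-PD-SF/PID-5/NEO-PI-3FH items or the extraction method of the study:

```python
# Simulate items driven by two latent traits, then recover loadings from
# the leading eigenvectors of the item correlation matrix. Synthetic data;
# real exploratory factor analysis would add communality estimation and
# factor rotation.
import numpy as np

rng = np.random.default_rng(0)
n, n_items, n_factors = 500, 12, 2
latent = rng.normal(size=(n, n_factors))                  # two latent traits
loadings_true = rng.normal(scale=0.8, size=(n_factors, n_items))
items = latent @ loadings_true + rng.normal(scale=0.5, size=(n, n_items))

# Standardize items, form the correlation matrix, take top eigenvectors.
z = (items - items.mean(0)) / items.std(0)
corr = (z.T @ z) / n
eigvals, eigvecs = np.linalg.eigh(corr)                   # ascending order
order = np.argsort(eigvals)[::-1]
top = order[:n_factors]
loadings_est = eigvecs[:, top] * np.sqrt(eigvals[top])    # (items x factors)

explained = eigvals[top].sum() / eigvals.sum()            # variance share
```

With a genuine two-factor structure, the two leading eigenvalues dominate, which is the pattern scree-style inspection looks for when choosing the number of factors.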

  15. On the structure of personality disorder traits: conjoint analyses of the CAT-PD, PID-5, and NEO-PI-3 trait models.

    Science.gov (United States)

    Wright, Aidan G C; Simms, Leonard J

    2014-01-01

    The current study examines the relations among contemporary models of pathological and normal range personality traits. Specifically, we report on (a) conjoint exploratory factor analyses of the Computerized Adaptive Test of Personality Disorder static form (CAT-PD-SF) with the Personality Inventory for the Diagnostic and Statistical Manual of Mental Disorders, fifth edition and NEO Personality Inventory-3 First Half, and (b) unfolding hierarchical analyses of the three measures in a large general psychiatric outpatient sample (n = 628; 64% Female). A five-factor solution provided conceptually coherent alignment among the CAT-PD-SF, PID-5, and NEO-PI-3FH scales. Hierarchical solutions suggested that higher-order factors bear strong resemblance to dimensions that emerge from structural models of psychopathology (e.g., Internalizing and Externalizing spectra). These results demonstrate that the CAT-PD-SF adheres to the consensual structure of broad trait domains at the five-factor level. Additionally, patterns of scale loadings further inform questions of structure and bipolarity of facet and domain level constructs. Finally, hierarchical analyses strengthen the argument for using broad dimensions that span normative and pathological functioning to scaffold a quantitatively derived phenotypic structure of psychopathology to orient future research on explanatory, etiological, and maintenance mechanisms.

  16. Cone beam computed tomography and periapical lesions: a systematic review analysing studies on diagnostic efficacy by a hierarchical model.

    Science.gov (United States)

    Kruse, C; Spin-Neto, R; Wenzel, A; Kirkevang, L-L

    2015-09-01

To evaluate, using a systematic review approach, the diagnostic efficacy of CBCT for periapical lesions, focusing on the evidence level of the included studies using a six-tiered hierarchical model. The MEDLINE bibliographic database was searched from 2000 to July 2013 for studies evaluating the potential of CBCT imaging in the diagnosis and planning of treatment for periapical lesions. The search was limited to English-language publications using the following combined terms: apical pathology or endodontic pathology or periapical or lesion or healing and CBCT or cone beam CT. The diagnostic efficacy level of the studies was assessed independently by four reviewers. The search identified 25 publications that qualitatively or quantitatively assessed the use of CBCT for the diagnosis of periapical lesions, in which the methodology/results comprised at least one of the following parameters: the methods, the imaging protocols or qualitative/quantitative information on how CBCT influenced the diagnosis and/or treatment plan. From the assessed studies, it can be concluded that although there is a tendency towards higher accuracy for periapical lesion detection using CBCT compared with two-dimensional imaging methods, no studies have been conducted that justify the standard use of CBCT in diagnosing periapical lesions. In addition, it should be considered that, at the present time, the efficacy of CBCT as the diagnostic imaging method for periapical lesions has been assessed merely at low diagnostic efficacy levels. © 2014 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  17. Molecular and Genetic Analyses of Collagen Type IV Mutant Mouse Models of Spontaneous Intracerebral Hemorrhage Identify Mechanisms for Stroke Prevention.

    Science.gov (United States)

    Jeanne, Marion; Jorgensen, Jeff; Gould, Douglas B

    2015-05-05

Collagen type IV alpha1 (COL4A1) and alpha2 (COL4A2) form heterotrimers critical for vascular basement membrane stability and function. Patients with COL4A1 or COL4A2 mutations suffer from diverse cerebrovascular diseases, including cerebral microbleeds, porencephaly, and fatal intracerebral hemorrhage (ICH). However, the pathogenic mechanisms remain unknown, and there is a lack of effective treatment. Using Col4a1 and Col4a2 mutant mouse models, we investigated the genetic complexity and cellular mechanisms underlying the disease. We found that Col4a1 mutations cause abnormal vascular development, which triggers small-vessel disease, recurrent hemorrhagic strokes, and age-related macroangiopathy. We showed that allelic heterogeneity, genetic context, and environmental factors such as intense exercise or anticoagulant medication modulated disease severity and contributed to phenotypic heterogeneity. We found that intracellular accumulation of mutant collagen in vascular endothelial cells and pericytes was a key triggering factor of ICH. Finally, we showed that treatment of mutant mice with a US Food and Drug Administration-approved chemical chaperone resulted in decreased intracellular accumulation of mutant collagen and a significant reduction in ICH severity. Our data are the first to show therapeutic prevention in vivo of ICH resulting from Col4a1 mutation and imply that a mechanism-based therapy promoting protein folding might also prevent ICH in patients with COL4A1 and COL4A2 mutations. © 2015 American Heart Association, Inc.

  18. EXPERIMENTAL DATA, THERMODYNAMIC MODELING AND SENSITIVITY ANALYSES FOR THE PURIFICATION STEPS OF ETHYL BIODIESEL FROM FODDER RADISH OIL PRODUCTION

    Directory of Open Access Journals (Sweden)

    R. C. Basso

Full Text Available Abstract The goals of this work were to present original liquid-liquid equilibrium data of the system containing glycerol + ethanol + ethyl biodiesel from fodder radish oil, including the individual distribution of each ethyl ester; to adjust binary interaction parameters of the NRTL model; to compare NRTL and UNIFAC-Dortmund in the LLE representation of the system containing glycerol; and to simulate different mixer/settler flowsheets for biodiesel purification, evaluating the water/biodiesel ratio used. In thermodynamic modeling, the deviations between experimental data and calculated values were 0.97% and 3.6%, respectively, using NRTL and UNIFAC-Dortmund. After transesterification, with 3 moles of excess ethanol, removal of this component to a content of 0.08 before an ideal settling step allows a glycerol content lower than 0.02% in the ester-rich phase. Removal of ethanol, glycerol and water from biodiesel can be performed with a countercurrent mixer/settler, using 0.27% of water in relation to the ester amount in the feed stream.
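The NRTL modeling described above can be sketched for a binary pair. The minimal Python implementation below encodes the standard binary NRTL activity-coefficient equations; the interaction parameters (tau12, tau21, alpha) are purely illustrative placeholders, not the values fitted in the study.

```python
import math

def nrtl_gamma(x1, tau12, tau21, alpha=0.3):
    """Binary NRTL activity coefficients (gamma1, gamma2).

    tau12, tau21: dimensionless binary interaction parameters
    alpha: non-randomness parameter (commonly 0.2-0.47)
    """
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Ideal-solution limit: zero interaction parameters give gamma = 1
g1, g2 = nrtl_gamma(0.5, 0.0, 0.0)

# Positive tau values give positive deviation from ideality (gamma > 1),
# the regime in which liquid-liquid splitting can occur
g1p, g2p = nrtl_gamma(0.5, 1.0, 1.0)
```

In an LLE fit such as the one reported, the tau parameters would be regressed so that calculated and experimental phase compositions agree to within the stated deviations.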

  19. Modeling analyses of two-phase flow instabilities for straight and helical tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Dong, Ruiting; Niu, Fenglei; Zhou, Yuan; Yu, Yu; Guo, Zhangpeng

    2016-01-01

Highlights: • Two-phase flow instabilities in straight and helical tubes were studied. • The effects of system pressure, mass flux, inlet subcooling on DWO were studied. • The simulation results are consistent with the experimental results. • The RELAP5 results are consistent with frequency domain method results. - Abstract: The effects of system pressure, mass flux and inlet subcooling on two-phase flow instability for a test section consisting of two heated straight channels or two helical channels are studied by means of RELAP5/MOD3.3 and multi-variable frequency domain control theory. The experimental data in two straight channels are used to verify the RELAP5 and multi-variable frequency domain control theory results. The thermal hydraulic behaviors and parametric effects are simulated and compared with the experimental data. The RELAP5 results show that the flow stability increases with the system pressure, mass velocity, and inlet subcooling at high subcoolings. The frequency domain theory presents the same results as those given by the time domain theory (RELAP5). The effects of system pressure, mass velocity and inlet subcooling are simulated to find the difference between the straight and the helical tube flows. The RELAP5 and the multi-variable frequency domain control theory are used in modeling and simulating density wave oscillation to study their advantages and disadvantages in straight and helical tubes.

  20. mRNA and microRNA transcriptomics analyses in a murine model of dystrophin loss and therapeutic restoration

    Directory of Open Access Journals (Sweden)

    Thomas C. Roberts

    2016-03-01

Full Text Available Duchenne muscular dystrophy (DMD) is a pediatric, X-linked, progressive muscle-wasting disorder caused by loss-of-function mutations affecting the gene encoding the dystrophin protein. While the primary genetic insult in DMD is well described, many details of the molecular and cellular pathologies that follow dystrophin loss are incompletely understood. To investigate gene expression in dystrophic muscle we have applied mRNA and microRNA (miRNA) microarray technology to the mdx mouse model of DMD. This study was designed to generate a complete description of gene expression changes associated with dystrophic pathology and the response to an experimental therapy which restores dystrophin protein function. These datasets have enabled (1) the determination of gene expression changes associated with dystrophic pathology, (2) identification of differentially expressed genes that are restored towards wild-type levels after therapeutic dystrophin rescue, (3) investigation of the correlation between mRNA and protein expression (determined by parallel mass spectrometry proteomics analysis), and (4) prediction of pathology-associated miRNA-target interactions. Here we describe in detail how the data were generated, including the basic analysis as contained in the manuscript published in Human Molecular Genetics with PMID 26385637. The data have been deposited in the Gene Expression Omnibus (GEO) with the accession number GSE64420.

  1. A tool to analyse gender mainstreaming and care-giving models in support plans for informal care: case studies in Andalusia and the United Kingdom.

    Science.gov (United States)

    García-Calvente, María Mar; Castaño-López, Esther; Mateo-Rodríguez, Inmaculada; Maroto-Navarro, Gracia; Ruiz-Cantero, María Teresa

    2007-12-01

    To present a tool to analyse the design of support plans for informal care from a gender perspective, using the plans in Andalusia and the United Kingdom as case studies. A tool was drawn up to analyse gender mainstreaming and care-giving models involved in the documents. In the gender mainstreaming aspect, a symbolic dimension (gender mainstreaming in the plan's theoretical framework and analysis of situation) and an operational dimension (gender mainstreaming in the plan's proposals and actions) were defined. Four care-giving models were analysed using the following categories: the plan's definition of carer, focal point of interest, objectives and acknowledgement or otherwise of conflict of interests. A qualitative discourse analysis methodology was used. The analysis tool used shows that the plans do not incorporate gender mainstreaming systematically, but there are interesting aspects from a gender perspective that are present at both a symbolic and an operational level. Both plans use a combination of care-giving models, but the model for superseding informal care is not included in either plan. The proposed tool proved useful for the examination of the gender perspective in the formulation of the plans selected for analysis. Both plans introduce measures to improve the quality of life of informal carers. However, gender mainstreaming also implies interventions that will change situations of sexual inequality and injustice that occur in informal care in the long term. Likewise, aspects of feminist theory must be considered in order to draw up plans and policies that are sensitive to informal care and the emancipation of women carers.

  2. Transport of carboxyl-functionalized carbon black nanoparticles in saturated porous media: Column experiments and model analyses.

    Science.gov (United States)

    Kang, Jin-Kyu; Yi, In-Geol; Park, Jeong-Ann; Kim, Song-Bae; Kim, Hyunjung; Han, Yosep; Kim, Pil-Je; Eom, Ig-Chun; Jo, Eunhye

    2015-01-01

The aim of this study was to investigate the transport behavior of carboxyl-functionalized carbon black nanoparticles (CBNPs) in porous media including quartz sand, iron oxide-coated sand (IOCS), and aluminum oxide-coated sand (AOCS). Two sets of column experiments were performed under saturated flow conditions for potassium chloride (KCl), a conservative tracer, and CBNPs. Breakthrough curves were analyzed to obtain mass recovery and one-dimensional transport model parameters. The first set of experiments was conducted to examine the effects of metal (Fe, Al) oxides and flow rate (0.25 and 0.5 mL min⁻¹) on the transport of CBNPs suspended in deionized water. The results showed that the mass recovery of CBNPs in quartz sand (flow rate = 0.5 mL min⁻¹) was 83.1%, whereas no breakthrough of CBNPs (mass recovery = 0%) was observed in IOCS and AOCS at the same flow rate, indicating that metal (Fe, Al) oxides can play a significant role in the attachment of CBNPs to porous media. In addition, the mass recovery of CBNPs in quartz sand decreased to 76.1% as the flow rate decreased to 0.25 mL min⁻¹. Interaction energy profiles for CBNP-porous media were calculated using DLVO theory for sphere-plate geometry, demonstrating that the interaction energy for CBNP-quartz sand was repulsive, whereas the interaction energies for CBNP-IOCS and CBNP-AOCS were attractive with no energy barriers. The second set of experiments was conducted in quartz sand to observe the effect of ionic strength (NaCl = 0.1 and 1.0 mM; CaCl2 = 0.01 and 0.1 mM) and pH (pH = 4.5 and 5.4) on the transport of CBNPs suspended in electrolyte. The results showed that the mass recoveries of CBNPs in NaCl = 0.1 and 1.0 mM were 65.3 and 6.4%, respectively. The mass recoveries of CBNPs in CaCl2 = 0.01 and 0.1 mM were 81.6 and 6.3%, respectively. These results demonstrated that CBNP attachment to quartz sand can be enhanced by increasing the electrolyte concentration. Interaction energy profiles demonstrated that the
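The sphere-plate DLVO calculation described above can be sketched as follows. This minimal Python version combines the linearized Hogg-Healy-Fuerstenau double-layer expression with the non-retarded van der Waals term; all parameter values (particle radius, Hamaker constant, Debye length, surface potentials) are illustrative assumptions, not the measured values from the study.

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
KT = 1.381e-23 * 298  # thermal energy at 25 degC, J

def dlvo_sphere_plate(h, r, psi_p, psi_s, kappa, hamaker, eps_r=78.5):
    """Total DLVO interaction energy (J) for a sphere of radius r at
    surface-to-surface separation h from a flat plate.

    Electrostatic part: linearized Hogg-Healy-Fuerstenau expression;
    van der Waals part: non-retarded sphere-plate form -A*r/(6h).
    """
    edl = (math.pi * eps_r * EPS0 * r *
           (2 * psi_p * psi_s * math.log((1 + math.exp(-kappa * h)) /
                                         (1 - math.exp(-kappa * h)))
            + (psi_p**2 + psi_s**2) * math.log(1 - math.exp(-2 * kappa * h))))
    vdw = -hamaker * r / (6 * h)
    return edl + vdw

# Illustrative parameters (assumed, not from the study):
r = 50e-9        # particle radius, m
A = 1e-20        # Hamaker constant, J
kappa = 1e8      # inverse Debye length, 1/m (Debye length = 10 nm)

# Like-charged pair (e.g. CBNP and quartz, both negative): repulsive barrier
v_like = dlvo_sphere_plate(5e-9, r, -0.03, -0.03, kappa, A)
# Oppositely charged pair (e.g. CBNP and a metal-oxide coating): attraction
v_opp = dlvo_sphere_plate(5e-9, r, -0.03, +0.03, kappa, A)

# For these assumed parameters the like-charged barrier is of order tens of kT
barrier_kt = v_like / KT
```

This reproduces the qualitative picture reported above: an energy barrier for CBNP-quartz sand and barrierless attraction when the collector surface charge is reversed by a metal-oxide coating.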

  3. Genetic and functional analyses of SHANK2 mutations suggest a multiple hit model of autism spectrum disorders.

    Directory of Open Access Journals (Sweden)

    Claire S Leblond

    2012-02-01

Full Text Available Autism spectrum disorders (ASD) are a heterogeneous group of neurodevelopmental disorders with a complex inheritance pattern. While many rare variants in synaptic proteins have been identified in patients with ASD, little is known about their effects at the synapse and their interactions with other genetic variations. Here, following the discovery of two de novo SHANK2 deletions by the Autism Genome Project, we identified a novel 421 kb de novo SHANK2 deletion in a patient with autism. We then sequenced SHANK2 in 455 patients with ASD and 431 controls and integrated these results with those reported by Berkel et al. 2010 (n = 396 patients and n = 659 controls). We observed a significant enrichment of variants affecting conserved amino acids in 29 of 851 (3.4%) patients and in 16 of 1,090 (1.5%) controls (P = 0.004, OR = 2.37, 95% CI = 1.23-4.70). In neuronal cell cultures, the variants identified in patients were associated with a reduced synaptic density at dendrites compared to the variants only detected in controls (P = 0.0013). Interestingly, the three patients with de novo SHANK2 deletions also carried inherited CNVs at 15q11-q13 previously associated with neuropsychiatric disorders. In two cases, the nicotinic receptor CHRNA7 was duplicated and in one case the synaptic translation repressor CYFIP1 was deleted. These results strengthen the role of synaptic gene dysfunction in ASD but also highlight the presence of putative modifier genes, which is in keeping with the "multiple hit model" for ASD. A better knowledge of these genetic interactions will be necessary to understand the complex inheritance pattern of ASD.
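The reported enrichment statistics can be reproduced directly from the counts given in the abstract. The sketch below computes the odds ratio and a standard two-sided Fisher exact test (a common choice for such 2x2 tables; the abstract does not state which exact procedure the authors used).

```python
from math import comb

# Carriers of variants at conserved amino acids (counts from the abstract)
a, n_cases = 29, 851      # patients
c, n_ctrls = 16, 1090     # controls
b, d = n_cases - a, n_ctrls - c

odds_ratio = (a * d) / (b * c)   # matches the reported OR of 2.37

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact test: sum the probabilities of all tables
    with the same margins that are no more probable than the observed one."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d
    denom = comb(n, col1)
    def p(x):  # hypergeometric probability of x carriers among cases
        return comb(row1, x) * comb(row2, col1 - x) / denom
    p_obs = p(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))

p_value = fisher_two_sided(a, b, c, d)
```

With these counts the odds ratio rounds to 2.37 and the exact p-value is well below 0.05, consistent with the significance reported in the abstract.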

  4. A high-resolution and harmonized model approach for reconstructing and analysing historic land changes in Europe

    Science.gov (United States)

    Fuchs, R.; Herold, M.; Verburg, P. H.; Clevers, J. G. P. W.

    2013-03-01

    Human-induced land use changes are nowadays the second largest contributor to atmospheric carbon dioxide after fossil fuel combustion. Existing historic land change reconstructions on the European scale do not sufficiently meet the requirements of greenhouse gas (GHG) and climate assessments, due to insufficient spatial and thematic detail and the consideration of various land change types. This paper investigates if the combination of different data sources, more detailed modelling techniques, and the integration of land conversion types allow us to create accurate, high-resolution historic land change data for Europe suited for the needs of GHG and climate assessments. We validated our reconstruction with historic aerial photographs from 1950 and 1990 for 73 sample sites across Europe and compared it with other land reconstructions like Klein Goldewijk et al. (2010, 2011), Ramankutty and Foley (1999), Pongratz et al. (2008) and Hurtt et al. (2006). The results indicate that almost 700 000 km2 (15.5%) of land cover in Europe has changed over the period 1950-2010, an area similar to France. In Southern Europe the relative amount was almost 3.5% higher than average (19%). Based on the results the specific types of conversion, hot-spots of change and their relation to political decisions and socio-economic transitions were studied. The analysis indicates that the main drivers of land change over the studied period were urbanization, the reforestation program resulting from the timber shortage after the Second World War, the fall of the Iron Curtain, the Common Agricultural Policy and accompanying afforestation actions of the EU. Compared to existing land cover reconstructions, the new method considers the harmonization of different datasets by achieving a high spatial resolution and regional detail with a full coverage of different land categories. These characteristics allow the data to be used to support and improve ongoing GHG inventories and climate research.

  5. Childhood Leukaemia Incidence in Hungary, 1973-2002. Interpolation Model for Analysing the Possible Effects of the Chernobyl Accident

    International Nuclear Information System (INIS)

    Toeroek, Szabolcs; Borgulya, Gabor; Lobmayer, Peter; Jakab, Zsuzsanna; Schuler, Dezsoe; Fekete, Gyoergy

    2005-01-01

The incidence of childhood leukaemia in Hungary has yet to be reported, although data have been available since the early 1970s. The Hungarian data therefore cover the time before and after the Chernobyl nuclear accident (1986). The aim of this study was to assess the effects of the Chernobyl accident on childhood leukaemia incidence in Hungary. A population-based study was carried out using data of the National Paediatric Cancer Registry of Hungary from 1973 to 2002. The total number of cases was 2204. To test the effect of the Chernobyl accident the authors applied a new approach, the 'Hypothesized Impact Period Interpolation' model, which takes into account the increasing trend of childhood leukaemia incidence and the hypothesized exposure and latency times. The incidence of leukaemia in the age group 0-14 varied between 33.2 and 39.4 per million person-years over the observed 30-year period, and the incidence of childhood leukaemia showed a moderate increase of 0.71% annually (p=0.0105). In the period of the hypothesized impact of the Chernobyl accident the incidence rate was elevated by 2.5% (95% CI: -8.1%; +14.3%), but this change was not statistically significant (p=0.663). The age-standardised incidence, the age distribution, the gender ratio, and the magnitude of the increasing trend of childhood leukaemia incidence in Hungary were similar to other European countries. Applying the presented interpolation method, the authors did not find a statistically significant increase in leukaemia incidence in the period of the hypothesized impact of the Chernobyl accident.
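The background trend that the interpolation model must account for can be illustrated with a simple log-linear fit. The series below is synthetic (an exact 0.71% annual growth from the 1973 baseline), not the Hungarian registry data, and the fit is a plain least-squares sketch rather than the authors' model.

```python
import math

# Synthetic incidence series with a constant 0.71% annual increase
# (illustrative only; not the registry data)
years = list(range(1973, 2003))
base = 33.2  # incidence per million person-years in the first year
incidence = [base * 1.0071 ** (y - years[0]) for y in years]

# Ordinary least squares on log(incidence) recovers the annual growth rate
t = [y - years[0] for y in years]
logy = [math.log(v) for v in incidence]
tbar = sum(t) / len(t)
ybar = sum(logy) / len(logy)
slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, logy))
         / sum((ti - tbar) ** 2 for ti in t))
annual_increase = math.exp(slope) - 1   # recovers 0.0071 on this clean series
```

A hypothesized-impact test then asks whether post-accident observations lie significantly above the value this fitted trend interpolates for the impact window.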

  6. Equine ANXA2 and MMP1 expression analyses in an experimental model of normal and pathological wound repair.

    Science.gov (United States)

    Miragliotta, Vincenzo; Lefebvre-Lavoie, Josiane; Lussier, Jacques G; Theoret, Christine L

    2008-08-01

    Wounds on horse limbs can develop exuberant granulation tissue which resembles the human keloid. Clues gained from the study of over-scarring in horses might help control fibro-proliferative disorders. The aim of the present study was to clone full-length equine ANXA2 cDNA then to study spatio-temporal expression of ANXA2 and MMP1 mRNA and protein, potential contributors to remodeling, during repair of body (normal) and limb (fibro-proliferative) wounds in an established horse wound model. Cloning of ANXA2 was achieved by screening size-selected cDNA libraries. Expression was studied in intact skin and in biopsies of 1, 2, 3, 4 and 6-week-old wounds of the body and limb. Temporal gene expression was determined by semi-quantitative RT-PCR while protein expression was mapped immunohistochemically. ANXA2 mRNA was up-regulated only in body wounds, corroborating the superior and prompt tissue turnover at this location. Immunohistochemistry partially substantiated the mRNA data in that increased staining for ANXA2 protein was detected in neo-epidermis which formed more rapidly and completely in body wounds. MMP1 mRNA levels in body wounds significantly surpassed those of limb wounds in week one biopsies. The protein was abundant in migrating epithelium of limb wounds at weeks two and four; conversely, body wounds in which epithelialization was near complete showed diminished staining of MMP1. We conclude that ANXA2 and MMP1 might participate in remodeling during wound healing in the horse, and that differences in expression may contribute to the excessive proliferative response seen in the limb.

  7. The AquaDEB project: Physiological flexibility of aquatic animals analysed with a generic dynamic energy budget model (phase II)

    Science.gov (United States)

    Alunno-Bruscia, Marianne; van der Veer, Henk W.; Kooijman, Sebastiaan A. L. M.

    2011-11-01

This second special issue of the Journal of Sea Research on development and applications of Dynamic Energy Budget (DEB) theory concludes the European Research Project AquaDEB (2007-2011). In this introductory paper we summarise the progress made during the running time of this five-year project, present the context for the papers in this volume and discuss future directions. The main scientific objectives in AquaDEB were (i) to study and compare the sensitivity of aquatic species (mainly molluscs and fish) to environmental variability within the context of DEB theory for metabolic organisation, and (ii) to evaluate the inter-relationships between different biological levels (individual, population, ecosystem) and temporal scales (life cycle, population dynamics, evolution). AquaDEB phase I focussed on quantifying bio-energetic processes of various aquatic species (e.g. molluscs, fish, crustaceans, algae) and phase II on: (i) comparing energetic and physiological strategies among species through the DEB parameter values and identifying the factors responsible for any differences in bioenergetics and physiology; (ii) considering different scenarios of environmental disruption (excess of nutrients, diffuse or massive pollution, exploitation by man, climate change) to forecast effects on growth, reproduction and survival of key species; (iii) scaling up the models for a few species from the individual level up to the level of evolutionary processes. Apart from the three special issues in the Journal of Sea Research — including the DEBIB collaboration (see vol. 65 issue 2), a theme issue on DEB theory appeared in the Philosophical Transactions of the Royal Society B (vol 365, 2010); a large number of publications were produced; the third edition of the DEB book appeared (2010); open-source software was substantially expanded (over 1000 functions); a large open-source systematic collection of ecophysiological data and DEB parameters has been set up; and a series of DEB

  8. Development of model for analysing respective collections of intended hematopoietic stem cells and harvests of unintended mature cells in apheresis for autologous hematopoietic stem cell collection.

    Science.gov (United States)

    Hequet, O; Le, Q H; Rodriguez, J; Dubost, P; Revesz, D; Clerc, A; Rigal, D; Salles, G; Coiffier, B

    2014-04-01

Hematopoietic stem cells (HSCs) required to perform autologous peripheral blood stem cell transplantation (APBSCT) can be collected by processing several blood volumes (BVs) in leukapheresis sessions. However, this may cause granulocyte harvest in the graft and a decrease in the patient's blood platelet level, both of which may disturb the patient. One current aim of apheresis teams is to improve HSC collection by increasing the HSC yield while preventing increases in granulocyte and platelet harvests. Before improving HSC collection it seemed important to know more about how these types of cells are harvested. The purpose of our study was to develop a simple model for analysing respective collections of intended CD34+ cells among HSC (designated here as HSC) and harvests of unintended platelets or granulocytes among mature cells (designated here as mature cells), considering the number of BVs processed and factors likely to influence cell collection or harvest. For this, we processed 1, 2 and 3 BVs in 59 leukapheresis sessions and analysed the corresponding collections and harvests with a referent device (COBE Spectra). First, we analysed the amounts of HSC collected and mature cells harvested and, second, the evolution of the respective shares of HSC and mature cells collected or harvested throughout the BV processes. HSC collections and mature cell harvests both increased globally with the number of BVs processed, and pre-leukapheresis blood cell levels (CD34+ cells and platelets) influenced both cell collections and harvests; the main factors influencing HSC collections or unintended mature cell harvests were pre-leukapheresis blood cell levels. Our model was meant to assist apheresis teams in analysing shares of HSC collected and mature cells harvested with new devices or with new types of HSC mobilization. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Faire territoire au quotidien dans les grands ensembles HLM

    Directory of Open Access Journals (Sweden)

    Denis la Mache

    2012-05-01

Full Text Available This paper proposes an anthropological reading of the way inhabitants of large high-rise housing estates (grands ensembles HLM) on urban peripheries delimit, administer, and materially and symbolically transform the spaces and places of their everyday lives to make territory. We focus on the fabrication of these spatial entities, which each individual takes the liberty of using in a singular way each day and surrounds with a specific symbolic field that guarantees identity. These "socio-spatial fabrications" are examined through empirical research, namely surveys conducted among residents at two study sites located on the peripheries of medium-sized towns.

  10. Analysing urban resilience through alternative stormwater management options: application of the conceptual Spatial Decision Support System model at the neighbourhood scale.

    Science.gov (United States)

    Balsells, M; Barroca, B; Amdal, J R; Diab, Y; Becue, V; Serre, D

    2013-01-01

    Recent changes in cities and their environments, caused by rapid urbanisation and climate change, have increased both flood probability and the severity of flooding. Consequently, there is a need for all cities to adapt to climate and socio-economic changes by developing new strategies for flood risk management. Following a risk paradigm shift from traditional to more integrated approaches, and considering the uncertainties of future urban development, one of the main emerging tasks for city managers becomes the development of resilient cities. However, the meaning of the resilience concept and its operability is still not clear. The goal of this research is to study how urban engineering and design disciplines can improve resilience to floods in urban neighbourhoods. This paper presents the conceptual Spatial Decision Support System (DS3) model which we consider a relevant tool to analyse and then implement resilience into neighbourhood design. Using this model, we analyse and discuss alternative stormwater management options at the neighbourhood scale in two specific areas: Rotterdam and New Orleans. The results obtained demonstrate that the DS3 model confirmed in its framework analysis that stormwater management systems can positively contribute to the improved flood resilience of a neighbourhood.

  11. How can results from macro economic analyses of the energy consumption of households be used in macro models? A discussion of theoretical and empirical literature about aggregation

    International Nuclear Information System (INIS)

    Halvorsen, Bente; Larsen, Bodil M.; Nesbakken, Runa

    2001-01-01

The literature on energy demand shows that there are systematic differences in income and price elasticities between analyses based on macro data and micro data. Even if one estimates models with the same explanatory variables, the results may differ with respect to estimated price and income sensitivity. These differences may be caused by problems involved in transferring micro properties to macro properties, or because the estimated macro relationships fail to take adequate account of the fact that households behave differently in their energy demand. Political goals are often directed towards the entire household sector. Partial equilibrium models do not capture important equilibrium effects and feedback through the energy markets and the economy in general. Thus, it is of great political and scientific interest to perform macroeconomic model analyses of different policy measures that affect energy consumption. The results of behavioural analyses, in which one investigates the heterogeneity of energy demand, must be based on information about individual households. When demand is studied based on micro data, it is difficult to aggregate its properties to a total demand function for the entire household sector if different household groups behave differently. Such heterogeneity of behaviour may for instance arise when households in different regions have different heating equipment because of regional differences in the price of electricity. The subject of aggregation arises immediately when one wants to draw conclusions about the household sector based on information about individual households, whether the discussion concerns the whole population or a selection of households. Thus, aggregation is a topic of interest in a wide range of problems.
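The aggregation problem discussed above can be made concrete with a hypothetical two-group example: even when each household group has a constant price elasticity, the aggregate elasticity is a quantity-weighted average that drifts with the price level, so no single constant macro elasticity reproduces the micro behaviour. All numbers below are invented for illustration.

```python
# Two household groups with constant but different price elasticities
def demand(p, scale, elasticity):
    """Constant-elasticity demand q = scale * p^elasticity."""
    return scale * p ** elasticity

def aggregate_elasticity(p, groups):
    """d ln(Q)/d ln(p) for total demand Q = sum of group demands:
    the quantity-weighted average of the group elasticities."""
    q = [demand(p, s, e) for s, e in groups]
    return sum(qi * e for qi, (_s, e) in zip(q, groups)) / sum(q)

groups = [(1.0, -0.2), (1.0, -1.0)]   # (scale, price elasticity)

e_low = aggregate_elasticity(1.0, groups)   # -0.6: equal weights at p = 1
e_high = aggregate_elasticity(4.0, groups)  # weights have shifted toward
                                            # the less elastic group

# The aggregate elasticity changes with the price level, so a macro model
# with one constant elasticity cannot match both points.
```

This is the simplest form of the micro-to-macro transfer problem the paper discusses: the "macro parameter" is not a structural constant but a moving average over heterogeneous households.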

  12. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo
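The variance-component logic that LMM/HLM software estimates by (restricted) maximum likelihood can be illustrated with the classical one-way ANOVA method-of-moments estimators on a tiny balanced toy dataset; the numbers below are invented for illustration and the ANOVA estimator is a didactic stand-in for the (RE)ML fits those packages perform.

```python
# One-way random-intercept model: y_ij = mu + u_j + e_ij, with
# Var(u_j) = sigma_u^2 (between groups), Var(e_ij) = sigma_e^2 (within).
# Toy balanced data: 3 groups of 3 observations.
groups = [
    [-4.0, -3.0, -2.0],   # group mean -3
    [-1.0,  0.0,  1.0],   # group mean  0
    [ 2.0,  3.0,  4.0],   # group mean  3
]

k = len(groups)                 # number of groups
n = len(groups[0])              # observations per group (balanced)
grand = sum(sum(g) for g in groups) / (k * n)
means = [sum(g) / n for g in groups]

ssw = sum((y - m) ** 2 for g, m in zip(groups, means) for y in g)
ssb = n * sum((m - grand) ** 2 for m in means)

msw = ssw / (k * (n - 1))       # mean square within: estimates sigma_e^2
msb = ssb / (k - 1)             # mean square between: sigma_e^2 + n*sigma_u^2
sigma_e2 = msw
sigma_u2 = (msb - msw) / n      # random-intercept (between-group) variance
```

On this deterministic toy dataset the within-group variance estimate is exactly 1 and the between-group component is 26/3, showing how strongly the group intercepts dominate the residual noise.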

  13. Discrepancies in anthropometric parameters between different models affect intervertebral rotations when loading finite element models with muscle forces from inverse static analyses.

    Science.gov (United States)

    Zhu, Rui; Rohlmann, Antonius

    2014-06-01

    In only a few published finite element (FE) simulations have muscle forces been applied to the spine. Recently, muscle forces determined using an inverse static (IS) model of the spine were transferred to a spinal FE model, and the effect of methodical parameters was investigated. However, the sensitivity of anthropometric differences between FE and IS models, such as body height and spinal orientation, was not considered. The aim of this sensitivity study was to determine the influence of those differences on the intervertebral rotations (IVRs) following the transfer of muscle forces from an IS model to a FE model. Muscle forces were estimated for 20° flexion and 10° extension of the upper body using an inverse static musculoskeletal model. These forces were subsequently transferred to a nonlinear FE model of the spino-pelvic complex, which includes 243 muscle fascicles. Deviations of body height (±10 cm), spinal orientation in the sagittal plane (±10°), and body weight (±10 kg) between both models were intentionally generated, and their influences on IVRs were determined. The changes in each factor relative to their corresponding reference value of the IS model were calculated. Deviations in body height, spinal orientation, and body weight resulted in maximum changes in the IVR of 19.2%, 26% and 4.2%, respectively, relative to T12-S1 IVR. When transferring muscle forces from an IS to a FE model, it is crucial that both models have the same spinal orientation and height. Additionally, the body weight should be equal in both models.

  14. Contribution of JMA to the WMO Technical Task Team on meteorological analyses for Fukushima Daiichi Nuclear Power Plant accident and relevant atmospheric transport modeling at MRI

    International Nuclear Information System (INIS)

    Saito, Kazuo; Shimbori, Toshiki; Kato, Teruyuki; Kajino, Mizuo; Sekiyama, Tsuyoshi T.; Tanaka, Taichu Y.; Maki, Takashi; Draxler, Roland; Hara, Tabito; Toyoda, Eizi; Honda, Yuki; Nagata, Kazuhiko; Fujita, Tsukasa; Sakamoto, Masami; Terada, Hiroaki; Chino, Masamichi

    2015-01-01

    The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) was asked to produce a scientific report for the General Assembly on the levels and effects of radiation exposure caused by the accident at the Fukushima Daiichi Nuclear Power Plant, and UNSCEAR requested the World Meteorological Organization (WMO) to develop a set of meteorological analyses for assessing the atmospheric transport, dispersion, and deposition of radioactive materials. In response to UNSCEAR's request, the WMO's Commission for Basic Systems convened a technical task team of experts from five countries (Austria, Canada, Japan, United Kingdom, and the United States) in November 2011. The primary aim of this team was to examine how the use of meteorological analyses could improve atmospheric transport, dispersion, and deposition model (ATDM) calculations. As the Regional Specialized Meteorological Center of the country in which the accident occurred, the Japan Meteorological Agency (JMA) collaborated with the WMO Task Team by providing its mesoscale analysis based on operational four-dimensional variational data assimilation and radar/rain gauge-analyzed precipitation (RAP) data in the standard WMO format (GRIB2). To evaluate the quality of the meteorological analyses, the WMO Task Team conducted test simulations with their regional ATDMs and different meteorological analyses. JMA developed a regional ATDM for radionuclides by modifying its operational regional atmospheric transport model, which had been previously used for photochemical oxidant predictions and volcanic ashfall forecasts. The modified model (hereafter referred to as JMA-RATM) newly implemented dry deposition, wet scavenging, and gravitational settling of radionuclide aerosol particles. The preliminary and revised calculations of JMA-RATM were conducted with a horizontal concentration and deposition grid resolution of 5 km and a unit source emission rate, in accordance with the Task Team

  15. Navigating Scaling: Modelling and Analysing

    Science.gov (United States)

    2005-01-07

    Fractional Brownian motion in multifractal time: V_H(t) = B_H(A(t)), with moment scaling E|V_H(t + aτ₀) − V_H(t)|^q = c_q |a|^(qH + φ(qH)). Assumptions on the walk steps: A1 stationary, A2 independent, A3 Gaussian ⇒ ordinary random walk and ordinary Brownian motion, with E[X(t)²] = 2D|t| (Einstein). From the Wavelet and Multifractal Analysis workshop (with Pipiras, R. Riedi, S. Jaffard), Cargèse, France, July 2004.
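
    The ordinary-random-walk baseline quoted in the slides, E[X(t)²] = 2D|t| under stationary, independent, Gaussian steps, can be checked numerically. The simulation below is a generic illustration, not code from the briefing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ordinary random walk: i.i.d. Gaussian steps with variance 2*D per unit time,
# so the mean squared displacement should grow as E[X(t)^2] = 2*D*t.
D, n_steps, n_walkers = 0.5, 100, 20000
steps = rng.normal(0.0, np.sqrt(2 * D), size=(n_walkers, n_steps))
paths = steps.cumsum(axis=1)

t = np.arange(1, n_steps + 1)
msd = (paths ** 2).mean(axis=0)       # empirical mean squared displacement

# Least-squares slope through the origin; should be close to 2*D = 1.0.
slope = (t * msd).sum() / (t * t).sum()
print(round(slope, 3))
```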

  16. Distribution of Nitrosomonas europaea and Nitrobacter winogradskyi in an autotrophic nitrifying biofilm reactor as depicted by molecular analyses and mathematical modelling.

    Science.gov (United States)

    Montràs, Anna; Pycke, Benny; Boon, Nico; Gòdia, Francesc; Mergeay, Max; Hendrickx, Larissa; Pérez, Julio

    2008-03-01

    The autotrophic two-species biofilm from the packed bed reactor of a life-support system, containing Nitrosomonas europaea ATCC 19718 and Nitrobacter winogradskyi ATCC 25391, was analysed after 4.8 years of continuous operation performing complete nitrification. Real-time quantitative polymerase chain reaction (Q-PCR) was used to quantify N. europaea and N. winogradskyi along the vertical axis of the reactor, revealing a spatial segregation of N. europaea and N. winogradskyi. The main parameters influencing the spatial segregation of both nitrifiers along the bed were assessed through a multi-species one-dimensional biofilm model generated with AQUASIM software. The factor that contributed the most to this distribution profile was a small deviation from the flow pattern of a perfectly mixed tank towards plug-flow. The results indicate that the model can estimate the impact of specific biofilm parameters and predict the nitrification efficiency and population dynamics of a multispecies biofilm.
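
    The flow-pattern effect the model identified (a small deviation from a perfectly mixed tank toward plug flow) is classically idealized with a tanks-in-series residence-time distribution, where n = 1 is a perfectly mixed tank and large n approaches plug flow. The sketch below is a generic textbook illustration of that idea, not the AQUASIM reactor model itself.

```python
import math
import numpy as np

def tanks_in_series_rtd(theta, n):
    """Dimensionless RTD E(theta) for n equal stirred tanks in series:
    n = 1 is a perfectly mixed tank, n -> infinity approaches plug flow."""
    return (n ** n) * theta ** (n - 1) * np.exp(-n * theta) / math.factorial(n - 1)

def trapezoid(y, x):
    """Simple trapezoidal integration (avoids version-specific numpy names)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

theta = np.linspace(0.0, 5.0, 2001)
mixed = tanks_in_series_rtd(theta, 1)        # broad exponential decay
near_plug = tanks_in_series_rtd(theta, 20)   # sharp peak near theta = 1

# Each RTD integrates to ~1 over dimensionless time.
print(round(trapezoid(mixed, theta), 3), round(trapezoid(near_plug, theta), 3))
```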

  17. Mitochondrial dysfunction, oxidative stress and apoptosis revealed by proteomic and transcriptomic analyses of the striata in two mouse models of Parkinson’s disease

    Energy Technology Data Exchange (ETDEWEB)

    Chin, Mark H.; Qian, Weijun; Wang, Haixing; Petyuk, Vladislav A.; Bloom, Joshua S.; Sforza, Daniel M.; Lacan, Goran; Liu, Dahai; Khan, Arshad H.; Cantor, Rita M.; Bigelow, Diana J.; Melega, William P.; Camp, David G.; Smith, Richard D.; Smith, Desmond J.

    2008-02-10

    The molecular mechanisms underlying the changes in the nigrostriatal pathway in Parkinson disease (PD) are not completely understood. Here we use mass spectrometry and microarrays to study the proteomic and transcriptomic changes in the striatum of two mouse models of PD, induced by the distinct neurotoxins 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) and methamphetamine (METH). Proteomic analyses resulted in the identification and relative quantification of 912 proteins with two or more unique peptides and 85 proteins with significant abundance changes following neurotoxin treatment. Similarly, microarray analyses revealed 181 genes with significant changes in mRNA following neurotoxin treatment. The combined protein and gene list provides a clearer picture of the potential mechanisms underlying neurodegeneration observed in PD. Functional analysis of this combined list revealed a number of significant categories, including mitochondrial dysfunction, oxidative stress response and apoptosis. Additionally, codon usage and miRNAs may play an important role in translational control in the striatum. These results constitute one of the largest datasets integrating protein and transcript changes for these neurotoxin models with many similar endpoint phenotypes but distinct mechanisms.

  18. Gauss or Bernoulli? A Monte Carlo Comparison of the Performance of the Linear Mixed-Model and the Logistic Mixed-Model Analyses in Simulated Community Trials with a Dichotomous Outcome Variable at the Individual Level.

    Science.gov (United States)

    Hannan, Peter J.; Murray, David M.

    1996-01-01

    A Monte Carlo study compared performance of linear and logistic mixed-model analyses of simulated community trials having specific event rates, intraclass correlations, and degrees of freedom. Results indicate that in studies with adequate denominator degrees of freedom, the researcher may use either method of analysis, with certain cautions. (SLD)
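
    A hedged sketch of the simulation setup such comparisons rest on: a cluster-level random effect on the logit scale induces an intraclass correlation in a dichotomous outcome, inflating the variance of community-level event rates by the design effect 1 + (n − 1)·ICC. All parameter values below are illustrative, not the study's.

```python
import numpy as np

rng = np.random.default_rng(7)

# Communities share a cluster-level random effect on the logit scale.
n_comm, n_per, base_logit, sd_u = 500, 100, -2.0, 0.5
u = rng.normal(0.0, sd_u, size=(n_comm, 1))
p = 1.0 / (1.0 + np.exp(-(base_logit + u)))   # community-specific event rates
y = rng.random((n_comm, n_per)) < p           # dichotomous individual outcomes

rates = y.mean(axis=1)
p_bar = rates.mean()

# Under independence the variance of a community rate would be p(1-p)/n;
# clustering inflates it by the design effect 1 + (n-1)*ICC.
var_indep = p_bar * (1 - p_bar) / n_per
deff = rates.var(ddof=1) / var_indep
icc = (deff - 1) / (n_per - 1)
print(round(deff, 2), round(icc, 4))
```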

  19. A mechanistic model of H{sub 2}{sup 18}O and C{sup 18}OO fluxes between ecosystems and the atmosphere: Model description and sensitivity analyses

    Energy Technology Data Exchange (ETDEWEB)

    Riley, W.J.; Still, C.J.; Torn, M.S.; Berry, J.A.

    2002-01-01

    The concentration of 18O in atmospheric CO2 and H2O is a potentially powerful tracer of ecosystem carbon and water fluxes. In this paper we describe the development of an isotope model (ISOLSM) that simulates the 18O content of canopy water vapor, leaf water, and vertically resolved soil water; leaf photosynthetic 18OC16O (hereafter C18OO) fluxes; CO2 oxygen isotope exchanges with soil and leaf water; soil CO2 and C18OO diffusive fluxes (including abiotic soil exchange); and ecosystem exchange of H218O and C18OO with the atmosphere. The isotope model is integrated into the land surface model LSM, but coupling with other models should be straightforward. We describe ISOLSM and apply it to evaluate (a) simplified methods of predicting the C18OO soil-surface flux; (b) the impacts on the C18OO soil-surface flux of the soil-gas diffusion coefficient formulation, soil CO2 source distribution, and rooting distribution; (c) the impacts on the C18OO fluxes of carbonic anhydrase (CA) activity in soil and leaves; and (d) the sensitivity of model predictions to the d18O value of atmospheric water vapor and CO2. Previously published simplified models are unable to capture the seasonal and diurnal variations in the C18OO soil-surface fluxes simulated by ISOLSM. Differences in the assumed soil CO2 production and rooting depth profiles, carbonic anhydrase activity in soil and leaves, and the d18O value of atmospheric water vapor have substantial impacts on the ecosystem CO2 flux isotopic composition. We conclude that accurate prediction of C18OO ecosystem fluxes requires careful representation of H218O and C18OO exchanges and transport in soils and plants.

  20. Encapsulating model complexity and landscape-scale analyses of state-and-transition simulation models: an application of ecoinformatics and juniper encroachment in sagebrush steppe ecosystems

    Science.gov (United States)

    O'Donnell, Michael

    2015-01-01

    State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need to model larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the best-performing configuration (high-throughput computing versus serial computations) showed an approximate 96.6% decrease in computing time. With a single multicore compute node (the slowest parallel configuration), computing time decreased by 81.8% relative to serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
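
    The embarrassingly parallel structure described above (independent Monte Carlo replicates followed by a reduce step) can be sketched generically. The example uses a thread pool for portability; the study itself distributed SyncroSim runs across HTC/HPC compute nodes, which this does not reproduce.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def replicate(seed, n_steps=1000):
    """One Monte Carlo replicate with its own seeded RNG, so replicates are
    independent and the result is reproducible regardless of execution order."""
    rng = random.Random(seed)
    state = 0
    for _ in range(n_steps):
        state += 1 if rng.random() < 0.5 else -1
    return state

seeds = range(32)
serial = [replicate(s) for s in seeds]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(replicate, seeds))

# Per-seed random streams make the parallel result identical to the serial one.
assert parallel == serial
print(len(parallel))
```

    For CPU-bound simulation models, a process pool or a cluster scheduler replaces the thread pool, but the seed-per-replicate pattern is the same.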

  1. Mechanical analyses on the digital behaviour of the Tokay gecko (Gekko gecko) based on a multi-level directional adhesion model.

    Science.gov (United States)

    Wu, Xuan; Wang, Xiaojie; Mei, Tao; Sun, Shaoming

    2015-07-08

    This paper proposes a multi-level hierarchical model for the Tokay gecko (Gekko gecko) adhesive system and analyses the digital behaviour of G. gecko at the macro/meso scale. The model describes the structures of the G. gecko adhesive system from the nano-level spatulae to the sub-millimetre-level lamella. The G. gecko seta is modelled as an inextensible fibril based on Euler's elastica theorem. Considering the side contact of the spatular pads of the seta on a flat, rigid substrate, the directional adhesion behaviour of the seta has been investigated. The lamella-induced attachment and detachment have been modelled to simulate the active digital hyperextension (DH) and digital gripping (DG) phenomena. The results suggest that a tiny angular displacement (within 0.25°) of the lamellar proximal end is sufficient to induce a fast transition from attachment to detachment or vice versa. The active DH helps release torque to induce setal non-sliding detachment, while DG helps apply torque to keep the setal adhesion stable. The lamella plays a key role in saving energy during detachment, helping the gecko adapt to its habitat, and provides another adhesive function that differs from the friction-dependent setal adhesion system controlled by the dynamics of the G. gecko body.

  2. Analyses of Hyperspectral and Directional Data for Agricultural Monitoring Using a Canopy Reflectance Model SLC Progress in the Upper Rhine Valley and Baasdorf Test-Sites

    Science.gov (United States)

    Bach, Heike; Begiebing, Silke; Waldmann, Daniel; Rowotzki, Britta

    2005-06-01

    CHRIS data from 2003 and 2004 for two agricultural test-sites in Germany were analyzed with the goal of evaluating the potential of hyperspectral and directional remote sensing data to deliver input information for precision agriculture. Multitemporal observations are available for the Upper Rhine Valley test-site along the German/French border for 2003. After geometric correction of the multiangular data set and atmospheric correction, the obtained reflectance spectra were compared to optical radiative transfer simulations with SLC, an extended version of the canopy reflectance model GeoSAIL. Directional reflectance spectra were extracted and the angular variations compared to the model results. BRDF analyses illustrate the spectral properties of the BRDF for different land uses. As results of the optical modeling, crop parameters such as LAI and chlorophyll content are retrieved from the CHRIS data and provided as spatial maps. The directional measurements additionally contain information on canopy structure that is crop specific, but also changes with phenological development and sometimes even with cultivar. These crop parameters will serve as input parameters for plant production and management models.

  3. Accounting for age Structure in Ponderosa Pine Ecosystem Analyses: Integrating Management, Disturbance Histories and Observations with the BIOME-BGC Model

    Science.gov (United States)

    Hibbard, K. A.; Law, B.; Thornton, P.

    2003-12-01

    Disturbance and management regimes in forested ecosystems have recently been highlighted as important factors in quantifying carbon stocks and fluxes. Disturbance events, such as stand-replacing fires, and current management regimes that emphasize understory and tree thinning are primary suspects influencing ecosystem processes, including net ecosystem productivity (NEP), in forests of the Pacific Northwest. Several recent analyses have compared simulated to measured component stocks and fluxes of carbon in Ponderosa Pine (Pinus ponderosa var. Laws) at 12 sites ranging from 9 to 300 years in central Oregon (Law et al. 2001, Law et al. 2003) using the BIOME-BGC model. Major emphases of ecosystem model development include improving allocation logic, integrating ecosystem processes with disturbances such as fire, and including nitrogen in biogeochemical cycling. In Law et al. (2001, 2003), field observations prompted BIOME-BGC improvements, including dynamic allocation of carbon to fine root mass through the life of a stand. A sequence of simulations was also designed to represent both the management and disturbance histories of each site; however, the current age structure of each site was not addressed. Age structure, or cohort management, has largely been ignored by ecosystem models, although some studies have sought to incorporate stand age with disturbance and management (e.g. Hibbard et al. 2003). In this analysis, we regressed tree age against height (R² = 0.67) to develop a proportional distribution of age structure for each site. To preserve the integrity of the comparison between Law et al. (2003) and this study, we maintained the same timing of harvest; however, based on the distribution of age structures, we manipulated the amount of removal. Harvest in Law et al. (2003) was set at stand-replacement (99%) levels to simulate clear-cutting, reflecting the average of the top 10% of ages in each plot.
    For the young sites, we set removal at 73%, 51% and
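
    The age-height regression step described above (the study reports R² = 0.67) reduces to a least-squares fit plus a coefficient of determination. The numpy sketch below uses invented tree data, not the study's measurements.

```python
import numpy as np

# Hypothetical tree heights (m) and ages (yr); placeholders, not the study's data.
height = np.array([4.0, 7.5, 11.0, 15.5, 19.0, 24.0, 27.5, 33.0])
age = np.array([12.0, 25.0, 38.0, 55.0, 61.0, 90.0, 96.0, 120.0])

# Ordinary least-squares line: age ~ slope * height + intercept.
slope, intercept = np.polyfit(height, age, 1)
pred = slope * height + intercept

# Coefficient of determination R^2 = 1 - SS_res / SS_tot.
ss_res = ((age - pred) ** 2).sum()
ss_tot = ((age - age.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))
```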

  4. SDMtoolbox 2.0: the next generation Python-based GIS toolkit for landscape genetic, biogeographic and species distribution model analyses

    Directory of Open Access Journals (Sweden)

    Jason L. Brown

    2017-12-01

    SDMtoolbox 2.0 is a software package for spatial studies of ecology, evolution, and genetics. The release of SDMtoolbox 2.0 allows researchers to use the most current ArcGIS and MaxEnt software, and reduces the amount of time that would be spent developing common solutions. The central aim of this software is to automate complicated and repetitive spatial analyses in an intuitive graphical user interface. One core tenet facilitates careful parameterization of species distribution models (SDMs) to maximize each model's discriminatory ability and minimize overfitting. This includes careful processing of occurrence data and environmental data, and careful model parameterization. This program directly interfaces with MaxEnt, one of the most powerful and widely used species distribution modeling programs, although SDMtoolbox 2.0 is not limited to species distribution modeling or restricted to modeling in MaxEnt. Many of the SDM pre- and post-processing tools have 'universal' analogs for use with any modeling software. The current version contains a total of 79 scripts that harness the power of ArcGIS for macroecology, landscape genetics, and evolutionary studies. For example, these tools allow for biodiversity quantification (such as species richness or corrected weighted endemism), generation of least-cost paths and corridors among shared haplotypes, assessment of the significance of spatial randomizations, and enforcement of dispersal limitations on SDMs projected into future climates, to name only a few of the functions contained in SDMtoolbox 2.0. Lastly, dozens of generalized tools exist for batch processing and conversion of GIS data types or formats, which are broadly useful to any ArcMap user.
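
    One of the biodiversity metrics mentioned, corrected weighted endemism, is straightforward to compute from a presence-absence grid: weighted endemism sums the inverse range size of each species present in a cell, and the corrected form divides by the cell's richness. The sketch below is a generic numpy illustration, not SDMtoolbox's ArcGIS implementation.

```python
import numpy as np

# Rows = grid cells, columns = species; 1 = predicted presence (toy data).
presence = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 1, 0],
], dtype=float)

range_size = presence.sum(axis=0)    # number of cells each species occupies
weights = 1.0 / range_size           # rarer (narrow-ranged) species weigh more
we = presence @ weights              # weighted endemism per cell
richness = presence.sum(axis=1)
# Corrected weighted endemism: WE divided by cell richness (0 where empty).
cwe = np.divide(we, richness, out=np.zeros_like(we), where=richness > 0)
print(np.round(cwe, 3))
```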

  5. Downscaling global land cover projections from an integrated assessment model for use in regional analyses: results and evaluation for the US from 2005 to 2095

    International Nuclear Information System (INIS)

    West, Tristram O; Le Page, Yannick; Wolf, Julie; Thomson, Allison M; Huang, Maoyi

    2014-01-01

    Projections of land cover change generated from integrated assessment models (IAM) and other economic-based models can be applied for analyses of environmental impacts at sub-regional and landscape scales. For those IAM and economic models that project land cover change at the continental or regional scale, these projections must be downscaled and spatially distributed prior to use in climate or ecosystem models. Downscaling efforts to date have been conducted at the national extent with relatively high spatial resolution (30 m) and at the global extent with relatively coarse spatial resolution (0.5°). We revised existing methods to downscale global land cover change projections for the US to 0.05° resolution using MODIS land cover data as the initial proxy for land class distribution. Land cover change realizations generated here represent a reference scenario and two emissions mitigation pathways (MPs) generated by the global change assessment model (GCAM). Future gridded land cover realizations are constructed for each MODIS plant functional type (PFT) from 2005 to 2095, commensurate with the community land model PFT land classes, and archived for public use. The GCAM land cover realizations provide spatially explicit estimates of potential shifts in croplands, grasslands, shrublands, and forest lands. Downscaling of the MPs indicate a net replacement of grassland by cropland in the western US and by forest in the eastern US. An evaluation of the downscaling method indicates that it is able to reproduce recent changes in cropland and grassland distributions in respective areas in the US, suggesting it could provide relevant insights into the potential impacts of socio-economic and environmental drivers on future changes in land cover. (letters)
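
    A hedged sketch of the proportional-allocation idea behind such downscaling: a regional net change for one land class is distributed across grid cells in proportion to the class's existing (e.g. MODIS-derived) coverage, so the cell totals still sum to the regional projection. The actual method includes transition rules and constraints not shown here, and all numbers below are invented.

```python
import numpy as np

# Fraction of each cell covered by cropland in the base year (proxy map).
crop_frac = np.array([0.10, 0.40, 0.25, 0.05])
cell_area = np.array([100.0, 100.0, 100.0, 100.0])   # km^2 per cell

regional_change = 12.0   # km^2 of new cropland projected at the regional scale

crop_area = crop_frac * cell_area
# Allocate the regional change in proportion to existing cropland area.
alloc = regional_change * crop_area / crop_area.sum()
new_crop_area = crop_area + alloc

# Conservation check: cell-level changes sum to the regional projection.
assert np.isclose(new_crop_area.sum() - crop_area.sum(), regional_change)
print(np.round(alloc, 2))
```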

  6. Integrative analyses of miRNA and proteomics identify potential biological pathways associated with onset of pulmonary fibrosis in the bleomycin rat model

    Energy Technology Data Exchange (ETDEWEB)

    Fukunaga, Satoki [Department of Molecular Pathology, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585 (Japan); Environmental Health Science Laboratory, Sumitomo Chemical Co., Ltd., 3-1-98 Kasugade-Naka, Konohana-ku, Osaka 554-8558 (Japan); Kakehashi, Anna [Department of Molecular Pathology, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585 (Japan); Sumida, Kayo; Kushida, Masahiko; Asano, Hiroyuki [Environmental Health Science Laboratory, Sumitomo Chemical Co., Ltd., 3-1-98 Kasugade-Naka, Konohana-ku, Osaka 554-8558 (Japan); Gi, Min [Department of Molecular Pathology, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585 (Japan); Wanibuchi, Hideki, E-mail: wani@med.osaka-cu.ac.jp [Department of Molecular Pathology, Osaka City University Graduate School of Medicine, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585 (Japan)

    2015-08-01

    To determine miRNAs and their predicted target proteins regulatory networks which are potentially involved in onset of pulmonary fibrosis in the bleomycin rat model, we conducted integrative miRNA microarray and iTRAQ-coupled LC-MS/MS proteomic analyses, and evaluated the significance of altered biological functions and pathways. We observed that alterations of miRNAs and proteins are associated with the early phase of bleomycin-induced pulmonary fibrosis, and identified potential target pairs by using ingenuity pathway analysis. Using the data set of these alterations, it was demonstrated that those miRNAs, in association with their predicted target proteins, are potentially involved in canonical pathways reflective of initial epithelial injury and fibrogenic processes, and biofunctions related to induction of cellular development, movement, growth, and proliferation. Prediction of activated functions suggested that lung cells acquire proliferative, migratory, and invasive capabilities, and resistance to cell death especially in the very early phase of bleomycin-induced pulmonary fibrosis. The present study will provide new insights for understanding the molecular pathogenesis of idiopathic pulmonary fibrosis. - Highlights: • We analyzed bleomycin-induced pulmonary fibrosis in the rat. • Integrative analyses of miRNA microarray and proteomics were conducted. • We determined the alterations of miRNAs and their potential target proteins. • The alterations may control biological functions and pathways in pulmonary fibrosis. • Our result may provide new insights of pulmonary fibrosis.
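
    The integrative step described above (pairing altered miRNAs with their predicted target proteins) amounts to a join that keeps inversely regulated pairs, consistent with miRNA-mediated repression. All identifiers below are placeholders, not the study's molecules or its IPA workflow.

```python
# Hypothetical differential-expression results (direction of change only).
mirna_changes = {"miR-X1": "up", "miR-X2": "down", "miR-X3": "up"}
protein_changes = {"ProtA": "down", "ProtB": "up", "ProtC": "down"}

# Predicted miRNA -> target-protein relations (placeholder database).
predicted_targets = {
    "miR-X1": {"ProtA", "ProtB"},
    "miR-X2": {"ProtB"},
    "miR-X3": {"ProtC"},
}

# Keep pairs where both molecules changed and the directions are inverse.
pairs = [(m, p) for m, targets in predicted_targets.items()
         for p in targets
         if m in mirna_changes and p in protein_changes
         and mirna_changes[m] != protein_changes[p]]
print(sorted(pairs))
```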

  7. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component of the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that both grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
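
    The grid-scale/subgrid-scale split described above is a coarse-graining decomposition: average the fine-resolution field over each large-scale grid box, and the residual is the subgrid component. A minimal 1-D numpy sketch (synthetic field, not the GSI analyses):

```python
import numpy as np

# Fine-resolution field (e.g. 2 km columns) grouped into coarse boxes of 8.
fine = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * np.arange(64)
box = 8
coarse = fine.reshape(-1, box).mean(axis=1)      # grid-scale component
subgrid = fine - np.repeat(coarse, box)          # subgrid-scale residual

# The residual averages to zero within every coarse box,
# and the two components reconstruct the fine field exactly.
assert np.allclose(subgrid.reshape(-1, box).mean(axis=1), 0.0)
assert np.allclose(np.repeat(coarse, box) + subgrid, fine)
print(coarse.shape, subgrid.shape)
```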

  8. Analysing workplace violence towards health care staff in public hospitals using alternative ordered response models: the case of north-eastern Turkey.

    Science.gov (United States)

    Çelik, Ali Kemal; Oktay, Erkan; Çebi, Kübranur

    2017-09-01

    The main objective of this article is to determine key factors that may have a significant effect on the verbal abuse, emotional abuse and physical assault of health care workers in north-eastern Turkey. A self-administered survey was completed by 450 health care workers in three well-established hospitals in Erzurum, Turkey. Because of the discrete and ordered nature of the dependent variable of the survey, the data were analysed using four distinct ordered response models. Results revealed that several key variables were significant determinants of workplace violence, such as the type of health institution, occupational position, weekly working hours, weekly shift hours, number of daily patient contacts, age group of the respondents, experience in the health sector, training against workplace violence and current policies of the hospitals and the Turkish Ministry of Health.
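
    The ordered response models compared in such studies share a common structure; for example, a proportional-odds (ordered logit) model turns one linear predictor and a set of increasing cutpoints into category probabilities. The sketch below is generic, with made-up cutpoints, not the paper's estimated model.

```python
import numpy as np

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities under a proportional-odds model:
    P(Y <= k) = sigmoid(c_k - x'b), differenced to get P(Y = k)."""
    c = np.concatenate(([-np.inf], np.asarray(cutpoints, float), [np.inf]))
    cum = 1.0 / (1.0 + np.exp(-(c - xb)))    # cumulative probabilities, 0 to 1
    return np.diff(cum)

# Hypothetical cutpoints for a 4-level ordered outcome (e.g. abuse frequency).
probs = ordered_logit_probs(xb=0.3, cutpoints=[-1.0, 0.5, 2.0])
assert np.isclose(probs.sum(), 1.0) and (probs >= 0).all()
print(np.round(probs, 3))
```

    The alternative ordered response models the paper compares (e.g. ordered probit or generalized variants) change the link function or relax the single-coefficient assumption, but keep this cumulative-cutpoint structure.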

  9. The evolution of eukaryotic cells from the perspective of peroxisomes: phylogenetic analyses of peroxisomal beta-oxidation enzymes support mitochondria-first models of eukaryotic cell evolution.

    Science.gov (United States)

    Bolte, Kathrin; Rensing, Stefan A; Maier, Uwe-G

    2015-02-01

    Beta-oxidation of fatty acids and detoxification of reactive oxygen species are generally accepted as being fundamental functions of peroxisomes. Additionally, these pathways might have been the driving force favoring the selection of this compartment during eukaryotic evolution. Here we performed phylogenetic analyses of enzymes involved in beta-oxidation of fatty acids in Bacteria, Eukaryota, and Archaea. These imply an alpha-proteobacterial origin for three out of four enzymes. By integrating the enzymes' history into the contrasting models on the origin of eukaryotic cells, we conclude that peroxisomes most likely evolved non-symbiotically and subsequent to the acquisition of mitochondria in an archaeal host cell. © 2015 WILEY Periodicals, Inc.

  10. System-level insights into the cellular interactome of a non-model organism: inferring, modelling and analysing functional gene network of soybean (Glycine max).

    Directory of Open Access Journals (Sweden)

    Yungang Xu

    The cellular interactome, in which genes and/or their products interact on several levels, forming transcriptional regulatory, protein interaction, metabolic, signal transduction and other networks, has attracted decades of research focus. However, any single type of network alone can hardly explain the various interactive activities among genes. These networks characterize different interaction relationships, implying their unique intrinsic properties and defects, and covering different slices of biological information. The functional gene network (FGN), a consolidated interaction network that models a fuzzy and more generalized notion of gene-gene relations, has been proposed to combine heterogeneous networks with the goal of identifying functional modules supported by multiple interaction types. There are as yet no successful precedents of FGNs for sparsely studied non-model organisms, such as soybean (Glycine max), due to the absence of sufficient heterogeneous interaction data. We present an alternative solution for inferring the FGNs of soybean (SoyFGNs), in a pioneering study of the soybean interactome, which is also applicable to other organisms. SoyFGNs exhibit the typical characteristics of biological networks: scale-free, small-world architecture and modularization. Verified against co-expression and KEGG pathways, SoyFGNs are more extensive and accurate than an orthology network derived from Arabidopsis. As a case study, network-guided disease-resistance gene discovery indicates that SoyFGNs can support system-level studies of gene functions and interactions. This work suggests that inferring and modelling the interactome of a non-model plant are feasible. It will speed up the discovery and definition of the functions and interactions of other genes that control important functions, such as nitrogen fixation and protein or lipid synthesis. The efforts of the study are the basis of our further comprehensive studies on the soybean functional
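
    Two of the network properties cited for SoyFGNs, degree distribution and clustering, can be computed directly from an adjacency structure. The dependency-free sketch below uses a toy undirected graph, not SoyFGN data.

```python
# Toy undirected graph as an adjacency mapping; real FGNs have thousands of nodes.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a", "e"},
    "e": {"d"},
}

degree = {node: len(nbrs) for node, nbrs in adj.items()}

def clustering(node):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# High average clustering (relative to a random graph) is one half of the
# small-world signature; the other half is a short average path length.
avg_clustering = sum(clustering(n) for n in adj) / len(adj)
print(degree, round(avg_clustering, 3))
```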

  11. Thermoeconomic and environmental analyses of a low water consumption combined steam power plant and refrigeration chillers – Part 1: Energy and economic modelling and analysis

    International Nuclear Information System (INIS)

    Ifaei, Pouya; Rashidi, Jouan; Yoo, ChangKyoo

    2016-01-01

    Highlights: • Proposing two energy systems by combining refrigeration chillers and power plants. • Model-based comparison of the systems from energy and economic standpoints. • Reducing total annual costs of the base system by up to 4.7% through process integration. • Decreasing the water loss in wet cooling towers by up to 18% in the proposed system. • Suggesting a water-fuel economic management strategy based on parametric analysis. - Abstract: Two novel configurations are proposed to replace the conventional Rankine-cycle-based steam power plants (SPP) with natural draft wet cooling towers (NDWCT) as cooling units. The closed feedwater heater unit of the base SPP-NDWCT system is eliminated in order to combine a vapor compression refrigeration (VCR) unit and an absorption heat pump (ABHP) with the base SPP-NDWCT system. Both VCR-SPP-NDWCT and ABHP-SPP-NDWCT systems are integrated to decrease the NDWCT load, which could reduce water losses. In part one of this two-part paper, model-based energy and economic analyses are performed to compare the systems' performance and applicability. The temperature difference at the pinch point and the temperature difference between the hot and cold sides of the heat exchangers used for system integration in VCR-SPP-NDWCT, and the absorber pressure and temperature in the ABHP-SPP-NDWCT system, are studied using a parametric analysis procedure. A water-fuel management strategy is also introduced for the ABHP-SPP-NDWCT system according to the influence of absorber pressure changes on system water and fuel consumption. In part 2, environmental and thermoeconomic analyses are performed to complete a comprehensive study on designing steam power plants. The results of part 1 showed that water losses and total annual costs decreased by 1–18% and 0–4.7% for the ABHP-SPP-NDWCT system but increased by 11% and 60% for the VCR-SPP-NDWCT system, respectively.

  12. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called ''conservative'' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the ''reasonable assurance'' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  13. Possible future HERA analyses

    International Nuclear Information System (INIS)

    Geiser, Achim

    2015-12-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented upon. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures, the manpower and expertise needed for a particular analysis are often much smaller than those needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is no longer available, this contribution not only targets experienced self-funded experimentalists, but also theorists and master-level students who might wish to carry out such an analysis.

  14. OIL POLLUTION IN INDONESIAN WATERS: COMBINING STATISTICAL ANALYSES OF ENVISAT ASAR AND SENTINEL-1A C-SAR DATA WITH NUMERICAL TRACER MODELLING

    Directory of Open Access Journals (Sweden)

    M. Gade

    2017-11-01

    Full Text Available This Pilot Study aimed at improving the information on the state of the Indonesian marine environment that is gained from satellite data. More than 2000 historical and actual synthetic aperture radar (SAR) data from ENVISAT ASAR and Sentinel-1A/B C-SAR, respectively, were used to produce oil pollution density maps of two regions of interest (ROI) in Indonesian waters. The normalized spill number and the normalized mean polluted area were calculated, and our findings indicate that, in general, the marine oil pollution in both ROI is of different origin: while ship traffic appears to be the main source in the Java Sea, the oil production industry causes the highest pollution rates in the Strait of Makassar. In most cases hot spots of marine oil pollution were found in the open sea, and the largest number of oil spills in the Java Sea was found from March to May and from September to December, i.e., during the transition from the north-west monsoon to the south-east monsoon, and vice versa. This is when the overall wind and current patterns change, thereby making oil pollution detection with SAR sensors easier. In support of our SAR image analyses, high-resolution numerical forward and backward tracer experiments were performed. Using the previously gained information we identify strongly affected coastal areas (with most oil pollution being driven onshore), but also sensitive parts of major ship traffic lanes (where any oil pollution is likely to be driven into marine protected areas). Our results demonstrate the feasibility of our approach of combining numerical tracer modelling with (visual) SAR image analyses for an assessment of the marine environment in Indonesian waters, and they help in better understanding the observed seasonality.

  15. Mouse genetics and proteomic analyses demonstrate a critical role for complement in a model of DHRD/ML, an inherited macular degeneration

    Science.gov (United States)

    Garland, Donita L.; Fernandez-Godino, Rosario; Kaur, Inderjeet; Speicher, Kaye D.; Harnly, James M.; Lambris, John D.; Speicher, David W.; Pierce, Eric A.

    2014-01-01

    Macular degenerations, inherited and age related, are important causes of vision loss. Human genetic studies have suggested perturbation of the complement system is important in the pathogenesis of age-related macular degeneration. The mechanisms underlying the involvement of the complement system are not understood, although complement and inflammation have been implicated in drusen formation. Drusen are an early clinical hallmark of inherited and age-related forms of macular degeneration. We studied one of the earliest stages of macular degeneration which precedes and leads to the formation of drusen, i.e. the formation of basal deposits. The studies were done using a mouse model of the inherited macular dystrophy Doyne Honeycomb Retinal Dystrophy/Malattia Leventinese (DHRD/ML) which is caused by a p.Arg345Trp mutation in EFEMP1. The hallmark of DHRD/ML is the formation of drusen at an early age, and gene targeted Efemp1R345W/R345W mice develop extensive basal deposits. Proteomic analyses of Bruch's membrane/choroid and Bruch's membrane in the Efemp1R345W/R345W mice indicate that the basal deposits comprise normal extracellular matrix (ECM) components present in abnormal amounts. The proteomic analyses also identified significant changes in proteins with immune-related function, including complement components, in the diseased tissue samples. Genetic ablation of the complement response via generation of Efemp1R345W/R345W:C3−/− double-mutant mice inhibited the formation of basal deposits. The results demonstrate a critical role for the complement system in basal deposit formation, and suggest that complement-mediated recognition of abnormal ECM may participate in basal deposit formation in DHRD/ML and perhaps other macular degenerations. PMID:23943789

  16. Hydrological Assessment of Model Performance and Scenario Analyses of Land Use Change and Climate Change in lowlands of Veneto Region (Italy)

    Science.gov (United States)

    Pijl, Anton; Brauer, Claudia; Sofia, Giulia; Teuling, Ryan; Tarolli, Paolo

    2017-04-01

    Growing water-related challenges in lowland areas of the world call for good assessment of our past and present actions, in order to guide our future decisions. The novel Wageningen Lowland Runoff Simulator (WALRUS; Brauer et al., 2014) was developed to simulate hydrological processes and has shown promising performance in recent studies in the Netherlands. Here the model was applied to a coastal basin of 2800 ha in the Veneto Region (northern Italy) to test model performance and evaluate scenario analyses of land use change and climate change. Located partially below sea level, the reclaimed area is facing persistent land transformation and climate change trends, which alter not only the processes in the catchment but also the demands on it (Tarolli and Sofia, 2016). First, results of the calibration (NSE = 0.77; one-year simulation, daily resolution) and validation (NSE = 0.53; idem) showed that the model is able to reproduce the dominant hydrological processes of this lowland area (e.g. discharge and groundwater fluxes). Land use scenarios between 1951 and 2060 were constructed using demographic models, supported by orthographic interpretation techniques. Climate scenarios were constructed from historical records and from future projections by the COSMO-CLM regional climate model (Rockel et al., 2008) under the RCP4.5 pathway. WALRUS simulations showed that the land use changes result in a wetter catchment with more discharge, and the climatic changes cause more extremes, with longer droughts and stronger rain events. Combined, these changes show drier summers (-33% rainfall, +27% soil moisture deficit) and wetter (+13% rainfall), more intense (+30% rain intensity) autumns and winters in the future. The simulated discharge regime, particularly peak flow, follows these polarising trends, in good agreement with similar studies in the geographical zone (e.g. Vezzoli et al., 2015). This will increase the pressure on the fully artificial drainage and agricultural systems
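
    The calibration and validation scores quoted above are Nash-Sutcliffe efficiencies (NSE): one minus the ratio of the squared model error to the variance of the observations, so NSE = 1 is a perfect fit and NSE ≤ 0 means the model does no better than the observed mean. A minimal sketch (the discharge series below is invented for illustration):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

# invented daily discharge values: observed vs. simulated
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
sim = [1.1, 1.9, 3.2, 3.8, 5.1]
print(round(nse(obs, sim), 3))  # close to 1 indicates a good fit
```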

  17. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
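
    The T-based Shannon entropy mentioned above reduces, for a given stochastic matrix T, to the entropy of each row's outgoing transition distribution. A sketch on a toy 3-state matrix (the construction of T from a network class is not reproduced here, so the matrix below is an invented stand-in):

```python
import math

def row_entropy(T):
    """Mean Shannon entropy (bits) of the outgoing transition distributions."""
    entropies = []
    for row in T:
        entropies.append(-sum(p * math.log2(p) for p in row if p > 0))
    return sum(entropies) / len(entropies)

# toy 3-state stochastic matrix (each row sums to 1)
T = [[0.50, 0.50, 0.00],
     [1.00, 0.00, 0.00],
     [0.25, 0.25, 0.50]]
print(row_entropy(T))  # averages the per-state uncertainty about the next transition
```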

  18. Light particle probes of expansion and temperature evolution: Coalescence model analyses of heavy ion collisions at 47A MeV

    International Nuclear Information System (INIS)

    Hagel, K.; Wada, R.; Cibor, J.; Lunardon, M.; Marie, N.; Alfaro, R.; Shen, W.; Xiao, B.; Zhao, Y.; Majka, Z.

    2000-01-01

    The reactions 12C + 116Sn, 22Ne + Ag, 40Ar + 100Mo, and 64Zn + 89Y have been studied at 47A MeV projectile energy. For these reactions the most violent collisions lead to increasing amounts of fragment and light particle emission as the projectile mass increases. This is consistent with quantum molecular dynamics (QMD) model simulations of the collisions. Moving source fits to the light charged particle data have been used to gain a global view of the evolution of the particle emission. Comparisons of the multiplicities and spectra of light charged particles emitted in the reactions with the four different projectiles indicate a common emission mechanism for early emitted ejectiles even though the deposited excitation energies differ greatly. The spectra for such ejectiles can be characterized as emission in the nucleon-nucleon frame. Evidence that the 3He yield is dominated by this type of emission and the role of the collision dynamics in determining the 3H/3He yield ratio are discussed. Self-consistent coalescence model analyses are applied to the light cluster yields, in an attempt to probe emitter source sizes and to follow the evolution of the temperatures and densities from the time of first particle emission to equilibration. These analyses exploit correlations between ejectile energy and emission time, suggested by the QMD calculations. In this analysis the degree of expansion of the emitting system is found to increase with increasing projectile mass. The double isotope yield ratio temperature drops as the system expands. Average densities as low as 0.36 ρ0 are reached at a time near 100 fm/c after contact. Calorimetric methods were used to derive the mass and excitation energy of the excited nuclei which are present after preequilibrium emission. The derived masses range from 102 to 116 u and the derived excitation energies increase from 2.6 to 6.9 MeV/nucleon with increasing projectile mass. A caloric curve is derived for these expanded A ∼ 110
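
    The double isotope yield ratio temperature referred to above is conventionally obtained from an Albergo-style thermometer; for the (d/t)/(3He/4He) yield combination the standard constants are 14.3 MeV and 1.59. A sketch under that assumption (the yields below are invented, not data from this experiment):

```python
import math

def double_isotope_temperature(y_d, y_t, y_he3, y_he4):
    """Albergo-style double isotope yield ratio thermometer (MeV).

    T = B / ln(a * R) with B = 14.3 MeV and a = 1.59 for the
    (d/t)/(3He/4He) yield pair; R is the double ratio of yields.
    """
    R = (y_d / y_t) / (y_he3 / y_he4)
    return 14.3 / math.log(1.59 * R)

# invented light-cluster yields
print(round(double_isotope_temperature(100.0, 40.0, 25.0, 50.0), 2))
```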

  19. Boreal lakes moderate seasonal and diurnal temperature variation and perturb atmospheric circulation: analyses in the Community Earth System Model 1 (CESM1

    Directory of Open Access Journals (Sweden)

    William J. Riley

    2012-02-01

    Full Text Available We used a lake thermal physics model recently coupled into the Community Earth System Model 1 (CESM1) to study the effects of lake distribution in present and future climate. Under present climate, correcting the large underestimation of lake area in CESM1 (denoted CCSM4 in the configuration used here) caused 1 °C spring decreases and fall increases in surface air temperature throughout large areas of Canada and the US. Simulated summer surface diurnal air temperature range decreased by up to 4 °C, reducing CCSM4 biases. These changes were much larger than those resulting from prescribed lake disappearance in some present-day permafrost regions under doubled-CO2 conditions. Correcting the underestimation of lake area in present climate caused widespread high-latitude summer cooling at 850 hPa. Significant remote changes included decreases in the strength of fall Southern Ocean westerlies. We found significantly different winter responses when separately analysing 45-yr subperiods, indicating that relatively long simulations are required to discern the impacts of surface changes on remote conditions. We also investigated the surface forcing of lakes using idealised aqua-planet experiments which showed that surface changes of 2 °C in the Northern Hemisphere extra-tropics could cause substantial changes in precipitation and winds in the tropics and Southern Hemisphere. Shifts in the Inter-Tropical Convergence Zone were opposite in sign to those predicted by some previous studies. Zonal mean circulation changes were consistent in character but much larger than those occurring in the lake distribution experiments, due to the larger magnitude and more uniform surface forcing in the idealised aqua-planet experiments.

  20. Boreal lakes moderate seasonal and diurnal temperature variation and perturb atmospheric circulation: Analyses in the Community Earth System Model 1 (CESM1)

    Energy Technology Data Exchange (ETDEWEB)

    Subin, Zachary M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Div.; Murphy, Lisa N. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Div.; Li, Fiyu [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Div.; Bonfils, Celine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Program for Climate Model Diagnosis and Intercomparison; Riley, William J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Div.

    2012-01-15

    We used a lake thermal physics model recently coupled into the Community Earth System Model 1 (CESM1) to study the effects of lake distribution in present and future climate. Under present climate, correcting the large underestimation of lake area in CESM1 (denoted CCSM4 in the configuration used here) caused 1 °C spring decreases and fall increases in surface air temperature throughout large areas of Canada and the US. Simulated summer surface diurnal air temperature range decreased by up to 4 °C, reducing CCSM4 biases. These changes were much larger than those resulting from prescribed lake disappearance in some present-day permafrost regions under doubled-CO2 conditions. Correcting the underestimation of lake area in present climate caused widespread high-latitude summer cooling at 850 hPa. Significant remote changes included decreases in the strength of fall Southern Ocean westerlies. We found significantly different winter responses when separately analysing 45-yr subperiods, indicating that relatively long simulations are required to discern the impacts of surface changes on remote conditions. We also investigated the surface forcing of lakes using idealised aqua-planet experiments which showed that surface changes of 2 °C in the Northern Hemisphere extra-tropics could cause substantial changes in precipitation and winds in the tropics and Southern Hemisphere. Shifts in the Inter-Tropical Convergence Zone were opposite in sign to those predicted by some previous studies. Zonal mean circulation changes were consistent in character but much larger than those occurring in the lake distribution experiments, due to the larger magnitude and more uniform surface forcing in the idealised aqua-planet experiments.

  1. Pertinence de l’utilisation du modèle de Toulmin dans l’analyse de corpus The relevance of Toulmin’s model in case studies

    Directory of Open Access Journals (Sweden)

    Emmanuel de Jonge

    2008-09-01

    the Second World War will constitute an anti-example from which the means of preventing the event's recurrence must be drawn. We use the term disenchantment to describe this psycho-cognitive disposition, which affects the whole of rhetoric. In opposition discourses, the genocide intrudes into politics, whether in negationist discourses, conspiracy theories, or discourses on slavery, bringing into play a competition among victims. The Toulminian method of analysis makes this visible, and thus establishes an essential link between rhetoric and discourse analysis. How can we use Toulmin's model in the analysis of specific case studies? It seems that it has been almost exclusively used for either discussions in logical reasoning, or for theoretical discussions about probability. In this article, I want to show that Toulmin's model can be very useful for Discourse Analysis since it allows the analyst to extract mostly implicit warrants and backings. In my view, the notion of backing is related, on the one hand, to its technical argumentative side, and on the other hand, to the psycho-cognitive foundation of any reasoning, linked both to cognitive dispositions and to cultural factors. The Toulminian analysis thus proves to be very interesting when discussing deep-level arguments in a text. The article is divided into two sections. The first one is devoted to a comparative analysis of three Declarations: the 1776 American Declaration of Independence, the 1789 Declaration of the Rights of Man and the Citizen, and the 1948 Universal Declaration of Human Rights. I assume that the declarations constitute the linguistic expression of the deep-level or “backing” of society's discourses. They show which warrants the argumentation is based on, in other words, on the basis of which values the society is going to argue. In the specific context of democracies concerned with

  2. Combined analyses of costs, market value and eco-costs in circular business models : eco-efficient value creation in remanufacturing

    NARCIS (Netherlands)

    Vogtländer, J.G.; Scheepens, A.E.; Bocken, N.M.P.; Peck, D.P.

    2017-01-01

    Eco-efficient Value Creation is a method to analyse innovative product and service design together with circular business strategies. The method is based on combined analyses of the costs, market value (perceived customer value) and eco-costs. This provides a prevention-based single indicator for

  3. Mitochondrial DNA analyses and ecological niche modeling reveal post-LGM expansion of the Assam macaque (Macaca assamensis) in the foothills of Nepal Himalaya.

    Science.gov (United States)

    Khanal, Laxman; Chalise, Mukesh K; He, Kai; Acharya, Bipin K; Kawamoto, Yoshi; Jiang, Xuelong

    2018-03-01

    Genetic diversity of a species is influenced by multiple factors, including the Quaternary glacial-interglacial cycles and geophysical barriers. Such factors are not yet well documented for fauna from the southern border of the Himalayan region. This study used mitochondrial DNA (mtDNA) sequences and ecological niche modeling (ENM) to explore how the late Pleistocene climatic fluctuations and complex geography of the Himalayan region have shaped genetic diversity, population genetic structure, and demographic history of the Nepalese population of Assam macaques (Macaca assamensis) in the Himalayan foothills. A total of 277 fecal samples were collected from 39 wild troops over almost the entire distribution of the species in Nepal. The mtDNA fragment encompassing the complete control region (1121 bp) was recovered from 208 samples, thus defining 54 haplotypes. Results showed low nucleotide diversity (0.0075 ± SD 0.0001) but high haplotype diversity (0.965 ± SD 0.004). The mtDNA sequences revealed a shallow population genetic structure with a moderate but statistically significant effect of isolation by distance. Demographic history analyses using mtDNA sequences suggested a post-Pleistocene population expansion. Paleodistribution reconstruction projected that the potential habitat of the Assam macaque was confined to the lower elevations of central Nepal during the Last Glacial Maximum. With the onset of the Holocene climatic optimum, the glacial refugia population experienced eastward range expansion to higher elevations. We conclude that the low genetic diversity and shallow population genetic structure of the Assam macaque population in the Nepal Himalaya region are the consequence of recent demographic and spatial expansion. © 2018 Wiley Periodicals, Inc.
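
    The haplotype diversity reported above (0.965) is Nei's gene diversity, h = n/(n-1) · (1 - Σ p_i²), where the p_i are haplotype frequencies among n sampled sequences. A minimal sketch with invented haplotype counts (not the study's data):

```python
def haplotype_diversity(counts):
    """Nei's haplotype (gene) diversity: h = n/(n-1) * (1 - sum of p_i^2)."""
    n = sum(counts)
    freq_sq = sum((c / n) ** 2 for c in counts)
    return n / (n - 1) * (1 - freq_sq)

# invented counts of each haplotype among 20 sampled sequences
counts = [10, 5, 3, 2]
print(round(haplotype_diversity(counts), 4))  # higher values = more even haplotype mix
```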

  4. Persistent Monitoring of Urban Infrasound Phenomenology. Report 1: Modeling an Urban Environment for Acoustical Analyses using the 3-D Finite-Difference Time-Domain Program PSTOP3D

    Science.gov (United States)

    2015-08-01

    ERDC TR-15-5, August 2015 (Remote Assessment of Critical Infrastructure): Persistent Monitoring of Urban Infrasound Phenomenology. Report 1: Modeling an Urban Environment for Acoustical Analyses using the 3-D Finite-Difference Time-Domain Program PSTOP3D.

  5. Development of a model performance-based sign sheeting specification based on the evaluation of nighttime traffic signs using legibility and eye-tracker data : data and analyses.

    Science.gov (United States)

    2010-09-01

    This report presents data and technical analyses for Texas Department of Transportation Project 0-5235. This : project focused on the evaluation of traffic sign sheeting performance in terms of meeting the nighttime : driver needs. The goal was to de...

  6. Conceptual aspects: analyses law, ethical, human, technical, social factors of development ICT, e-learning and intercultural development in different countries setting out the previous new theoretical model and preliminary findings

    NARCIS (Netherlands)

    Kommers, Petrus A.M.; Smyrnova-Trybulska, Eugenia; Morze, Natalia; Issa, Tomayess; Issa, Theodora

    2015-01-01

    This paper, prepared by an international team of authors focuses on the conceptual aspects: analyses law, ethical, human, technical, social factors of ICT development, e-learning and intercultural development in different countries, setting out the previous and new theoretical model and preliminary

  7. Onset of slugging criterion based on singular points and stability analyses of transient one-dimensional two-phase flow equations of two-fluid model

    International Nuclear Information System (INIS)

    Sung, Chang Kyung; Chun, Moon Hyun

    1996-01-01

    A two-step approach has been used to obtain a new criterion for the onset of slug formation: (1) In the first step, a more general expression for the onset of slug flow than those of existing models has been derived from the analysis of singular points and neutral stability conditions of the transient one-dimensional two-phase flow equations of the two-fluid model. (2) In the second step, by introducing simplifications and incorporating a parameter into the general expression obtained in the first step so as to satisfy a number of a priori specified physical conditions, a new simple criterion for the onset of slug flow has been derived. Comparisons of the present model with existing models and experimental data show that the present model agrees very closely with Taitel and Dukler's model and with experimental data in horizontal pipes. In an inclined pipe (θ = 50 deg), however, the difference between the predictions of the present model and those of existing models is appreciably large, and the present model gives the best agreement with Ohnuki et al.'s data. 17 refs., 5 figs., 1 tab. (author)

  8. Proteomic Analyses of the Acute Tissue Response for Explant Rabbit Corneas and Engineered Corneal Tissue Models Following In Vitro Exposure to 1540 nm Laser Light

    National Research Council Canada - National Science Library

    Eurell, T. E; Johnson, T. E; Roach, W. P

    2005-01-01

    Two-dimensional electrophoresis and histomorphometry were used to determine if equivalent protein changes occurred within native rabbit corneas and engineered corneal tissue models following in vitro...

  9. Steady- and transient-state analyses of fully ceramic microencapsulated fuel loaded reactor core via two-temperature homogenized thermal-conductivity model

    International Nuclear Information System (INIS)

    Lee, Yoonhee; Cho, Nam Zin

    2015-01-01

    Highlights: • Fully ceramic microencapsulated fuel-loaded core is analyzed via a two-temperature homogenized thermal-conductivity model. • The model is compared to harmonic- and volumetric-average thermal conductivity models. • The three thermal analysis models show ∼100 pcm differences in the k_eff eigenvalue. • The three thermal analysis models show more than 70 K differences in the maximum temperature. • The maximum power for a control rod ejection accident differs by more than a factor of 3. - Abstract: Fully ceramic microencapsulated (FCM) fuel, a type of accident-tolerant fuel (ATF), consists of TRISO particles randomly dispersed in a SiC matrix. In this study, for a thermal analysis of the FCM fuel with such a high heterogeneity, a two-temperature homogenized thermal-conductivity model was applied by the authors. This model provides separate temperatures for the fuel-kernels and the SiC matrix. It also provides more realistic temperature profiles than those of harmonic- and volumetric-average thermal conductivity models, which are used for thermal analysis of a fuel element in VHTRs having a composition similar to the FCM fuel, because such models are unable to provide the fuel-kernel and graphite matrix temperatures separately. In this study, coupled with a neutron diffusion model, a FCM fuel-loaded reactor core is analyzed via a two-temperature homogenized thermal-conductivity model at steady and transient states. The results are compared to those from harmonic- and volumetric-average thermal conductivity models, i.e., we compare k_eff eigenvalues, power distributions, and temperature profiles in the hottest single-channel at steady state. At transient state, we compare total powers, reactivity, and maximum temperatures in the hottest single-channel obtained by the different thermal analysis models. The different thermal analysis models and the availability of fuel-kernel temperatures in the two-temperature homogenized thermal
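
    The harmonic- and volumetric-average thermal conductivity models that the two-temperature model is compared against are simple mixture rules over the constituent conductivities and volume fractions. A sketch with invented values (not FCM fuel data):

```python
def volumetric_average(k_values, fractions):
    """Arithmetic (volume-weighted) mixture conductivity."""
    return sum(k * f for k, f in zip(k_values, fractions))

def harmonic_average(k_values, fractions):
    """Harmonic (series-resistance) mixture conductivity."""
    return 1.0 / sum(f / k for k, f in zip(k_values, fractions))

# invented conductivities (W/m-K) and volume fractions for two constituents
k = [3.5, 30.0]
phi = [0.4, 0.6]
# the harmonic average is always the lower of the two bounds
print(volumetric_average(k, phi), harmonic_average(k, phi))
```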

  10. Palivizumab for immunoprophylaxis of respiratory syncytial virus (RSV) bronchiolitis in high-risk infants and young children: a systematic review and additional economic modelling of subgroup analyses.

    Science.gov (United States)

    Wang, D; Bayliss, S; Meads, C

    2011-01-01

    find any relevant studies that may have been missed. The risk factors identified from the systematic review of included studies were analysed and synthesised using stata. The base-case decision tree model developed in the original HTA journal publication [Health Technol Assess 2008;12(36)] was used to derive the cost-effectiveness of immunoprophylaxis of RSV using palivizumab in different subgroups of pre-term infants and young children who are at high risk of serious morbidity from RSV infection. Cost-effective spectra of prophylaxis with palivizumab compared with no prophylaxis for children without CLD/CHD, children with CLD, children with acyanotic CHD and children with cyanotic CHD were derived. Thirteen studies were included in this analysis. Analysis of 16,128 subgroups showed that prophylaxis with palivizumab may be cost-effective [at a willingness-to-pay threshold of £30,000/quality-adjusted life-year (QALY)] for some subgroups. For example, for children without CLD or CHD, the cost-effective subgroups included children under 6 weeks old at the start of the RSV season who had at least two other risk factors that were considered in this report and were born at 24 weeks gestational age (GA) or less, but did not include children who were > 9 months old at the start of the RSV season or had a GA of > 32 weeks. For children with CLD, the cost-effective subgroups included children 21 months old at the start of the RSV season. For children with acyanotic CHD, the cost-effective subgroups included children 21 months old at the start of the RSV season. For children with cyanotic CHD, the cost-effective subgroups included children 12 months old at the start of the RSV season. The poor quality of the studies feeding numerical results into this analysis means that the true cost-effectiveness may vary considerably from that estimated here. 
There is a risk that the relatively high mathematical precision of the point estimates of cost-effectiveness may be quite inaccurate
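
    Cost-effectiveness against a willingness-to-pay threshold such as the £30,000/QALY used above comes down to an incremental cost-effectiveness ratio (ICER) comparison. A sketch with invented costs and QALYs (not values from the review):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

THRESHOLD = 30_000  # willingness-to-pay per QALY, as in the review

# invented per-patient costs (£) and lifetime QALYs with/without prophylaxis
ratio = icer(12_000, 4_000, 10.2, 9.9)
print(ratio, ratio <= THRESHOLD)  # cost-effective if the ICER is below the threshold
```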

  11. Inventory of Simulation and Modeling for the Analysis of Ground Manoeuvre Performance (inventarisatie van vragen en modellen voor de analyse van het grondgebonden optreden)

    Science.gov (United States)

    2006-04-01

    2007. For staff training at battalion and brigade level, Germany uses the GUPPIS model. This model is comparable to KIBOWI, whereby the...

  12. Rainfall model investigation and scenario analyses of the effect of government reforestation policy on seasonal rainfalls: A case study from Northern Thailand

    Science.gov (United States)

    Duangdai, Eakkapong; Likasiri, Chulin

    2017-03-01

    In this work, 4 models for predicting rainfall amounts are investigated and compared using Northern Thailand's seasonal rainfall data for 1973-2008. Two models, global temperature, forest area and seasonal rainfall (TFR) and modified TFR based on a system of differential equations, give the relationships between global temperature, Northern Thailand's forest cover and seasonal rainfalls in the region. The other two models studied are time series and Autoregressive Moving Average (ARMA) models. All models are validated using the k-fold cross validation method, with the resulting errors being 0.971233, 0.740891, 2.376415 and 2.430891 for the time series, ARMA, TFR and modified TFR models, respectively. Under the Business as Usual (BaU) scenario, seasonal rainfalls in Northern Thailand are projected through the year 2020 using all 4 models. The TFR and modified TFR models are also used to further analyze how global temperature rise and government reforestation policy affect seasonal rainfalls in the region. Rainfall projections obtained via the two models are also compared with those from the Intergovernmental Panel on Climate Change (IPCC) under the IS92a scenario. Results obtained through a mathematical model for global temperature, forest area and seasonal rainfall show that the higher the forest cover, the less fluctuation there is between rainy-season and summer rainfalls. Moreover, growth in forest cover also correlates with an increase in summer rainfalls. An investigation into the relationship between main crop productions and rainfalls in dry and rainy seasons indicates that if the rainy-season rainfall is high, that year's main-crop rice production will decrease but the second-crop rice, maize, sugarcane and soybean productions will increase in the following year.
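
    The k-fold cross validation used above to rank the four rainfall models can be sketched with a naive mean forecaster standing in for the TFR/ARMA predictors (the series below is invented):

```python
def kfold_rmse(series, k):
    """k-fold cross-validation RMSE of a naive mean-value forecaster.

    Each fold is held out in turn; the model is 'fit' on the remaining
    data (here, just its mean) and scored on the held-out fold.
    """
    n = len(series)
    fold = n // k
    errors = []
    for i in range(k):
        test = series[i * fold:(i + 1) * fold]
        train = series[:i * fold] + series[(i + 1) * fold:]
        pred = sum(train) / len(train)  # naive baseline forecast
        mse = sum((y - pred) ** 2 for y in test) / len(test)
        errors.append(mse ** 0.5)
    return sum(errors) / k

# invented seasonal rainfall series, 4 folds as in the study
print(kfold_rmse([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0], 4))
```

A real comparison would swap the mean forecaster for each candidate model and keep the fold bookkeeping unchanged.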

  13. Assessment of anti-inflammatory and anti-arthritic properties of Acmella uliginosa (Sw.) Cass. based on experiments in arthritic rat models and qualitative GC/MS analyses.

    Directory of Open Access Journals (Sweden)

    Subhashis Paul

    2016-09-01

    of AU and AV showed the best recovery potential in all the studied parameters, confirming the synergistic efficacy of the herbal formulation. GC/MS analyses revealed the presence of at least 5 anti-inflammatory compounds, including 9-octadecenoic acid (Z)-, phenylmethyl ester, astaxanthin, α-N-Normethadol and fenretinide, that have reported anti-inflammatory/anti-arthritic properties. Conclusion: Our findings indicated that the crude flower homogenate of AU contains potential anti-inflammatory compounds which could be used as an anti-inflammatory/anti-arthritic medication. [J Complement Med Res 2016; 5(3): 257-262]

  14. Hydraulic properties of fracture networks: a discussion of flow models compatible with the main geometric properties

    Energy Technology Data Exchange (ETDEWEB)

    Dreuzy, J.R. de

    1999-12-15

    Fractured media are studied in the general framework of oil and water supply and, more recently, of the underground storage of high-level nuclear wastes. As fractures are generally far more permeable than the embedding medium, flow is highly channeled in a complex network of fractures. The complexity of the network comes from the broad distributions of fracture length and permeability at the fracture scale and appears through the increase of the equivalent permeability at the network scale. The goal of this thesis is to develop models of fracture networks consistent with both local-scale and global-scale observations. Bidimensional models of fracture networks display a wide variety of flow structures, ranging from a single permeable fracture to an equivalent homogeneous medium. The type of relevant structure depends not only on the density and the length and aperture distributions but also on the observation scale. In several models, a crossover scale separates highly channeled complex structures from more distributed and homogeneous-like flow patterns at larger scales. These models, built on local characteristics and validated by global properties, have been established in steady state. They have also been compared to natural well test data obtained in Ploemeur (Morbihan) in transient state. The good agreement between models and data reinforces the relevance of the models. Once validated and calibrated, the models are used to estimate the global tendencies of the main flow properties and the risk associated with the relative lack of data on natural fractured media. (author)

  15. Return-on-Investment (ROI) Analyses of an Inpatient Lay Health Worker Model on 30-Day Readmission Rates in a Rural Community Hospital.

    Science.gov (United States)

    Cardarelli, Roberto; Bausch, Gregory; Murdock, Joan; Chyatte, Michelle Renee

    2017-07-07

    The purpose of the study was to assess the return-on-investment (ROI) of an inpatient lay health worker (LHW) model in a rural Appalachian community hospital impacting 30-day readmission rates. The Bridges to Home (BTH) study completed an evaluation in 2015 of an inpatient LHW model in a rural Kentucky hospital that demonstrated a reduction in 30-day readmission rates by 47.7% compared to a baseline period. Using the hospital's utilization and financial data, a validated ROI calculator specific to care transition programs was used to assess the ROI of the BTH model comparing 3 types of payment models including Diagnosis Related Group (DRG)-only payments, pay-for-performance (P4P) contracts, and accountable care organizations (ACOs). The BTH program had a -$0.67 ROI if the hospital had only a DRG-based payment model. If the hospital had P4P contracts with payers and 0.1% of its annual operating revenue was at risk, the ROI increased to $7.03 for every $1 spent on the BTH program. However, if the hospital was an ACO as was the case for this study's community hospital, the ROI significantly increased to $38.48 for every $1 spent on the BTH program. The BTH model showed a viable ROI to be considered by community hospitals that are part of an ACO or P4P program. A LHW care transition model may be a cost-effective alternative for impacting excess 30-day readmissions and avoiding associated penalties for hospital systems with a value-based payment model. © 2017 National Rural Health Association.
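
    The ROI figures quoted above follow the standard definition of net benefit per dollar of program cost. The sketch below is a hypothetical reconstruction: the cost and benefit figures are invented so that the two cases land on the reported -$0.67 and $7.03, and the validated care-transition ROI calculator used in the study is not modeled:

```python
def roi_per_dollar(program_cost, benefits):
    """Return on investment per $1 spent: (total benefit - cost) / cost."""
    return (sum(benefits) - program_cost) / program_cost

# Hypothetical inputs chosen only to reproduce the reported ratios;
# they are not the hospital's actual financial data.
cost = 100_000.0
drg_only = [33_000.0]          # avoided-readmission revenue alone
p4p = drg_only + [770_000.0]   # plus at-risk pay-for-performance dollars

print(round(roi_per_dollar(cost, drg_only), 2))  # -0.67, as under DRG-only
print(round(roi_per_dollar(cost, p4p), 2))       # 7.03, as under P4P
```

    Under a DRG-only payment model the avoided-readmission revenue alone does not cover the program cost, while counting at-risk contract dollars as benefits turns the same program strongly positive, which is the pattern the study reports.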

  16. Periodic safety analyses

    International Nuclear Information System (INIS)

    Gouffon, A.; Zermizoglou, R.

    1990-12-01

    The IAEA Safety Guide 50-SG-S8 devoted to 'Safety Aspects of Foundations of Nuclear Power Plants' indicates that the operator of an NPP should establish a program for inspection of safe operation during construction, start-up and the service life of the plant, in order to obtain the data needed for estimating the lifetime of structures and components. At the same time, the program should ensure that the safety margins remain adequate. Periodic safety analyses are an important part of the safety inspection program. Periodic safety reporting is a method for testing the whole safety system, or a part of it, against precise criteria. Periodic safety analyses are not meant for qualification of the plant components. Separate analyses are devoted to start-up, qualification of components and materials, and aging. All these analyses are described in this presentation. The last chapter describes the experience obtained with PWR-900 and PWR-1300 units from 1986-1989

  17. Laser Beam Focus Analyser

    DEFF Research Database (Denmark)

    Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove

    2007-01-01

    the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing the beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating...... mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture...... and finally data analysis based on the ISO approach. The device was calibrated and tested on commercially available laser systems. It showed good reproducibility. The target was to be able to measure CW lasers with powers of up to 200 W, focused down to spot diameters in the range of 10 µm. In order...

  18. Experimental and in silico modelling analyses of the gene expression pathway for recombinant antibody and by-product production in NS0 cell lines.

    Directory of Open Access Journals (Sweden)

    Emma J Mead

    Full Text Available Monoclonal antibodies are commercially important, high value biotherapeutic drugs used in the treatment of a variety of diseases. These complex molecules consist of two heavy chain and two light chain polypeptides covalently linked by disulphide bonds. They are usually expressed as recombinant proteins from cultured mammalian cells, which are capable of correctly modifying, folding and assembling the polypeptide chains into the native quaternary structure. Such recombinant cell lines often vary in the amounts of product produced and in the heterogeneity of the secreted products. The biological mechanisms of this variation are not fully defined. Here we have utilised experimental and modelling strategies to characterise and define the biology underpinning product heterogeneity in cell lines exhibiting varying antibody expression levels, and then experimentally validated these models. In undertaking these studies we applied and validated biochemical (rate-constant based) and engineering (nonlinear) models of antibody expression to experimental data from four NS0 cell lines with different IgG4 secretion rates. The models predict that export of the full antibody and its fragments are intrinsically linked, and cannot therefore be manipulated individually at the level of the secretory machinery. Instead, the models highlight strategies for the manipulation at the precursor species level to increase recombinant protein yields in both high and low producing cell lines. The models also highlight cell line specific limitations in the antibody expression pathway.

  19. Contesting Citizenship: Comparative Analyses

    DEFF Research Database (Denmark)

    Siim, Birte; Squires, Judith

    2007-01-01

    importance of particularized experiences and multiple inequality agendas). These developments shape the way citizenship is both practiced and analysed. Mapping neat citizenship models onto distinct nation-states and evaluating these in relation to formal equality is no longer an adequate approach... Comparative citizenship analyses need to be considered in relation to multiple inequalities and their intersections and to multiple governance and trans-national organising. This, in turn, suggests that comparative citizenship analysis needs to consider new spaces in which struggles for equal citizenship occur...

  20. Effects of Added Enzymes on Sorted, Unsorted and Sorted-Out Barley: A Model Study on Realtime Viscosity and Process Potentials Using Rapid Visco Analyser

    DEFF Research Database (Denmark)

    Shetty, Radhakrishna; Zhuang, Shiwen; Olsen, Rasmus Lyngsø

    2017-01-01

    Barley sorting is an important step for selecting grain of required quality for malting prior to brewing. However, brewing with unmalted barley with added enzymes has been thoroughly proven, raising the question of whether traditional sorting for high quality malting-barley is still necessary....... To gain more insight on this, we examine realtime viscosity of sorted-out and unsorted barley during downscaled mashing with added enzymes in comparison with malting quality sorted barley. A rapid visco analyser was used to simulate brewery mashing process at lab scale together with two commercial enzymes...... (Ondea®-Pro and Cellic®-CTec2). During downscaled mashing, viscosity profile of sorted-out barley was markedly different from others, irrespective of enzyme type, whereas a small difference was observed between the sorted and un-sorted barley. Furthermore, whilst sorted-out barley generated lowest sugar...

  1. Dialogisk kommunikationsteoretisk analyse

    DEFF Research Database (Denmark)

    Phillips, Louise Jane

    2018-01-01

    an analytical method that has been developed within dialogic communication research - The Integrated Framework for Analysing Dialogic Knowledge Production and Communication (IFADIA). The IFADIA method builds on a combination of Bakhtin's dialogue theory and Foucault's theory of power/knowledge and discourse. The method is intended...

  2. Probabilistic safety analyses (PSA)

    International Nuclear Information System (INIS)

    1997-01-01

    The guide shows how the probabilistic safety analyses (PSA) are used in the design, construction and operation of light water reactor plants in order for their part to ensure that the safety of the plant is good enough in all plant operational states

  3. Meta-analyses

    NARCIS (Netherlands)

    Hendriks, Maria A.; Luyten, Johannes W.; Scheerens, Jaap; Sleegers, P.J.C.; Scheerens, J

    2014-01-01

    In this chapter results of a research synthesis and quantitative meta-analyses of three facets of time effects in education are presented, namely time at school during regular lesson hours, homework, and extended learning time. The number of studies for these three facets of time that could be used

  4. Analysing Access Control Specifications

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2009-01-01

    . Recent events have revealed intimate knowledge of surveillance and control systems on the side of the attacker, making it often impossible to deduce the identity of an inside attacker from logged data. In this work we present an approach that analyses the access control configuration to identify the set...

  5. Wavelet Analyses and Applications

    Science.gov (United States)

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  6. Filmstil - teori og analyse

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    Film style decisively shapes our experience of film. Yet film style - the way the moving images organise the narrative - receives far less attention than a film's plot when we talk about film. Filmstil - teori og analyse is a richly exemplified presentation, critique and further development of...

  7. Risico-analyse brandstofpontons

    NARCIS (Netherlands)

    Uijt de Haag P; Post J; LSO

    2001-01-01

    To determine the risks posed by fuel pontoons in a marina, a generic risk analysis was carried out. A reference system was defined, consisting of a concrete fuel pontoon with a relatively large capacity and throughput. It is assumed that the pontoon is located in a

  8. Role of aerosol uptake in controlling the oxidation of isoprene through reaction with NO3: model analyses of aircraft observations during RONOCO campaign

    Science.gov (United States)

    Biancofiore, Fabio; Di Carlo, Piero; Aruffo, Eleonora; Dari-Salisburgo, Cesare; Busilacchio, Marcella; Giammaria, Franco; Reeves, Claire; Moller, Sarah; Lee, James; Evans, Mathew J.; Stone, Daniel; Jones, Rod L.; Ouyang, Bin

    2013-04-01

    Isoprene is the most abundantly emitted non-methane biogenic volatile organic compound. Despite its high reactivity, late-day emissions remain in the atmosphere after sunset, affecting the nighttime chemistry of NOy (NO, NO2, HNO3, total peroxy nitrates (ΣRO2NO2), total alkyl nitrates (ΣRONO2), N2O5, NO3). Nocturnal observations of total peroxy nitrates and total alkyl nitrates carried out during summer 2010 above the United Kingdom (RONOCO campaign, on board the BAe-146 aircraft) are analyzed using a tropospheric chemistry box model (DSMACC). The model includes a detailed description of gas-phase atmospheric chemistry, the Master Chemical Mechanism (MCM) v3.2. In this work, aerosol uptake of peroxy nitrates and alkyl nitrates has been added to the standard mechanism. To investigate the role of aerosol uptake on the peroxy nitrates and alkyl nitrates, sensitivity tests varying the aerosol surface area and the uptake coefficient were carried out. The results show different responses of the model in simulating the concentrations of alkyl nitrates and peroxy nitrates to changes in the aerosol surface area and the uptake coefficient: decreasing the former increases both the modeled alkyl nitrate and peroxy nitrate concentrations, resulting in an overestimation of peroxy nitrates, whereas alkyl nitrates tend to be in better agreement with the measured data. Decreasing the uptake coefficient has effects similar to decreasing the aerosol surface area. Increasing the aerosol uptake causes a decrease in the modeled alkyl nitrate and peroxy nitrate concentrations, resulting in better agreement between measured and modeled peroxy nitrates. The impact of aerosol uptake on nighttime NOy chemistry and on isoprene degradation is also discussed.
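
    The sensitivity tests described above vary the aerosol surface area and the uptake coefficient. A common way to parameterise such uptake in a box model (not stated in the abstract, so an assumption here) is a first-order loss rate in the free-molecular limit, k = γ·c̄·Sa/4. The sketch below uses hypothetical values for a peroxy-nitrate-like species:

```python
import math

def mean_molecular_speed(T, M):
    """Mean thermal speed (m/s): c = sqrt(8RT / (pi M)), M in kg/mol."""
    R = 8.314  # gas constant, J mol^-1 K^-1
    return math.sqrt(8.0 * R * T / (math.pi * M))

def uptake_rate(gamma, T, M, surface_area_density):
    """First-order heterogeneous loss rate (s^-1) in the free-molecular
    limit: k = gamma * c * S_a / 4, with S_a in m^2 per m^3 of air."""
    return gamma * mean_molecular_speed(T, M) * surface_area_density / 4.0

# Hypothetical values for an RO2NO2-like species (M ~ 0.121 kg/mol):
k_base = uptake_rate(gamma=0.01, T=288.0, M=0.121, surface_area_density=1e-4)
k_low = uptake_rate(gamma=0.001, T=288.0, M=0.121, surface_area_density=1e-4)
print(k_low < k_base)  # a smaller uptake coefficient slows the loss
```

    Because the rate is linear in both γ and Sa, reducing either by the same factor has the same effect on the modeled loss, consistent with the abstract's observation that decreasing the uptake coefficient behaves like decreasing the aerosol surface area.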

  9. Applications of stochastic models and geostatistical analyses to study sources and spatial patterns of soil heavy metals in a metalliferous industrial district of China

    Energy Technology Data Exchange (ETDEWEB)

    Zhong, Buqing; Liang, Tao, E-mail: liangt@igsnrr.ac.cn; Wang, Lingqing; Li, Kexin

    2014-08-15

    An extensive soil survey was conducted to study pollution sources and delineate contamination of heavy metals in one of the metalliferous industrial bases, in the karst areas of southwest China. A total of 597 topsoil samples were collected and the concentrations of five heavy metals, namely Cd, As (metalloid), Pb, Hg and Cr were analyzed. Stochastic models including a conditional inference tree (CIT) and a finite mixture distribution model (FMDM) were applied to identify the sources and partition the contribution from natural and anthropogenic sources for heavy metal in topsoils of the study area. Regression trees for Cd, As, Pb and Hg were proved to depend mostly on indicators of anthropogenic activities such as industrial type and distance from urban area, while the regression tree for Cr was found to be mainly influenced by the geogenic characteristics. The FMDM analysis showed that the geometric means of modeled background values for Cd, As, Pb, Hg and Cr were close to their background values previously reported in the study area, while the contamination of Cd and Hg were widespread in the study area, imposing potentially detrimental effects on organisms through the food chain. Finally, the probabilities of single and multiple heavy metals exceeding the threshold values derived from the FMDM were estimated using indicator kriging (IK) and multivariate indicator kriging (MVIK). The high probabilities exceeding the thresholds of heavy metals were associated with metalliferous production and atmospheric deposition of heavy metals transported from the urban and industrial areas. Geostatistics coupled with stochastic models provide an effective way to delineate multiple heavy metal pollution to facilitate improved environmental management. - Highlights: • Conditional inference tree can identify variables controlling metal distribution. • Finite mixture distribution model can partition natural and anthropogenic sources. • Geostatistics with stochastic models

  10. HCV kinetic and modeling analyses project shorter durations to cure under combined therapy with daclatasvir and asunaprevir in chronic HCV-infected patients.

    Directory of Open Access Journals (Sweden)

    Laetitia Canini

    Full Text Available High cure rates are achieved in HCV genotype-1b patients treated with daclatasvir and asunaprevir, DCV/ASV. Here we analyzed early HCV kinetics in genotype-1b infected Japanese subjects treated with DCV/ASV and retrospectively projected, using mathematical modeling, whether shorter treatment durations might be effective. HCV RNA levels were measured frequently during DCV/ASV therapy in 95 consecutively treated patients at a single center in Japan. Mathematical modeling was used to predict the time to cure, i.e., <1 virus copy in the extracellular body fluid. Patients with HCV<15 IU/ml at week 1 (n = 27) were excluded from modeling analysis due to insufficient HCV RNA data points. Eighty nine of the 95 included patients (94%) achieved cure, 3 (3%) relapsed due to treatment-emergent resistance, and 3 (3%) completed therapy but were lost during follow up. Model fits from 68 patients with sufficient data points indicate that after a short pharmacological delay (15.4 min [relative standard error, rse = 26%]), DCV/ASV effectiveness in blocking HCV production was 0.999 [rse~0%], HCV half-life in blood was t1/2 = 1.7 hr [rse = 21%], and HCV-infected cell loss rate was 0.391/d [rse = 5%]. Modeling predicted that 100% and 98.5% of patients who had HCV<15 IU/ml at days 14 and 28 might have been cured with 6 and 8 weeks of therapy, respectively. There was a trend (p = 0.058) between younger age and shorter time to cure. Modeling early HCV kinetics under DCV/ASV predicts that most patients would achieve cure with short treatment durations, suggesting that 24 weeks of DCV/ASV treatment can be significantly shortened.

  11. HCV kinetic and modeling analyses project shorter durations to cure under combined therapy with daclatasvir and asunaprevir in chronic HCV-infected patients.

    Science.gov (United States)

    Canini, Laetitia; Imamura, Michio; Kawakami, Yoshiiku; Uprichard, Susan L; Cotler, Scott J; Dahari, Harel; Chayama, Kazuaki

    2017-01-01

    High cure rates are achieved in HCV genotype-1b patients treated with daclatasvir and asunaprevir, DCV/ASV. Here we analyzed early HCV kinetics in genotype-1b infected Japanese subjects treated with DCV/ASV and retrospectively projected, using mathematical modeling, whether shorter treatment durations might be effective. HCV RNA levels were measured frequently during DCV/ASV therapy in 95 consecutively treated patients at a single center in Japan. Mathematical modeling was used to predict the time to cure, i.e., <1 virus copy in the extracellular body fluid. Patients with HCV<15 IU/ml at week 1 (n = 27) were excluded from modeling analysis due to insufficient HCV RNA data points. Eighty nine of the 95 included patients (94%) achieved cure, 3 (3%) relapsed due to treatment-emergent resistance, and 3 (3%) completed therapy but were lost during follow up. Model fits from 68 patients with sufficient data points indicate that after a short pharmacological delay (15.4 min [relative standard error, rse = 26%]), DCV/ASV effectiveness in blocking HCV production was 0.999 [rse~0%], HCV half-life in blood was t1/2 = 1.7 hr [rse = 21%], and HCV-infected cell loss rate was 0.391/d [rse = 5%]. Modeling predicted that 100% and 98.5% of patients who had HCV<15 IU/ml at days 14 and 28 might have been cured with 6 and 8 weeks of therapy, respectively. There was a trend (p = 0.058) between younger age and shorter time to cure. Modeling early HCV kinetics under DCV/ASV predicts that most patients would achieve cure with short treatment durations, suggesting that 24 weeks of DCV/ASV treatment can be significantly shortened.
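
    The time-to-cure projection can be reproduced with the standard biphasic viral-decline solution (a Neumann-type kinetic model). The sketch below plugs in the fitted rates from the abstract, but the baseline viral load, the ~15 L of extracellular body fluid, and the 1 IU ≈ 1 copy conversion are assumptions:

```python
import math

def time_to_cure(V0_per_ml, eps, c, delta, volume_ml, tol_days=0.01):
    """Days until total extracellular virus falls below 1 copy under the
    standard biphasic decline model:
      V(t) = V0 [A e^{-l1 t} + (1-A) e^{-l2 t}]
      l1,2 = 0.5[(c+d) +/- sqrt((c-d)^2 + 4(1-eps)cd)], A = (eps*c - l2)/(l1 - l2)
    """
    root = math.sqrt((c - delta) ** 2 + 4.0 * (1.0 - eps) * c * delta)
    l1 = 0.5 * ((c + delta) + root)
    l2 = 0.5 * ((c + delta) - root)
    A = (eps * c - l2) / (l1 - l2)

    def total(t):  # total virus copies in the extracellular fluid at day t
        return V0_per_ml * volume_ml * (
            A * math.exp(-l1 * t) + (1 - A) * math.exp(-l2 * t))

    lo, hi = 0.0, 1000.0
    while hi - lo > tol_days:  # bisect for total(t) = 1 copy
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) > 1.0 else (lo, mid)
    return hi

# Rates per day from the abstract; V0 and the 15 L volume are assumed.
c = math.log(2) / (1.7 / 24.0)   # clearance from t1/2 = 1.7 h
t = time_to_cure(V0_per_ml=1e6, eps=0.999, c=c, delta=0.391, volume_ml=15000.0)
print(6 * 7 <= t <= 8 * 7)  # lands within the projected 6-8 week window
```

    With these inputs the projected cure time lands near six weeks, consistent with the abstract's conclusion that 24 weeks of DCV/ASV therapy could be substantially shortened.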

  12. Applications of stochastic models and geostatistical analyses to study sources and spatial patterns of soil heavy metals in a metalliferous industrial district of China

    International Nuclear Information System (INIS)

    Zhong, Buqing; Liang, Tao; Wang, Lingqing; Li, Kexin

    2014-01-01

    An extensive soil survey was conducted to study pollution sources and delineate contamination of heavy metals in one of the metalliferous industrial bases, in the karst areas of southwest China. A total of 597 topsoil samples were collected and the concentrations of five heavy metals, namely Cd, As (metalloid), Pb, Hg and Cr were analyzed. Stochastic models including a conditional inference tree (CIT) and a finite mixture distribution model (FMDM) were applied to identify the sources and partition the contribution from natural and anthropogenic sources for heavy metal in topsoils of the study area. Regression trees for Cd, As, Pb and Hg were proved to depend mostly on indicators of anthropogenic activities such as industrial type and distance from urban area, while the regression tree for Cr was found to be mainly influenced by the geogenic characteristics. The FMDM analysis showed that the geometric means of modeled background values for Cd, As, Pb, Hg and Cr were close to their background values previously reported in the study area, while the contamination of Cd and Hg were widespread in the study area, imposing potentially detrimental effects on organisms through the food chain. Finally, the probabilities of single and multiple heavy metals exceeding the threshold values derived from the FMDM were estimated using indicator kriging (IK) and multivariate indicator kriging (MVIK). The high probabilities exceeding the thresholds of heavy metals were associated with metalliferous production and atmospheric deposition of heavy metals transported from the urban and industrial areas. Geostatistics coupled with stochastic models provide an effective way to delineate multiple heavy metal pollution to facilitate improved environmental management. - Highlights: • Conditional inference tree can identify variables controlling metal distribution. • Finite mixture distribution model can partition natural and anthropogenic sources. • Geostatistics with stochastic models

  13. Proteomic and biochemical analyses reveal the activation of unfolded protein response, ERK-1/2 and ribosomal protein S6 signaling in experimental autoimmune myocarditis rat model

    Directory of Open Access Journals (Sweden)

    Kim Chan

    2011-10-01

    Full Text Available Abstract Background: To investigate the molecular and cellular pathogenesis underlying myocarditis, we used an experimental autoimmune myocarditis (EAM)-induced heart failure rat model that represents T cell mediated postinflammatory heart disorders. Results: By performing unbiased 2-dimensional electrophoresis of protein extracts from control rat heart tissues and EAM rat heart tissues, followed by nano-HPLC-ESI-QIT-MS, 67 proteins were identified from 71 spots that exhibited significantly altered expression levels. The majority of up-regulated proteins were confidently associated with unfolded protein responses (UPR), while the majority of down-regulated proteins were involved with the generation of precursor metabolites and energy metabolism in mitochondria. Although there was no difference in AKT signaling between EAM rat heart tissues and control rat heart tissues, the amounts and activities of extracellular signal-regulated kinase (ERK)-1/2 and ribosomal protein S6 (rpS6) were significantly increased. By comparing our data with the previously reported myocardial proteome of the Coxsackie viruses of group B (CVB)-mediated myocarditis model, we found that UPR-related proteins were commonly up-regulated in the two murine myocarditis models. Even though only two of the 29 down-regulated proteins in EAM rat heart tissues were also dysregulated in CVB-infected rat heart tissues, other proteins known to be involved with the generation of precursor metabolites and energy metabolism in mitochondria were also dysregulated in CVB-mediated myocarditis rat heart tissues, suggesting that impairment of mitochondrial functions may be a common underlying mechanism of the two murine myocarditis models. Conclusions: UPR, ERK-1/2 and rpS6 signaling were activated in both EAM- and CVB-induced myocarditis murine models. Thus, the conserved components of signaling pathways in the two murine models of acute myocarditis could be targets for developing new therapeutic drugs or

  14. Uncertainty and Sensitivity Analyses Plan

    International Nuclear Information System (INIS)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project
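
    A minimal version of the uncertainty-propagation step such a plan prescribes is Monte Carlo sampling of the unknown parameters, with a crude correlation-based sensitivity ranking. The three-factor dose model and its parameter distributions below are hypothetical illustrations, not the HEDRIC codes:

```python
import random

def pearson(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def monte_carlo(model, param_dists, n=5000, seed=0):
    """Propagate parameter uncertainty through a model; rank parameters
    by |correlation| with the output as a crude sensitivity index."""
    rng = random.Random(seed)
    names = list(param_dists)
    samples = {k: [] for k in names}
    outputs = []
    for _ in range(n):
        draw = {k: dist(rng) for k, dist in param_dists.items()}
        for k in names:
            samples[k].append(draw[k])
        outputs.append(model(**draw))
    rank = sorted(names, key=lambda k: abs(pearson(samples[k], outputs)),
                  reverse=True)
    return outputs, rank

# Hypothetical toy dose model: dose = release * dispersion * intake.
dose = lambda release, dispersion, intake: release * dispersion * intake
dists = {
    "release": lambda r: r.lognormvariate(0.0, 1.0),    # most uncertain
    "dispersion": lambda r: r.lognormvariate(0.0, 0.2),
    "intake": lambda r: r.lognormvariate(0.0, 0.05),    # least uncertain
}
outputs, ranking = monte_carlo(dose, dists)
print(ranking[0])  # the most uncertain input dominates the dose spread
```

    The output sample gives the dose uncertainty, and the ranking identifies which unknown parameter the prediction is most sensitive to, which is exactly the information such a plan uses to prioritise further data collection.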

  15. Validation of a CFD model by using 3D sonic anemometers to analyse the air velocity generated by an air-assisted sprayer equipped with two axial fans.

    Science.gov (United States)

    García-Ramos, F Javier; Malón, Hugo; Aguirre, A Javier; Boné, Antonio; Puyuelo, Javier; Vidal, Mariano

    2015-01-22

    A computational fluid dynamics (CFD) model of the air flow generated by an air-assisted sprayer equipped with two axial fans was developed and validated by practical experiments in the laboratory. The CFD model was developed by considering the total air flow supplied by the sprayer fan to be the main parameter, rather than the outlet air velocity. The model was developed for three air flows corresponding to three fan blade settings and assuming that the sprayer is stationary. Actual measurements of the air velocity near the sprayer were taken using 3D sonic anemometers. The sprayer workspace was divided into three sections, and the air velocity was measured in each section on both sides of the machine at horizontal distances of 1.5, 2.5, and 3.5 m from the machine, and at heights of 1, 2, 3, and 4 m above the ground. The coefficient of determination (R2) between the simulated and measured values was 0.859, which demonstrates a good correlation between the simulated and measured data. Considering the overall data, the air velocity values produced by the CFD model were not significantly different from the measured values.
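
    The validation metric reported above, the coefficient of determination between simulated and measured air velocities, can be computed as below. The readings are hypothetical stand-ins for the anemometer data, and R2 is taken in its 1 - SSres/SStot form (the study may equivalently have used the squared correlation):

```python
def r_squared(measured, simulated):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot between
    measured values and model predictions."""
    mean_m = sum(measured) / len(measured)
    ss_res = sum((m - s) ** 2 for m, s in zip(measured, simulated))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

# Hypothetical air velocities (m/s) at matched sampling points;
# not the study's data.
measured = [1.2, 2.5, 3.1, 4.0, 5.2, 6.1]
simulated = [1.0, 2.7, 3.0, 4.3, 5.0, 6.4]
print(0.9 < r_squared(measured, simulated) < 1.0)  # close to 1: good fit
```

    A value near 1 means the model explains almost all of the variance in the measurements; the study's 0.859 over the full measurement grid indicates good but not perfect agreement.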

  16. 3D object-oriented image analysis in 3D geophysical modelling : Analysing the central part of the East African Rift System

    NARCIS (Netherlands)

    Fadel, I.E.A.M.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity–density conversion formulas or user interpretation of the 3D subsurface structures (objects)

  17. Peer Review of “LDT Weight Reduction Study with Crash Model, Feasibility and Detailed Cost Analyses – Chevrolet Silverado 1500 Pickup”

    Science.gov (United States)

    The contractor will conduct an independent peer review of FEV’s light-duty truck (LDT) mass safety study, “Light-Duty Vehicle Weight Reduction Study with Crash Model, Feasibility and Detailed Cost Analysis – Silverado 1500”, and its corresponding computer-aided engineering (CAE) ...

  18. Connecting micro and macro: bringing case-studies and model-based approaches together in analysing patterns of vulnerability to global environmental change

    NARCIS (Netherlands)

    Dijk, van J.W.M.

    2012-01-01

    The objective of this project is to build bridges between quantitative system dynamic simulation models that are developed at PBL (IMAGE/GISMO) and qualitative case-studies by attempting to upscale lessons learned from local case-studies through Qualitative Comparative Analysis (QCA) and by

  19. A methodology for eliciting, representing, and analysing stakeholder knowledge for decision making on complex socio-ecological systems: From cognitive maps to agent-based models

    NARCIS (Netherlands)

    El-Sawah, Sondoss; Guillaume, Joseph H.A.; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J.

    2015-01-01

    This paper aims to contribute to developing better ways for incorporating essential human elements in decision making processes for modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating perceptions of stakeholders (qualitative) into formal simulation

  20. Validation of a CFD Model by Using 3D Sonic Anemometers to Analyse the Air Velocity Generated by an Air-Assisted Sprayer Equipped with Two Axial Fans

    Directory of Open Access Journals (Sweden)

    F. Javier García-Ramos

    2015-01-01

    Full Text Available A computational fluid dynamics (CFD) model of the air flow generated by an air-assisted sprayer equipped with two axial fans was developed and validated by practical experiments in the laboratory. The CFD model was developed by considering the total air flow supplied by the sprayer fan to be the main parameter, rather than the outlet air velocity. The model was developed for three air flows corresponding to three fan blade settings and assuming that the sprayer is stationary. Actual measurements of the air velocity near the sprayer were taken using 3D sonic anemometers. The sprayer workspace was divided into three sections, and the air velocity was measured in each section on both sides of the machine at horizontal distances of 1.5, 2.5, and 3.5 m from the machine, and at heights of 1, 2, 3, and 4 m above the ground. The coefficient of determination (R2) between the simulated and measured values was 0.859, which demonstrates a good correlation between the simulated and measured data. Considering the overall data, the air velocity values produced by the CFD model were not significantly different from the measured values.

  1. Electroencephalographic precursors of spike-wave discharges in a genetic rat model of absence epilepsy: Power spectrum and coherence EEG analyses

    NARCIS (Netherlands)

    Sitnikova, E.Y.; Luijtelaar, E.L.J.M. van

    2009-01-01

    Periods in the electroencephalogram (EEG) that immediately precede the onset of spontaneous spike-wave discharges (SWD) were examined in the WAG/Rij rat model of absence epilepsy. Precursors of SWD (preSWD) were classified based on the distribution of EEG power across the delta-theta-alpha frequency bands as

  2. Computational Fluid Dynamic Analyses for the High-Lift Common Research Model Using the USM3D and FUN3D Flow Solvers

    Science.gov (United States)

    Rivers, Melissa; Hunter, Craig; Vatsa, Veer

    2017-01-01

    Two Navier-Stokes codes were used to compute flow over the High-Lift Common Research Model (HL-CRM) in preparation for a wind tunnel test to be performed at the NASA Langley Research Center 14-by-22-Foot Subsonic Tunnel in fiscal year 2018. Both flight and wind tunnel conditions were simulated by the two codes at set Mach numbers and Reynolds numbers over a full angle-of-attack range for three configurations: cruise, landing, and takeoff. Force curves, drag polars, and surface pressure contour comparisons are shown for the two codes. The lift and drag curves compare well for the cruise configuration up to a 10 deg angle of attack, but not as well for the other two configurations. The drag polars compare reasonably well for all three configurations. The surface pressure contours compare well for some of the conditions modeled but not as well for others.

  3. CFD Analyses for Water-Air Flow With the Euler-Euler Two-Phase Model in the Fluent4 CFD Code

    International Nuclear Information System (INIS)

    Miettinen, Jaakko; Schmidt, Holger

    2002-01-01

    Framatome ANP is developing a new boiling water reactor, the SWR 1000. For the case of a hypothetical core-melt accident, it is designed so that the core melt is retained in the Reactor Pressure Vessel (RPV) at low pressure, owing to cooling of the RPV exterior and highly reliable depressurization devices. Framatome ANP performs, in co-operation with VTT, tests to quantify the safety margins of the exterior cooling concept for the SWR 1000 and to determine the limits for avoiding critical heat fluxes (CHFs). A three-step procedure was set up to investigate the phenomenon: 1. a water-air study of a 1:10-scaled global model, with the aim of investigating the global flow conditions; 2. a water-air study of a 1:10-scaled, 10% sector model, with the aim of finding a flow sector with flow conditions nearly identical to those in the global model; 3. final CHF experiments on a 1:1-scaled, 10% sector, whose borders were selected based on the first two steps. The instrumentation for the water/air experiments included velocity profiles, the vertically averaged void fraction, and void fraction profiles at selected positions. The experimental results from the air-water experiments were analyzed at VTT using the Fluent-4.5.2 code with its Eulerian multiphase flow modeling capability. The aim of the calculations was to learn how to model complex two-phase flow conditions. The structured mesh required by Fluent-4 is a strong limitation in this complex geometry, but modeling of a 1/4 sector of the facility was possible when the GAMBIT pre-processor was used for mesh generation. The experiments were analyzed with a 150 x 150 x 18 grid for the geometry. In the analysis, the fluid viscosity was the main parameter for adjusting the vertical liquid velocity profiles, and the bubble diameter for adjusting the phase separation. The viscosity ranged from 1 to 10000 times the molecular viscosity, and the bubble diameter from 3 to 100 mm, when the

  4. Biomimetic in vitro oxidation of lapachol: a model to predict and analyse the in vivo phase I metabolism of bioactive compounds.

    Science.gov (United States)

    Niehues, Michael; Barros, Valéria Priscila; Emery, Flávio da Silva; Dias-Baruffi, Marcelo; Assis, Marilda das Dores; Lopes, Norberto Peporine

    2012-08-01

    The bioactive naphthoquinone lapachol was studied in vitro in a biomimetic model with the Jacobsen catalyst (manganese(III) salen) and iodosylbenzene as the oxidizing agent. Eleven oxidation derivatives were thus identified and two competitive oxidation pathways postulated. Similar to Mn(III) porphyrins, the Jacobsen catalyst mainly induced the formation of para-naphthoquinone derivatives of lapachol, but also of two ortho-derivatives. The oxidation products were used to develop a GC-MS (SIM mode) method for the identification of potential phase I metabolites in vivo. Plasma analysis of Wistar rats orally administered lapachol revealed two metabolites, α-lapachone and dehydro-α-lapachone. Hence, the biomimetic model with a manganese salen complex has proved a valuable tool to predict and elucidate the in vivo phase I metabolism of lapachol, and possibly also of other bioactive natural compounds. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  5. IMFIT Integrated Modeling Applications Supporting Experimental Analysis: Multiple Time-Slice Kinetic EFIT Reconstructions, MHD Stability Limits, and Energy and Momentum Flux Analyses

    Science.gov (United States)

    Collier, A.; Lao, L. L.; Abla, G.; Chu, M. S.; Prater, R.; Smith, S. P.; St. John, H. E.; Guo, W.; Li, G.; Pan, C.; Ren, Q.; Park, J. M.; Bisai, N.; Srinivasan, R.; Sun, A. P.; Liu, Y.; Worrall, M.

    2010-11-01

    This presentation summarizes several useful applications provided by the IMFIT integrated modeling framework to support DIII-D and EAST research. IMFIT is based on Python and utilizes a modular task-flow architecture with a central manager and extensive GUI support to coordinate tasks among component modules. The kinetic-EFIT application allows multiple time-slice reconstructions by fetching pressure profile data directly from MDS+ or from ONETWO or PTRANSP. The stability application analyzes a given reference equilibrium for stability limits by performing parameter perturbation studies with MHD codes such as DCON, GATO, ELITE, or PEST3. The transport task includes construction of experimental energy and momentum fluxes from profile analysis and comparison against theoretical models such as MMM95, GLF23, or TGLF.

  6. The use of stored carbon reserves in growth of temperate tree roots and leaf buds: Analyses using radiocarbon measurements and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gaudinski, J.B.; Torn, M.S.; Riley, W.J.; Swanston, C.; Trumbore, S.E.; Joslin, J.D.; Majdi, H.; Dawson, T.E.; Hanson, P.J.

    2009-02-01

    Characterizing the use of carbon (C) reserves in trees is important for understanding regional and global C cycles, stress responses, asynchrony between photosynthetic activity and growth demand, and isotopic exchanges in studies of tree physiology and ecosystem C cycling. Using an inadvertent, whole-ecosystem radiocarbon (¹⁴C) release in a temperate deciduous oak forest and numerical modeling, we estimated that the mean age of stored C used to grow both leaf buds and new roots is 0.7 years and that about 55% of new-root growth annually comes from stored C. The calculated mean age of C used to grow new-root tissue is therefore ≈0.4 years. In short, new roots contain a lot of stored C, but it is young in age. Additionally, the type of structure used to model stored C input is important. Model structures that did not include storage, or that assumed stored and new C mixed well (within root or shoot tissues) before being used for root growth, did not fit the data nearly as well as when a distinct storage pool was used. Consistent with these whole-ecosystem labeling results, the mean age of C in new-root tissues determined using 'bomb ¹⁴C' at three additional forest sites in North America and Europe (one deciduous, two coniferous) was less than 1-2 years. The effect of stored reserves on estimated fine-root ages is unlikely to be large in most natural-abundance isotope studies. However, models of root C dynamics should take stored reserves into account, particularly for pulse-labeling studies and fast-cycling roots (<1 year).
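
The arithmetic behind the ≈0.4-year figure is a simple two-pool mixing calculation, sketched below with the values reported in the abstract (55% stored C of mean age 0.7 yr, the remainder fresh photosynthate of age ~0 yr):

```python
def mean_tissue_carbon_age(frac_stored, age_stored_yr, age_new_yr=0.0):
    """Mean age of C in new tissue as a two-pool mixture of stored
    reserves and current-year photosynthate."""
    return frac_stored * age_stored_yr + (1.0 - frac_stored) * age_new_yr

# 0.55 * 0.7 yr + 0.45 * 0 yr = 0.385 yr, i.e. the ~0.4 yr quoted above
age = mean_tissue_carbon_age(0.55, 0.7)
```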

  7. Load management of smart household appliances and electric vehicles. A model-based analysis; Lastmanagement mit intelligenten Haushaltsgeraeten und Elektrofahrzeugen. Eine modellgestuetzte Analyse

    Energy Technology Data Exchange (ETDEWEB)

    Kaschub, Thomas; Paetz, Alexandra-Gwyn; Jochem, Patrick; Fichtner, Wolf [Karlsruher Institut fuer Technologie (KIT), Karlsruhe (Germany). Institut fuer Industriebetriebslehre und Industrielle Produktion (IIP)

    2012-07-01

    We analyze the load-shifting potentials of households with automatic demand management based on dynamic pricing. To this end, we developed an optimization model of 1,000 households equipped with smart appliances and electric vehicles. The results show that the load-shifting potential of household appliances is relatively low compared with that of electric vehicles. Nevertheless, their impact is remarkable, especially if all households were equipped with smart appliances. (orig.)

  8. Proposing a Model for Analysing Relationship between Social Anxiety and Body Dysmorphic Disorder: Mediating Role of Fear of Positive and Negative Evaluation

    Directory of Open Access Journals (Sweden)

    Nasim Damercheli

    2017-02-01

    This research aimed to determine the relationship between social anxiety and body dysmorphic disorder, with fear of positive and negative evaluation as mediators. The research method was descriptive with a correlational design, using structural equation modelling. The research population comprised female bachelor's and master's students enrolled at Imam Khomeini International University, Qazvin, Iran, in the 2015-2016 academic year. In this research, 1000 students selected by random cluster sampling answered the questionnaires, and 280 students were then selected as the final sample by purposive sampling. The research instruments comprised a body dysmorphic metacognitive evaluation, the Social Phobia Inventory, Leary's brief version of the Fear of Negative Evaluation scale, and the Fear of Positive Evaluation scale. The data were analyzed using path analysis, confirmatory factor analysis, a measurement model test, and a structural model test. The results indicated that fear of positive and negative evaluation together mediate the relationship between social anxiety and body dysmorphic disorder. In addition, the direct effect of social anxiety on fear of positive evaluation, on fear of negative evaluation, and on body dysmorphic disorder was confirmed. Therefore, interventions that target the fear of positive and negative evaluation as central components of social anxiety can help prevent the development of body dysmorphic disorder.

  9. Evaluation on the fretting abrasion of heat-transfer tubes of the integrated IHX/primary sodium pump. 1. Workrate analyses model

    International Nuclear Information System (INIS)

    Kisohara, Naoyuki

    2002-05-01

    Minimizing the cost of commercialized FBR plant systems requires integrating the intermediate heat exchanger (IHX) and the primary sodium mechanical pump into one component. The pump is installed in the center of the integrated component, and heat transfer tubes surround the pump. Primary sodium flows down inside the heat transfer tubes, and secondary sodium flows up outside the tubes in a zigzag. The pump rotation and sodium flow therefore induce vibration of the heat transfer tubes, which leads to fretting wear of the tubes against the support plates. The tube wear must thus be evaluated to confirm tube integrity over the plant life span (60 years). However, knowledge of the influence of pump rotation on tube wear is insufficient, because the integrated component is a new concept at JNC. To evaluate the fretting wear rate of the tubes due to pump rotation, a new FINAS calculation model was composed. First, a beam vibration analysis model of the pump shaft, shells, tube bundle, etc. of the integrated component reveals its properties, such as frequency, amplitude, and vibration mode. Second, based on the above-mentioned vibration analysis, the frequency and amplitude of abrasion between the tubes and support plates can be obtained with a FINAS contact analysis model. This calculation shows that tube wear will not affect tube integrity during the plant lifetime. However, further evaluation by more detailed analysis and abrasion tests is needed to obtain more accurate results. (author)

  10. Theories of the deep: combining salience and network analyses to produce mental model visualizations of a coastal British Columbia food web

    Directory of Open Access Journals (Sweden)

    Jordan Levine

    2015-12-01

    Arriving at shared mental models among multiple stakeholder groups can be crucial for successful management of contested social-ecological systems (SES). Academia can help by first eliciting stakeholders' initial, often tacit, beliefs about an SES and representing them in useful ways. We demonstrate a new recombination of techniques for this purpose, focusing specifically on tacit beliefs about food webs. Our approach combines freelisting and sorting techniques, salience analysis, and ultimately network analysis to produce accessible visualizations of aggregate mental models that can then be used to facilitate discussion or generate further hypotheses about cognitive drivers of conflict. The case study we draw upon to demonstrate this technique is the Clayoquot Sound UNESCO Biosphere Reserve, on the west coast of British Columbia, Canada. There, an imminent upsurge in the sea otter (Enhydra lutris) population, which competes with humans for shellfish, has produced tension among government managers and both First Nations and non-First Nations residents. Our approach helps explain this tension by visually highlighting which trophic relationships appear most cognitively salient among the lay public. We also include speculative representations of models held by managers, and by pairs of contrasting demographic subgroups, to further demonstrate potential uses of the method.
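
The salience-analysis step described above is commonly carried out with Smith's salience index over freelist data; a minimal sketch, with invented freelists (the study's actual species lists are not reproduced here):

```python
from collections import defaultdict

def smiths_salience(freelists):
    """Smith's salience index S per item. Each freelist is ordered from
    most to least salient; an item at rank r in a list of length n scores
    (n - r + 1) / n, respondents who omit it contribute 0, and S is the
    mean score over all respondents."""
    totals = defaultdict(float)
    for lst in freelists:
        n = len(lst)
        for r, item in enumerate(lst, start=1):
            totals[item] += (n - r + 1) / n
    return {item: t / len(freelists) for item, t in totals.items()}

# Hypothetical freelists for "what do sea otters eat?"
lists = [
    ["sea urchin", "clam", "crab"],
    ["clam", "sea urchin"],
    ["sea urchin"],
]
salience = smiths_salience(lists)
```

Items listed early and by many respondents score highest; the resulting weights can then feed the network visualization of the aggregate mental model.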

  11. Comparative analyses of B → K₂*ℓ⁺ℓ⁻ in the standard model and new physics scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Li, Run-Hui [Institute of High Energy Physics, Beijing (China); Yonsei Univ., Seoul (Korea, Republic of). Dept. of Physics and IPAP; Lue, Cai-Dian [Institute of High Energy Physics, Beijing (China); Wang, Wei [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2010-12-15

    We analyze the B → K₂*(→ Kπ)ℓ⁺ℓ⁻ (with ℓ = e, μ, τ) decay in the standard model and two new physics scenarios: the vector-like quark model and the family non-universal Z′ model. We derive the differential angular distributions of the quasi-four-body decay, using form factors recently calculated in the perturbative QCD approach. Branching ratios, polarizations, forward-backward asymmetries, and transversity amplitudes are predicted, from which we find a promising prospect of observing this channel in future experiments. We also update the constraints on the effective Wilson coefficients and/or free parameters in these two new physics scenarios by making use of the experimental data on B → K*ℓ⁺ℓ⁻ and b → sℓ⁺ℓ⁻. Their impact on B → K₂*ℓ⁺ℓ⁻ is subsequently explored; in particular, the zero-crossing point of the forward-backward asymmetry in these new physics scenarios can deviate sizably from the SM scenario. In addition, we generalize the analysis to the similar mode B_s → f′₂(1525)(→ K⁺K⁻)ℓ⁺ℓ⁻. (orig.)

  12. Pairing Coral Geochemical Analyses with an Ecosystem Services Model to Assess Drivers and Impacts of Sediment Delivery within Micronesia's Largest Estuary, Ngeremeduu Bay

    Science.gov (United States)

    Lewis, S.; Dunbar, R. B.; Mucciarone, D.; Barkdull, M.

    2017-12-01

    Scientific tools for assessing impacts to watershed and coastal ecosystem services, such as those from land-use land conversion (LULC), are critical for sustainable land management strategies. Small island nations are particularly vulnerable to LULC threats, especially sediment delivery, given their small spatial size and reliance on natural resources. In the Republic of Palau, a small Pacific island country, three major land-use activities (construction, fires, and agriculture) have increased sediment delivery to important estuarine and coastal habitats (i.e., rivers, mangroves, coral reefs) over the past 30 years. This project examines the predictive capacity of an ecosystem services model, the Natural Capital Project's InVEST, for sediment delivery using historic land-use and coral geochemical analysis. These refined model projections are used to assess ecosystem services tradeoffs under different future land development and management scenarios. Coral cores (20-41 cm in length) were sampled along a high-to-low sedimentation gradient (i.e., near major rivers (high impact) and the ocean (low impact)) in Micronesia's largest estuary, Ngeremeduu Bay. Isotopic indicators of seasonality (δ18O and δ13C values (‰ VPDB)) were used to construct the age model for each core. Barium, manganese, and yttrium were used as trace metal proxies for sedimentation and measured in each core using laser ablation ICP-MS. Finally, the Natural Capital Project's InVEST sediment delivery model was paired with geospatial data to examine the drivers of sediment delivery (i.e., construction, farms, and fires) within these two watersheds. A thirty-year record of trace metal to calcium ratios in coral skeletons shows a peak in sedimentation during 2006 and 2007, and in 2012. These results suggest historic peaks in sediment delivery correlating with large-scale road construction and support previous findings that Ngeremeduu Bay has reached a tipping point of retaining sediment. Natural Capital's project In

  13. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

    1996-12-31

    The overall objectives of the project 'Feasibility of electricity production from biomass by pressurized gasification systems' within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R&D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, and ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark, and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and they are expected to behave to a great extent like straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (process development unit) rig. (orig.) (10 refs.)

  14. The construction of a decision tool to analyse local demand and local supply for GP care using a synthetic estimation model.

    Science.gov (United States)

    de Graaf-Ruizendaal, Willemijn A; de Bakker, Dinny H

    2013-10-27

    This study addresses the growing academic and policy interest in appropriately matching local healthcare services to the healthcare needs of local populations, to increase health status and decrease healthcare costs. For most local areas, however, information on the demand for primary care and on supply is missing. The research goal is to examine the construction of a decision tool that enables healthcare planners to analyse local supply and demand in order to arrive at a better match. National sample-based medical record data of general practitioners (GPs) were used to predict the local demand for GP care from local populations using a synthetic estimation technique. Next, the surplus or deficit in local GP supply was calculated using the national GP registry. Subsequently, a dynamic internet tool was built to present demand, supply, and the confrontation between supply and demand regarding GP care for local areas and their surroundings in the Netherlands. Regression analysis showed a significant relationship between sociodemographic predictors of postcode areas and GP consultation time (F [14, 269,467] = 2,852.24; P 1,000 inhabitants in the Netherlands, covering 97% of the total population. Confronting these estimated demand figures with the actual GP supply yielded the average GP workload and the number of full-time-equivalent (FTE) GPs too many or too few in local areas to cover the demand for GP care. An estimated shortage of one FTE GP or more was prevalent in about 19% of the postcode areas with >1,000 inhabitants when the surrounding postcode areas were taken into consideration. Underserved areas were found mainly in rural regions. The constructed decision tool is freely accessible on the Internet; it can serve as a starting point in discussions on primary care provision in local communities and can contribute considerably to a primary care system that provides care when and where people need it.
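
The synthetic estimation step amounts to applying nationally estimated consultation-time coefficients to local demographics and converting the result to FTE GPs; a minimal sketch, in which every coefficient, demographic group, and capacity figure is invented for illustration (the study's actual model uses many more predictors):

```python
# Hypothetical national coefficients: expected GP consultation minutes
# per person per year, by demographic group (illustrative only).
COEF = {"age_0_14": 60.0, "age_15_64": 90.0, "age_65_plus": 240.0}
MINUTES_PER_FTE_GP = 100_000.0  # assumed annual patient-facing capacity

def estimated_demand_fte(population_by_group):
    """Synthetic estimate of local demand, in FTE GPs, from demographics."""
    minutes = sum(COEF[g] * n for g, n in population_by_group.items())
    return minutes / MINUTES_PER_FTE_GP

def supply_gap(population_by_group, fte_supplied):
    """Positive result = estimated shortage of FTE GPs in the area."""
    return estimated_demand_fte(population_by_group) - fte_supplied

# A hypothetical postcode area: estimated demand 9.18 FTE vs 8.0 supplied
area = {"age_0_14": 1500, "age_15_64": 6000, "age_65_plus": 1200}
gap = supply_gap(area, fte_supplied=8.0)
```

Mapping such gaps over all postcode areas gives exactly the kind of surplus/deficit layer the decision tool presents.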

  15. Evaluation of a polyetheretherketone (PEEK) titanium composite interbody spacer in an ovine lumbar interbody fusion model: biomechanical, microcomputed tomographic, and histologic analyses.

    Science.gov (United States)

    McGilvray, Kirk C; Waldorff, Erik I; Easley, Jeremiah; Seim, Howard B; Zhang, Nianli; Linovitz, Raymond J; Ryaby, James T; Puttlitz, Christian M

    2017-12-01

    The materials most commonly used for interbody cages are titanium metal and the polymer polyetheretherketone (PEEK). Both of these materials have demonstrated good biocompatibility. A major disadvantage of solid titanium cages is their radiopacity, which limits postoperative monitoring of spinal fusion via standard imaging modalities. PEEK, however, is radiolucent, allowing clinicians a temporal assessment of the fusion mass. On the other hand, PEEK is hydrophobic, which can limit bony ingrowth. Although both PEEK and titanium have demonstrated clinical success in obtaining a solid spinal fusion, innovations are being developed to improve fusion rates and to create stronger constructs using hybrid additive manufacturing approaches that incorporate both materials into a single interbody device. The purpose of this study was to examine the interbody fusion characteristics of a PEEK Titanium Composite (PTC) cage for use in lumbar fusion. Thirty-four mature female sheep underwent two-level (L2-L3 and L4-L5) interbody fusion using either a PEEK or a PTC cage (one of each per animal). Animals were sacrificed at 0, 8, 12, and 18 weeks post surgery. After sacrifice, each surgically treated functional spinal unit underwent non-destructive kinematic testing, microcomputed tomography scanning, and histomorphometric analyses. Relative to the standard PEEK cages, the PTC constructs demonstrated significant reductions in ranges of motion and a significant increase in stiffness. These biomechanical findings were reinforced by the presence of significantly more bone at the fusion site as well as ingrowth into the porous end plates. Overall, the results indicate that PTC interbody devices could potentially lead to a more robust intervertebral fusion relative to a standard PEEK device in a clinical setting. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  16. Similar gene estimates from circular and linear standards in quantitative PCR analyses using the prokaryotic 16S rRNA gene as a model.

    Directory of Open Access Journals (Sweden)

    Athenia L Oldham

    Quantitative PCR (qPCR) is one of the most widely used tools for quantifying absolute numbers of microbial gene copies in test samples. A recent publication showed that circular plasmid DNA standards grossly overestimated the numbers of a target gene, by as much as 8-fold, in a eukaryotic system using qPCR analysis. Overestimation of microbial numbers is a serious concern in industrial settings, where qPCR estimates form the basis for quality control or mitigation decisions. Unlike eukaryotes, bacteria and archaea most commonly have circular genomes and plasmids and therefore may not be subject to the same levels of overestimation. The feasibility of using circular DNA plasmids as standards for 16S rRNA gene estimates was therefore assayed in these two prokaryotic systems, the practical advantage being rapid standard preparation for ongoing qPCR analyses. Full-length 16S rRNA gene sequences from Thermovirga lienii and Archaeoglobus fulgidus were cloned and used to generate standards for bacterial and archaeal qPCR reactions, respectively. Estimates of 16S rRNA gene copies were made with circular and linearized DNA conformations using two genomes from each domain: Desulfovibrio vulgaris, Pseudomonas aeruginosa, Archaeoglobus fulgidus, and Methanocaldococcus jannaschii. The ratio of estimated to predicted 16S rRNA gene copies ranged from 0.5- to 2.2-fold in the bacterial systems and 0.5- to 1.0-fold in the archaeal systems, demonstrating that circular plasmid standards did not lead to the gross overestimates previously reported for eukaryotic systems.
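
The copy-number estimates compared above come from the usual log-linear standard-curve fit and its inversion; a minimal sketch, with an invented dilution series (a slope of -3.3 Cq per decade corresponds to ~100% amplification efficiency):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def copies_from_cq(cq_standards, log10_copies_standards, cq_sample):
    """Invert the standard curve Cq = slope * log10(copies) + intercept
    to estimate gene copies in a sample from its Cq value."""
    slope, intercept = fit_line(log10_copies_standards, cq_standards)
    return 10 ** ((cq_sample - intercept) / slope)

# Hypothetical 10-fold dilution series of a linearized plasmid standard
log10_copies = [3, 4, 5, 6, 7]
cq = [30.1, 26.8, 23.5, 20.2, 16.9]
est = copies_from_cq(cq, log10_copies, cq_sample=25.0)  # ~3.5e4 copies
```

The conformation question studied above matters precisely because supercoiled circular standards can shift this curve relative to linearized ones, biasing the inverted estimate.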

  17. Uncertainty in Operational Atmospheric Analyses and Re-Analyses

    Science.gov (United States)

    Langland, R.; Maue, R. N.

    2016-12-01

    This talk will describe uncertainty in atmospheric analyses of wind and temperature produced by operational forecast models and in re-analysis products. Because the "true" atmospheric state cannot be precisely quantified, there is necessarily error in every atmospheric analysis, and this error can be estimated by computing differences (variance and bias) between analysis products produced at various centers (e.g., ECMWF, NCEP, U.S. Navy) that use independent data assimilation procedures, somewhat different sets of atmospheric observations, and forecast