WorldWideScience

Sample records for response modeling analyses

  1. Vegetable parenting practices scale: Item response modeling analyses

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...

  2. A response-modeling alternative to surrogate models for support in computational analyses

    Rutherford, Brian

    2006-01-01

    Often, the objectives in a computational analysis involve characterization of system performance based on some function of the computed response. In general, this characterization includes (at least) an estimate or prediction for some performance measure and an estimate of the associated uncertainty. Surrogate models can be used to approximate the response in regions where simulations were not performed. For most surrogate modeling approaches, however (1) estimates are based on smoothing of available data and (2) uncertainty in the response is specified in a point-wise (in the input space) fashion. These aspects of the surrogate model construction might limit their capabilities. One alternative is to construct a probability measure, G(r), for the computer response, r, based on available data. This 'response-modeling' approach will permit probability estimation for an arbitrary event, E(r), based on the computer response. In this general setting, event probabilities can be computed as prob(E) = ∫ I(E(r)) dG(r), where I is the indicator function. Furthermore, one can use G(r) to calculate an induced distribution on a performance measure, pm. For prediction problems where the performance measure is a scalar, its distribution F_pm is determined by F_pm(z) = ∫ I(pm(r) ≤ z) dG(r). We introduce response models for scalar computer output and then generalize the approach to more complicated responses that utilize multiple response models.
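
    As a hedged illustration of this response-modeling idea, the sketch below uses the empirical measure of Monte Carlo samples of a computed response r as G(r), and estimates an event probability and the induced distribution of a scalar performance measure. The response function, event, and performance measure are invented placeholders, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "computer response": r = f(inputs); here a toy nonlinear function.
def compute_response(x):
    return x[:, 0] ** 2 + 0.5 * np.sin(3.0 * x[:, 1])

# Available simulation data: inputs sampled over the input space.
x = rng.uniform(-1.0, 1.0, size=(5000, 2))
r = compute_response(x)          # samples of the response; their empirical
                                 # distribution plays the role of G(r)

# Event E(r) and scalar performance measure pm(r) -- illustrative choices only.
event = r > 0.8
pm = np.abs(r)

# prob(E) = integral of I(E(r)) dG(r)  ->  Monte Carlo average of the indicator
prob_E = event.mean()

# Induced CDF of the performance measure: F_pm(z) = integral of I(pm(r) <= z) dG(r)
def F_pm(z):
    return (pm[:, None] <= np.atleast_1d(z)[None, :]).mean(axis=0)

print(f"P(E) ~= {prob_E:.3f}")
print("F_pm at z = 0.5, 1.0:", F_pm([0.5, 1.0]))
```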

  3. Biogeochemical Responses and Feedbacks to Climate Change: Synthetic Meta-Analyses Relevant to Earth System Models

    van Gestel, Natasja; Jan van Groenigen, Kees; Osenberg, Craig; Dukes, Jeffrey; Dijkstra, Paul

    2018-03-20

    This project examined the sensitivity of carbon in land ecosystems to environmental change, focusing on carbon contained in soil, and the role of carbon-nitrogen interactions in regulating ecosystem carbon storage. The project used a combination of empirical measurements, mathematical models, and statistics to partition effects of climate change on soil into processes enhancing soil carbon and processes through which it decomposes. By synthesizing results from experiments around the world, the work provided novel insight on ecological controls and responses across broad spatial and temporal scales. The project developed new approaches in meta-analysis using principles of element mass balance and large datasets to derive metrics of ecosystem responses to environmental change. The project used meta-analysis to test how nutrients regulate responses of ecosystems to elevated CO2 and warming, in particular responses of nitrogen fixation, critical for regulating long-term C balance.

  4. Comparative analyses reveal potential uses of Brachypodium distachyon as a model for cold stress responses in temperate grasses

    Li Chuan

    2012-05-01

    Full Text Available. Background: Little is known about the potential of Brachypodium distachyon as a model for low temperature stress responses in Pooideae. The ice recrystallization inhibition protein (IRIP) genes, fructosyltransferase (FST) genes, and many C-repeat binding factor (CBF) genes are Pooideae specific and important in low temperature responses. Here we used comparative analyses to study conservation and evolution of these gene families in B. distachyon to better understand its potential as a model species for agriculturally important temperate grasses. Results: Brachypodium distachyon contains cold responsive IRIP genes which have evolved through Brachypodium specific gene family expansions. A large cold responsive CBF3 subfamily was identified in B. distachyon, while CBF4 homologs are absent from the genome. No B. distachyon FST gene homologs encode typical core Pooideae FST-motifs and low temperature induced fructan accumulation was dramatically different in B. distachyon compared to core Pooideae species. Conclusions: We conclude that B. distachyon can serve as an interesting model for specific molecular mechanisms involved in low temperature responses in core Pooideae species. However, the evolutionary history of key genes involved in low temperature responses has been different in Brachypodium and core Pooideae species. These differences limit the use of B. distachyon as a model for holistic studies relevant for agricultural core Pooideae species.

  5. Response surfaces and sensitivity analyses for an environmental model of dose calculations

    Iooss, Bertrand [CEA Cadarache, DEN/DER/SESI/LCFR, 13108 Saint Paul lez Durance, Cedex (France)]. E-mail: bertrand.iooss@cea.fr; Van Dorpe, Francois [CEA Cadarache, DEN/DTN/SMTM/LMTE, 13108 Saint Paul lez Durance, Cedex (France); Devictor, Nicolas [CEA Cadarache, DEN/DER/SESI/LCFR, 13108 Saint Paul lez Durance, Cedex (France)

    2006-10-15

    A parametric sensitivity analysis is carried out on GASCON, a radiological impact code describing the transfer of radionuclides to man following a chronic gaseous release from a nuclear facility. The effective dose received by each age group can thus be calculated for a specific radionuclide and release duration. In this study, we are concerned with 18 output variables, each depending on approximately 50 uncertain input parameters. First, the generation of 1000 Monte-Carlo simulations allows us to calculate correlation coefficients between input parameters and output variables, which give a first overview of important factors. Response surfaces are then constructed in polynomial form, and used to predict system responses at reduced computation time cost; this response surface will be very useful for global sensitivity analysis where thousands of runs are required. Using the response surfaces, we calculate the total sensitivity indices of Sobol by the Monte-Carlo method. We demonstrate the application of this method to one study site and one reference group near the Cadarache nuclear research centre (France), for two radionuclides: iodine 129 and uranium 238. It is thus shown that the most influential parameters are all related to the goat's milk food chain, in decreasing order of importance: the effective ingestion dose coefficient, the goat's milk ration of the individuals in the reference group, the grass ration of the goat, the dry deposition velocity, and the transfer factor to goat's milk.
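
    A hedged sketch of the workflow described above: fit a quadratic polynomial response surface to a modest Monte Carlo sample, then evaluate total Sobol indices cheaply on the surrogate with a pick-and-freeze (Jansen) estimator. The model function, input ranges, and sample sizes are placeholders, not the GASCON configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3  # number of uncertain inputs (placeholder; GASCON has ~50)

# Placeholder for the expensive code: an arbitrary nonlinear response.
def expensive_model(x):
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

# --- Step 1: fit a quadratic polynomial response surface on a small design ---
x_train = rng.uniform(0.0, 1.0, size=(200, d))
y_train = expensive_model(x_train)

def quad_features(x):
    cols = [np.ones(len(x))]
    cols += [x[:, i] for i in range(d)]
    cols += [x[:, i] * x[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(quad_features(x_train), y_train, rcond=None)
surrogate = lambda x: quad_features(x) @ coef

# --- Step 2: total Sobol indices evaluated on the surrogate (Jansen estimator) ---
n = 20000
A = rng.uniform(0.0, 1.0, size=(n, d))
B = rng.uniform(0.0, 1.0, size=(n, d))
fA = surrogate(A)
var = fA.var()
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # resample only input x_i
    S_Ti = 0.5 * np.mean((fA - surrogate(ABi)) ** 2) / var
    print(f"total Sobol index of x{i}: {S_Ti:.3f}")
```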

  6. LOCO - a linearised model for analysing the onset of coolant oscillations and frequency response of boiling channels

    Romberg, T.M.

    1982-12-01

    Industrial plants such as heat exchangers and nuclear and conventional boilers are prone to coolant flow oscillations which may go undetected. In this report, a hydrodynamic model is formulated in which the one-dimensional, non-linear, partial differential equations for the conservation of mass, energy and momentum are perturbed with respect to time, linearised, and Laplace-transformed into the s-domain for frequency response analysis. A computer program has been developed to integrate numerically the resulting non-linear ordinary differential equations by finite difference methods. A sample problem demonstrates how the computer code is used to analyse the frequency response and flow stability characteristics of a heated channel.

  7. Thermomechanical repository and shaft response analyses using the CAVS [Cracking And Void Strain] jointed rock model: Draft final report

    Dial, B.W.; Maxwell, D.E.

    1986-12-01

    Numerical studies of the far-field repository and near-field shaft response for a nuclear waste repository in bedded salt have been performed with the STEALTH computer code using the CAVS model for jointed rock. CAVS is a constitutive model that can simulate the slip and dilatancy of fracture planes in a jointed rock mass. The initiation and/or propagation of fractures can also be modeled when stress intensity criteria are met. The CAVS models are based on the joint models proposed with appropriate modifications for numerical simulations. The STEALTH/CAVS model has been previously used to model (1) explosive fracturing of a wellbore, (2) earthquake effects on tunnels in a generic nuclear waste repository, (3) horizontal emplacement for a nuclear waste repository in jointed granite, and (4) tunnel response in jointed rock. The use of CAVS to model far-field repository and near-field shaft response was different from previous approaches because it represented a spatially oriented approach to rock response and failure, rather than the traditional stress invariant formulation for yielding. In addition, CAVS tracked the response of the joint apertures to the time-dependent stress changes in the far-field repository and near-field shaft regions. 28 refs., 21 figs., 11 tabs

  8. Adjusting the Adjusted χ²/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted χ²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  9. Item Response Theory Modeling and Categorical Regression Analyses of the Five-Factor Model Rating Form: A Study on Italian Community-Dwelling Adolescent Participants and Adult Participants.

    Fossati, Andrea; Widiger, Thomas A; Borroni, Serena; Maffei, Cesare; Somma, Antonella

    2017-06-01

    To extend the evidence on the reliability and construct validity of the Five-Factor Model Rating Form (FFMRF) in its self-report version, two independent samples of Italian participants, which were composed of 510 adolescent high school students and 457 community-dwelling adults, respectively, were administered the FFMRF in its Italian translation. Adolescent participants were also administered the Italian translation of the Borderline Personality Features Scale for Children-11 (BPFSC-11), whereas adult participants were administered the Italian translation of the Triarchic Psychopathy Measure (TriPM). Cronbach α values were consistent with previous findings; in both samples, average interitem r values indicated acceptable internal consistency for all FFMRF scales. A multidimensional graded item response theory model indicated that the majority of FFMRF items had adequate discrimination parameters; information indices supported the reliability of the FFMRF scales. Both categorical (i.e., item-level) and scale-level regression analyses suggested that the FFMRF scores may predict a nonnegligible amount of variance in the BPFSC-11 total score in adolescent participants, and in the TriPM scale scores in adult participants.

  10. Comparative analyses of hydrological responses of two adjacent watersheds to climate variability and change using the SWAT model

    Lee, Sangchul; Yeo, In-Young; Sadeghi, Ali M.; McCarty, Gregory W.; Hively, Wells; Lang, Megan W.; Sharifi, Amir

    2018-01-01

    Water quality problems in the Chesapeake Bay Watershed (CBW) are expected to be exacerbated by climate variability and change. However, climate impacts on agricultural lands and resultant nutrient loads into surface water resources are largely unknown. This study evaluated the impacts of climate variability and change on two adjacent watersheds in the Coastal Plain of the CBW, using the Soil and Water Assessment Tool (SWAT) model. We prepared six climate sensitivity scenarios to assess the individual impacts of variations in CO2 concentration (590 and 850 ppm), precipitation increase (11 and 21 %), and temperature increase (2.9 and 5.0 °C), based on regional general circulation model (GCM) projections. Further, we considered the ensemble of five GCM projections (2085–2098) under the Representative Concentration Pathway (RCP) 8.5 scenario to evaluate simultaneous changes in CO2, precipitation, and temperature. Using SWAT model simulations from 2001 to 2014 as a baseline scenario, predicted hydrologic outputs (water and nitrate budgets) and crop growth were analyzed. Compared to the baseline scenario, a precipitation increase of 21 % and elevated CO2 concentration of 850 ppm significantly increased streamflow and nitrate loads by 50 and 52 %, respectively, while a temperature increase of 5.0 °C reduced streamflow and nitrate loads by 12 and 13 %, respectively. Crop biomass increased with elevated CO2 concentrations due to enhanced radiation- and water-use efficiency, while it decreased with precipitation and temperature increases. Over the GCM ensemble mean, annual streamflow and nitrate loads showed an increase of ~70 % relative to the baseline scenario, due to elevated CO2 concentrations and precipitation increase. Different hydrological responses to climate change were observed from the two watersheds, due to contrasting land use and soil characteristics. The watershed with a larger percent of croplands demonstrated a greater

  11. Comparative analyses of hydrological responses of two adjacent watersheds to climate variability and change using the SWAT model

    Lee, Sangchul; Yeo, In-Young; Sadeghi, Ali M.; McCarty, Gregory W.; Hively, Wells D.; Lang, Megan W.; Sharifi, Amir

    2018-01-01

    Water quality problems in the Chesapeake Bay Watershed (CBW) are expected to be exacerbated by climate variability and change. However, climate impacts on agricultural lands and resultant nutrient loads into surface water resources are largely unknown. This study evaluated the impacts of climate variability and change on two adjacent watersheds in the Coastal Plain of the CBW, using the Soil and Water Assessment Tool (SWAT) model. We prepared six climate sensitivity scenarios to assess the individual impacts of variations in CO2 concentration (590 and 850 ppm), precipitation increase (11 and 21 %), and temperature increase (2.9 and 5.0 °C), based on regional general circulation model (GCM) projections. Further, we considered the ensemble of five GCM projections (2085-2098) under the Representative Concentration Pathway (RCP) 8.5 scenario to evaluate simultaneous changes in CO2, precipitation, and temperature. Using SWAT model simulations from 2001 to 2014 as a baseline scenario, predicted hydrologic outputs (water and nitrate budgets) and crop growth were analyzed. Compared to the baseline scenario, a precipitation increase of 21 % and elevated CO2 concentration of 850 ppm significantly increased streamflow and nitrate loads by 50 and 52 %, respectively, while a temperature increase of 5.0 °C reduced streamflow and nitrate loads by 12 and 13 %, respectively. Crop biomass increased with elevated CO2 concentrations due to enhanced radiation- and water-use efficiency, while it decreased with precipitation and temperature increases. Over the GCM ensemble mean, annual streamflow and nitrate loads showed an increase of ~70 % relative to the baseline scenario, due to elevated CO2 concentrations and precipitation increase. Different hydrological responses to climate change were observed from the two watersheds, due to contrasting land use and soil characteristics. The watershed with a larger percent of croplands demonstrated a greater increased rate of 5.2 kg N ha⁻¹ in

  12. Response surface use in safety analyses

    Prosek, A.

    1999-01-01

    When thousands of complex computer code runs related to nuclear safety are needed for statistical analysis, a response surface is used to replace the computer code. The main purpose of the study was to develop and demonstrate a tool called the optimal statistical estimator (OSE), intended for response surface generation of complex and non-linear phenomena. The performance of the optimal statistical estimator was tested against the results of 59 different RELAP5/MOD3.2 code calculations of the small-break loss-of-coolant accident in a two-loop pressurized water reactor. The results showed that the OSE adequately predicted the response surface for the peak cladding temperature. Some good characteristics of the OSE, such as monotonic behaviour between two neighbouring points and independence of the number of output parameters, suggest that the OSE can be used for response surface generation of any safety or system parameter in thermal-hydraulic safety analyses. (author)

  13. Analyses of demand response in Denmark

    Moeller Andersen, F.; Grenaa Jensen, S.; Larsen, Helge V.; Meibom, P.; Ravn, H.; Skytte, K.; Togeby, M.

    2006-10-01

    Due to characteristics of the power system, costs of producing electricity vary considerably over short time intervals. Yet, many consumers do not experience corresponding variations in the price they pay for consuming electricity. The topic of this report is: are consumers willing and able to respond to short-term variations in electricity prices, and if so, what is the social benefit of consumers doing so? Taking Denmark and the Nord Pool market as a case, the report focuses on what is known as short-term consumer flexibility or demand response in the electricity market. With focus on market efficiency, efficient allocation of resources and security of supply, the report describes demand response from a micro-economic perspective and provides empirical observations and case studies. The report aims at evaluating benefits from demand response. However, only elements contributing to an overall value are presented. In addition, the analyses are limited to benefits for society, and costs of obtaining demand response are not considered. (au)

  14. Analysing workplace violence towards health care staff in public hospitals using alternative ordered response models: the case of north-eastern Turkey.

    Çelik, Ali Kemal; Oktay, Erkan; Çebi, Kübranur

    2017-09-01

    The main objective of this article is to determine key factors that may have a significant effect on the verbal abuse, emotional abuse and physical assault of health care workers in north-eastern Turkey. A self-administered survey was completed by 450 health care workers in three well-established hospitals in Erzurum, Turkey. Because of the discrete and ordered nature of the dependent variable of the survey, the data were analysed using four distinct ordered response models. Results revealed that several key variables were found to be significant determinants of workplace violence, such as the type of health institution, occupational position, weekly working hours, weekly shift hours, number of daily patient contacts, age group of the respondents, experience in the health sector, training against workplace violence and current policies of the hospitals and the Turkish Ministry of Health.
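
    For readers unfamiliar with ordered response models, the hedged sketch below fits a standard ordered logit (one of the alternative specifications such a study might compare) to synthetic data with statsmodels; the variable names and data are invented, not the survey used in the article.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 500

# Synthetic covariates standing in for survey items (hypothetical names).
df = pd.DataFrame({
    "weekly_hours": rng.normal(45, 8, n),
    "daily_patient_contacts": rng.poisson(30, n),
    "violence_training": rng.integers(0, 2, n),
})

# Latent propensity -> ordered outcome: 0 = none, 1 = verbal abuse, 2 = physical assault.
latent = (0.05 * df["weekly_hours"] + 0.03 * df["daily_patient_contacts"]
          - 0.8 * df["violence_training"] + rng.logistic(size=n))
y = pd.cut(latent, bins=[-np.inf, 2.5, 4.0, np.inf], labels=[0, 1, 2], ordered=True)

# Ordered logit; distr="probit" would give the ordered probit alternative.
model = OrderedModel(y.astype(int), df, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```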

  15. Graphical models for genetic analyses

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating.

  16. An extensible analysable system model

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    … this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security…

  17. Analyses of Aircraft Responses to Atmospheric Turbulence

    Van Staveren, W.H.J.J.

    2003-01-01

    The response of aircraft to stochastic atmospheric turbulence plays an important role in aircraft-design (load calculations), Flight Control System (FCS) design and flight-simulation (handling qualities research and pilot training). In order to simulate these aircraft responses, an accurate

  18. YALINA Booster subcritical assembly modeling and analyses

    Talamo, A.; Gohar, Y.; Aliberti, G.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Sadovich, S.

    2010-01-01

    Full text: Accurate simulation models of the YALINA Booster assembly of the Joint Institute for Power and Nuclear Research (JIPNR)-Sosny, Belarus have been developed by Argonne National Laboratory (ANL) of the USA. YALINA-Booster has coupled zones operating with fast and thermal neutron spectra, which requires special attention in the modelling process. Three different uranium enrichments of 90%, 36% or 21% were used in the fast zone and 10% uranium enrichment was used in the thermal zone. Two of the most advanced Monte Carlo computer programs have been utilized for the ANL analyses: MCNP of the Los Alamos National Laboratory and MONK of British Nuclear Fuels Limited and SERCO Assurance. The developed geometrical models for both computer programs modelled all the details of the YALINA Booster facility as described in the technical specifications defined in the International Atomic Energy Agency (IAEA) report without any geometrical approximation or material homogenization. Material impurities and the measured material densities have been used in the models. The obtained results for the neutron multiplication factors calculated in criticality mode (keff) and in source mode (ksrc) with an external neutron source from the two Monte Carlo programs are very similar. Different external neutron sources have been investigated including californium, deuterium-deuterium (D-D), and deuterium-tritium (D-T) neutron sources. The spatial neutron flux profiles and the neutron spectra in the experimental channels were calculated. In addition, the kinetic parameters were defined including the effective delayed neutron fraction, the prompt neutron lifetime, and the neutron generation time. A new calculation methodology has been developed at ANL to simulate the pulsed neutron source experiments. In this methodology, the MCNP code is used to simulate the detector response from a single pulse of the external neutron source and a C code is used to superimpose the pulse until the
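
    The pulsed-source methodology described at the end of the record amounts to superimposing the detector response of a single source pulse over a train of pulses. A hedged numpy sketch of that superposition step is shown below; the single-pulse response shape, pulse period, and decay constants are invented placeholders, not YALINA data.

```python
import numpy as np

# Hypothetical single-pulse detector response r(t), e.g. taken from an MCNP
# tally: a fast prompt decay plus a small slow component (placeholder shape).
dt = 1.0e-5                                    # s, time bin width
t = np.arange(0.0, 2.0e-2, dt)                 # 20 ms of response per pulse
single_pulse = np.exp(-t / 2.0e-4) + 0.01 * np.exp(-t / 5.0e-2)

# Pulse train: one source pulse every 5 ms, repeated 50 times.
period_bins = int(5.0e-3 / dt)
n_pulses = 50
response = np.zeros(period_bins * n_pulses + len(single_pulse))

# Superimpose the single-pulse response at every pulse arrival time.
for k in range(n_pulses):
    start = k * period_bins
    response[start:start + len(single_pulse)] += single_pulse

# Because the slow component outlives the pulse period, the summed response
# builds up from pulse to pulse towards an asymptotic (equilibrium) level.
print("peak of 1st pulse :", response[:period_bins].max())
print("peak of 50th pulse:", response[(n_pulses - 1) * period_bins:
                                       n_pulses * period_bins].max())
```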

  19. Modelling and analysing oriented fibrous structures

    Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J

    2014-01-01

    A mathematical model for fibrous structures using a direction-dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth.

  20. Externalizing Behaviour for Analysing System Models

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats, especially insider threats. These models are very suitable for the latter, since insiders can be assumed to have more knowledge about the attacked organisation than outside attackers. Therefore, many attacks are considerably easier for insiders to perform than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible, task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate…

  1. Sensitivity and uncertainty analyses for performance assessment modeling

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
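
    As a hedged illustration of the two approaches described above, the sketch below propagates input uncertainty through a toy model both ways: a first-order Taylor (delta-method) approximation of the output variance, and a plain Monte Carlo simulation. The model and input distributions are placeholders chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy performance model y = f(x1, x2) standing in for a complex physical/chemical model.
def model(x1, x2):
    return x1 ** 2 * np.exp(-0.5 * x2)

# Input uncertainties: means and standard deviations (independent inputs assumed).
mu = np.array([2.0, 1.0])
sigma = np.array([0.1, 0.2])

# --- Deterministic approach: first-order Taylor series propagation ---
eps = 1e-6
grad = np.array([
    (model(mu[0] + eps, mu[1]) - model(mu[0] - eps, mu[1])) / (2 * eps),
    (model(mu[0], mu[1] + eps) - model(mu[0], mu[1] - eps)) / (2 * eps),
])
var_taylor = np.sum((grad * sigma) ** 2)

# --- Statistical approach: Monte Carlo simulation ---
samples = rng.normal(mu, sigma, size=(100_000, 2))
y = model(samples[:, 0], samples[:, 1])

print(f"Taylor mean ~ {model(*mu):.4f},  variance ~ {var_taylor:.5f}")
print(f"Monte Carlo mean {y.mean():.4f},  variance {y.var():.5f}")
```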

  2. Bayesian uncertainty analyses of probabilistic risk models

    Pulkkinen, U.

    1989-01-01

    Applications of Bayesian principles to uncertainty analyses are discussed in the paper. A short review of the most important uncertainties and their causes is provided. An application of the principle of maximum entropy to the determination of Bayesian prior distributions is described. An approach based on so-called probabilistic structures is presented in order to develop a method for the quantitative evaluation of modelling uncertainties. The method is applied to a small example case. Ideas for application areas for the proposed method are discussed.
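
    A hedged illustration of the maximum entropy principle mentioned above (a standard textbook result, not taken from the paper): if the only information available about a nonnegative failure rate λ is its expected value m, the maximum entropy prior is the exponential distribution.

```latex
% Maximize H(p) = -\int_0^\infty p(\lambda)\,\log p(\lambda)\,d\lambda
% subject to \int_0^\infty p(\lambda)\,d\lambda = 1 and
%            \int_0^\infty \lambda\,p(\lambda)\,d\lambda = m.
% The solution is the exponential prior
p(\lambda) = \frac{1}{m}\,\exp\!\left(-\frac{\lambda}{m}\right), \qquad \lambda \ge 0 .
```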

  3. Genome wide analyses of metal responsive genes in Caenorhabditis elegans

    Michael eAschner

    2012-04-01

    Full Text Available Metals are major contaminants that influence human health. Many metals have physiologic roles, but excessive levels can be harmful. Advances in technology have made toxicogenomic analyses possible to characterize the effects of metal exposure on the entire genome. Much of what is known about cellular responses to metals has come from mammalian systems; however, the use of non-mammalian species is gaining wider attention. Caenorhabditis elegans (C. elegans) is a small round worm whose genome has been fully sequenced and whose development from egg to adult is well characterized. It is an attractive model for high throughput screens due to its short lifespan, ease of genetic mutability, low cost and high homology with humans. Research performed in C. elegans has led to insights into apoptosis, gene expression and neurodegeneration, all of which can be altered by metal exposure. Additionally, by using worms one can potentially study the mechanisms that underlie differential responses to metals in nematodes and humans, allowing for identification of novel pathways and therapeutic targets. In this review, toxicogenomic studies performed in C. elegans exposed to various metals will be discussed, highlighting how this non-mammalian system can be utilized to study cellular processes and pathways induced by metals. Recent work focusing on neurodegeneration in Parkinson’s disease will be discussed as an example of the usefulness of genetic screens in C. elegans and the novel findings that can be produced.

  4. Proteomic Analyses of the Acute Tissue Response for Explant Rabbit Corneas and Engineered Corneal Tissue Models Following In Vitro Exposure to 1540 nm Laser Light

    Eurell, T. E; Johnson, T. E; Roach, W. P

    2005-01-01

    Two-dimensional electrophoresis and histomorphometry were used to determine if equivalent protein changes occurred within native rabbit corneas and engineered corneal tissue models following in vitro...

  5. Proteomic analyses of host and pathogen responses during bovine mastitis.

    Boehmer, Jamie L

    2011-12-01

    The pursuit of biomarkers for use as clinical screening tools, measures for early detection, disease monitoring, and as a means for assessing therapeutic responses has steadily evolved in human and veterinary medicine over the past two decades. Concurrently, advances in mass spectrometry have markedly expanded proteomic capabilities for biomarker discovery. While initial mass spectrometric biomarker discovery endeavors focused primarily on the detection of modulated proteins in human tissues and fluids, recent efforts have shifted to include proteomic analyses of biological samples from food animal species. Mastitis continues to garner attention in veterinary research due mainly to affiliated financial losses and food safety concerns over antimicrobial use, but also because there are only a limited number of efficacious mastitis treatment options. Accordingly, comparative proteomic analyses of bovine milk have emerged in recent years. Efforts to prevent agricultural-related food-borne illness have likewise fueled an interest in the proteomic evaluation of several prominent strains of bacteria, including common mastitis pathogens. The interest in establishing biomarkers of the host and pathogen responses during bovine mastitis stems largely from the need to better characterize mechanisms of the disease, to identify reliable biomarkers for use as measures of early detection and drug efficacy, and to uncover potentially novel targets for the development of alternative therapeutics. The following review focuses primarily on comparative proteomic analyses conducted on healthy versus mastitic bovine milk. However, a comparison of the host defense proteome of human and bovine milk and the proteomic analysis of common veterinary pathogens are likewise introduced.

  6. Analyses of demand response in Denmark [Electricity market]

    Moeller Andersen, F.; Grenaa Jensen, S.; Larsen, Helge V.; Meibom, P.; Ravn, H.; Skytte, K.; Togeby, M.

    2006-10-15

    Due to characteristics of the power system, costs of producing electricity vary considerably over short time intervals. Yet, many consumers do not experience corresponding variations in the price they pay for consuming electricity. The topic of this report is: are consumers willing and able to respond to short-term variations in electricity prices, and if so, what is the social benefit of consumers doing so? Taking Denmark and the Nord Pool market as a case, the report focuses on what is known as short-term consumer flexibility or demand response in the electricity market. With focus on market efficiency, efficient allocation of resources and security of supply, the report describes demand response from a micro-economic perspective and provides empirical observations and case studies. The report aims at evaluating benefits from demand response. However, only elements contributing to an overall value are presented. In addition, the analyses are limited to benefits for society, and costs of obtaining demand response are not considered. (au)

  7. Seismic response analyses for reactor facilities at Savannah River

    Miller, C.A.; Costantino, C.J.; Xu, J.

    1991-01-01

    The reactor facilities at the Savannah River Plant (SRP) were designed during the 1950's. The original seismic criterion defining the input ground motion was 0.1 G, with UBC [uniform building code] provisions used to evaluate structural seismic loads. Later ground motion criteria have defined the free field seismic motion with a 0.2 G ZPA [zero period acceleration] and various spectral shapes. The spectral shapes have included the Housner spectra, a site-specific spectrum, and the US NRC [Nuclear Regulatory Commission] Reg. Guide 1.60 shape. The development of these free field seismic criteria is discussed in the paper. The more recent seismic analyses have been of the following types: fixed base response spectra, frequency independent lumped parameter soil/structure interaction (SSI), frequency dependent lumped parameter SSI, and current state of the art analyses using computer codes such as SASSI. The results from these computations consist of structural loads and floor response spectra (used for piping and equipment qualification). These results are compared in the paper and the methods used to validate the results are discussed. 14 refs., 11 figs
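
    Floor response spectra like those mentioned above are obtained by sweeping a family of damped single-degree-of-freedom oscillators over a computed acceleration time history. The hedged sketch below does this with a standard Newmark average-acceleration integrator; the input motion is a synthetic placeholder, not a Savannah River record.

```python
import numpy as np

def response_spectrum(ag, dt, freqs, damping=0.05):
    """Pseudo-acceleration response spectrum of a ground/floor acceleration
    history `ag` (m/s^2) sampled at `dt` (s), for natural frequencies `freqs`
    (Hz), using the Newmark average-acceleration method on unit-mass oscillators."""
    beta, gamma = 0.25, 0.5
    sa = np.zeros(len(freqs))
    for i, f in enumerate(freqs):
        wn = 2.0 * np.pi * f
        k, c, m = wn ** 2, 2.0 * damping * wn, 1.0
        keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
        u = v = 0.0
        a = -ag[0]                                    # m*a + c*v + k*u = -m*ag
        umax = 0.0
        for ag_next in ag[1:]:
            p = (-m * ag_next
                 + m * (u / (beta * dt ** 2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
                 + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                        + dt * (gamma / (2 * beta) - 1) * a))
            u_new = p / keff
            v_new = (gamma / (beta * dt)) * (u_new - u) + (1 - gamma / beta) * v \
                    + dt * (1 - gamma / (2 * beta)) * a
            a = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
            u, v = u_new, v_new
            umax = max(umax, abs(u))
        sa[i] = wn ** 2 * umax                        # pseudo-spectral acceleration
    return sa

# Synthetic floor motion: 20 s of band-limited noise (placeholder input only).
rng = np.random.default_rng(4)
dt = 0.01
ag = 0.2 * 9.81 * rng.standard_normal(2000)
freqs = np.logspace(-0.5, 1.5, 30)                    # ~0.3 Hz to ~30 Hz
print(response_spectrum(ag, dt, freqs))
```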

  8. Analysing Feature Model Changes using FMDiff

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates of both its implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system.

  9. Analyses of transient plant response under emergency situations. 2

    Koyama, Kazuya; Hishida, Masahiko

    2000-03-01

    In order to support development of the dynamic reliability analysis program DYANA, analyses were made of the event sequences anticipated under emergency situations using the plant dynamics simulation computer code Super-COPD. In this work, 9 sequences were analyzed and integrated into an input file for preparing the functions for DYANA, using the analytical model and input data developed for Super-COPD in the previous work. These sequences, which are categorized as the PLOHS (Protected Loss of Heat Sink) event, could not be analyzed in the previous work. (author)

  10. Modelling and Analyses of Embedded Systems Design

    Brekling, Aske Wiid

    We present the MoVES languages: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi-core). We give a formal model for MoVES that captures and gives semantics to the elements of specifications in the MoVES language. We show that even for seemingly simple systems, the complexity of verifying real-time constraints can be overwhelming - but we give an upper limit to the size of the search-space that needs examining. Furthermore, the formal model exposes… …-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems.

  11. Applications of Historical Analyses in Combat Modelling

    2011-12-01

    causes of those results [2]. Models can be classified into three descriptive types [8], according to the degree of abstraction required: iconic, …

  12. Radiobiological analyses based on cell cluster models

    Lin Hui; Jing Jia; Meng Damin; Xu Yuanying; Xu Liangfeng

    2010-01-01

    The influence of cell cluster dimension on EUD and TCP for targeted radionuclide therapy was studied using radiobiological methods. The radiobiological features of a tumor with activity-lack in its core were evaluated and analyzed by associating EUD, TCP and SF. The results show that EUD increases with tumor dimension under a homogeneous activity distribution. If the extra-cellular activity is taken into consideration, the EUD increases by 47%. With activity-lack in the tumor center and the requirement of TCP = 0.90, the α cross-fire influence of 211At could make up a maximum (48 μm)³ activity-lack for the Nucleus source, but (72 μm)³ for the Cytoplasm, Cell Surface, Cell and Voxel sources. In clinic, the physician could prefer the suggested dose of the Cell Surface source in case of failure of local tumor control due to under-dose. Generally, TCP can well exhibit the effect difference between under-dose and due-dose, but not between due-dose and over-dose, which makes TCP more suitable for therapy plan choice. EUD can well exhibit the difference between different models and activity distributions, which makes it more suitable for research work. When using EUD to study the influence of an inhomogeneous activity distribution, one should keep the configuration and volume of the former and the latter models consistent. (authors)
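
    For readers unfamiliar with the quantities being compared, the hedged sketch below computes SF, EUD and TCP for a toy cell cluster under a simple single-hit survival model SF(D) = exp(-αD); the cell doses, α and clonogen number are invented placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(5)

alpha = 0.35            # Gy^-1, assumed single-hit radiosensitivity (placeholder)
n_clonogens = 1.0e5     # assumed clonogen number in the cluster (placeholder)

# Toy cell-level dose distribution: activity-lack in the core modelled as an
# under-dosed subpopulation (20% of cells receive a much lower dose).
dose_rim = rng.normal(35.0, 2.0, 8000)      # well-irradiated cells (Gy)
dose_core = rng.normal(7.0, 1.0, 2000)      # under-dosed core cells (Gy)
dose = np.clip(np.concatenate([dose_rim, dose_core]), 0.0, None)

def eud_tcp(doses):
    """EUD and Poisson TCP for a set of cell doses under SF(D) = exp(-alpha*D)."""
    mean_sf = np.exp(-alpha * doses).mean()
    eud = -np.log(mean_sf) / alpha          # uniform dose giving the same mean SF
    tcp = np.exp(-n_clonogens * mean_sf)    # Poisson tumour control probability
    return eud, tcp

print("with cold core: EUD = %.1f Gy, TCP = %.2f" % eud_tcp(dose))
print("rim dose only:  EUD = %.1f Gy, TCP = %.2f" % eud_tcp(dose_rim))
```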

  13. Analyses of transient plant response under emergency situations

    Koyama, Kazuya [Advanced Reactor Technology, Co. Ltd., Engineering Department, Tokyo (Japan); Shimakawa, Yoshio; Hishida, Masahiko [Mitsubishi Heavy Industry, Ltd., Reactor Core Engineering and Safety Engineering Department, Tokyo (Japan)

    1999-03-01

    In order to support development of the dynamic reliability analysis program DYANA, analyses were made on the event sequences anticipated under emergency situations using the plant dynamics simulation computer code Super-COPD. The analytical models were developed for Super-COPD such as the guard vessel, the maintenance cooling system, the sodium overflow and makeup system, etc. in order to apply the code to the simulation of the emergency situations. The input data were prepared for the analyses. About 70 sequences were analyzed, which are categorized into the following events: (1) PLOHS (Protected Loss of Heat Sink), (2) LORL (Loss of Reactor Level)-J: failure of sodium makeup by the primary sodium overflow and makeup system, (3) LORL-G : failure of primary coolant pump trip, (4) LORL-I: failure of the argon cover gas isolation, and (5) heat removal only using the ventilation system of the primary cooling system rooms. The results were integrated into an input file for preparing the functions for the neural network simulation. (author)

  14. Applications of one-dimensional models in simplified inelastic analyses

    Kamal, S.A.; Chern, J.M.; Pai, D.H.

    1980-01-01

    This paper presents an approximate inelastic analysis based on geometric simplification with emphasis on its applicability, modeling, and the method of defining the loading conditions. Two problems are investigated: a one-dimensional axisymmetric model of generalized plane strain thick-walled cylinder is applied to the primary sodium inlet nozzle of the Clinch River Breeder Reactor Intermediate Heat Exchanger (CRBRP-IHX), and a finite cylindrical shell is used to simulate the branch shell forging (Y) junction. The results are then compared with the available detailed inelastic analyses under cyclic loading conditions in terms of creep and fatigue damages and inelastic ratchetting strains per the ASME Code Case N-47 requirements. In both problems, the one-dimensional simulation is able to trace the detailed stress-strain response. The quantitative comparison is good for the nozzle, but less satisfactory for the Y junction. Refinements are suggested to further improve the simulation

  15. Seismic response analyses of turbine hall and electrical building of RBMK-1000 MW type NPP

    Jordanov, M.J.; Karparov, K.T.

    2003-01-01

    This paper addresses results obtained during the study of the turbine hall and electrical building of the RBMK-1000 MW pair units at Leningradskaya NPP (LNPP) for a seismic event. The study was performed in the frame of the Coordinated Research Program of the International Atomic Energy Agency (IAEA) on Safety of RBMK type Nuclear Power Plants (NPP) in Relation to External Events. A 3-D finite element model of the Main Building Complex was developed and seismic response analyses were performed taking into account the soil-structure interaction (SSI). The standard mode superposition method was used for evaluation of the dynamic response of the structure in the time domain. The structure was assumed to be surface-founded at the basemat level. Seismic response analyses were carried out considering a shear wave propagation pattern for the input motion. The in-structure time histories and response spectra were generated at referenced locations. Conclusions are drawn on the reliability of the structural response evaluation considering the soil-structure interaction effects. (author)

  16. Thermal Response Analyses of Spherical LPG Storage Tank

    Chen, Hsijen.; Lin, Mannhsing.; Chao, Fuyuan

    1999-02-01

    Liquefied petroleum gas (LPG) is a very important fuel and chemical feedstock as well; however, the hydrocarbon has been involved in many major fires and explosions. One of these accident types is the boiling-liquid expanding-vapor explosion (BLEVE). It is a phenomenon that results from the sudden release from confinement of a liquid at a temperature above its atmospheric-pressure boiling point. The sudden decrease in pressure results in the explosive vaporization of a fraction of the liquid and a cloud of vapor and mist, with the accompanying blast effects. Most BLEVEs involve flammable liquids, and most BLEVE releases are ignited by a surrounding fire and result in a fireball. The primary objective of this paper is to develop a computer model in order to determine the thermal response of a spherical LPG tank involved in fire engulfment accidents. The assessment of the safety spacing between tanks is also discussed. (author)

  17. Seismic response Analyses of Hanaro in-chimney bracket structures

    Lee, Jae Han; Ryu, J.S.; Cho, Y.G.; Lee, H.Y.; Kim, J.B.

    1999-05-01

    The in-chimney bracket will be installed in the upper part of the chimney, holding the capsule extension pipes over the upper one-third of their length. For evaluating the effects on the capsules and related reactor structures, an ANSYS finite element analysis model is developed and the dynamic characteristics are analyzed. The seismic response analyses of the in-chimney bracket and related reactor structures of HANARO under the design earthquake response spectrum loads of OBE (0.1 g) and SSE (0.2 g) are performed. The maximum horizontal displacements of the flow tubes are within the minimum half-gaps between adjacent flow tubes, so these displacements are not expected to produce any contact between neighbouring flow tubes. The stress values at the main points of the reactor structures and the in-chimney bracket under the seismic loads are also within the ASME Code limits. It is also confirmed that the fatigue usage factor is much less than 1.0. Thus, no damage to structural integrity is expected when the in-chimney bracket is installed in the upper part of the reactor chimney. (author). 12 refs., 24 tabs., 37 figs

  18. Seismic response time history analyses for KALIMER building with a horizontal and vertical seismic isolation

    Lee, J. H.; Yoo, B.; Koo, K. H. [KAERI, Taejon (Korea, Republic of)

    2001-05-01

    The seismic response time history analyses for the lumped mass models of the KALIMER reactor building with horizontal and vertical seismic isolation are performed for an Artificial Time History and the Kobe earthquake. The vertical amplification caused by the horizontal isolation is reduced by vertical isolation for both earthquakes. A 3% viscous damping with a vertical isolation frequency of 1.5 Hz gives a reduced vertical response compared to the fixed-base condition at the reactor support, while 9% viscous damping is required for the Kobe earthquake to obtain a vertical response equivalent to that of the fixed-base condition.

  19. Seismic response time history analyses for KALIMER building with a horizontal and vertical seismic isolation

    Lee, J. H.; Yoo, B.; Koo, K. H.

    2001-01-01

    The seismic response time history analyses for the lumped mass models of the KALIMER reactor building with horizontal and vertical seismic isolation are performed for an Artificial Time History and the Kobe earthquake. The vertical amplification caused by the horizontal isolation is reduced by vertical isolation for both earthquakes. A 3% viscous damping with a vertical isolation frequency of 1.5 Hz gives a reduced vertical response compared to the fixed-base condition at the reactor support, while 9% viscous damping is required for the Kobe earthquake to obtain a vertical response equivalent to that of the fixed-base condition.

  20. Shared dosimetry error in epidemiological dose-response analyses

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo

    2015-01-01

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinates in the cohort. Another example is the Mayak Worker Dosimetry System 2013 which estimates both external and internal exposures and provides multiple realizations of 'possible' dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it was true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e. the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased the information matrix (and hence the standard errors of the estimate of β) is biased for β≠0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies including, the Mayak Worker Cohort, and the U.S. Atomic Veterans Study, is discussed
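
    As a hedged sketch of the notation involved (standard radiation-epidemiology usage, with symbols chosen here for illustration rather than copied from the paper), the linear excess relative risk model and the dose average over realizations can be written as:

```latex
% Linear excess relative risk (ERR) model: background rate modified by dose d,
% with covariates x modifying the background rate \lambda_0; effect modifiers
% of the excess risk would enter the second factor as \beta\, d\, \psi(x;\gamma).
\lambda(x, d) = \lambda_0(x;\,\alpha)\,\bigl[\,1 + \beta\, d\,\bigr]
% Mean dose over the K dosimetry-system realizations d_i^{(1)},\dots,d_i^{(K)} for subject i:
\qquad \bar{d}_i = \frac{1}{K} \sum_{k=1}^{K} d_i^{(k)}
```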

  1. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague, (Czech Republic)

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  2. Quantal Response: Nonparametric Modeling

    2017-01-01

    capture the behavior of observed phenomena. Higher-order polynomial and finite-dimensional spline basis models allow for more complicated responses as the … flexibility as these are nonparametric (not constrained to any particular functional form). These should be useful in identifying nonstandard behavior via … The deviance Δ = −2 log(L_reduced / L_full) is defined in terms of the likelihood function L. For normal error, L_full = 1, and based on Eq. A-2, we have log…
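
    A hedged illustration of quantal response fitting and the deviance statistic quoted above: the sketch fits a parametric logistic dose-response with statsmodels and reports the deviance against the saturated (full) model, which reproduces each observed proportion exactly. The doses and response counts are synthetic placeholders.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic quantal-response data: n subjects per dose, r "responders".
dose = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
n = np.array([50, 50, 50, 50, 50])
true_p = 1.0 / (1.0 + np.exp(-(-3.0 + 1.2 * np.log(dose))))
rng = np.random.default_rng(6)
r = rng.binomial(n, true_p)

# Reduced (parametric) model: logistic in log-dose, fit as a binomial GLM.
X = sm.add_constant(np.log(dose))
glm = sm.GLM(np.column_stack([r, n - r]), X, family=sm.families.Binomial())
fit = glm.fit()

# Deviance = -2 log(L_reduced / L_full), where the "full" (saturated) model
# fits each observed proportion exactly.
print(f"slope = {fit.params[1]:.2f}, deviance = {fit.deviance:.2f}, df = {fit.df_resid}")
```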

  3. The simulation of man-machine interaction in NPPs: the system response analyser project

    Cacciabue, P.C.

    1990-01-01

    In this paper, the ongoing research at the Joint Research Centre-Ispra on the simulation of man-machine interaction is reviewed with reference to past experience of system modelling and to the advances of the technological world. These require the coalescence of mixed disciplines covering the fields of engineering, psychology and sociology. In particular, the complexity of man-machine systems with respect to safety analysis is depicted. The developments and issues in modelling humans and machines are discussed: the possibility of combining them through the System Response Analyser methodology is presented as a balanced approach to be applied when the objective is the study of the safety of systems during abnormal sequences. The three analytical tools which constitute the body of system response analysis, namely a quasi-classical simulation of the actual plant, a cognitive model of the operator activities and a driver model, are described. (author)

  4. Performing dynamic time history analyses by extension of the response spectrum method

    Hulbert, G.M.

    1983-01-01

    A method is presented to calculate the dynamic time history response of finite-element models using results from response spectrum analyses. The proposed modified time history method does not represent a new mathematical approach to dynamic analysis but suggests a more efficient ordering of the analytical equations and procedures. The modified time history method is considerably faster and less expensive to use than normal time history methods. This paper presents the theory and implementation of the modified time history approach along with comparisons of the modified and normal time history methods for a prototypic seismic piping design problem.

  5. Development of ITER 3D neutronics model and nuclear analyses

    Zeng, Q.; Zheng, S.; Lu, L.; Li, Y.; Ding, A.; Hu, H.; Wu, Y.

    2007-01-01

    ITER nuclear analyses rely on calculations with a three-dimensional (3D) Monte Carlo code, e.g. the widely-used MCNP. However, continuous changes in the design of the components require that the 3D neutronics model for nuclear analyses be updated. Nevertheless, modeling a complex geometry with MCNP by hand is a very time-consuming task. An efficient way forward is to develop a CAD-based interface code for automatic conversion from CAD models to MCNP input files. Based on the latest CAD model and the available interface codes, two approaches for updating the 3D neutronics model have been discussed by the ITER IT (International Team): the first is to start with the existing MCNP model 'Brand' and update it through a combination of direct modification of the MCNP input file and generation of models for some components directly from the CAD data; the second is to start from the full CAD model, make the necessary simplifications, and generate the MCNP model with one of the interface codes. MCAM, an advanced CAD-based MCNP interface code developed by the FDS Team in China, has been successfully applied to update the ITER 3D neutronics model by adopting the above two approaches. The Brand model has been updated by MCAM to generate portions of the geometry based on the newest CAD model. MCAM has also successfully performed the conversion to an MCNP neutronics model from a full ITER CAD model which was simplified and issued by ITER IT to benchmark the above interface codes. Based on the two updated 3D neutronics models, the related nuclear analyses are performed. This paper presents the status of ITER 3D modeling using MCAM and its nuclear analyses, as well as a brief introduction of an advanced version of MCAM. (authors)

  6. Statistical methods for analysing responses of wildlife to human disturbance.

    Haiganoush K. Preisler; Alan A. Ager; Michael J. Wisdom

    2006-01-01

    1. Off-road recreation is increasing rapidly in many areas of the world, and effects on wildlife can be highly detrimental. Consequently, we have developed methods for studying wildlife responses to off-road recreation with the use of new technologies that allow frequent and accurate monitoring of human-wildlife interactions. To illustrate these methods, we studied the...

  7. A shock absorber model for structure-borne noise analyses

    Benaziz, Marouane; Nacivet, Samuel; Thouverez, Fabrice

    2015-08-01

    Shock absorbers are often responsible for undesirable structure-borne noise in cars. The early numerical prediction of this noise in the automobile development process can save time and money and yet remains a challenge for industry. In this paper, a new approach to predicting shock absorber structure-borne noise is proposed; it consists in modelling the shock absorber and including the main nonlinear phenomena responsible for discontinuities in the response. The model set forth herein features: compressible fluid behaviour, nonlinear flow rate-pressure relations, valve mechanical equations and rubber mounts. The piston, base valve and complete shock absorber model are compared with experimental results. Sensitivity of the shock absorber response is evaluated and the most important parameters are classified. The response envelope is also computed. This shock absorber model is able to accurately reproduce local nonlinear phenomena and improves our state of knowledge on potential noise sources within the shock absorber.

  8. A comparison of linear tyre models for analysing shimmy

    Besselink, I.J.M.; Maas, J.W.L.H.; Nijmeijer, H.

    2011-01-01

    A comparison is made between three linear, dynamic tyre models using low speed step responses and yaw oscillation tests. The match with the measurements improves with increasing complexity of the tyre model. Application of the different tyre models to a two degree of freedom trailing arm suspension

  9. HECTR [Hydrogen Event: Containment Transient Response] analyses of the Nevada Test Site (NTS) premixed combustion experiments

    Wong, C.C.

    1988-11-01

    The HECTR (Hydrogen Event: Containment Transient Response) computer code has been developed at Sandia National Laboratories to predict the transient pressure and temperature responses within reactor containments for hypothetical accidents involving the transport and combustion of hydrogen. Although HECTR was designed primarily to investigate these phenomena in LWRs, it may also be used to analyze hydrogen transport and combustion experiments as well. It is in this manner that HECTR is assessed and empirical correlations, such as the combustion completeness and flame speed correlations for the hydrogen combustion model, if necessary, are upgraded. In this report, we present HECTR analyses of the large-scale premixed hydrogen combustion experiments at the Nevada Test Site (NTS) and comparison with the test results. The existing correlations in HECTR version 1.0, under certain conditions, have difficulty in predicting accurately the combustion completeness and burn time for the NTS experiments. By combining the combustion data obtained from the NTS experiments with other experimental data (FITS, VGES, ACUREX, and Whiteshell), a set of new and better combustion correlations was generated. HECTR prediction of the containment responses, using a single-compartment model and EPRI-provided combustion completeness and burn time, compares reasonably well against the test results. However, HECTR prediction of the containment responses using a multicompartment model does not compare well with the test results. This discrepancy shows the deficiency of the homogeneous burning model used in HECTR. To overcome this deficiency, a flame propagation model is highly recommended. 16 refs., 84 figs., 5 tabs

  10. Experimental benchmark for piping system dynamic response analyses

    Schott, G.A.; Mallett, R.H.

    1981-01-01

    The scope and status of a piping system dynamics test program are described. A 0.20-m nominal diameter test piping specimen is designed to be representative of main heat transport system piping of LMFBR plants. Attention is given to representing piping restraints. Applied loadings consider component-induced vibration as well as seismic excitation. The principal objective of the program is to provide a benchmark for verification of piping design methods by correlation of predicted and measured responses. Pre-test analysis results and correlation methods are discussed. 3 refs

  11. Experimental benchmark for piping system dynamic-response analyses

    1981-01-01

    This paper describes the scope and status of a piping system dynamics test program. A 0.20 m (8 in.) nominal diameter test piping specimen is designed to be representative of main heat transport system piping of LMFBR plants. Particular attention is given to representing piping restraints. Applied loadings consider component-induced vibration as well as seismic excitation. The principal objective of the program is to provide a benchmark for verification of piping design methods by correlation of predicted and measured responses. Pre-test analysis results and correlation methods are discussed

  12. Analysing the temporal dynamics of model performance for hydrological models

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  13. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. In addition, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
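    The core of such a Boolean visibility test is a line-of-sight check against the surface model. The sketch below is a minimal illustration in Python, assuming a gridded DEM held in a NumPy array; the function name and the toy 5 x 5 surface are hypothetical stand-ins for the UAV-derived data used in the study.

```python
import numpy as np

def line_of_sight(dem, observer, target, obs_height=1.6):
    """Boolean visibility between two cells of a gridded surface model.

    dem        : 2-D array of surface elevations (same units as obs_height)
    observer   : (row, col) of the observer cell
    target     : (row, col) of the target cell
    obs_height : eye height added above the surface at the observer
    """
    r0, c0 = observer
    r1, c1 = target
    n = int(max(abs(r1 - r0), abs(c1 - c0)))
    if n == 0:
        return True
    # Sample the sight line at evenly spaced points between the two cells.
    rows = np.linspace(r0, r1, n + 1)
    cols = np.linspace(c0, c1, n + 1)
    ground = dem[rows.round().astype(int), cols.round().astype(int)]
    z0 = dem[r0, c0] + obs_height
    z1 = dem[r1, c1]
    # Elevation of the straight sight line at each sample point.
    sight = z0 + (z1 - z0) * np.linspace(0.0, 1.0, n + 1)
    # Visible only if no intermediate cell rises above the sight line.
    return bool(np.all(ground[1:-1] <= sight[1:-1]))

# Toy 5x5 surface with a ridge between observer and target.
dem = np.zeros((5, 5))
dem[:, 2] = 10.0
print(line_of_sight(dem, (2, 0), (2, 4)))  # False: the ridge blocks the view
```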

  14. Use of flow models to analyse loss of coolant accidents

    Pinet, Bernard

    1978-01-01

    This article summarises current work on developing the use of flow models to analyse loss-of-coolant accidents in pressurized-water plants. This work is being done jointly, in the context of the LOCA Technical Committee, by the CEA, EDF and FRAMATOME. The construction of the flow model is very closely based on theoretical studies of the two-fluid model. The laws of transfer at the interface and at the wall are tested experimentally. The representativity of the model then has to be checked in experiments involving several elementary physical phenomena

  15. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems

    Mardare, Radu Iulian; Ihekwaba, Adoha

    2007-01-01

    A. Ihekwaba, R. Mardare. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems. Case study: NFkB system. In Proc. of International Conference of Computational Methods in Sciences and Engineering (ICCMSE), American Institute of Physics, AIP Proceedings, N 2...

  16. Analysing task design and students' responses to context-based problems through different analytical frameworks

    Broman, Karolina; Bernholt, Sascha; Parchmann, Ilka

    2015-05-01

    Background:Context-based learning approaches are used to enhance students' interest in, and knowledge about, science. According to different empirical studies, students' interest is improved by applying these more non-conventional approaches, while effects on learning outcomes are less coherent. Hence, further insights are needed into the structure of context-based problems in comparison to traditional problems, and into students' problem-solving strategies. Therefore, a suitable framework is necessary, both for the analysis of tasks and strategies. Purpose:The aim of this paper is to explore traditional and context-based tasks as well as students' responses to exemplary tasks to identify a suitable framework for future design and analyses of context-based problems. The paper discusses different established frameworks and applies the Higher-Order Cognitive Skills/Lower-Order Cognitive Skills (HOCS/LOCS) taxonomy and the Model of Hierarchical Complexity in Chemistry (MHC-C) to analyse traditional tasks and students' responses. Sample:Upper secondary students (n=236) at the Natural Science Programme, i.e. possible future scientists, are investigated to explore learning outcomes when they solve chemistry tasks, both more conventional as well as context-based chemistry problems. Design and methods:A typical chemistry examination test has been analysed, first the test items in themselves (n=36), and thereafter 236 students' responses to one representative context-based problem. Content analysis using HOCS/LOCS and MHC-C frameworks has been applied to analyse both quantitative and qualitative data, allowing us to describe different problem-solving strategies. Results:The empirical results show that both frameworks are suitable to identify students' strategies, mainly focusing on recall of memorized facts when solving chemistry test items. Almost all test items were also assessing lower order thinking. The combination of frameworks with the chemistry syllabus has been

  17. Item Response Theory Analyses of the Cambridge Face Memory Test (CFMT)

    Cho, Sun-Joo; Wilmer, Jeremy; Herzmann, Grit; McGugin, Rankin; Fiset, Daniel; Van Gulick, Ana E.; Ryan, Katie; Gauthier, Isabel

    2014-01-01

    We evaluated the psychometric properties of the Cambridge face memory test (CFMT; Duchaine & Nakayama, 2006). First, we assessed the dimensionality of the test with a bi-factor exploratory factor analysis (EFA). This EFA analysis revealed a general factor and three specific factors clustered by targets of CFMT. However, the three specific factors appeared to be minor factors that can be ignored. Second, we fit a unidimensional item response model. This item response model showed that the CFMT items could discriminate individuals at different ability levels and covered a wide range of the ability continuum. We found the CFMT to be particularly precise for a wide range of ability levels. Third, we implemented item response theory (IRT) differential item functioning (DIF) analyses for each gender group and two age groups (Age ≤ 20 versus Age > 21). This DIF analysis suggested little evidence of consequential differential functioning on the CFMT for these groups, supporting the use of the test to compare older to younger, or male to female, individuals. Fourth, we tested for a gender difference on the latent facial recognition ability with an explanatory item response model. We found a significant but small gender difference on the latent ability for face recognition, which was higher for women than men by 0.184, at age mean 23.2, controlling for linear and quadratic age effects. Finally, we discuss the practical considerations of the use of total scores versus IRT scale scores in applications of the CFMT. PMID:25642930
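    To make the item response modelling concrete, the sketch below illustrates the two-parameter logistic item response function and the resulting test information, which is what statements about precision across the ability range rest on. It is a minimal Python illustration; the discrimination and difficulty values are hypothetical and are not the estimates reported for the CFMT.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic item response function: probability of a
    correct response at ability theta, discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = irt_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 7)
# Hypothetical (discrimination, difficulty) pairs for three items.
items = [(1.2, -1.0), (0.8, 0.0), (1.5, 1.0)]
test_info = sum(item_information(theta, a, b) for a, b in items)
print(np.round(test_info, 3))  # where this toy test measures most precisely
```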

  18. SVM models for analysing the headstreams of mine water inrush

    Yan Zhi-gang; Du Pei-jun; Guo Da-zhi [China University of Science and Technology, Xuzhou (China). School of Environmental Science and Spatial Informatics

    2007-08-15

    The support vector machine (SVM) model was introduced to analyse the headstreams of water inrush in a coal mine. The SVM model, based on a hydrogeochemical method, was constructed for recognising two kinds of headstreams, and the H-SVMs model was constructed for recognising multiple headstreams. The SVM method was applied to analyse the conditions of two mixed headstreams, and the value of the SVM decision function was investigated as a means of denoting hydrogeochemical abnormality. The experimental results show that the SVM is based on a strict mathematical theory, has a simple structure and gives a good overall performance. Moreover, the parameter W in the decision function describes the weights of the discrimination indices of the headstream of water inrush. The value of the decision function can denote hydrogeochemical abnormality, which is significant in the prevention of water inrush in a coal mine. 9 refs., 1 fig., 7 tabs.
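    The role of the weight vector in a linear SVM decision function can be shown with a small sketch. The Python example below uses scikit-learn on synthetic data; the three "hydrogeochemical indices", the class means and the mixed sample are hypothetical placeholders for the field measurements used in the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical hydrogeochemical indices (e.g. ion concentrations) for water
# samples from two known headstreams; the real study used field data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1.0, 0.2, 0.5], 0.1, size=(20, 3)),
               rng.normal([0.3, 0.8, 0.4], 0.1, size=(20, 3))])
y = np.array([0] * 20 + [1] * 20)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)

# For a linear kernel, the weight vector plays the role of the
# discrimination weights of the indices mentioned in the abstract.
w = clf.named_steps["svc"].coef_[0]
print("index weights:", np.round(w, 3))
# The decision value of a mixed sample indicates how anomalous it is
# relative to the two reference headstreams.
print("decision value:", clf.decision_function([[0.65, 0.5, 0.45]])[0])
```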

  19. Vocational Teachers and Professionalism - A Model Based on Empirical Analyses

    Duch, Henriette Skjærbæk; Andreasen, Karen E

    Vocational Teachers and Professionalism - A Model Based on Empirical Analyses Several theorists has developed models to illustrate the processes of adult learning and professional development (e.g. Illeris, Argyris, Engeström; Wahlgren & Aarkorg, Kolb and Wenger). Models can sometimes be criticized...... emphasis on the adult employee, the organization, its surroundings as well as other contextual factors. Our concern is adult vocational teachers attending a pedagogical course and teaching at vocational colleges. The aim of the paper is to discuss different models and develop a model concerning teachers...... at vocational colleges based on empirical data in a specific context, vocational teacher-training course in Denmark. By offering a basis and concepts for analysis of practice such model is meant to support the development of vocational teachers’ professionalism at courses and in organizational contexts...

  20. Performance of neutron kinetics models for ADS transient analyses

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. One

  1. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  2. Modelling sequentially scored item responses

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  3. Model-based Recursive Partitioning for Subgroup Analyses

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-01-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by...

  4. Modeling hard clinical end-point data in economic analyses.

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data is reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are
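    As a concrete illustration of the event-rate choices discussed above, the sketch below converts a constant annual event rate into a per-cycle transition probability and runs a small Markov cohort trace. It is a minimal Python example; the three health states, the rates and the ten-year horizon are hypothetical and are not taken from any of the reviewed models.

```python
import numpy as np

def rate_to_prob(annual_rate, cycle_years):
    """Convert a constant annual event rate to a per-cycle transition
    probability, assuming exponentially distributed event times."""
    return 1.0 - np.exp(-annual_rate * cycle_years)

# Hypothetical three-state cohort model: event-free -> post-event -> dead.
p_event = rate_to_prob(0.02, 1.0)   # 2 CV events per 100 patient-years
p_death = rate_to_prob(0.05, 1.0)   # mortality rate after an event
P = np.array([[1 - p_event, p_event,     0.0],
              [0.0,         1 - p_death, p_death],
              [0.0,         0.0,         1.0]])

state = np.array([1.0, 0.0, 0.0])   # whole cohort starts event-free
for _ in range(10):                 # 10 annual cycles
    state = state @ P
print(np.round(state, 3))           # cohort distribution after 10 years
```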

  5. Global transcriptional, physiological and metabolite analyses of Desulfovibrio vulgaris Hildenborough responses to salt adaptation

    He, Z.; Zhou, A.; Baidoo, E.; He, Q.; Joachimiak, M. P.; Benke, P.; Phan, R.; Mukhopadhyay, A.; Hemme, C.L.; Huang, K.; Alm, E.J.; Fields, M.W.; Wall, J.; Stahl, D.; Hazen, T.C.; Keasling, J.D.; Arkin, A.P.; Zhou, J.

    2009-12-01

    The response of Desulfovibrio vulgaris Hildenborough to salt adaptation (long-term NaCl exposure) was examined by physiological, global transcriptional, and metabolite analyses. The growth of D. vulgaris was inhibited by high levels of NaCl, and the growth inhibition could be relieved by the addition of exogenous amino acids (e.g., glutamate, alanine, tryptophan) or yeast extract. Salt adaptation induced the expression of genes involved in amino acid biosynthesis and transport, electron transfer, hydrogen oxidation, and general stress responses (e.g., heat shock proteins, phage shock proteins, and oxidative stress response proteins). Genes involved in carbon metabolism, cell motility, and phage structures were repressed. Comparison of transcriptomic profiles of D. vulgaris responses to salt adaptation with those of salt shock (short-term NaCl exposure) showed some similarity as well as a significant difference. Metabolite assays showed that glutamate and alanine were accumulated under salt adaptation, suggesting that they may be used as osmoprotectants in D. vulgaris. A conceptual model is proposed to link the observed results to currently available knowledge for further understanding the mechanisms of D. vulgaris adaptation to elevated NaCl.

  6. A 1024 channel analyser of model FH 465

    Tang Cunxun

    1988-01-01

    The FH 465 is a renewed version of the model FH 451 1024-channel analyser. It retains the simple operation and clear display of the earlier instrument, while the core memory is replaced by semiconductor memory and the level of integration has been improved. The use of widely available low-power 74LS devices has greatly decreased the cost, and the analyser can easily exchange data with Apple-II, Great Wall-0520-CH or IBM-PC/XT microcomputers. The operating principle, main specifications and test results are described

  7. A non-equilibrium neutral model for analysing cultural change.

    Kandler, Anne; Shennan, Stephen

    2013-08-07

    Neutral evolution is a frequently used model to analyse changes in frequencies of cultural variants over time. Variants are chosen to be copied according to their relative frequency and new variants are introduced by a process of random mutation. Here we present a non-equilibrium neutral model which accounts for temporally varying population sizes and mutation rates and makes it possible to analyse the cultural system under consideration at any point in time. This framework gives an indication whether observed changes in the frequency distributions of a set of cultural variants between two time points are consistent with the random copying hypothesis. We find that the likelihood of the existence of the observed assemblage at the end of the considered time period (expressed by the probability of the observed number of cultural variants present in the population during the whole period under neutral evolution) is a powerful indicator of departures from neutrality. Further, we study the effects of frequency-dependent selection on the evolutionary trajectories and present a case study of change in the decoration of pottery in early Neolithic Central Europe. Based on the framework developed we show that neutral evolution is not an adequate description of the observed changes in frequency. Copyright © 2013 Elsevier Ltd. All rights reserved.
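    A minimal simulation sketch of the kind of process described here is given below: neutral copying in proportion to current frequencies, innovation by random mutation, and a population size and mutation rate that are allowed to vary over time. The Python code, the variant labels and the parameter values are illustrative assumptions, not the authors' likelihood framework.

```python
import numpy as np

def neutral_step(counts, N, mu, rng):
    """One generation of neutral copying with innovation.

    counts : dict mapping variant label -> count in the current population
    N      : population size of the next generation (may vary over time)
    mu     : probability that a copy event is replaced by a new variant
    """
    labels = list(counts)
    freqs = np.array([counts[v] for v in labels], dtype=float)
    freqs /= freqs.sum()
    new_counts = {}
    next_label = max(labels) + 1
    for _ in range(N):
        if rng.random() < mu:                # innovation: brand-new variant
            new_counts[next_label] = 1
            next_label += 1
        else:                                # copy an existing variant
            v = labels[rng.choice(len(labels), p=freqs)]
            new_counts[v] = new_counts.get(v, 0) + 1
    return new_counts

rng = np.random.default_rng(1)
pop = {0: 50, 1: 30, 2: 20}                   # initial variant counts
# Time-varying population size and mutation rate (non-equilibrium setting).
for N, mu in [(100, 0.01)] * 20 + [(200, 0.05)] * 20:
    pop = neutral_step(pop, N, mu, rng)
print("variants surviving:", len(pop))
```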

  8. Multi-state models: metapopulation and life history analyses

    Arnason, A. N.

    2004-06-01

    Full Text Available Multi–state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004 but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995. In these applications, the states include life history stages such as breeding states. The multi–state models, by permitting estimation of stage–specific survival and transition rates, can help assess trade–offs between life history mechanisms (e.g. Yoccoz et al., 2000. These trade–offs are also important in meta–population analyses where, for example, the pre–and post–breeding rates of transfer among sub–populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al. 2003; Breton et al., in review. Further examples of the use of multi–state models in analysing dispersal and life–history trade–offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi–state models to address problems arising from the violation of mark–recapture assumptions leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004, gives an overview of the use of Multi–state Mark–Recapture (MSMR models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such

  9. Assessing regional and interspecific variation in threshold responses of forest breeding birds through broad scale analyses.

    Yntze van der Hoek

    BACKGROUND: Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. METHODOLOGY/PRINCIPAL FINDINGS: We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. CONCLUSIONS/SIGNIFICANCE: Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that

  10. Assessing regional and interspecific variation in threshold responses of forest breeding birds through broad scale analyses.

    van der Hoek, Yntze; Renfrew, Rosalind; Manne, Lisa L

    2013-01-01

    Identifying persistence and extinction thresholds in species-habitat relationships is a major focal point of ecological research and conservation. However, one major concern regarding the incorporation of threshold analyses in conservation is the lack of knowledge on the generality and transferability of results across species and regions. We present a multi-region, multi-species approach of modeling threshold responses, which we use to investigate whether threshold effects are similar across species and regions. We modeled local persistence and extinction dynamics of 25 forest-associated breeding birds based on detection/non-detection data, which were derived from repeated breeding bird atlases for the state of Vermont. We did not find threshold responses to be particularly well-supported, with 9 species supporting extinction thresholds and 5 supporting persistence thresholds. This contrasts with a previous study based on breeding bird atlas data from adjacent New York State, which showed that most species support persistence and extinction threshold models (15 and 22 of 25 study species respectively). In addition, species that supported a threshold model in both states had associated average threshold estimates of 61.41% (SE = 6.11, persistence) and 66.45% (SE = 9.15, extinction) in New York, compared to 51.08% (SE = 10.60, persistence) and 73.67% (SE = 5.70, extinction) in Vermont. Across species, thresholds were found at 19.45-87.96% forest cover for persistence and 50.82-91.02% for extinction dynamics. Through an approach that allows for broad-scale comparisons of threshold responses, we show that species vary in their threshold responses with regard to habitat amount, and that differences between even nearby regions can be pronounced. We present both ecological and methodological factors that may contribute to the different model results, but propose that regardless of the reasons behind these differences, our results merit a warning that
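    To illustrate what fitting a threshold (rather than a smooth) response of persistence to habitat amount can look like, the sketch below compares a plain logistic model of persistence against forest cover with hinge models over a grid of candidate thresholds, using simulated data. The data, the hinge parameterisation and the threshold grid are assumptions for illustration; the study itself used dynamic occupancy-style persistence and extinction models, not this simplified logistic fit.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(beta, X, y):
    """Negative log-likelihood of a logistic regression P(y=1) = expit(X @ beta)."""
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

def deviance(X, y):
    """Fit by BFGS and return twice the minimised negative log-likelihood."""
    res = minimize(neg_loglik, np.zeros(X.shape[1]), args=(X, y), method="BFGS")
    return 2.0 * res.fun

rng = np.random.default_rng(2)
forest = rng.uniform(0, 100, 400)                        # % forest cover per site
true_p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.08 * np.maximum(forest - 60.0, 0.0))))
persist = rng.binomial(1, true_p)                        # simulated persistence

smooth = deviance(np.column_stack([np.ones_like(forest), forest]), persist)
hinge = min(
    (deviance(np.column_stack([np.ones_like(forest),
                               np.maximum(forest - c, 0.0)]), persist), c)
    for c in np.arange(20.0, 90.0, 5.0)
)
print("smooth model deviance:  %.1f" % smooth)
print("best hinge model: deviance %.1f at %.0f%% forest cover" % hinge)
```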

  11. Model-Based Recursive Partitioning for Subgroup Analyses.

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-05-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by predictive factors. The method starts with a model for the overall treatment effect as defined for the primary analysis in the study protocol and uses measures for detecting parameter instabilities in this treatment effect. The procedure produces a segmented model with differential treatment parameters corresponding to each patient subgroup. The subgroups are linked to predictive factors by means of a decision tree. The method is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to their Riluzole treatment effect, the only currently approved drug for this disease.

  12. A theoretical model for analysing gender bias in medicine

    Johansson Eva E

    2009-08-01

    During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.

  13. A theoretical model for analysing gender bias in medicine.

    Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina

    2009-08-03

    During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.

  14. Randomized Item Response Theory Models

    Fox, Gerardus J.A.

    2005-01-01

    The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by
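    The probabilistic core of the randomized response technique can be written down in a few lines. The sketch below implements Warner's classical estimator in Python as an illustration; the 0.7/0.3 randomization device and the observed proportion of "yes" answers are made-up numbers, and the IRT extension described in this record is not reproduced here.

```python
def warner_estimate(p_yes_observed, p_design):
    """Warner's randomized response estimator.

    Each respondent answers the sensitive question truthfully with
    probability p_design and answers its negation otherwise, so
        P(yes) = p_design * pi + (1 - p_design) * (1 - pi),
    which is solved here for the population prevalence pi.
    """
    if p_design == 0.5:
        raise ValueError("p_design must differ from 0.5 to be identifiable")
    return (p_yes_observed - (1 - p_design)) / (2 * p_design - 1)

# Example: 40% 'yes' answers observed with a 0.7/0.3 randomization device.
print(round(warner_estimate(0.40, 0.7), 3))  # estimated prevalence = 0.25
```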

  15. Modelling structural systems for transient response analysis

    Melosh, R.J.

    1975-01-01

    This paper introduces and reports success of a direct means of determining the time periods in which a structural system behaves as a linear system. Numerical results are based on post fracture transient analyses of simplified nuclear piping systems. Knowledge of the linear response ranges will lead to improved analysis-test correlation and more efficient analyses. It permits direct use of data from physical tests in analysis and simplification of the analytical model and interpretation of its behavior. The paper presents a procedure for deducing linearity based on transient responses. Given the forcing functions and responses of discrete points of the system at various times, the process produces evidence of linearity and quantifies an adequate set of equations of motion. Results of using the process with linear and nonlinear analyses of piping systems with damping illustrate its success. Results cover the application to data from mathematical system responses. The process is successful with mathematical models. In loading ranges in which all modes are excited, eight-digit accuracy of predictions is obtained from the equations of motion deduced. Small changes (less than 0.01%) in the norm of the transfer matrices are produced by manipulation errors for linear systems, yielding evidence that nonlinearity is easily distinguished. Significant changes (greater than 5%) are coincident with relatively large norms of the equilibrium correction vector in nonlinear analyses. The paper shows that deducing linearity and, when admissible, quantifying linear equations of motion from transient response data for piping systems can be achieved with accuracy comparable to that of response data

  16. Responsibility modelling for civil emergency planning

    Sommerville, Ian; Storer, Timothy; Lock, Russell

    2009-01-01

    This paper presents a new approach to analysing and understanding civil emergency planning based on the notion of responsibility modelling combined with HAZOPS-style analysis of information requirements. Our goal is to represent complex contingency plans so that they can be more readily understood, so that inconsistencies can be highlighted and vulnerabilities discovered. In this paper, we outline the framework for contingency planning in the United Kingdom and introduce the notion of respons...

  17. Impact of sophisticated fog spray models on accident analyses

    Roblyer, S.P.; Owzarski, P.C.

    1978-01-01

    The N-Reactor confinement system release dose to the public in a postulated accident is reduced by washing the confinement atmosphere with fog sprays. This allows a low pressure release of confinement atmosphere containing fission products through filters and out an elevated stack. The current accident analysis required revision of the CORRAL code and other codes such as CONTEMPT to properly model the N Reactor confinement into a system of multiple fog-sprayed compartments. In revising these codes, more sophisticated models for the fog sprays and iodine plateout were incorporated to remove some of the conservatism of steam condensing rate, fission product washout and iodine plateout than used in previous studies. The CORRAL code, which was used to describe the transport and deposition of airborne fission products in LWR containment systems for the Rasmussen Study, was revised to describe fog spray removal of molecular iodine (I 2 ) and particulates in multiple compartments for sprays having individual characteristics of on-off times, flow rates, fall heights, and drop sizes in changing containment atmospheres. During postulated accidents, the code determined the fission product removal rates internally rather than from input decontamination factors. A discussion is given of how the calculated plateout and washout rates vary with time throughout the analysis. The results of the accident analyses indicated that more credit could be given to fission product washout and plateout. An important finding was that the release of fission products to the atmosphere and adsorption of fission products on the filters were significantly lower than previous studies had indicated

  18. A dialogue game for analysing group model building: framing collaborative modelling and its facilitation

    Hoppenbrouwers, S.J.B.A.; Rouwette, E.A.J.A.

    2012-01-01

    This paper concerns a specific approach to analysing and structuring operational situations in collaborative modelling. Collaborative modelling is viewed here as 'the goal-driven creation and shaping of models that are based on the principles of rational description and reasoning'. Our long term

  19. Vibration tests and analyses of the reactor building model on a small scale

    Tsuchiya, Hideo; Tanaka, Mitsuru; Ogihara, Yukio; Moriyama, Ken-ichi; Nakayama, Masaaki

    1985-01-01

    The purpose of this paper is to describe the vibration tests and the simulation analyses of the reactor building model on a small scale. The model vibration tests were performed to investigate the vibrational characteristics of the combined super-structure and to verify the computor code based on Dr. H. Tajimi's Thin Layered Element Theory, using the uniaxial shaking table (60 cm x 60 cm). The specimens consist of ground model, three structural model (prestressed concrete containment vessel, inner concrete structure, and enclosure building), a combined structural model and a combined structure-soil interaction model. These models are made of silicon-rubber, and they have a scale of 1:600. Harmonic step by step excitation of 40 gals was performed to investigate the vibrational characteristics for each structural model. The responses of the specimen to harmonic excitation were measured by optical displacement meters, and analyzed by a real time spectrum analyzer. The resonance and phase lag curves of the specimens to the shaking table were obtained respectively. As for the tests of a combined structure-soil interaction model, three predominant frequencies were observed in the resonance curves. These values were in good agreement with the analytical transfer function curves on the computer code. From the vibration tests and the simulation analyses, the silicon-rubber model test is useful for the fundamental study of structural problems. The computer code based on the Thin Element Theory can simulate well the test results. (Kobozono, M.)

  20. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanisms of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K{sub a} and the applicable K{sub a} calibration relationship has been determined for both fully bonded and

  1. Demographic origins of skewed operational and adult sex ratios: perturbation analyses of two-sex models.

    Veran, Sophie; Beissinger, Steven R

    2009-02-01

    Skewed sex ratios - operational (OSR) and Adult (ASR) - arise from sexual differences in reproductive behaviours and adult survival rates due to the cost of reproduction. However, skewed sex-ratio at birth, sex-biased dispersal and immigration, and sexual differences in juvenile mortality may also contribute. We present a framework to decompose the roles of demographic traits on sex ratios using perturbation analyses of two-sex matrix population models. Metrics of sensitivity are derived from analyses of sensitivity, elasticity, life-table response experiments and life stage simulation analyses, and applied to the stable stage distribution instead of lambda. We use these approaches to examine causes of male-biased sex ratios in two populations of green-rumped parrotlets (Forpus passerinus) in Venezuela. Female local juvenile survival contributed the most to the unbalanced OSR and ASR due to a female-biased dispersal rate, suggesting sexual differences in philopatry can influence sex ratios more strongly than the cost of reproduction.
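    The kind of perturbation analysis applied to the stable stage distribution rather than to lambda can be sketched numerically. The Python example below builds a small, hypothetical female-dominant two-sex projection matrix, computes the stable stage distribution, derives the adult sex ratio from it, and estimates its sensitivity to female juvenile survival by finite differences; the stages and rates are illustrative and are not the parrotlet estimates.

```python
import numpy as np

def stable_stage(A):
    """Stable stage distribution: the dominant right eigenvector of the
    projection matrix, normalised to sum to one."""
    vals, vecs = np.linalg.eig(A)
    lead = np.argmax(vals.real)
    w = np.abs(vecs[:, lead].real)
    return w / w.sum()

def adult_sex_ratio(stage):
    """Proportion of males among adults, for stage order
    (juvenile F, adult F, juvenile M, adult M)."""
    return stage[3] / (stage[1] + stage[3])

# Hypothetical female-dominant two-sex projection matrix (annual time step).
A = np.array([[0.0, 0.9, 0.0, 0.0],    # daughters produced per adult female
              [0.3, 0.8, 0.0, 0.0],    # female juvenile and adult survival
              [0.0, 0.9, 0.0, 0.0],    # sons produced per adult female
              [0.0, 0.0, 0.5, 0.8]])   # male juvenile and adult survival

asr = adult_sex_ratio(stable_stage(A))

# Finite-difference sensitivity of the ASR to female juvenile survival a21.
h = 1e-6
A_pert = A.copy()
A_pert[1, 0] += h
sens = (adult_sex_ratio(stable_stage(A_pert)) - asr) / h
print("ASR:", round(asr, 3), " sensitivity to a21:", round(sens, 3))
```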

  2. A motivational model for environmentally responsible behavior.

    Tabernero, Carmen; Hernández, Bernardo

    2012-07-01

    This paper presents a study examining whether self-efficacy and intrinsic motivation are related to environmentally responsible behavior (ERB). The study analysed past environmental behavior, self-regulatory mechanisms (self-efficacy, satisfaction, goals), and intrinsic and extrinsic motivation in relation to ERBs in a sample of 156 university students. Results show that all the motivational variables studied are linked to ERB. The effects of self-efficacy on ERB are mediated by the intrinsic motivation responses of the participants. A theoretical model was created by means of path analysis, revealing the power of motivational variables to predict ERB. Structural equation modeling was used to test and fit the research model. The role of motivational variables is discussed with a view to creating adequate learning contexts and experiences to generate interest and new sensations in which self-efficacy and affective reactions play an important role.

  3. Compensatory and non-compensatory multidimensional randomized item response models

    Fox, J.P.; Entink, R.K.; Avetisyan, M.

    2014-01-01

    Randomized response (RR) models are often used for analysing univariate randomized response data and measuring population prevalence of sensitive behaviours. There is much empirical support for the belief that RR methods improve the cooperation of the respondents. Recently, RR models have been

  4. Modelling and analysing interoperability in service compositions using COSMO

    Quartel, Dick; van Sinderen, Marten J.

    2008-01-01

    A service composition process typically involves multiple service models. These models may represent the composite and composed services from distinct perspectives, e.g. to model the role of some system that is involved in a service, and at distinct abstraction levels, e.g. to model the goal,

  5. Radiogenomics and radiotherapy response modeling

    El Naqa, Issam; Kerns, Sarah L.; Coates, James; Luo, Yi; Speers, Corey; West, Catharine M. L.; Rosenstein, Barry S.; Ten Haken, Randall K.

    2017-08-01

    Advances in patient-specific information and biotechnology have contributed to a new era of computational medicine. Radiogenomics has emerged as a new field that investigates the role of genetics in treatment response to radiation therapy. Radiation oncology is currently attempting to embrace these recent advances and add to its rich history by maintaining its prominent role as a quantitative leader in oncologic response modeling. Here, we provide an overview of radiogenomics starting with genotyping, data aggregation, and application of different modeling approaches based on modifying traditional radiobiological methods or application of advanced machine learning techniques. We highlight the current status and potential for this new field to reshape the landscape of outcome modeling in radiotherapy and drive future advances in computational oncology.

  6. Pavement Aging Model by Response Surface Modeling

    Manzano-Ramírez A.

    2011-10-01

    In this work, surface course aging was modeled by Response Surface Methodology (RSM). The Marshall specimens were placed in a conventional oven under time and temperature conditions established on the basis of the environmental factors of the region where the surface course is constructed with AC-20 from the Ing. Antonio M. Amor refinery. Volatilized material (VM), load resistance increment (ΔL) and flow resistance increment (ΔF) models were developed by the RSM. Cylindrical specimens with real aging were extracted from the surface course pilot to evaluate the error of the models. The VM model was adequate; in contrast, the ΔL and ΔF models were almost adequate, with an error of 20 % that was associated with other environmental factors which were not considered at the beginning of the research.
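    A response-surface fit of the kind described here amounts to least-squares estimation of a second-order polynomial in the aging factors. The sketch below fits such a surface in Python to a made-up 3 x 3 factorial of oven time and temperature; the data values, factor levels and the "volatilized material" response are purely illustrative and are not the study's measurements.

```python
import numpy as np

# Hypothetical aging data: oven time (h), temperature (degC) and the
# measured volatilized material (%); a 3x3 full factorial, values invented.
time_h = np.array([2, 2, 6, 6, 4, 4, 4, 2, 6], dtype=float)
temp_c = np.array([60, 100, 60, 100, 80, 60, 100, 80, 80], dtype=float)
vm_pct = np.array([0.8, 1.9, 1.5, 3.4, 1.8, 1.1, 2.6, 1.2, 2.3])

# Full second-order response surface:
# y = b0 + b1*t + b2*T + b3*t*T + b4*t^2 + b5*T^2
X = np.column_stack([np.ones_like(time_h), time_h, temp_c,
                     time_h * temp_c, time_h**2, temp_c**2])
beta, *_ = np.linalg.lstsq(X, vm_pct, rcond=None)

def predict(t, T):
    """Evaluate the fitted response surface at time t and temperature T."""
    return np.array([1.0, t, T, t * T, t**2, T**2]) @ beta

print(np.round(beta, 4))
print("predicted VM at 5 h, 90 degC:", round(predict(5.0, 90.0), 2))
```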

  7. Modelling, singular perturbation and bifurcation analyses of bitrophic food chains.

    Kooi, B W; Poggiale, J C

    2018-04-20

    Two predator-prey model formulations are studied: for the classical Rosenzweig-MacArthur (RM) model and the Mass Balance (MB) chemostat model. When the growth and loss rate of the predator is much smaller than that of the prey these models are slow-fast systems leading mathematically to singular perturbation problem. In contradiction to the RM-model, the resource for the prey are modelled explicitly in the MB-model but this comes with additional parameters. These parameter values are chosen such that the two models become easy to compare. In both models a transcritical bifurcation, a threshold above which invasion of predator into prey-only system occurs, and the Hopf bifurcation where the interior equilibrium becomes unstable leading to a stable limit cycle. The fast-slow limit cycles are called relaxation oscillations which for increasing differences in time scales leads to the well known degenerated trajectories being concatenations of slow parts of the trajectory and fast parts of the trajectory. In the fast-slow version of the RM-model a canard explosion of the stable limit cycles occurs in the oscillatory region of the parameter space. To our knowledge this type of dynamics has not been observed for the RM-model and not even for more complex ecosystem models. When a bifurcation parameter crosses the Hopf bifurcation point the amplitude of the emerging stable limit cycles increases. However, depending of the perturbation parameter the shape of this limit cycle changes abruptly from one consisting of two concatenated slow and fast episodes with small amplitude of the limit cycle, to a shape with large amplitude of which the shape is similar to the relaxation oscillation, the well known degenerated phase trajectories consisting of four episodes (concatenation of two slow and two fast). The canard explosion point is accurately predicted by using an extended asymptotic expansion technique in the perturbation and bifurcation parameter simultaneously where the small
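    For readers who want to reproduce the slow-fast behaviour qualitatively, the sketch below integrates one common non-dimensionalised form of the Rosenzweig-MacArthur model in which a small parameter eps multiplies the predator equation to impose the time-scale separation. The parameter values are illustrative choices placed in the oscillatory regime, not those analysed in the paper, and the Mass Balance chemostat variant is not included.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rm_rhs(t, z, r, K, a, h, e, m, eps):
    """Rosenzweig-MacArthur model with a slow predator: fast prey x, slow
    predator y; eps << 1 encodes the difference in time scales."""
    x, y = z
    functional = a * x / (1.0 + a * h * x)        # Holling type II response
    dx = r * x * (1.0 - x / K) - functional * y   # fast prey dynamics
    dy = eps * (e * functional - m) * y           # slow predator dynamics
    return [dx, dy]

params = (1.0, 1.0, 5.0, 3.0, 0.5, 0.1, 0.01)     # r, K, a, h, e, m, eps
sol = solve_ivp(rm_rhs, (0.0, 2000.0), [0.5, 0.2], args=params,
                method="LSODA", rtol=1e-8, atol=1e-10)
# The trajectory settles onto a relaxation-oscillation-like limit cycle.
print("prey range:    ", np.round([sol.y[0].min(), sol.y[0].max()], 3))
print("predator range:", np.round([sol.y[1].min(), sol.y[1].max()], 3))
```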

  8. Analysing the Linux kernel feature model changes using FMDiff

    Dintzner, N.J.R.; van Deursen, A.; Pinzger, M.

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  9. Analysing the Linux kernel feature model changes using FMDiff

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  10. Analysing Models as a Knowledge Technology in Transport Planning

    Gudmundsson, Henrik

    2011-01-01

    critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic......Models belong to a wider family of knowledge technologies, applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame...... device to illuminate how such an analytic scheme may allow patterns of insight about the use, influence and role of models in planning to emerge. The main contribution of the paper is to demonstrate that concepts and terminologies from knowledge use literature can provide interpretations of significance...

  11. GOTHIC MODEL OF BWR SECONDARY CONTAINMENT DRAWDOWN ANALYSES

    Hansen, P.N.

    2004-01-01

    This article introduces a GOTHIC version 7.1 model of the Secondary Containment Reactor Building Post LOCA drawdown analysis for a BWR. GOTHIC is an EPRI sponsored thermal hydraulic code. This analysis is required by the Utility to demonstrate an ability to restore and maintain the Secondary Containment Reactor Building negative pressure condition. The technical and regulatory issues associated with this modeling are presented. The analysis includes the effects of wind, elevation and thermal impacts on pressure conditions. The model uses a multiple-volume representation that includes the spent fuel pool. In addition, heat sources and sinks are modeled as one dimensional heat conductors. The leakage into the building is modeled to include both laminar and turbulent behavior as established by actual plant test data. The GOTHIC code provides components to model heat exchangers used to provide fuel pool cooling as well as area cooling via air coolers. The results of the evaluation are used to demonstrate the time that the Reactor Building is at a pressure that exceeds external conditions. This time period is established with the GOTHIC model based on the worst case pressure conditions on the building. For this time period the Utility must assume the primary containment leakage goes directly to the environment. Once the building pressure is restored below outside conditions the release to the environment can be credited as a filtered release

  12. Modelling of demand response and market power

    Kristoffersen, B.B.; Donslund, B.; Boerre Eriksen, P.

    2004-01-01

    Demand-side flexibility and demand response to high prices are prerequisites for the proper functioning of the Nordic power market. If the consumers are unwilling to respond to high prices, the market may fail to clear, and this may result in unwanted forced demand disconnections. Being the TSO of Western Denmark, Eltra is responsible for both security of supply and the design of the power market within its area. On this basis, Eltra has developed a new mathematical model tool for analysing the Nordic wholesale market. The model is named MARS (MARket Simulation). The model is able to handle hydropower and thermal production, nuclear power and wind power. Hourly modelling of production, demand and exchanges is an important new feature of the model. The model uses the same principles as Nord Pool (The Nordic Power Exchange), including the division of the Nordic countries into price areas. On the demand side, price elasticity is taken into account and described by a Cobb-Douglas function. In addition to simulating markets under perfect competition, particular attention has been given to modelling imperfect market conditions, i.e. exercise of market power on the supply side. Market power is simulated by using game theory, including the Nash equilibrium concept. The paper gives a short description of the MARS model. Besides, focus is on the application of the model in order to illustrate the importance of demand response in the Nordic market. Simulations with different values of demand elasticity are compared. Calculations are carried out for perfect competition and for the situation in which market power is exercised by the large power producers in the Nordic countries (oligopoly). (au)
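    The demand-side representation mentioned here, price elasticity described by a Cobb-Douglas function, reduces to a constant-elasticity demand curve. The short Python sketch below shows how hourly demand would respond to price under two elasticity values; the reference demand, reference price and elasticities are invented for illustration and are not Eltra's MARS inputs.

```python
import numpy as np

def demand(price, d_ref, p_ref, elasticity):
    """Constant-elasticity (Cobb-Douglas style) demand curve:
    D(p) = D_ref * (p / p_ref) ** elasticity, with elasticity < 0."""
    return d_ref * (price / p_ref) ** elasticity

prices = np.array([20.0, 40.0, 80.0, 200.0])   # EUR/MWh, illustrative only
for eps in (-0.05, -0.20):                     # inelastic vs. more elastic demand
    print(eps, np.round(demand(prices, d_ref=2500.0, p_ref=30.0,
                               elasticity=eps), 1))
```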

  13. Marginal Utility of Conditional Sensitivity Analyses for Dynamic Models

    Background/Question/MethodsDynamic ecological processes may be influenced by many factors. Simulation models thatmimic these processes often have complex implementations with many parameters. Sensitivityanalyses are subsequently used to identify critical parameters whose uncertai...

  14. Plasma-safety assessment model and safety analyses of ITER

    Honda, T.; Okazaki, T.; Bartels, H.-H.; Uckan, N.A.; Sugihara, M.; Seki, Y.

    2001-01-01

    A plasma-safety assessment model has been provided on the basis of the plasma physics database of the International Thermonuclear Experimental Reactor (ITER) to analyze events including plasma behavior. The model was implemented in a safety analysis code (SAFALY), which consists of a 0-D dynamic plasma model and a 1-D thermal behavior model of the in-vessel components. Unusual plasma events of ITER, e.g., overfueling, were calculated using the code and plasma burning is found to be self-bounded by operation limits or passively shut down due to impurity ingress from overheated divertor targets. Sudden transition of divertor plasma might lead to failure of the divertor target because of a sharp increase of the heat flux. However, the effects of the aggravating failure can be safely handled by the confinement boundaries. (author)

  15. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)
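    One way to make the nuisance (bias) treatment of a theoretical uncertainty concrete is the simple profiling construction sketched below, in which the test statistic is minimised over a bias confined to a fixed range before a p value is computed. This is an illustrative Python sketch under Gaussian assumptions, not the paper's full machinery or its adaptive p value; all numbers are hypothetical.

```python
from scipy.stats import chi2

def p_value_nuisance(x_obs, mu_test, sigma, delta_max):
    """p value for the hypothesis mu = mu_test when the measurement x_obs has
    Gaussian statistical error sigma and an additive theoretical bias delta
    restricted to [-delta_max, delta_max]; the bias is profiled out by letting
    it absorb as much of the discrepancy as its allowed range permits."""
    residual = max(abs(x_obs - mu_test) - delta_max, 0.0)
    return chi2.sf((residual / sigma) ** 2, df=1)

# A 1.0 offset with sigma = 0.3 alone would be highly significant; allowing a
# theory bias of up to 0.5 weakens the exclusion considerably.
print(round(p_value_nuisance(1.0, 0.0, sigma=0.3, delta_max=0.5), 4))
print(round(p_value_nuisance(1.0, 0.0, sigma=0.3, delta_max=0.0), 6))
```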

  16. A semi-parametric within-subject mixture approach to the analyses of responses and response times.

    Molenaar, Dylan; Bolsinova, Maria; Vermunt, Jeroen K

    2018-05-01

    In item response theory, modelling the item response times in addition to the item responses may improve the detection of possible between- and within-subject differences in the process that resulted in the responses. For instance, if respondents rely on rapid guessing on some items but not on all, the joint distribution of the responses and response times will be a multivariate within-subject mixture distribution. Suitable parametric methods to detect these within-subject differences have been proposed. In these approaches, a distribution needs to be assumed for the within-class response times. In this paper, it is demonstrated that these parametric within-subject approaches may produce false positives and biased parameter estimates if the assumption concerning the response time distribution is violated. A semi-parametric approach is proposed which resorts to categorized response times. This approach is shown to hardly produce false positives and parameter bias. In addition, the semi-parametric approach results in approximately the same power as the parametric approach. © 2017 The British Psychological Society.

  17. Analysing earthquake slip models with the spatial prediction comparison test

    Zhang, L.; Mai, Paul Martin; Thingbaijam, Kiran Kumar; Razafindrakoto, H. N. T.; Genton, Marc G.

    2014-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlations lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
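
    The ingredients of such a comparison can be sketched numerically. The code below is a simplified stand-in, not the SPCT implementation used in the paper: it builds synthetic smooth slip fields, forms a loss-differential field between two candidate models and a reference, and uses a crude block-averaging proxy in place of the spatially aware variance estimate that the actual test requires.

```python
# Simplified illustration of the ingredients of a spatial prediction comparison test:
# two candidate slip models are compared to a reference model through a loss field,
# and the mean loss differential is tested against zero. The real SPCT estimates the
# variance of the mean differential from the spatial covariance of the loss-differential
# field; here a crude block-averaging proxy is used instead. All fields are synthetic.
import numpy as np

rng = np.random.default_rng(0)
nx, ny = 64, 64

def smooth_field(rng, nx, ny, sigma=6.0):
    """Generate a smooth random field by low-pass filtering white noise in Fourier space."""
    kx = np.fft.fftfreq(nx)[:, None]
    ky = np.fft.fftfreq(ny)[None, :]
    filt = np.exp(-(kx**2 + ky**2) * (2 * np.pi * sigma) ** 2 / 2)
    white = rng.standard_normal((nx, ny))
    return np.real(np.fft.ifft2(np.fft.fft2(white) * filt))

reference = smooth_field(rng, nx, ny)
model_a = reference + 0.1 * smooth_field(rng, nx, ny)   # close to the reference
model_b = reference + 0.5 * smooth_field(rng, nx, ny)   # further from the reference

loss_a = (model_a - reference) ** 2                     # squared-error loss fields
loss_b = (model_b - reference) ** 2
diff = loss_a - loss_b                                  # loss-differential field

# Crude effective-sample-size correction: average the differential over blocks larger
# than the correlation length, then treat block means as approximately independent.
block = 16
blocks = diff.reshape(nx // block, block, ny // block, block).mean(axis=(1, 3))
z = blocks.mean() / (blocks.std(ddof=1) / np.sqrt(blocks.size))
print(f"mean loss differential = {diff.mean():+.4f}, block z-statistic = {z:+.2f}")
# A strongly negative z indicates model_a has lower loss than model_b w.r.t. the reference.
```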

  19. Models and analyses for inertial-confinement fusion-reactor studies

    Bohachevsky, I.O.

    1981-05-01

    This report describes models and analyses devised at Los Alamos National Laboratory to determine the technical characteristics of different inertial confinement fusion (ICF) reactor elements required for component integration into a functional unit. We emphasize the generic properties of the different elements rather than specific designs. The topics discussed are general ICF reactor design considerations; reactor cavity phenomena, including the restoration of interpulse ambient conditions; first-wall temperature increases and material losses; reactor neutronics and hydrodynamic blanket response to neutron energy deposition; and analyses of loads and stresses in the reactor vessel walls, including remarks about the generation and propagation of very short wavelength stress waves. A discussion of analytic approaches useful in integrations and optimizations of ICF reactor systems concludes the report

  20. Business models for telehealth in the US: analyses and insights

    Pereira F

    2017-02-01

    Full Text Available Francis Pereira, Data Sciences and Operations, Marshall School of Business, University of Southern California, Los Angeles, CA, USA. Abstract: A growing shortage of medical doctors and nurses globally, coupled with increasing life expectancy, is generating greater cost pressures on health care in the US and globally. In this respect, telehealth can help alleviate these pressures, as well as extend medical services to underserved or unserved areas. However, its relatively slow adoption in the US, as well as in other markets, suggests the presence of barriers and challenges. The use of a business model framework helps identify the value proposition of telehealth as well as these challenges, which include identifying the right revenue model, organizational structure, and, perhaps more importantly, the stakeholders in the telehealth ecosystem. Successful and cost-effective deployment of telehealth requires a redefinition of the ecosystem and a comprehensive review of all benefits and beneficiaries of such a system; hence a reassessment of all the stakeholders that could benefit from it, beyond the traditional patient–health provider–insurer model, and thus of "who should pay" for such a system. The driving efforts of a "keystone" player in developing this initiative would also help. Keywords: telehealth, business model framework, stakeholders, ecosystem, VISOR business model

  1. A Cyber-Attack Detection Model Based on Multivariate Analyses

    Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi

    In the present paper, we propose a novel cyber-attack detection model based on applying two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and the cluster analysis method. We quantify the observed qualitative audit event sequences via the quantification method IV, and group similar audit event sequences based on the cluster analysis. It is shown in simulation experiments that our model can improve the cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.
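
    The clustering step can be illustrated with a toy example. The sketch below is only an analogy to the paper's approach: Hayashi's quantification method IV is replaced by a simple event-bigram count embedding of invented audit sequences, which are then grouped by agglomerative clustering.

```python
# Rough sketch of the clustering step only. The paper quantifies qualitative audit event
# sequences with Hayashi's quantification method IV before clustering; here that step is
# replaced by a plain event-bigram count embedding, so this is an analogy, not the
# authors' method. The toy event sequences are invented.
from collections import Counter

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

sequences = {
    "normal_1": ["login", "read", "read", "write", "logout"],
    "normal_2": ["login", "read", "write", "read", "logout"],
    "attack_1": ["login", "sudo", "chmod", "download", "exec"],
    "attack_2": ["login", "sudo", "download", "exec", "chmod"],
}

def bigrams(seq):
    """Adjacent event pairs of a sequence."""
    return list(zip(seq, seq[1:]))

# Build the bigram vocabulary and count vectors for each audit sequence.
vocab = sorted({bg for seq in sequences.values() for bg in bigrams(seq)})

def embed(seq):
    counts = Counter(bigrams(seq))
    return np.array([counts[bg] for bg in vocab], dtype=float)

X = np.vstack([embed(seq) for seq in sequences.values()])

# Agglomerative clustering on the embedded sequences; two groups are expected here.
Z = linkage(X, method="average", metric="cosine")
labels = fcluster(Z, t=2, criterion="maxclust")
for name, lab in zip(sequences, labels):
    print(f"{name}: cluster {lab}")
```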

  2. Model analyses for sustainable energy supply under CO2 restrictions

    Matsuhashi, Ryuji; Ishitani, Hisashi.

    1995-01-01

    This paper aims at clarifying key points for realizing sustainable energy supply under restrictions on CO2 emissions. For this purpose, the possibility of a solar breeding system is investigated as a key technology for sustainable energy supply. The authors describe their mathematical model simulating global energy supply and demand over the ultra-long term. Depletion of non-renewable resources and constraints on CO2 emissions are taken into consideration in the model. Computed results have shown that the present energy system based on non-renewable resources shifts to a system based on renewable resources in the ultra-long term, given appropriate incentives

  3. Molecular responses of genetically modified maize to abiotic stresses as determined through proteomic and metabolomic analyses.

    Rafael Fonseca Benevenuto

    Full Text Available Some genetically modified (GM) plants have transgenes that confer tolerance to abiotic stressors. Meanwhile, other transgenes may interact with abiotic stressors, causing pleiotropic effects that will affect the plant physiology. Thus, physiological alterations might have an impact on product safety. However, routine risk assessment (RA) analyses do not evaluate the response of GM plants exposed to different environmental conditions. Therefore, we here present a proteome profile of herbicide-tolerant maize, including the levels of phytohormones and related compounds, compared to its near-isogenic non-GM variety under drought and herbicide stresses. Twenty differentially abundant proteins were detected between GM and non-GM hybrids under different water deficiency conditions and herbicide sprays. Pathway enrichment analysis showed that most of these proteins are assigned to energetic/carbohydrate metabolic processes. Among phytohormones and related compounds, different levels of ABA, CA, JA, MeJA and SA were detected in the maize varieties and stress conditions analysed. In pathway and proteome analyses, environment was found to be the major source of variation, followed by the genetic transformation factor. Nonetheless, differences were detected in the levels of JA, MeJA and CA and in the abundance of 11 proteins when comparing the GM plant and its non-GM near-isogenic variety under the same environmental conditions. Thus, these findings support the inclusion of molecular studies in the risk assessment of GM plants.

  4. Emergency response guide-B ECCS guideline evaluation analyses for N reactor

    Chapman, J.C.; Callow, R.A.

    1989-07-01

    INEL conducted two ECCS analyses for Westinghouse Hanford. Both analyses will assist in the evaluation of proposed changes to the N Reactor Emergency Response Guide-B (ERG-B) Emergency Core Cooling System (ECCS) guideline. The analyses were a sensitivity study for reduced-ECCS flow rates and a mechanistically determined confinement steam source for a delayed-ECCS LOCA sequence. The reduced-ECCS sensitivity study established the maximum allowable reduction in ECCS flow as a function of time after core refill for a large break loss-of-coolant accident (LOCA) sequence in the N Reactor. The maximum allowable ECCS flow reduction is defined as the maximum flow reduction for which ECCS continues to provide adequate core cooling. The delayed-ECCS analysis established the liquid and steam break flows and enthalpies during the reflood of a hot core following a delayed ECCS injection LOCA sequence. A simulation of a large, hot leg manifold break with a seven-minute ECCS injection delay was used as a representative LOCA sequence. Both analyses were performed using the RELAP5/MOD2.5 transient computer code. 13 refs., 17 figs., 3 tabs

  5. Models for transient analyses in advanced test reactors

    Gabrielli, Fabrizio

    2011-01-01

    Several strategies are being developed worldwide to respond to the world's increasing demand for electricity. Modern nuclear facilities are under construction or in the planning phase. In parallel, advanced nuclear reactor concepts are being developed to achieve sustainability, minimize waste, and ensure uranium resources. To optimize the performance of components (fuels and structures) of these systems, significant efforts are under way to design new Material Test Reactor facilities in Europe which employ water as a coolant. Safety provisions and the analyses of severe accidents are key points in the determination of sound designs. In this frame, the SIMMER multiphysics code system is a very attractive tool as it can simulate transients and phenomena within and beyond the design basis in a tightly coupled way. This thesis is primarily focused upon the extension of the SIMMER multigroup cross-section processing scheme (based on the Bondarenko method) for a proper heterogeneity treatment in the analyses of water-cooled thermal neutron systems. Since the SIMMER code was originally developed for liquid metal-cooled fast reactor analyses, the effect of heterogeneity had been neglected. As a result, the application of the code to water-cooled systems leads to a significant overestimation of the reactivity feedbacks and in turn to non-conservative results. To treat the heterogeneity, the multigroup cross-sections should be computed by properly taking account of the resonance self-shielding effects and the fine group-wise intra-cell flux distribution in space. In this thesis, significant improvements of the SIMMER cross-section processing scheme are described. A new formulation of the background cross-section, based on the Bell and Wigner correlations, is introduced and pre-calculated reduction factors (Effective Mean Chord Lengths) are used to take proper account of the resonance self-shielding effects of non-fuel isotopes. Moreover, pre-calculated parameters are applied

  6. A Formal Model to Analyse the Firewall Configuration Errors

    T. T. Myo

    2015-01-01

    Full Text Available The firewall is widely known as a brandmauer (security-edge gateway). To provide the demanded security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, when configuring, even skilled administrators may make mistakes, which result in a decreased level of network security and in undesirable packets infiltrating the network. The network can be exposed to various threats and attacks. One of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, using a security policy, controls packets passing through the borders of a secured network. The security policy represents a set of rules. Packet filters work in stateless mode: they investigate packets as independent objects. Rules take the following form: (condition, action). The firewall analyses the entering traffic based on the IP addresses of the sender and recipient, the port numbers of the sender and recipient, and the protocol used. When a packet meets the conditions of a rule, the action specified in the rule is carried out; it can be allow or deny. The aim of this article is to develop tools to analyse a firewall configuration with inspection of states. The input data are a file with the set of rules. It is required to present the analysis of the security policy in an informative graphical form as well as to reveal inconsistencies in the rules. The article presents a security policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, a concept of the equivalence region is introduced. Our task is for the program to display the results of rule actions on packets in a convenient graphical form as well as to reveal contradictions between the rules. One of the problems is the large number of dimensions. As noted above, the following parameters are specified in a rule: source IP address, destination IP
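
    One of the checks described, detecting rules that are fully covered (and possibly contradicted) by earlier rules, can be sketched compactly. The code below is a minimal illustration under simplifying assumptions (IPv4 only, rules reduced to source network, destination network, destination port range, protocol and action, first-match-wins semantics); the rule set is invented.

```python
# Minimal sketch of a shadowing/contradiction check for a stateless packet-filter rule
# list. Assumptions: IPv4 only, first match wins, rules reduced to a few fields.
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass
class Rule:
    src: str          # source network, CIDR
    dst: str          # destination network, CIDR
    ports: tuple      # inclusive (low, high) destination port range
    proto: str        # "tcp", "udp" or "any"
    action: str       # "allow" or "deny"

def covers(a: Rule, b: Rule) -> bool:
    """True if every packet matched by rule b is also matched by rule a."""
    return (ip_network(b.src).subnet_of(ip_network(a.src))
            and ip_network(b.dst).subnet_of(ip_network(a.dst))
            and a.ports[0] <= b.ports[0] and b.ports[1] <= a.ports[1]
            and a.proto in ("any", b.proto))

def find_shadowed(rules):
    """Report later rules fully covered by an earlier one (first match wins)."""
    issues = []
    for j, later in enumerate(rules):
        for i, earlier in enumerate(rules[:j]):
            if covers(earlier, later):
                kind = "redundant" if earlier.action == later.action else "contradicted"
                issues.append((i, j, kind))
                break
    return issues

rules = [
    Rule("10.0.0.0/8", "0.0.0.0/0", (1, 65535), "any", "deny"),
    Rule("10.1.0.0/16", "192.168.1.0/24", (80, 80), "tcp", "allow"),   # covered by rule 0
    Rule("172.16.0.0/12", "192.168.1.0/24", (443, 443), "tcp", "allow"),
]
for i, j, kind in find_shadowed(rules):
    print(f"rule {j} is {kind} by earlier rule {i}")
```

    A stateful analysis, as targeted in the article, would additionally track connection state; the sketch deliberately stays at the stateless-rule level.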

  7. Theoretical modeling and experimental analyses of laminated wood composite poles

    Cheng Piao; Todd F. Shupe; Vijaya Gopu; Chung Y. Hse

    2005-01-01

    Wood laminated composite poles consist of trapezoid-shaped wood strips bonded with synthetic resin. The thick-walled hollow poles had adequate strength and stiffness properties and were a promising substitute for solid wood poles. It was necessary to develop theoretical models to facilitate the manufacture and future installation and maintenance of this novel...

  8. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    Feng, Cai-ping; Mundy, J.

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions, TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also discus...

  9. Capacity allocation in wireless communication networks - models and analyses

    Litjens, Remco

    2003-01-01

    This monograph has concentrated on capacity allocation in cellular and Wireless Local Area Networks, primarily with a network operator’s perspective. In the introductory chapter, a reference model has been proposed for the extensive suite of capacity allocation mechanisms that can be applied at

  10. Seismic response and damage detection analyses of an instrumented steel moment-framed building

    Rodgers, J.E.; Celebi, M.

    2006-01-01

    The seismic performance of steel moment-framed buildings has been of particular interest since brittle fractures were discovered at the beam-column connections in a number of buildings following the M 6.7 Northridge earthquake of January 17, 1994. A case study of the seismic behavior of an extensively instrumented 13-story steel moment frame building located in the greater Los Angeles area of California is described herein. Response studies using frequency domain, joint time-frequency, system identification, and simple damage detection analyses are performed using an extensive strong motion dataset dating from 1971 to the present, supported by engineering drawings and results of postearthquake inspections. These studies show that the building's response is more complex than would be expected from its highly symmetrical geometry. The response is characterized by low damping in the fundamental mode, larger accelerations in the middle and lower stories than at the roof and base, extended periods of vibration after the cessation of strong input shaking, beating in the response, elliptical particle motion, and significant torsion during strong shaking at the top of the concrete piers which extend from the basement to the second floor. The analyses conducted indicate that the response of the structure was elastic in all recorded earthquakes to date, including Northridge. Also, several simple damage detection methods employed did not indicate any structural damage or connection fractures. The combination of a large, real structure and low instrumentation density precluded the application of many recently proposed advanced damage detection methods in this case study. Overall, however, the findings of this study are consistent with the limited code-compliant postearthquake intrusive inspections conducted after the Northridge earthquake, which found no connection fractures or other structural damage. © ASCE.

  11. Multivariate statistical analyses demonstrate unique host immune responses to single and dual lentiviral infection.

    Sunando Roy

    2009-10-01

    Full Text Available Feline immunodeficiency virus (FIV) and human immunodeficiency virus (HIV) are recently identified lentiviruses that cause progressive immune decline and ultimately death in infected cats and humans. It is of great interest to understand how to prevent immune system collapse caused by these lentiviruses. We recently described that disease caused by a virulent FIV strain in cats can be attenuated if animals are first infected with a feline immunodeficiency virus derived from a wild cougar. The detailed temporal tracking of cat immunological parameters in response to two viral infections resulted in high-dimensional datasets containing variables that exhibit strong co-variation. Initial analyses of these complex data using univariate statistical techniques did not account for interactions among immunological response variables and therefore potentially obscured significant effects between infection state and immunological parameters. Here, we apply a suite of multivariate statistical tools, including Principal Component Analysis, MANOVA and Linear Discriminant Analysis, to temporal immunological data resulting from FIV superinfection in domestic cats. We investigated the co-variation among immunological responses, the differences in immune parameters among four groups of five cats each (uninfected, single and dual infected animals), and the "immune profiles" that discriminate among them over the first four weeks following superinfection. Dual infected cats mount an immune response by 24 days post superinfection that is characterized by elevated levels of CD8 and CD25 cells and increased expression of IL4 and IFNgamma, and FAS. This profile discriminates dual infected cats from cats infected with FIV alone, which show high IL-10 and lower numbers of CD8 and CD25 cells. Multivariate statistical analyses demonstrate both the dynamic nature of the immune response to FIV single and dual infection and the development of a unique immunological profile in dual
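
    The multivariate workflow described, dimension reduction followed by a discriminant analysis of infection groups, can be sketched as follows. The data, group labels and immune parameters in the sketch are simulated placeholders, not the study's measurements.

```python
# Illustrative re-creation (with simulated numbers, not the study's data) of the
# multivariate workflow described: PCA to examine co-variation among immune parameters,
# then linear discriminant analysis to find the profile separating infection groups.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
groups = {"uninfected": 0.0, "single_A": 0.5, "single_B": 1.0, "dual": 2.0}  # placeholder names
params = ["CD4", "CD8", "CD25", "IL4", "IL10", "IFNg", "FAS"]

# Five animals per group, seven correlated immune parameters per animal (simulated).
X, y = [], []
for name, shift in groups.items():
    base = rng.normal(0, 1, size=(5, len(params)))
    base[:, 1:4] += shift            # pretend CD8/CD25/IL4 rise with infection burden
    X.append(base)
    y += [name] * 5
X = np.vstack(X)

pca = PCA(n_components=2).fit(X)
print("variance explained by PC1, PC2:", np.round(pca.explained_variance_ratio_, 2))

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
print("LDA training accuracy:", lda.score(X, y))
for p, w in zip(params, lda.scalings_[:, 0]):
    print(f"  {p:>5}: LD1 weight {w:+.2f}")
```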

  12. Exposure-response analyses of liraglutide 3.0 mg for weight management.

    Wilding, J P H; Overgaard, R V; Jacobsen, L V; Jensen, C B; le Roux, C W

    2016-05-01

    Liraglutide 3.0 mg, an acylated GLP-1 analogue approved for weight management, lowers body weight through decreased energy intake. We conducted exposure-response analyses to provide important information on individual responses to given drug doses, reflecting inter-individual variations in drug metabolism, absorption and excretion. We report efficacy and safety responses across a wide range of exposure levels, using data from one phase II (liraglutide doses 1.2, 1.8, 2.4 and 3.0 mg), and two phase IIIa [SCALE Obesity and Prediabetes (3.0 mg); SCALE Diabetes (1.8; 3.0 mg)] randomized, placebo-controlled trials (n = 4372). There was a clear exposure-weight loss response. Weight loss increased with greater exposure and appeared to level off at the highest exposures associated with liraglutide 3.0 mg in most individuals, but did not fully plateau in men. In individuals with overweight/obesity and comorbid type 2 diabetes, there was a clear exposure-glycated haemoglobin (HbA1c) relationship. HbA1c reduction increased with higher plasma liraglutide concentration (plateauing at ∼21 nM); however, for individuals with baseline HbA1c >8.5%, HbA1c reduction did not fully plateau. No exposure-response relationship was identified for any safety outcome, with the exception of gastrointestinal adverse events (AEs). Individuals with gallbladder AEs, acute pancreatitis or malignant/breast/benign colorectal neoplasms did not have higher liraglutide exposure compared with the overall population. These analyses support the use of liraglutide 3.0 mg for weight management in all subgroups investigated; weight loss increased with higher drug exposure, with no concomitant deterioration in safety/tolerability besides previously known gastrointestinal side effects. © 2016 John Wiley & Sons Ltd.
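
    A plateauing exposure-response relationship of the kind reported is often summarised with an Emax-type curve. The sketch below fits such a curve to simulated exposure and HbA1c-change data; the functional form and all numbers are assumptions for illustration, not the trial's actual model or results.

```python
# Hedged sketch of fitting a plateauing exposure-response curve of the kind described
# (e.g. HbA1c reduction levelling off around ~21 nM). The Emax form and every number
# below are illustrative assumptions, not the trial's model or data.
import numpy as np
from scipy.optimize import curve_fit

def emax(conc, e0, emax_, ec50):
    """Simple Emax exposure-response model."""
    return e0 + emax_ * conc / (ec50 + conc)

rng = np.random.default_rng(2)
conc = rng.uniform(2, 40, size=200)                        # plasma exposure, nM (simulated)
true = emax(conc, e0=0.0, emax_=-1.3, ec50=8.0)            # HbA1c change, %-points (simulated)
obs = true + rng.normal(0, 0.15, size=conc.size)

params, cov = curve_fit(emax, conc, obs, p0=(0.0, -1.0, 10.0))
e0_hat, emax_hat, ec50_hat = params
print(f"fitted Emax = {emax_hat:+.2f} %-points, EC50 = {ec50_hat:.1f} nM")
print(f"predicted effect at 21 nM: {emax(21.0, *params):+.2f} %-points")
```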

  13. Exposure–response analyses of liraglutide 3.0 mg for weight management

    Overgaard, R. V.; Jacobsen, L. V.; Jensen, C. B.; le Roux, C. W.

    2016-01-01

    Aims Liraglutide 3.0 mg, an acylated GLP‐1 analogue approved for weight management, lowers body weight through decreased energy intake. We conducted exposure‐response analyses to provide important information on individual responses to given drug doses, reflecting inter‐individual variations in drug metabolism, absorption and excretion. Methods We report efficacy and safety responses across a wide range of exposure levels, using data from one phase II (liraglutide doses 1.2, 1.8, 2.4 and 3.0 mg), and two phase IIIa [SCALE Obesity and Prediabetes (3.0 mg); SCALE Diabetes (1.8; 3.0 mg)] randomized, placebo‐controlled trials (n = 4372). Results There was a clear exposure–weight loss response. Weight loss increased with greater exposure and appeared to level off at the highest exposures associated with liraglutide 3.0 mg in most individuals, but did not fully plateau in men. In individuals with overweight/obesity and comorbid type 2 diabetes, there was a clear exposure–glycated haemoglobin (HbA1c) relationship. HbA1c reduction increased with higher plasma liraglutide concentration (plateauing at ∼21 nM); however, for individuals with baseline HbA1c >8.5%, HbA1c reduction did not fully plateau. No exposure–response relationship was identified for any safety outcome, with the exception of gastrointestinal adverse events (AEs). Individuals with gallbladder AEs, acute pancreatitis or malignant/breast/benign colorectal neoplasms did not have higher liraglutide exposure compared with the overall population. Conclusions These analyses support the use of liraglutide 3.0 mg for weight management in all subgroups investigated; weight loss increased with higher drug exposure, with no concomitant deterioration in safety/tolerability besides previously known gastrointestinal side effects. PMID:26833744

  14. Complex accident scenarios modelled and analysed by Stochastic Petri Nets

    Nývlt, Ondřej; Haugen, Stein; Ferkl, Lukáš

    2015-01-01

    This paper is focused on the usage of Petri nets for effective modelling and simulation of complicated accident scenarios, where the order of events can vary and some events may occur anywhere in an event chain. These cases are hardly manageable by traditional methods such as event trees – e.g. one pivotal event must often be inserted several times into one branch of the tree. Our approach is based on Stochastic Petri Nets with Predicates and Assertions and on an idea which comes from the area of Programmable Logic Controllers: an accident scenario is described as a net of interconnected blocks, which represent parts of the scenario. So the scenario is first divided into parts, which are then modelled by Petri nets. Every block can be easily interconnected with other blocks by input/output variables to create complex ones. In the presented approach, every event or part of a scenario is modelled only once, independently of the number of its occurrences in the scenario. The final model is much more transparent than the corresponding event tree. The method is shown in two case studies, where the advanced one contains dynamic behavior. - Highlights: • Event & Fault trees have problems with scenarios where the order of events can vary. • Paper presents a method for modelling and analysis of dynamic accident scenarios. • The presented method is based on Petri nets. • The proposed method solves the mentioned problems of traditional approaches. • The method is shown in two case studies: simple and advanced (with dynamic behavior)

  15. Analyses of Lattice Traffic Flow Model on a Gradient Highway

    Gupta Arvind Kumar; Redhu Poonam; Sharma Sapna

    2014-01-01

    The optimal current difference lattice hydrodynamic model is extended to investigate the traffic flow dynamics on a unidirectional single lane gradient highway. The effect of slope on an uphill/downhill highway is examined through linear stability analysis, and it is shown that the slope significantly affects the stability region on the phase diagram. Using nonlinear stability analysis, the Burgers, Korteweg-deVries (KdV) and modified Korteweg-deVries (mKdV) equations are derived in the stable, metastable and unstable regions, respectively. The effect of the reaction coefficient is examined, and it is concluded that it plays an important role in suppressing traffic jams on a gradient highway. The theoretical findings have been verified through numerical simulations, which confirm that the slope on a gradient highway significantly influences the traffic dynamics and that traffic jams can be suppressed efficiently by considering the optimal current difference effect in the new lattice model. (nuclear physics)

  16. Aggregated Wind Park Models for Analysing Power System Dynamics

    Poeller, Markus; Achilles, Sebastian [DIgSILENT GmbH, Gomaringen (Germany)

    2003-11-01

    The increasing amount of wind power generation in European power systems requires stability analysis considering interaction between wind farms and transmission systems. Dynamics introduced by dispersed wind generators at the distribution level can usually be neglected. However, large on- and offshore wind farms have a considerable influence on power system dynamics and must definitely be considered when analyzing power system dynamics. Compared to conventional power stations, wind power plants consist of a large number of generators of small size. Representing every wind generator individually therefore increases the calculation time of dynamic simulations considerably, and model aggregation techniques should be applied to reduce calculation times. This paper presents aggregated models for wind parks consisting of fixed or variable speed wind generators.

  17. A simulation model for analysing brain structure deformations

    Bona, Sergio Di [Institute for Information Science and Technologies, Italian National Research Council (ISTI-CNR), Via G Moruzzi, 1-56124 Pisa (Italy); Lutzemberger, Ludovico [Department of Neuroscience, Institute of Neurosurgery, University of Pisa, Via Roma, 67-56100 Pisa (Italy); Salvetti, Ovidio [Institute for Information Science and Technologies, Italian National Research Council (ISTI-CNR), Via G Moruzzi, 1-56124 Pisa (Italy)

    2003-12-21

    Recent developments of medical software applications, from the simulation to the planning of surgical operations, have revealed the need for modelling human tissues and organs not only from a geometric point of view but also from a physical one, i.e. soft tissues, rigid body, viscoelasticity, etc. This has given rise to the term 'deformable objects', which refers to objects with a morphology and a physical and mechanical behaviour of their own that reflect their natural properties. In this paper, we propose a model, based upon physical laws, suitable for the realistic manipulation of geometric reconstructions of volumetric data taken from MR and CT scans. In particular, a physically based model of the brain is presented that is able to simulate the evolution of intracranial pathological phenomena of different natures, such as haemorrhages, neoplasms and haematomas, and to describe the consequences caused by their volume expansions and the influences they have on the anatomical and neuro-functional structures of the brain.

  18. A Box-Cox normal model for response times.

    Klein Entink, R H; van der Linden, W J; Fox, J-P

    2009-11-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows for comparing model fit between models with different transformation parameters. Showing an enhanced description of the shape of the response time distributions, its application in an educational measurement context is discussed at length.
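
    The core modelling step, estimating a Box-Cox transformation parameter for response times and comparing it with the usual log transform, can be sketched briefly. The simulated shifted log-normal response times below are illustrative only, and scipy's maximum-likelihood Box-Cox estimate stands in for the fully Bayesian treatment used in the paper.

```python
# Small sketch of the modelling idea: compare a log transform with a fitted Box-Cox
# transform for a skewed response-time sample. Uses scipy's maximum-likelihood estimate
# of the Box-Cox lambda; the simulated response times are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rt = rng.lognormal(mean=0.5, sigma=0.4, size=500) + 0.8    # shifted, so a log-normal misfits

transformed, lam = stats.boxcox(rt)                        # MLE of the Box-Cox parameter
print(f"estimated Box-Cox lambda: {lam:.2f} (lambda = 0 would recover the log transform)")

# Normality check of the transformed times vs. plain log times (higher p = less evidence of misfit).
print("Shapiro-Wilk p, Box-Cox :", stats.shapiro(transformed).pvalue)
print("Shapiro-Wilk p, log     :", stats.shapiro(np.log(rt)).pvalue)
```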

  19. Sequence Modeling for Analysing Student Interaction with Educational Systems

    Hansen, Christian; Hansen, Casper; Hjuler, Niklas Oskar Daniel

    2017-01-01

    The analysis of log data generated by online educational systems is an important task for improving the systems, and furthering our knowledge of how students learn. This paper uses previously unseen log data from Edulab, the largest provider of digital learning for mathematics in Denmark, ... as exhibiting unproductive student behaviour. Based on our results this student representation is promising, especially for educational systems offering many different learning usages, and offers an alternative to common approaches like modelling student behaviour as a single Markov chain, as is often done ...

  20. On model-independent analyses of elastic hadron scattering

    Avila, R.F.; Campos, S.D.; Menon, M.J.; Montanha, J.

    2007-01-01

    By means of an almost model-independent parametrization for the elastic hadron-hadron amplitude, as a function of the energy and the momentum transfer, we obtain good descriptions of the physical quantities that characterize elastic proton-proton and antiproton-proton scattering (total cross section, ρ parameter and differential cross section). The parametrization is inferred on empirical grounds and selected according to high energy theorems and limits from axiomatic quantum field theory. Based on the predictive character of the approach we present predictions for the above physical quantities at the Brookhaven RHIC, Fermilab Tevatron and CERN LHC energies. (author)

  1. Using complexity theory to analyse the organisational response to resurgent tuberculosis across London.

    Trenholm, Susan; Ferlie, Ewan

    2013-09-01

    We employ complexity theory to analyse the English National Health Service (NHS)'s organisational response to resurgent tuberculosis across London. Tennison (2002) suggests that complexity theory could fruitfully explore a healthcare system's response to this complex and emergent phenomenon: we explore this claim here. We also bring in established New Public Management principles to enhance our empirical analysis, which is based on data collected between late 2009 and mid-2011. We find that the operation of complexity theory based features, especially self-organisation, are significantly impacted by the macro context of a New Public Management-based regime which values control, measurement and risk management more than innovation, flexibility and lateral system building. We finally explore limitations and suggest perspectives for further research. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Biofuel market and carbon modeling to analyse French biofuel policy

    Bernard, F.; Prieur, A.

    2007-01-01

    In order to comply with European Union objectives, France has set up an ambitious biofuel plan. This plan is evaluated on the basis of two criteria: tax exemption on fossil fuels and greenhouse gas (GHG) emission savings. An economic marginal analysis and a life cycle assessment (LCA) are provided using a coupling procedure between a partial agro-industrial equilibrium model and an oil refining optimization model. Thus, we determine the minimum tax exemption needed to place a targeted quantity of biofuel on the market by deducting the refiners' long-run marginal revenue from biofuel from the agro-industrial marginal cost of biofuel production. With a clear view of the refiner's economic choices, total pollutant emissions along the biofuel production chains are quantified and used to feed an LCA. The French biofuel plan is evaluated for 2008, 2010 and 2012 using prospective scenarios. Results suggest that biofuel competitiveness depends on crude oil prices and demand for petroleum products, and consequently these parameters should be taken into account by authorities to modulate biofuel tax exemption. LCA results show that biofuel production and use, from 'seed to wheel', would facilitate the French Government's compliance with its 'Plan Climat' objectives by reducing GHG emissions in the French road transport sector by up to 5% by 2010

  3. The Shortened Raven Standard Progressive Matrices: Item Response Theory-Based Psychometric Analyses and Normative Data

    Van der Elst, Wim; Ouwehand, Carolijn; van Rijn, Peter; Lee, Nikki; Van Boxtel, Martin; Jolles, Jelle

    2013-01-01

    The purpose of the present study was to evaluate the psychometric properties of a shortened version of the Raven Standard Progressive Matrices (SPM) under an item response theory framework (the one- and two-parameter logistic models). The shortened Raven SPM was administered to N = 453 cognitively healthy adults aged between 24 and 83 years. The…
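
    For reference, the two-parameter logistic (2PL) item response model evaluated in the study has a simple closed form; the one-parameter model is the special case in which all items share the same discrimination. The item parameters below are invented for illustration.

```python
# Brief sketch of the two-parameter logistic (2PL) item response model referred to in
# the abstract: probability of a correct response given ability theta, item
# discrimination a and difficulty b. Parameter values are invented for illustration.
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

thetas = np.linspace(-3, 3, 7)                    # latent ability grid
items = [(0.8, -1.0), (1.5, 0.0), (2.0, 1.5)]     # (discrimination a, difficulty b)
for a, b in items:
    print(f"a={a}, b={b:+.1f}:", np.round(icc_2pl(thetas, a, b), 2))
```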

  4. Mediational Analyses of the Effects of Responsive Teaching on the Developmental Functioning of Preschool Children with Disabilities

    Karaaslan, Ozcan; Mahoney, Gerald

    2015-01-01

    Mediational analyses were conducted with data from two small randomized control trials of the Responsive Teaching (RT) parent-mediated developmental intervention which used nearly identical intervention and control procedures. The purpose of these analyses was to determine whether or how the changes in maternal responsiveness and children's…

  5. How sex- and age-disaggregated data and gender and generational analyses can improve humanitarian response.

    Mazurana, Dyan; Benelli, Prisca; Walker, Peter

    2013-07-01

    Humanitarian aid remains largely driven by anecdote rather than by evidence. The contemporary humanitarian system has significant weaknesses with regard to data collection, analysis, and action at all stages of response to crises involving armed conflict or natural disaster. This paper argues that humanitarian actors can best determine and respond to vulnerabilities and needs if they use sex- and age-disaggregated data (SADD) and gender and generational analyses to help shape their assessments of crises-affected populations. Through case studies, the paper shows how gaps in information on sex and age limit the effectiveness of humanitarian response in all phases of a crisis. The case studies serve to show how proper collection, use, and analysis of SADD enable operational agencies to deliver assistance more effectively and efficiently. The evidence suggests that the employment of SADD and gender and generational analyses assists in saving lives and livelihoods in a crisis. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  6. Glycomic analyses of mouse models of congenital muscular dystrophy.

    Stalnaker, Stephanie H; Aoki, Kazuhiro; Lim, Jae-Min; Porterfield, Mindy; Liu, Mian; Satz, Jakob S; Buskirk, Sean; Xiong, Yufang; Zhang, Peng; Campbell, Kevin P; Hu, Huaiyu; Live, David; Tiemeyer, Michael; Wells, Lance

    2011-06-17

    Dystroglycanopathies are a subset of congenital muscular dystrophies wherein α-dystroglycan (α-DG) is hypoglycosylated. α-DG is an extensively O-glycosylated extracellular matrix-binding protein and a key component of the dystrophin-glycoprotein complex. Previous studies have shown α-DG to be post-translationally modified by both O-GalNAc- and O-mannose-initiated glycan structures. Mutations in defined or putative glycosyltransferase genes involved in O-mannosylation are associated with a loss of ligand-binding activity of α-DG and are causal for various forms of congenital muscular dystrophy. In this study, we sought to perform glycomic analysis on brain O-linked glycan structures released from proteins of three different knock-out mouse models associated with O-mannosylation (POMGnT1, LARGE (Myd), and DAG1(-/-)). Using mass spectrometry approaches, we were able to identify nine O-mannose-initiated and 25 O-GalNAc-initiated glycan structures in wild-type littermate control mouse brains. Through our analysis, we were able to confirm that POMGnT1 is essential for the extension of all observed O-mannose glycan structures with β1,2-linked GlcNAc. Loss of LARGE expression in the Myd mouse had no observable effect on the O-mannose-initiated glycan structures characterized here. Interestingly, we also determined that similar amounts of O-mannose-initiated glycan structures are present on brain proteins from α-DG-lacking mice (DAG1) compared with wild-type mice, indicating that there must be additional proteins that are O-mannosylated in the mammalian brain. Our findings illustrate that classical β1,2-elongation and β1,6-GlcNAc branching of O-mannose glycan structures are dependent upon the POMGnT1 enzyme and that O-mannosylation is not limited solely to α-DG in the brain.

  7. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    Wilson David P

    2008-02-01

    Full Text Available Abstract SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.
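
    A condensed example of the kind of workflow the toolbox supports is given below. It is not SaSAT itself (which is a Matlab package); it samples a toy three-parameter model with a Latin hypercube and ranks the parameters with partial rank correlation coefficients, two of the tasks listed in the abstract.

```python
# Not SaSAT: a Python stand-in for one workflow it supports. Latin hypercube sampling of
# the parameter space of a toy epidemic quantity, followed by partial rank correlation
# coefficients to rank the parameters. The toy model R0 = beta * contacts / gamma and
# the parameter bounds are purely illustrative.
import numpy as np
from scipy.stats import qmc, rankdata

names = ["beta", "contacts", "gamma"]
bounds_lo = [0.05, 2.0, 0.05]
bounds_hi = [0.50, 20.0, 0.50]

sampler = qmc.LatinHypercube(d=3, seed=4)
X = qmc.scale(sampler.random(n=500), bounds_lo, bounds_hi)
y = X[:, 0] * X[:, 1] / X[:, 2]          # toy output: basic reproduction number

def prcc(X, y):
    """Partial rank correlation of each column of X with y, controlling for the others."""
    R = np.column_stack([rankdata(col) for col in X.T])
    ry = rankdata(y)
    out = []
    for j in range(R.shape[1]):
        others = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return out

for name, c in zip(names, prcc(X, y)):
    print(f"PRCC({name}) = {c:+.2f}")
```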

  9. A hierarchical set of models for species response analysis

    Huisman, J.; Olff, H.; Fresco, L.F.M.

    1993-01-01

    Variation in the abundance of species in space and/or time can be caused by a wide range of underlying processes. Before such causes can be analysed we need simple mathematical models which can describe the observed response patterns. For this purpose a hierarchical set of models is presented. These

  10. Rhythmic entrainment source separation: Optimizing analyses of neural responses to rhythmic sensory stimulation.

    Cohen, Michael X; Gulbinaite, Rasa

    2017-02-15

    Steady-state evoked potentials (SSEPs) are rhythmic brain responses to rhythmic sensory stimulation, and are often used to study perceptual and attentional processes. We present a data analysis method for maximizing the signal-to-noise ratio of the narrow-band steady-state response in the frequency and time-frequency domains. The method, termed rhythmic entrainment source separation (RESS), is based on denoising source separation approaches that take advantage of the simultaneous but differential projection of neural activity to multiple electrodes or sensors. Our approach is a combination and extension of existing multivariate source separation methods. We demonstrate that RESS performs well on both simulated and empirical data, and outperforms conventional SSEP analysis methods based on selecting electrodes with the strongest SSEP response, as well as several other linear spatial filters. We also discuss the potential confound of overfitting, whereby the filter captures noise in absence of a signal. Matlab scripts are available to replicate and extend our simulations and methods. We conclude with some practical advice for optimizing SSEP data analyses and interpreting the results. Copyright © 2016 Elsevier Inc. All rights reserved.
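
    The core computation behind such denoising-source-separation spatial filters is a generalized eigendecomposition of a "signal" covariance against a "reference" covariance. The sketch below illustrates this on simulated 16-channel data with an embedded 12 Hz steady-state response; the simulation, filter settings and the use of a broadband reference covariance are assumptions for illustration rather than the authors' exact pipeline (RESS builds its reference from neighbouring-frequency-filtered data).

```python
# Compact sketch of the idea behind denoising-source-separation filters like RESS: a
# generalized eigendecomposition of the covariance of narrow-band-filtered data (S)
# against a reference covariance (R). Simulated data and settings are assumptions.
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt

fs, f_stim, n_ch, n_sec = 250, 12.0, 16, 60
rng = np.random.default_rng(5)
t = np.arange(n_sec * fs) / fs

# Simulated recording: one sinusoidal "SSVEP" source projected to all channels, plus noise.
mixing = rng.normal(size=n_ch)
data = np.outer(mixing, np.sin(2 * np.pi * f_stim * t)) * 0.3 + rng.normal(size=(n_ch, t.size))

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

narrow = bandpass(data, f_stim - 0.5, f_stim + 0.5)
S = np.cov(narrow)                          # covariance at the stimulation frequency
R = np.cov(data) + 1e-6 * np.eye(n_ch)      # broadband reference covariance (regularised)

evals, evecs = eigh(S, R)                   # generalized eigendecomposition
w = evecs[:, -1]                            # filter with the largest S-to-R power ratio
component = w @ data

def snr_at(x):
    """Spectral power at the flicker frequency relative to neighbouring frequencies."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    peak = spec[np.argmin(np.abs(freqs - f_stim))]
    noise = np.mean(spec[(np.abs(freqs - f_stim) > 1) & (np.abs(freqs - f_stim) < 3)])
    return peak / noise

best_channel = max(data, key=snr_at)
print(f"SNR, best single channel: {snr_at(best_channel):8.1f}")
print(f"SNR, spatial filter     : {snr_at(component):8.1f}")
```

    As the abstract cautions, such filters can overfit noise when no genuine rhythmic signal is present, so in practice the gain should be checked against surrogate or stimulation-free data.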

  11. Model of corporate social responsibility in food tourism

    Naalyan Gendzheva

    2014-01-01

    The paper examines various aspects of the specificity of the postmodern trend in tourism known as food tourism. Basic concepts are defined and a classification of its various manifestations is proposed. Opportunities for responsible tourism in this area are analysed with a view to achieving sustainability. In conclusion, a model is proposed that creates opportunities for integrating socially responsible practices in the tourism sector through responsible food tourism.

  12. Structural Response of Submerged Air-Backed Plates by Experimental and Numerical Analyses

    Lloyd Hammond

    2000-01-01

    Full Text Available This paper presents the results of a series of small-scale underwater shock experiments that measured the structural responses of submerged, fully clamped, air-backed, steel plates to a range of high explosive charge sizes. The experimental results were subsequently used to validate a series of simulations using the coupled LS-DYNA/USA finite element/boundary element codes. The modelling exercise was complicated by a significant amount of local cavitation occurring in the fluid adjacent to the plate and difficulties in modelling the boundary conditions of the test plates. The finite element model results satisfactorily predicted the displacement-time history of the plate over a range of shock loadings although a less satisfactory correlation was achieved for the peak velocities. It is expected that the predictive capability of the finite element model will be significantly improved once hydrostatic initialisation can be fully utilised with the LS-DYNA/USA software.

  13. Spectral response, dark current, and noise analyses in resonant tunneling quantum dot infrared photodetectors.

    Jahromi, Hamed Dehdashti; Mahmoodi, Ali; Sheikhi, Mohammad Hossein; Zarifkar, Abbas

    2016-10-20

    Reduction of dark current at high-temperature operation is a great challenge in conventional quantum dot infrared photodetectors, as the rate of thermal excitations resulting in the dark current increases exponentially with temperature. A resonant tunneling barrier is the best candidate for suppression of dark current, enhancement in signal-to-noise ratio, and selective extraction of different wavelength response. In this paper, we use a physical model developed by the authors recently to design a proper resonant tunneling barrier for quantum infrared photodetectors and to study and analyze the spectral response of these devices. The calculated transmission coefficient of electrons by this model and its dependency on bias voltage are in agreement with experimental results. Furthermore, based on the calculated transmission coefficient, the dark current of a quantum dot infrared photodetector with a resonant tunneling barrier is calculated and compared with the experimental data. The validity of our model is proven through this comparison. Theoretical dark current by our model shows better agreement with the experimental data and is more accurate than the previously developed model. Moreover, noise in the device is calculated. Finally, the effect of different parameters, such as temperature, size of quantum dots, and bias voltage, on the performance of the device is simulated and studied.

  14. A multichannel frequency response analyser for impedance spectroscopy on power sources

    DANIEL J. L. BRETT

    2013-06-01

    Full Text Available A low-cost multi-channel frequency response analyser (FRA) has been developed based on a DAQ (data acquisition)/LabVIEW interface. The system has been tested for electric and electrochemical impedance measurements. This novel association of hardware and software demonstrated performance comparable to a commercial potentiostat/FRA for passive electric circuits. The software has multichannel capabilities with minimal phase shift for 5 channels when operated below 3 kHz. When applied in active (galvanostatic) mode in conjunction with a commercial electronic load (by discharging a lead acid battery at 1.5 A), the performance was fit for purpose, providing electrochemical information to characterize the performance of the power source.
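
    At a single test frequency, the essential computation of a frequency response analyser is a sine/cosine correlation of the excitation and response channels. The sketch below illustrates this on simulated voltage and current records with an assumed impedance; a real DAQ-based instrument would acquire these channels from hardware.

```python
# Bare-bones sketch of what an FRA computes at one frequency: correlate the measured
# voltage and current records with a complex exponential at the excitation frequency to
# extract the magnitude and phase of the impedance. Signals and impedance are simulated.
import numpy as np

fs, f0, n = 50_000, 100.0, 50_000                 # sample rate, test frequency, samples
t = np.arange(n) / fs
Z_true = 0.05 * np.exp(1j * np.deg2rad(-20))      # 50 mOhm at -20 degrees (assumed)

i_sig = 1.5 * np.sin(2 * np.pi * f0 * t)                                # 1.5 A excitation
v_sig = 1.5 * abs(Z_true) * np.sin(2 * np.pi * f0 * t + np.angle(Z_true))
v_sig = v_sig + 0.002 * np.random.default_rng(6).standard_normal(n)     # measurement noise

def phasor(x):
    """Single-frequency DFT (sine/cosine correlation) at f0 over an integer number of cycles."""
    return 2 * np.mean(x * np.exp(-2j * np.pi * f0 * t))

Z = phasor(v_sig) / phasor(i_sig)
print(f"|Z| = {abs(Z) * 1e3:.1f} mOhm, phase = {np.degrees(np.angle(Z)):+.1f} deg")
```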

  15. Site response - a critical problem in soil-structure interaction analyses for embedded structures

    Seed, H.B.; Lysmer, J.

    1986-01-01

    Soil-structure interaction analyses for embedded structures must necessarily be based on a knowledge of the manner in which the soil would behave in the absence of any structure - that is on a knowledge and understanding of the spatial distribution of motions in the ground within the depth of embedment of the structure. The nature of these spatial variations is discussed and illustrated by examples of recorded motions. It is shown that both the amplitude of peak acceleration and the form of the acceleration response spectrum for earthquake motions will necessarily vary with depth and failure to take these variations into account may introduce an unwarranted degree of conservatism into the soil-structure interaction analysis procedure

  16. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

    This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and upscaling of hydraulic properties allow predicting a physical response in broad agreement with the observations notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the DEM-CFD coupled approach are compared to those published in the literature and those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock mechanics, 2014) in a companion paper obtained through an ALE-FEM method. Analyses performed along two cross sections are representative of the limit conditions of the eastern and western slope sectors. The max rockslide average velocity and the water wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run up amounts to ca. 120 and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 and 190 m, respectively). Therefore, the overall study lays out a possible DEM-CFD framework for the modelling of the generation of the hydrodynamic wave due to the impact of a rapid moving rockslide or rock-debris avalanche.

  17. Earthquake response analyses of soil-structure system considering kinematic interaction

    Murakami, H.; Yokono, K.; Miura, S.; Ishii, K.

    1985-01-01

    Improvement of soil-structure interaction analysis has been one of the major concerns in the earthquake engineering field, especially in the nuclear industry, to evaluate the safety of structures accurately under earthquake events. This research aims to develop a rational analytical tool which satisfactorily considers the effect of the 'kinematic interaction' by means of a proposed simple low-pass filter. In this paper, first the effect of the kinematic interaction is investigated based on earthquake response analysis of a reactor building using the practical design models: the spring-mass-dashpot system and the 'lattice model', in which the building and soil medium are modeled by a system of lumped masses. Next, the filter is developed based on parametric studies with various depths and widths of foundations embedded in two-layer soil, which represents a more general soil condition in practical designs compared with a homogeneous soil medium. (orig.)

  18. Taking a comparative approach: analysing personality as a multivariate behavioural response across species.

    Alecia J Carter

    Full Text Available Animal personality, repeatable behaviour through time and across contexts, is ecologically and evolutionarily important as it can account for the exhibition of sub-optimal behaviours. Interspecific comparisons have been suggested as important for understanding the evolution of animal personality; however, these are seldom accomplished due, in part, to the lack of statistical tools for quantifying differences and similarities in behaviour between groups of individuals. We used nine species of closely-related coral reef fishes to investigate the usefulness of ecological community analyses for the analysis of between-species behavioural differences and behavioural heterogeneity. We first documented behavioural carryover across species by observing the fishes' behaviour and measuring their response to a threatening stimulus to quantify boldness. Bold fish spent more time away from the reef and fed more than shy fish. We then used ecological community analysis tools (canonical variate analysis, multi-response permutation procedure, and permutational analysis of multivariate dispersion) and identified four 'clusters' of behaviourally similar fishes, and found that the species differ in the behavioural variation expressed; some species are more behaviourally heterogeneous than others. We found that ecological community analysis tools are easily and fruitfully applied to comparative studies of personality and encourage their use by future studies.

  19. A new method for odour impact assessment based on spatial and temporal analyses of community response

    Henshaw, P.; Nicell, J.; Sikdar, A.

    2002-01-01

    Odorous emissions from stationary sources account for the majority of air pollution complaints to regulatory agencies. Sometimes regulators rely on the nuisance provisions of common law to assess odour impact, which is highly subjective. The other commonly used approach, the dilution-to-threshold principle, assumes that an odour is a problem simply if detected, without regard to the fact that a segment of the population can detect the odour at concentrations below the threshold. The odour impact model (OIM) represents a significant improvement over current methods for quantifying odours by characterizing the dose-response relationship of the odour. Dispersion modelling can be used in conjunction with the OIM to estimate the probability of response in the surrounding vicinity, taking into account the local meteorological conditions. The objective of this research is to develop an objective method of assessing the impact of odorous airborne emissions. To this end, several metrics were developed to quantify the impact of an odorous stationary source on the surrounding community. These 'odour impact parameters' are: maximum concentration, maximum probability of response, footprint area, probability-weighted footprint area and the number of people responding to the odour. These impact parameters were calculated for a stationary odour source in Canada. Several remediation scenarios for reducing the odour impact were proposed and their effect on the impact parameters calculated. (author)
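
    How a dose-response curve and a modelled concentration field combine into impact parameters can be illustrated numerically. The sketch below is not the authors' odour impact model: the probit dose-response, the idealised concentration decay around the source and the population density are all invented assumptions, and it only shows how quantities such as the footprint area and the expected number of responders follow from those inputs.

```python
# Simplified numerical illustration (not the authors' model) of combining a dose-response
# curve with a modelled concentration field to obtain odour impact parameters. The probit
# dose-response, idealised concentration decay and population density are invented.
import numpy as np
from scipy.stats import norm

def p_response(conc, c50=1.0, slope=2.0):
    """Probit dose-response in log10 concentration (odour units per m3)."""
    return norm.cdf(slope * np.log10(np.maximum(conc, 1e-12) / c50))

# Idealised ground-level concentration field around a source at the origin (OU/m3).
x = np.linspace(-2000, 2000, 401)                  # metres
y = np.linspace(-2000, 2000, 401)
X, Y = np.meshgrid(x, y)
r = np.hypot(X, Y) + 1.0
conc = 50.0 * np.exp(-r / 300.0)                   # arbitrary exponential decay

cell_area = (x[1] - x[0]) * (y[1] - y[0])          # m2 per grid cell
p = p_response(conc)

footprint = np.sum(p > 0.01) * cell_area / 1e6     # km2 where >1% would respond
weighted = np.sum(p) * cell_area / 1e6             # probability-weighted area, km2
pop_density = 1500 / 1e6                           # people per m2 (1500 per km2, assumed)
responders = np.sum(p) * cell_area * pop_density

print(f"max P(response)              : {p.max():.2f}")
print(f"footprint area (P > 1%)      : {footprint:.2f} km2")
print(f"probability-weighted area    : {weighted:.2f} km2")
print(f"expected number of responders: {responders:.0f}")
```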

  20. Pushover, Response Spectrum and Time History Analyses of Safe Rooms in a Poor Performance Masonry Building

    Mazloom, M.

    2008-01-01

    The idea of the safe room has been developed to decrease earthquake casualties in masonry buildings. Records of previous ground motions in seismic zones show that these buildings lack adequate safety against earthquakes. For this reason, an attempt has been made to create some safe areas inside existing masonry buildings, which are called safe rooms. The practical method for making these safe areas is to install prefabricated steel frames in some parts of the existing structure. These frames do not carry any service loads before an earthquake. However, if a devastating earthquake happens and the load-bearing walls of the building are destroyed, the parts of the floors that are in the safe areas will fall on the roof of the installed frames and the occupants who have sheltered there will survive. This paper presents the performance of these frames located in a poorly performing three-storey masonry building, with favorable conclusions. In fact, the experimental pushover diagram of the safe room located at the ground-floor level of this building is compared with the analytical results, and it is concluded that pushover analysis is a good method for the seismic performance evaluation of safe rooms. For time history analysis the 1940 El Centro, the 2003 Bam, and the 1990 Manjil earthquake records with maximum peak accelerations of 0.35g were utilized. Also, the design spectrum of Iranian Standard No. 2800-05 for ground type 2 is used for response spectrum analysis. The results of the time history, response spectrum and pushover analyses show that the strength and displacement capacity of the steel frames are adequate to properly accommodate the distortions generated by seismic loads and aftershocks

  1. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from the second to the sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The optimal model was identified by the Bayesian Information Criterion. A model with second-order LP for fixed effects, second-order LP for additive genetic effects and third-order LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environmental variance to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environmental variance to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but they were low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability, and such a breeding program would have no negative genetic impact on egg production.
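
    In a random regression model of this kind, each bird's trajectory over weeks of lay is written as a weighted sum of Legendre polynomials of the standardized age. The sketch below only builds the polynomial covariates for the orders named in the abstract (order 2 for additive genetic, order 3 for permanent environmental effects); the week range is taken from the abstract, while the normalization constant follows the usual convention, and the code is illustrative rather than the authors' implementation.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def legendre_covariates(weeks, order):
        """Normalized Legendre polynomial covariates evaluated at standardized ages."""
        weeks = np.asarray(weeks, dtype=float)
        t = -1.0 + 2.0 * (weeks - weeks.min()) / (weeks.max() - weeks.min())  # map to [-1, 1]
        cols = []
        for k in range(order + 1):
            coef = np.zeros(k + 1)
            coef[k] = 1.0
            # sqrt((2k+1)/2) is the normalization commonly used in random regression models
            cols.append(np.sqrt((2 * k + 1) / 2.0) * legendre.legval(t, coef))
        return np.column_stack(cols)

    weeks = np.arange(2, 7)                   # weeks 2..6 of egg production, as in the abstract
    Z2 = legendre_covariates(weeks, order=2)  # covariables for the additive genetic effect
    Z3 = legendre_covariates(weeks, order=3)  # covariables for the permanent environmental effect
    print(Z2.round(3))
    ```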

  2. STRESS RESPONSE STUDIES USING ANIMAL MODELS

    This presentation will provide evidence that ozone exposure in animal models induces a neuroendocrine stress response and that this stress response modulates lung injury and inflammation through adrenergic and glucocorticoid receptors.

  3. Evaluation of Uncertainties in hydrogeological modeling and groundwater flow analyses. Model calibration

    Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi

    2003-03-01

    This study evaluates uncertainty in hydrogeological modeling and groundwater flow analysis. Three-dimensional groundwater flow at the Shobasama site in Tono was analyzed using two continuum models and one discontinuous model. The model domain covered an area of four kilometers in the east-west direction and six kilometers in the north-south direction. Moreover, to evaluate how the uncertainties in the hydrogeological structure model and in the groundwater flow simulation results decrease as the investigation progresses, the models were updated and calibrated for several hydrogeological structure modeling techniques and groundwater flow analysis techniques, based on newly acquired information and knowledge. The findings are as follows. When the parameters and structures were reset in updating the models according to the previous year's circumstances, there was no large difference between the modeling methods. Model calibration was performed by matching the numerical simulations to observations of the pressure response caused by opening and closing a packer in the MIU-2 borehole. Each analysis technique reduced the residual sum of squares between observations and simulation results by adjusting hydrogeological parameters. However, each model adjusted different parameters, such as hydraulic conductivity, effective porosity, specific storage, and anisotropy. When calibrating the models, it was sometimes impossible to explain the phenomena by adjusting parameters alone; in such cases, further investigation may be required to clarify the details of the hydrogeological structure. Comparing the research from its beginning to this year, the following conclusions are obtained about the investigation. (1) Transient hydraulic data are an effective means of reducing the uncertainty of the hydrogeological structure. (2) Effective porosity for calculating pore water velocity of

  4. Sequence and expression analyses of ethylene response factors highly expressed in latex cells from Hevea brasiliensis.

    Piyanuch Piyatrakul

    The AP2/ERF superfamily encodes transcription factors that play a key role in plant development and responses to abiotic and biotic stress. In Hevea brasiliensis, ERF genes have been identified by RNA sequencing. This study set out to validate the number of HbERF genes, and identify ERF genes involved in the regulation of latex cell metabolism. A comprehensive Hevea transcriptome was improved using additional RNA reads from reproductive tissues. Newly assembled contigs were annotated in the Gene Ontology database and were assigned to 3 main categories. The AP2/ERF superfamily is the third most represented compared with other transcription factor families. A comparison with genomic scaffolds led to an estimation of 114 AP2/ERF genes and 1 soloist in Hevea brasiliensis. Based on a phylogenetic analysis, functions were predicted for 26 HbERF genes. A relative transcript abundance analysis was performed by real-time RT-PCR in various tissues. Transcripts of ERFs from group I and VIII were very abundant in all tissues while those of group VII were highly accumulated in latex cells. Seven of the thirty-five ERF expression marker genes were highly expressed in latex. Subcellular localization and transactivation analyses suggested that HbERF-VII candidate genes encoded functional transcription factors.

  5. Response of sweet orange (Citrus sinensis) to 'Candidatus Liberibacter asiaticus' infection: microscopy and microarray analyses.

    Kim, Jeong-Soon; Sagaram, Uma Shankar; Burns, Jacqueline K; Li, Jian-Liang; Wang, Nian

    2009-01-01

    Citrus greening or huanglongbing (HLB) is a devastating disease of citrus. HLB is associated with the phloem-limited fastidious prokaryotic alpha-proteobacterium 'Candidatus Liberibacter spp.' In this report, we used sweet orange (Citrus sinensis) leaf tissue infected with 'Ca. Liberibacter asiaticus' and compared this with healthy controls. The host response was investigated with citrus microarray hybridization based on 33,879 expressed sequence tag sequences from several citrus species and hybrids. The microarray analysis indicated that HLB infection significantly affected the expression of 624 genes whose encoded proteins were categorized according to function. The categories included genes associated with sugar metabolism, plant defense, phytohormones, and cell wall metabolism, as well as 14 other gene categories. The anatomical analyses indicated that HLB bacterium infection caused phloem disruption, sucrose accumulation, and plugged sieve pores. The up-regulation of four key starch biosynthetic genes, including ADP-glucose pyrophosphorylase, starch synthase, granule-bound starch synthase and starch debranching enzyme, likely contributed to the accumulation of starch in HLB-affected leaves. The HLB-associated phloem blockage resulted from the plugged sieve pores rather than from HLB bacterial aggregates, since 'Ca. Liberibacter asiaticus' does not form aggregates in citrus. The up-regulation of the pp2 gene is related to callose deposition to plug the sieve pores in HLB-affected plants.

  6. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Rousvoal Sylvie

    2008-08-01

    Background: Brown algae are plant multi-cellular organisms occupying most of the world's coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results: We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E), protein degradation (ubiquitin, ubiquitin-conjugating enzyme) or folding (cyclophilin), and proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the Normfinder principles of calculation. Conclusion: Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, with alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene and, in this, are in agreement with previous studies in other organisms.
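
    The geNorm calculation mentioned above reduces to an average pairwise-variation measure M: for each candidate gene, the mean of the standard deviations of its log-ratios with every other candidate across samples, with lower M indicating more stable expression. The sketch below computes that measure on simulated relative quantities; the matrix dimensions and values are hypothetical, not the Ectocarpus data.

    ```python
    import numpy as np

    def genorm_m(expr):
        """geNorm-style stability measure M for each gene.

        expr: samples x genes array of relative expression quantities (linear scale).
        M_j is the mean, over all other genes k, of the SD across samples of
        log2(expr_j / expr_k). Lower M means more stable expression.
        """
        log_expr = np.log2(expr)
        n_genes = log_expr.shape[1]
        M = np.zeros(n_genes)
        for j in range(n_genes):
            sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                   for k in range(n_genes) if k != j]
            M[j] = np.mean(sds)
        return M

    rng = np.random.default_rng(0)
    expr = 2.0 ** rng.normal(size=(21, 5))   # 21 hypothetical conditions x 5 candidate genes
    print(genorm_m(expr).round(3))
    ```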

  7. Use and misuse of temperature normalisation in meta-analyses of thermal responses of biological traits

    Dimitrios - Georgios Kontopoulos

    2018-02-01

    There is currently unprecedented interest in quantifying variation in thermal physiology among organisms, especially in order to understand and predict the biological impacts of climate change. A key parameter in this quantification of thermal physiology is the performance or value of a rate, across individuals or species, at a common temperature (temperature normalisation). An increasingly popular model for fitting thermal performance curves to data—the Sharpe-Schoolfield equation—can yield strongly inflated estimates of temperature-normalised rate values. These deviations occur whenever a key thermodynamic assumption of the model is violated, i.e., when the enzyme governing the performance of the rate is not fully functional at the chosen reference temperature. Using data on 1,758 thermal performance curves across a wide range of species, we identify the conditions that exacerbate this inflation. We then demonstrate that these biases can compromise tests to detect metabolic cold adaptation, which requires comparison of fitness or rate performance of different species or genotypes at some fixed low temperature. Finally, we suggest alternative methods for obtaining unbiased estimates of temperature-normalised rate values for meta-analyses of thermal performance across species in climate change impact studies.
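
    One common simplified parameterization of the Sharpe-Schoolfield model (high-temperature deactivation only) is sketched below to show where the inflation arises: the fitted normalisation constant B0 equals the rate actually predicted at the reference temperature only when the deactivation term is negligible there. The parameterization and all parameter values are assumptions for illustration, not fits from the 1,758 curves analysed in the study.

    ```python
    import numpy as np

    K = 8.617e-5  # Boltzmann constant, eV per kelvin

    def sharpe_schoolfield(T, B0, E, Eh, Th, Tref):
        """Simplified Sharpe-Schoolfield model with high-temperature deactivation only.

        T, Th and Tref in kelvin; E and Eh in eV. B0 is the rate at Tref *after
        factoring out deactivation*, which is where the inflation discussed in the
        abstract comes from. Illustrative parameterization and values only."""
        boltzmann_part = B0 * np.exp(-E / K * (1.0 / T - 1.0 / Tref))
        deactivation = 1.0 + np.exp(Eh / K * (1.0 / Th - 1.0 / T))
        return boltzmann_part / deactivation

    Tref = 273.15 + 10.0
    pars = dict(B0=1.0, E=0.65, Eh=2.0, Th=273.15 + 15.0, Tref=Tref)

    # With Th close to Tref, the deactivation term is non-negligible at Tref and the
    # fitted B0 exceeds the rate the curve actually predicts there:
    print("B0 (reported as the temperature-normalised rate):", pars["B0"])
    print("rate actually predicted at Tref:", sharpe_schoolfield(Tref, **pars))
    ```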

  8. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Daniel T. L. Shek

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  9. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
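
    The paper documents the SPSS procedures; purely as an illustration of the same kind of analysis, the sketch below fits a linear mixed model with a random intercept and random slope to simulated six-wave data using Python's statsmodels. The data, variable names and growth structure are invented and are not the Project P.A.T.H.S. data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_subjects, n_waves = 200, 6

    subj = np.repeat(np.arange(n_subjects), n_waves)
    wave = np.tile(np.arange(n_waves), n_subjects)
    u0 = rng.normal(0, 1.0, n_subjects)          # random intercepts
    u1 = rng.normal(0, 0.2, n_subjects)          # random slopes
    score = 50 + 1.5 * wave + u0[subj] + u1[subj] * wave + rng.normal(0, 2.0, subj.size)

    df = pd.DataFrame({"subject": subj, "wave": wave, "score": score})

    # Linear mixed model: fixed linear growth over waves, random intercept and slope per subject.
    model = smf.mixedlm("score ~ wave", df, groups=df["subject"], re_formula="~wave")
    result = model.fit()
    print(result.summary())
    ```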

  10. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    2014-12-01

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the

  11. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    NONE

    2014-12-15

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the
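
    The "dose calculation" step described above can be reduced to a minimal sketch: nuclide-specific release rates are multiplied by biosphere dose conversion factors, summed, and compared with the 0.1 mSv per year protection criterion. The nuclides, release rates and conversion factors below are placeholders, not SGT Stage 2 values.

    ```python
    # Hypothetical illustration of the dose-calculation step; all numbers are placeholders.
    releases_bq_per_a = {"I-129": 2.0e5, "Cl-36": 5.0e4, "Se-79": 1.0e4}      # Bq per year
    bdcf_sv_per_bq = {"I-129": 3.0e-10, "Cl-36": 1.5e-10, "Se-79": 4.0e-10}   # Sv per Bq released

    dose_sv_per_a = sum(releases_bq_per_a[n] * bdcf_sv_per_bq[n] for n in releases_bq_per_a)
    dose_msv_per_a = dose_sv_per_a * 1e3

    criterion_msv_per_a = 0.1
    verdict = "below" if dose_msv_per_a < criterion_msv_per_a else "above"
    print(f"annual effective dose: {dose_msv_per_a:.3e} mSv/a ({verdict} the 0.1 mSv/a criterion)")
    ```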

  12. Impact analyses for negative flexural responses (hogging) in railway prestressed concrete sleepers

    Kaewunruen, S; Ishida, T; Remennikov, AM

    2016-01-01

    By nature, ballast interacts with railway concrete sleepers in order to provide bearing support to the track system. Most train-track dynamic models do not consider the degradation of ballast over time. In fact, ballast degradation causes differential settlement and impact forces acting on partially supported and unsupported tracks. Furthermore, localised ballast breakage underneath the railseat increases the likelihood of centre-bound cracks in concrete sleepers due to the unbalanced support under the sleepers. This paper presents a dynamic finite element model of a standard-gauge concrete sleeper in a track system, taking into account the tensionless nature of ballast support. The finite element model was calibrated using static and dynamic responses in the past. In this paper, the effects of centre-bound ballast support on the impact behaviours of sleepers are highlighted. In addition, it is the first to demonstrate the dynamic effects of sleeper length on the dynamic design deficiency in concrete sleepers. The outcome of this study will benefit rail maintenance criteria for track resurfacing in order to restore the ballast profile and appropriate sleeper/ballast interaction. (paper)

  13. Impact analyses for negative flexural responses (hogging) in railway prestressed concrete sleepers

    Kaewunruen, S.; Ishida, T.; Remennikov, AM

    2016-09-01

    By nature, ballast interacts with railway concrete sleepers in order to provide bearing support to the track system. Most train-track dynamic models do not consider the degradation of ballast over time. In fact, ballast degradation causes differential settlement and impact forces acting on partially supported and unsupported tracks. Furthermore, localised ballast breakage underneath the railseat increases the likelihood of centre-bound cracks in concrete sleepers due to the unbalanced support under the sleepers. This paper presents a dynamic finite element model of a standard-gauge concrete sleeper in a track system, taking into account the tensionless nature of ballast support. The finite element model was calibrated using static and dynamic responses in the past. In this paper, the effects of centre-bound ballast support on the impact behaviours of sleepers are highlighted. In addition, it is the first to demonstrate the dynamic effects of sleeper length on the dynamic design deficiency in concrete sleepers. The outcome of this study will benefit rail maintenance criteria for track resurfacing in order to restore the ballast profile and appropriate sleeper/ballast interaction.

  14. Response Styles in the Partial Credit Model

    Tutz, Gerhard; Schauberger, Gunther; Berger, Moritz

    2016-01-01

    In the modelling of ordinal responses in psychological measurement and survey-based research, response styles that represent specific answering patterns of respondents are typically ignored. One consequence is that estimates of item parameters can be poor and considerably biased. The focus here is on the modelling of a tendency to extreme or middle categories. An extension of the Partial Credit Model is proposed that explicitly accounts for this specific response style. In contrast to exi...

  15. On the use of uncertainty analyses to test hypotheses regarding deterministic model predictions of environmental processes

    Gilbert, R.O.; Bittner, E.A.; Essington, E.H.

    1995-01-01

    This paper illustrates the use of Monte Carlo parameter uncertainty and sensitivity analyses to test hypotheses regarding predictions of deterministic models of environmental transport, dose, risk and other phenomena. The methodology is illustrated by testing whether 238Pu is transferred more readily than 239+240Pu from the gastrointestinal (GI) tract of cattle to their tissues (muscle, liver and blood). This illustration is based on a study wherein beef cattle grazed for up to 1064 days on a fenced plutonium (Pu)-contaminated arid site in Area 13 near the Nevada Test Site in the United States. Periodically, cattle were sacrificed and their tissues analyzed for Pu and other radionuclides. Conditional sensitivity analyses of the model predictions were also conducted. These analyses indicated that Pu cattle tissue concentrations had the largest impact of any model parameter on the pdf of predicted Pu fractional transfers. Issues that arise in conducting uncertainty and sensitivity analyses of deterministic models are discussed. (author)
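
    A stripped-down version of such a Monte Carlo comparison is sketched below: parameter uncertainty in the fractional GI-tract-to-tissue transfer of the two isotopes is propagated, and the hypothesis of a higher 238Pu transfer is expressed as the probability that the sampled ratio exceeds one. The lognormal distributions and their parameters are hypothetical, not the study's fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Hypothetical lognormal parameter uncertainties for GI-tract-to-tissue fractional
    # transfer of the two isotopes (geometric means and geometric SDs are placeholders).
    f_238 = rng.lognormal(mean=np.log(5e-4), sigma=np.log(2.0), size=n)
    f_239240 = rng.lognormal(mean=np.log(3e-4), sigma=np.log(2.0), size=n)

    ratio = f_238 / f_239240
    p_greater = np.mean(ratio > 1.0)   # Monte Carlo probability that 238Pu transfers more readily

    print(f"median transfer ratio 238Pu / 239+240Pu: {np.median(ratio):.2f}")
    print(f"P(ratio > 1) = {p_greater:.3f}")
    ```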

  16. Analyses and simulations in income frame regulation model for the network sector from 2007

    Askeland, Thomas Haave; Fjellstad, Bjoern

    2007-01-01

    Analyses of the income frame regulation model for the network sector in Norway, introduced on 1 January 2007. The model's treatment of the norm cost is evaluated, especially the efficiency analyses carried out by a so-called Data Envelopment Analysis (DEA) model. It is argued that there may be an age bias in the data set, and that this can and should be corrected for in the efficiency analyses; the proposed correction is to introduce an age parameter into the data set. Analyses have been made of how the calibration effects in the regulation model affect the total income frame of the sector as well as each network company's income frame. It is argued that the calibration, as presented, does not work according to its intention and should be adjusted in order to provide the sector with the reference rate of return. (ml)

  17. Pathway models for analysing and managing the introduction of alien plant pests - an overview and categorization

    Douma, J.C.; Pautasso, M.; Venette, R.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Schans, J.; Werf, van der W.

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative

  18. Longitudinal analyses of correlated response efficiencies of fillet traits in Nile tilapia.

    Turra, E M; Fernandes, A F A; de Alvarenga, E R; Teixeira, E A; Alves, G F O; Manduca, L G; Murphy, T W; Silva, M A

    2018-03-01

    Recent studies with Nile tilapia have shown divergent results regarding the possibility of selecting on morphometric measurements to promote indirect genetic gains in fillet yield (FY). The use of indirect selection for fillet traits is important as these traits are only measurable after harvesting. Random regression models are a powerful tool in association studies to identify the best time point at which to measure and select animals. Random regression models can also be applied in a multiple-trait approach to analyze indirect response to selection, which would avoid the need to sacrifice candidate fish. Therefore, the aim of this study was to investigate the genetic relationships between several body measurements, weight and fillet traits throughout the growth period and to evaluate the possibility of indirect selection for fillet traits in Nile tilapia. Data were collected from 2042 fish and were divided into two subsets. The first subset was used to estimate genetic parameters, including the permanent environmental effect for BW and body measurements (8758 records for each body measurement, as each fish was individually weighed and measured a maximum of six times). The second subset (2042 records for each trait) was used to estimate genetic correlations and heritabilities, which enabled the calculation of correlated response efficiencies between body measurements and the fillet traits. Heritability estimates across ages ranged from 0.05 to 0.5 for height, 0.02 to 0.48 for corrected length (CL), 0.05 to 0.68 for width, 0.08 to 0.57 for fillet weight (FW) and 0.12 to 0.42 for FY. All genetic correlation estimates between body measurements and FW were positive and strong (0.64 to 0.98). The estimates of genetic correlation between body measurements and FY were positive (except for CL at some ages), but weak to moderate (-0.08 to 0.68). These estimates resulted in strong and favorable correlated response efficiencies for FW and positive but moderate ones for FY. These results
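
    The correlated response efficiency referred to above follows the standard quantitative-genetics expression: with equal selection intensities, indirect selection on trait X improves trait Y relative to direct selection by r_G * h_X / h_Y, where h is the square root of heritability. The numbers below are hypothetical values in the ranges reported in the abstract, not the actual estimates.

    ```python
    import numpy as np

    def correlated_response_efficiency(r_g, h2_x, h2_y):
        """Efficiency of indirect selection on trait X to improve trait Y, relative to
        direct selection on Y (equal selection intensities assumed):
        CR_Y / R_Y = r_G * h_X / h_Y, with h = sqrt(heritability)."""
        return r_g * np.sqrt(h2_x) / np.sqrt(h2_y)

    # Hypothetical example: a body measurement (X) as an indirect criterion for fillet weight (Y).
    print(round(correlated_response_efficiency(r_g=0.9, h2_x=0.4, h2_y=0.3), 2))  # ~1.04
    ```

    An efficiency near or above one, as in this illustrative case, is what makes indirect selection on external measurements attractive when the target trait requires sacrificing the fish.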

  19. An IEEE 802.11 EDCA Model with Support for Analysing Networks with Misbehaving Nodes

    Szott Szymon

    2010-01-01

    We present a novel model of IEEE 802.11 EDCA with support for analysing networks with misbehaving nodes. In particular, we consider backoff misbehaviour. Firstly, we verify the model by extensive simulation analysis and by comparing it to three other IEEE 802.11 models. The results show that our model behaves satisfactorily and outperforms other widely acknowledged models. Secondly, a comparison with simulation results in several scenarios with misbehaving nodes proves that our model performs correctly for these scenarios. The proposed model can, therefore, be considered as an original contribution to the area of EDCA models and backoff misbehaviour.

  20. Item response theory analyses of the Delis-Kaplan Executive Function System card sorting subtest.

    Spencer, Mercedes; Cho, Sun-Joo; Cutting, Laurie E

    2018-02-02

    In the current study, we examined the dimensionality of the 16-item Card Sorting subtest of the Delis-Kaplan Executive Functioning System assessment in a sample of 264 native English-speaking children between the ages of 9 and 15 years. We also tested for measurement invariance for these items across age and gender groups using item response theory (IRT). Results of the exploratory factor analysis indicated that a two-factor model that distinguished between verbal and perceptual items provided the best fit to the data. Although the items demonstrated measurement invariance across age groups, measurement invariance was violated for gender groups, with two items demonstrating differential item functioning for males and females. Multigroup analysis using all 16 items indicated that the items were more effective for individuals whose IRT scale scores were relatively high. A single-group explanatory IRT model using 14 non-differential item functioning items showed that for perceptual ability, females scored higher than males and that scores increased with age for both males and females; for verbal ability, the observed increase in scores across age differed for males and females. The implications of these findings are discussed.
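
    As an illustration of the item response theory machinery referred to above (not the authors' exact two-factor multigroup model), the sketch below evaluates a two-parameter logistic item response function for two groups with different difficulty parameters, the simplest form of uniform differential item functioning. All parameter values are hypothetical.

    ```python
    import numpy as np

    def p_2pl(theta, a, b):
        """Two-parameter logistic (2PL) item response function."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    theta = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

    # Hypothetical group-specific parameters for a single item showing uniform DIF:
    # same discrimination a, but the item is harder for group 2 (larger difficulty b).
    a = 1.2
    for group, b in (("group 1", 0.0), ("group 2", 0.6)):
        print(group, np.round(p_2pl(theta, a, b), 2))
    ```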

  1. Stochastic Still Water Response Model

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2002-01-01

    In this study a stochastic field model for the still water loading is formulated where the statistics (mean value, standard deviation, and correlation) of the sectional forces are obtained by integration of the load field over the relevant part of the ship structure. The objective of the model is to establish the stochastic load field conditional on a given draft and trim of the vessel. The model contributes to a realistic modelling of the stochastic load processes to be used in a reliability evaluation of the ship hull. Emphasis is given to container vessels. It turns out that an important parameter of the stochastic cargo field model is the mean number of containers delivered by each customer. The formulation of the model for obtaining...

  2. Regional analyses of labor markets and demography: a model based Norwegian example.

    Stambol, L S; Stolen, N M; Avitsland, T

    1998-01-01

    The authors discuss the regional REGARD model, developed by Statistics Norway to analyze the regional implications of macroeconomic development of employment, labor force, and unemployment. "In building the model, empirical analyses of regional producer behavior in manufacturing industries have been performed, and the relation between labor market development and regional migration has been investigated. Apart from providing a short description of the REGARD model, this article demonstrates the functioning of the model, and presents some results of an application." excerpt

  3. Global analyses of historical masonry buildings: Equivalent frame vs. 3D solid models

    Clementi, Francesco; Mezzapelle, Pardo Antonio; Cocchi, Gianmichele; Lenci, Stefano

    2017-07-01

    The paper analyses the seismic vulnerability of two different masonry buildings. It provides both advanced 3D modelling with solid elements and equivalent frame modelling. The global structural behaviour and the dynamic properties of the building complex have been evaluated using the Finite Element Modelling (FEM) technique, with the nonlinear behaviour of masonry taken into account by appropriate constitutive assumptions. A sensitivity analysis is performed to evaluate the effect of the choice of structural model.

  4. Material model for non-linear finite element analyses of large concrete structures

    Engen, Morten; Hendriks, M.A.N.; Øverli, Jan Arve; Åldstedt, Erik; Beushausen, H.

    2016-01-01

    A fully triaxial material model for concrete was implemented in a commercial finite element code. The only required input parameter was the cylinder compressive strength. The material model was suitable for non-linear finite element analyses of large concrete structures. The importance of including

  5. Multicollinearity in prognostic factor analyses using the EORTC QLQ-C30: identification and impact on model selection.

    Van Steen, Kristel; Curran, Desmond; Kramer, Jocelyn; Molenberghs, Geert; Van Vreckem, Ann; Bottomley, Andrew; Sylvester, Richard

    2002-12-30

    Clinical and quality of life (QL) variables from an EORTC clinical trial of first line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and therefore it was hypothesized that multicollinearity contributed to model instability. A correlation matrix indicated that global QL was highly correlated with 7 out of 11 variables. In a first attempt to explore multicollinearity, we used global QL as dependent variable in a regression model with other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses and since global QL exacerbates problems of multicollinearity, we therefore recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model, and selected the same significant prognostic factors as before. Copyright 2002 John Wiley & Sons, Ltd.
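
    A standard diagnostic for the multicollinearity discussed above is the variance inflation factor, obtainable as the diagonal of the inverted predictor correlation matrix. The sketch below computes VIFs for simulated subscales driven by a common factor, mimicking (but not using) the highly correlated QLQ-C30 scales.

    ```python
    import numpy as np

    def variance_inflation_factors(X):
        """VIF for each column of X, computed as the diagonal of the inverse correlation matrix."""
        corr = np.corrcoef(X, rowvar=False)
        return np.diag(np.linalg.inv(corr))

    rng = np.random.default_rng(7)
    n = 300
    common = rng.normal(size=n)
    # Simulated subscales: the first three are strongly driven by a common factor,
    # the last two are mostly noise; columns with small noise get large VIFs.
    X = np.column_stack([common + rng.normal(scale=s, size=n) for s in (0.3, 0.4, 0.5, 1.5, 2.0)])

    print(np.round(variance_inflation_factors(X), 1))
    ```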

  6. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Constantin ANGHELACHE

    2011-10-01

    The article presents the fundamental aspects of linear regression as a toolbox that can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroscedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the interpretations that can be drawn at this level.
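
    For reference, a minimal ordinary-least-squares sketch of the simple linear regression y = b0 + b1*x + e, with coefficient estimates, standard errors and t-statistics computed from simulated macro-style data (all numbers illustrative).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(100, 10, size=50)                 # e.g. an activity indicator
    y = 2.0 + 0.8 * x + rng.normal(0, 5, size=50)    # e.g. a dependent aggregate

    X = np.column_stack([np.ones_like(x), x])        # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS estimates of (b0, b1)

    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)            # residual variance
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    t_stats = beta / se

    print("estimates:", beta.round(3))
    print("std errors:", se.round(3))
    print("t-statistics:", t_stats.round(2))
    ```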

  7. Relative conservatisms of combination methods used in response spectrum analyses of nuclear piping systems

    Gupta, S.; Kustu, O.; Jhaveri, D.P.; Blume, J.A.

    1983-01-01

    The paper presents the conclusions of a comprehensive study that investigated the relative conservatisms represented by various combination techniques. Two approaches were taken for the study, producing mutually consistent results. In the first, 20 representative nuclear piping systems were systematically analyzed using the response spectrum method. The total response was obtained using nine different combination methods. One procedure, using the SRSS method for combining spatial components of response and the 10% method for combining the responses of different modes (which is currently acceptable to the U.S. NRC), was the standard for comparison. Responses computed by the other methods were normalized to this standard method. These response ratios were then used to develop cumulative frequency-distribution curves, which were used to establish the relative conservatism of the methods in a probabilistic sense. In the second approach, 30 single-degree-of-freedom (SDOF) systems that represent different modes of hypothetical piping systems and have natural frequencies varying from 1 Hz to 30 Hz, were analyzed for 276 sets of three-component recorded ground motion. A set of hypothetical systems assuming a variety of modes and frequency ranges was developed. The responses of these systems were computed from the responses of the SDOF systems by combining the spatial response components by algebraic summation and the individual mode responses by the Navy method, or combining both spatial and modal response components using the SRSS method. Probability density functions and cumulative distribution functions were developed for the ratio of the responses obtained by both methods. (orig./HP)
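
    The two ingredients of the standard procedure named above can be sketched directly: SRSS combination of peak modal responses, and the 10% method, implemented here in one common formulation (absolute cross terms added for modes whose frequencies lie within 10% of each other); this formulation is an assumption, not necessarily the paper's exact algorithm, and the modal responses and frequencies are hypothetical.

    ```python
    import numpy as np

    def srss(responses):
        """Square root of the sum of squares of peak modal responses."""
        r = np.asarray(responses, dtype=float)
        return np.sqrt(np.sum(r ** 2))

    def ten_percent_method(responses, freqs):
        """10% method: SRSS plus absolute cross terms for modes whose frequencies
        differ by no more than 10% (frequencies assumed sorted in ascending order)."""
        r = np.asarray(responses, dtype=float)
        f = np.asarray(freqs, dtype=float)
        total = np.sum(r ** 2)
        for i in range(len(r)):
            for j in range(i + 1, len(r)):
                if (f[j] - f[i]) / f[i] <= 0.10:      # closely spaced modes
                    total += 2.0 * abs(r[i] * r[j])
        return np.sqrt(total)

    # Hypothetical peak modal responses and natural frequencies (Hz) of a piping model:
    R = [4.0, 3.5, 1.2, 0.8]
    f = [5.0, 5.3, 12.0, 25.0]      # the first two modes are closely spaced

    print("SRSS:      ", round(srss(R), 3))
    print("10% method:", round(ten_percent_method(R, f), 3))
    ```

    With the first two modes closely spaced, the 10% result exceeds the SRSS result, which is the kind of relative conservatism the study quantifies across many piping systems.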

  8. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    effects and the one-at-a-time approach (O.A.T); and (ii), we applied Sobol's global sensitivity analysis method which is based on variance decompositions. Results illustrate that ψm (maximum sorption rate of mobile colloids), kdmc (solute desorption rate from mobile colloids), and Ks (saturated hydraulic conductivity) are the most sensitive parameters with respect to the contaminant travel time. The analyses indicate that this new module is able to simulate the colloid-facilitated contaminant transport. However, validations under laboratory conditions are needed to confirm the occurrence of the colloid transport phenomenon and to understand model prediction under non-saturated soil conditions. Future work will involve monitoring of the colloidal transport phenomenon through soil column experiments. The anticipated outcome will provide valuable information on the understanding of the dominant mechanisms responsible for colloidal transports, colloid-facilitated contaminant transport and, also, the colloid detachment/deposition processes impacts on soil hydraulic properties. References: Šimůnek, J., C. He, L. Pang, & S. A. Bradford, Colloid-Facilitated Solute Transport in Variably Saturated Porous Media: Numerical Model and Experimental Verification, Vadose Zone Journal, 2006, 5, 1035-1047 Šimůnek, J., M. Šejna, & M. Th. van Genuchten, The C-Ride Module for HYDRUS (2D/3D) Simulating Two-Dimensional Colloid-Facilitated Solute Transport in Variably-Saturated Porous Media, Version 1.0, PC Progress, Prague, Czech Republic, 45 pp., 2012.
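
    Sobol's variance-based indices mentioned above can be illustrated with a brute-force double-loop Monte Carlo estimate, S_i = Var(E[Y | X_i]) / Var(Y), applied to a toy travel-time function that merely borrows the parameter names from the abstract (psi_m, kdmc, Ks); it is not the HYDRUS C-Ride model, and the parameter ranges and functional form are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def travel_time(psi_m, kdmc, Ks):
        """Toy stand-in for the contaminant travel time; NOT the HYDRUS C-Ride model."""
        return 10.0 / Ks + 5.0 * psi_m - 2.0 * np.log(kdmc)

    def sample_params(n):
        return {
            "psi_m": rng.uniform(0.1, 1.0, n),
            "kdmc": rng.uniform(0.01, 0.5, n),
            "Ks": rng.uniform(0.5, 5.0, n),
        }

    def first_order_sobol(name, n_outer=200, n_inner=500):
        """Brute-force double-loop estimate of S_i = Var(E[Y | X_i]) / Var(Y)."""
        cond_means = []
        for xi in sample_params(n_outer)[name]:
            p = sample_params(n_inner)
            p[name] = np.full(n_inner, xi)        # fix X_i, average over the others
            cond_means.append(travel_time(**p).mean())
        var_y = travel_time(**sample_params(n_outer * n_inner)).var()
        return np.var(cond_means) / var_y

    for name in ("psi_m", "kdmc", "Ks"):
        print(name, round(first_order_sobol(name), 2))
    ```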

  9. Limited information estimation of the diffusion-based item response theory model for responses and response times.

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2016-05-01

    Psychological tests are usually analysed with item response models. Recently, some alternative measurement models have been proposed that were derived from cognitive process models developed in experimental psychology. These models consider the responses but also the response times of the test takers. Two such models are the Q-diffusion model and the D-diffusion model. Both models can be calibrated with the diffIRT package of the R statistical environment via marginal maximum likelihood (MML) estimation. In this manuscript, an alternative approach to model calibration is proposed. The approach is based on weighted least squares estimation and parallels the standard estimation approach in structural equation modelling. Estimates are determined by minimizing the discrepancy between the observed and the implied covariance matrix. The estimator is simple to implement, consistent, and asymptotically normally distributed. Least squares estimation also provides a test of model fit by comparing the observed and implied covariance matrix. The estimator and the test of model fit are evaluated in a simulation study. Although parameter recovery is good, the estimator is less efficient than the MML estimator. © 2016 The British Psychological Society.

  10. Intercomparison of fast response commercial gas analysers for nitrous oxide flux measurements under field conditions

    Rannik, Ü.; Haapanala, S.; Shurpali, N. J.; Mammarella, I.; Lind, S.; Hyvönen, N.; Peltola, O.; Zahniser, M.; Martikainen, P. J.; Vesala, T.

    2015-01-01

    Four gas analysers capable of measuring nitrous oxide (N2O) concentration at a response time necessary for eddy covariance flux measurements were operated from spring until winter 2011 over a field cultivated with reed canary grass (RCG, Phalaris arundinacea, L.), a perennial bioenergy crop in eastern Finland. The instruments were TGA100A (Campbell Scientific Inc.), CW-TILDAS-CS (Aerodyne Research Inc.), N2O / CO-23d (Los Gatos Research Inc.) and QC-TILDAS-76-CS (Aerodyne Research Inc.). The period with high emissions, lasting for about 2 weeks after fertilization in late May, was characterized by emissions up to 2 orders of magnitude higher, whereas during the rest of the campaign the N2O fluxes were small, from 0.01 to 1 nmol m-2 s-1. Two instruments, CW-TILDAS-CS and N2O / CO-23d, determined the N2O exchange with only a minor systematic difference throughout the campaign when operated simultaneously. TGA100A produced the cumulatively highest N2O estimates (with 29% higher values during the period when all instruments were operational). QC-TILDAS-76-CS obtained 36% lower fluxes than CW-TILDAS-CS during the first period, including the emission episode, whereas the correspondence with other instruments during the rest of the campaign was good. The reasons for the systematic differences were not identified, suggesting a further need for detailed evaluation of instrument performance under field conditions with emphasis on stability, calibration and any other factors that can systematically affect the accuracy of flux measurements. The instrument CW-TILDAS-CS was characterized by the lowest noise level (with a standard deviation of around 0.12 ppb at 10 Hz sampling rate) as compared to N2O / CO-23d and QC-TILDAS-76-CS (around 0.50 ppb) and TGA100A (around 2 ppb). We identified that for all instruments except CW-TILDAS-CS the random error due to instrumental noise was an important source of uncertainty at the 30 min averaging level and the total stochastic error was frequently

  11. Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model

    Klinge Jacobsen, Henrik

    2000-01-01

    This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emissions. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation. A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences for specific sectors such as agriculture. Electricity and heat are produced at heat and power plants utilising fuels which minimise total fuel cost, while the authorities regulate capacity expansion technologies. The effect of fuel taxes and subsidies on fuels is very sensitive to the fuel substitution

  12. Experimental and Computational Modal Analyses for Launch Vehicle Models considering Liquid Propellant and Flange Joints

    Chang-Hoon Sim

    2018-01-01

    In this research, modal tests and analyses are performed for a simplified and scaled first-stage model of a space launch vehicle using liquid propellant. This study aims to establish finite element modeling techniques for computational modal analyses by considering the liquid propellant and flange joints of launch vehicles. The modal tests measure the natural frequencies and mode shapes in the first and second lateral bending modes. As the liquid filling ratio increases, the measured frequencies decrease. In addition, as the number of flange joints increases, the measured natural frequencies increase. Computational modal analyses using the finite element method are conducted. The liquid is modeled by the virtual mass method, and the flange joints are modeled using one-dimensional spring elements along with the node-to-node connection. Comparison of the modal test results and predicted natural frequencies shows good or moderate agreement. The correlation between the modal tests and analyses establishes finite element modeling techniques for modeling the liquid propellant and flange joints of space launch vehicles.

  13. Rhythmic entrainment source separation: Optimizing analyses of neural responses to rhythmic sensory stimulation

    Cohen, M.S.; Gulbinaite, R.

    2017-01-01

    Steady-state evoked potentials (SSEPs) are rhythmic brain responses to rhythmic sensory stimulation, and are often used to study perceptual and attentional processes. We present a data analysis method for maximizing the signal-to-noise ratio of the narrow-band steady-state response in the frequency

  14. Characterizing Response-Reinforcer Relations in the Natural Environment: Exploratory Matching Analyses

    Sy, Jolene R.; Borrero, John C.; Borrero, Carrie S. W.

    2010-01-01

    We assessed problem and appropriate behavior in the natural environment from a matching perspective. Problem and appropriate behavior were conceptualized as concurrently available responses, the occurrence of which was thought to be determined by the relative rates or durations of reinforcement. We also assessed whether response allocation could…

  15. Model for Managing Corporate Social Responsibility

    Tamara Vlastelica Bakić

    2015-05-01

    As a cross-functional process in the organization, effective management of corporate social responsibility requires the definition of strategies, programs and an action plan that structure this process from its initiation to the measurement of end effects. Academic literature on the topic of corporate social responsibility is mainly focused on exploring the business case for the concept, i.e., determining the effects of social responsibility on individual aspects of the business. Research to date has not formalized a management concept in this domain to a satisfactory extent; it is for this reason that this paper attempts to present one model for managing corporate social responsibility. The model represents a contribution to the theory and business practice of corporate social responsibility, as it offers a strategic framework for the systematic planning, implementation and evaluation of socially responsible activities and programs.

  16. Potential of MR histogram analyses for prediction of response to chemotherapy in patients with colorectal hepatic metastases.

    Liang, He-Yue; Huang, Ya-Qin; Yang, Zhao-Xia; Ying-Ding; Zeng, Meng-Su; Rao, Sheng-Xiang

    2016-07-01

    To determine whether magnetic resonance imaging (MRI) histogram analyses can help predict response to chemotherapy in patients with colorectal hepatic metastases, using the Response Evaluation Criteria in Solid Tumours (RECIST 1.1) as the reference standard. Standard MRI including diffusion-weighted imaging (b = 0, 500 s/mm²) was performed before chemotherapy in 53 patients with colorectal hepatic metastases. Histograms were computed for apparent diffusion coefficient (ADC) maps and for arterial and portal venous phase images; thereafter, the mean, percentiles (1st, 10th, 50th, 90th, 99th), skewness, kurtosis, and variance were generated. Quantitative histogram parameters were compared between responders (partial and complete response, n = 15) and non-responders (progressive and stable disease, n = 38). Receiver operating characteristic (ROC) analyses were further performed for the significant parameters. The mean and the 1st, 10th, 50th, 90th and 99th percentiles of the ADC maps were significantly lower in the responding group than in the non-responding group (p = 0.000-0.002), with areas under the ROC curve (AUCs) of 0.76-0.82. The histogram parameters of the arterial and portal venous phases showed no significant difference (p > 0.05) between the two groups. Histogram-derived parameters for ADC maps seem to be a promising tool for predicting response to chemotherapy in patients with colorectal hepatic metastases. • ADC histogram analyses can potentially predict chemotherapy response in colorectal liver metastases. • Lesions with lower histogram-derived ADC parameters (mean, percentiles) tend to have a good response. • MR enhancement histogram analyses are not reliable for predicting response.
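
    A minimal sketch of the two computational steps above: histogram metrics (mean, percentiles, skewness, kurtosis, variance) extracted from an ROI's ADC values, and an ROC AUC for separating responders from non-responders. The simulated ADC values and the sign convention (lower ADC predicts response) are illustrative assumptions consistent with, but not taken from, the reported data.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis
    from sklearn.metrics import roc_auc_score

    def histogram_parameters(adc_values):
        """Histogram metrics of the kind used in the abstract, for one lesion ROI."""
        v = np.asarray(adc_values, dtype=float)
        params = {"mean": v.mean(), "variance": v.var(ddof=1),
                  "skewness": skew(v), "kurtosis": kurtosis(v)}
        for p in (1, 10, 50, 90, 99):
            params[f"p{p}"] = np.percentile(v, p)
        return params

    rng = np.random.default_rng(5)
    # Simulated mean ADC (x10^-3 mm^2/s) per lesion: responders lower on average (hypothetical).
    responders = rng.normal(1.05, 0.15, 15)
    non_responders = rng.normal(1.25, 0.15, 38)

    labels = np.r_[np.ones(15), np.zeros(38)]
    values = np.r_[responders, non_responders]
    # Lower ADC predicts response, so the negated value is used as the score.
    print("AUC:", round(roc_auc_score(labels, -values), 2))
    print("10th percentile of one simulated ROI:",
          round(histogram_parameters(rng.normal(1.1, 0.2, 500))["p10"], 3))
    ```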

  17. Present status of theories and data analyses of mathematical models for carcinogenesis

    Kai, Michiaki; Kawaguchi, Isao

    2007-01-01

    Reviewed are the basic mathematical models (hazard functions) of carcinogenesis, the present trends in model studies, and those for radiation carcinogenesis. Hazard functions of carcinogenesis are described for the multi-stage model and the two-event model related to cell dynamics. At present, the age distribution of cancer mortality is analyzed, the relationship between mutation and carcinogenesis is discussed, and models for colorectal carcinogenesis are presented. For radiation carcinogenesis, the Armitage-Doll model and the generalized MVK model (Moolgavkar, Venzon, Knudson, 1971-1990) of two-stage clonal expansion have been applied to analyses of carcinogenesis in A-bomb survivors, workers in uranium mines (Rn exposure), smoking doctors in the UK, and other cases, whose characteristics are discussed. In the analyses of A-bomb survivors, the models above are applied to solid tumors and leukemia to examine the effects, if any, of stage, age at exposure, time progression, etc. For the miners and smokers, the stages of initiation, promotion and progression in carcinogenesis are discussed on the basis of the analyses. Other analyses cover workers in a Canadian atomic power plant and patients who underwent radiation therapy. Model analysis can help to understand the carcinogenic process quantitatively rather than merely describe it. (R.T.)
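
    The Armitage-Doll multistage model mentioned above implies a hazard that rises approximately as a power of age, h(t) proportional to t^(k-1) for k rate-limiting stages; the sketch below simply evaluates that relation and checks the log-log slope. The scale constant is arbitrary and purely illustrative.

    ```python
    import numpy as np

    def armitage_doll_hazard(t, k, scale=1e-12):
        """Armitage-Doll multistage approximation: hazard(t) ~ scale * t**(k - 1).

        k is the number of rate-limiting stages; 'scale' lumps the stage-specific
        mutation rates (the value here is purely illustrative)."""
        return scale * np.power(t, k - 1)

    age = np.linspace(20.0, 80.0, 100)
    for k in (4, 5, 6):
        h = armitage_doll_hazard(age, k)
        slope = np.polyfit(np.log(age), np.log(h), 1)[0]   # log-log slope should equal k - 1
        print(f"k = {k}: hazard at age 60 = {armitage_doll_hazard(60.0, k):.2e}, "
              f"log-log slope = {slope:.2f}")
    ```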

  18. The conceptual model of organization social responsibility

    LUO, Lan; WEI, Jingfu

    2014-01-01

    With the development of CSR research, people have increasingly come to recognize that corporations should take social responsibility. Should other organizations besides corporations not also take responsibilities beyond their own field? This paper puts forward the concept of organization social responsibility (OSR) on the basis of the concept of corporate social responsibility and other theories. Conceptual models are then built on this conception, introducing OSR from three angles: the types of organi...

  19. Hidden Markov Item Response Theory Models for Responses and Response Times.

    Molenaar, Dylan; Oberski, Daniel; Vermunt, Jeroen; De Boeck, Paul

    2016-01-01

    Current approaches to model responses and response times to psychometric tests solely focus on between-subject differences in speed and ability. Within subjects, speed and ability are assumed to be constants. Violations of this assumption are generally absorbed in the residual of the model. As a result, within-subject departures from the between-subject speed and ability level remain undetected. These departures may be of interest to the researcher as they reflect differences in the response processes adopted on the items of a test. In this article, we propose a dynamic approach for responses and response times based on hidden Markov modeling to account for within-subject differences in responses and response times. A simulation study is conducted to demonstrate acceptable parameter recovery and acceptable performance of various fit indices in distinguishing between different models. In addition, both a confirmatory and an exploratory application are presented to demonstrate the practical value of the modeling approach.

  20. Response moderation models for conditional dependence between response time and response accuracy.

    Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan

    2017-05-01

    It is becoming more feasible and common to register response times in the application of psychometric tests. Researchers thus have the opportunity to jointly model response accuracy and response time, which provides users with more relevant information. The most common choice is to use the hierarchical model (van der Linden, 2007, Psychometrika, 72, 287), which assumes conditional independence between response time and accuracy, given a person's speed and ability. However, this assumption may be violated in practice if, for example, persons vary their speed or differ in their response strategies, leading to conditional dependence between response time and accuracy and confounding measurement. We propose six nested hierarchical models for response time and accuracy that allow for conditional dependence, and discuss their relationship to existing models. Unlike existing approaches, the proposed hierarchical models allow for various forms of conditional dependence in the model and allow the effect of continuous residual response time on response accuracy to be item-specific, person-specific, or both. Estimation procedures for the models are proposed, as well as two information criteria that can be used for model selection. Parameter recovery and usefulness of the information criteria are investigated using simulation, indicating that the procedure works well and is likely to select the appropriate model. Two empirical applications are discussed to illustrate the different types of conditional dependence that may occur in practice and how these can be captured using the proposed hierarchical models. © 2016 The British Psychological Society.

  1. Rhythmic entrainment source separation: Optimizing analyses of neural responses to rhythmic sensory stimulation

    Cohen, M.S.; Gulbinaite, R.

    2017-01-01

    Steady-state evoked potentials (SSEPs) are rhythmic brain responses to rhythmic sensory stimulation, and are often used to study perceptual and attentional processes. We present a data analysis method for maximizing the signal-to-noise ratio of the narrow-band steady-state response in the frequency and time-frequency domains. The method, termed rhythmic entrainment source separation (RESS), is based on denoising source separation approaches that take advantage of the simultaneous but differen...

  2. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using the O3d software were identical.
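
    The Dahlberg formula used above for method error is sqrt(sum(d_i^2) / (2n)), where d_i are the differences between paired measurements. The sketch below applies it to a handful of hypothetical duplicate tooth-width measurements; the values are invented, not the study's data.

    ```python
    import numpy as np

    def dahlberg_error(first_measurement, second_measurement):
        """Dahlberg's formula for method error between duplicate measurements:
        sqrt(sum(d_i^2) / (2 n)), with d_i the difference of each measurement pair."""
        d = np.asarray(first_measurement, dtype=float) - np.asarray(second_measurement, dtype=float)
        return np.sqrt(np.sum(d ** 2) / (2 * d.size))

    # Hypothetical duplicate mesio-distal tooth widths (mm), plaster/caliper vs digital/O3d.
    caliper = np.array([7.92, 8.10, 6.85, 7.40, 9.05])
    digital = np.array([7.95, 8.05, 6.90, 7.38, 9.10])
    print(f"Dahlberg error: {dahlberg_error(caliper, digital):.3f} mm")
    ```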

  3. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  4. Transcriptome-wide analyses indicate mitochondrial responses to particulate air pollution exposure

    Winckelmans, Ellen; Nawrot, Tim S.; Tsamou, Maria

    2017-01-01

    validation cohort (n = 169, 55.6% women). Results: Overrepresentation analyses revealed significant pathways, including the electron transport chain (ETC), for medium-term exposure in women. For men, medium-term PM10.... Conclusions: In this exploratory study, we identified mitochondrial genes and pathways associated with particulate air pollution, indicating upregulation of energy-producing pathways as a potential mechanism to compensate for PM-induced mitochondrial damage.

  5. arXiv Statistical Analyses of Higgs- and Z-Portal Dark Matter Models

    Ellis, John; Marzola, Luca; Raidal, Martti

    2018-06-12

    We perform frequentist and Bayesian statistical analyses of Higgs- and Z-portal models of dark matter particles with spin 0, 1/2 and 1. Our analyses incorporate data from direct detection and indirect detection experiments, as well as LHC searches for monojet and monophoton events, and we also analyze the potential impacts of future direct detection experiments. We find acceptable regions of the parameter spaces for Higgs-portal models with real scalar, neutral vector, Majorana or Dirac fermion dark matter particles, and Z-portal models with Majorana or Dirac fermion dark matter particles. In many of these cases, there are interesting prospects for discovering dark matter particles in Higgs or Z decays, as well as dark matter particles weighing $\\gtrsim 100$ GeV. Negative results from planned direct detection experiments would still allow acceptable regions for Higgs- and Z-portal models with Majorana or Dirac fermion dark matter particles.

  6. Transcriptomic and proteomic analyses of the Aspergillus fumigatus hypoxia response using an oxygen-controlled fermenter

    Barker Bridget M

    2012-02-01

    Full Text Available Abstract Background Aspergillus fumigatus is a mold responsible for the majority of cases of aspergillosis in humans. To survive in the human body, A. fumigatus must adapt to microenvironments that are often characterized by low nutrient and oxygen availability. Recent research suggests that the ability of A. fumigatus and other pathogenic fungi to adapt to hypoxia contributes to their virulence. However, molecular mechanisms of A. fumigatus hypoxia adaptation are poorly understood. Thus, to better understand how A. fumigatus adapts to hypoxic microenvironments found in vivo during human fungal pathogenesis, the dynamic changes of the fungal transcriptome and proteome in hypoxia were investigated over a period of 24 hours utilizing an oxygen-controlled fermenter system. Results Significant increases in transcripts associated with iron and sterol metabolism, the cell wall, the GABA shunt, and transcriptional regulators were observed in response to hypoxia. A concomitant reduction in transcripts was observed with ribosome and terpenoid backbone biosynthesis, TCA cycle, amino acid metabolism and RNA degradation. Analysis of changes in transcription factor mRNA abundance shows that hypoxia induces significant positive and negative changes that may be important for regulating the hypoxia response in this pathogenic mold. Growth in hypoxia resulted in changes in the protein levels of several glycolytic enzymes, but these changes were not always reflected by the corresponding transcriptional profiling data. However, a good correlation overall (R2 = 0.2, p A. fumigatus. Conclusions Taken together, our data suggest a robust cellular response that is likely regulated both at the transcriptional and post-transcriptional level in response to hypoxia by the human pathogenic mold A. fumigatus. As with other pathogenic fungi, the induction of glycolysis and transcriptional down-regulation of the TCA cycle and oxidative phosphorylation appear to major

  7. Balmorel: A model for analyses of the electricity and CHP markets in the Baltic Sea Region. Appendices

    Ravn, H.F.; Munksgaard, J.; Ramskov, J.; Grohnheit, P.E.; Larsen, H.V.

    2001-03-01

    This report describes the motivations behind the development of the Balmorel model as well as the model itself. The purpose of the Balmorel project is to develop a model for analyses of the power and CHP sectors in the Baltic Sea Region. The model is directed towards the analysis of relevant policy questions to the extent that they contain substantial international aspects. The model is developed in response to the trend towards internationalisation in the electricity sector. This trend is seen in increased international trade of electricity, in investment strategies among producers and otherwise. Also environmental considerations and policies are to an increasing extent gaining an international perspective in relation to the greenhouse gasses. Further, the ongoing process of deregulation of the energy sector highlights this and contributes to the need for overview and analysis. A guiding principle behind the construction of the model has been that it may serve as a means of communication in relation to the policy issues that already are or that may become important for the region. Therefore, emphasis has been put on documentation, transparency and flexibility of the model. This is achieved in part by formulating the model in a high level modelling language, and by making the model, including data, available at the internet. Potential users of the Balmorel model include research institutions, consulting companies, energy authorities, transmission system operators and energy companies. (au)

  8. Comparison of plasma input and reference tissue models for analysing [(11)C]flumazenil studies

    Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.

    2008-01-01

    A single-tissue compartment model with plasma input is the established method for analysing [(11)C]flumazenil ([(11)C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been

  9. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems

    Vredenberg, W.J.

    2011-01-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves is presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with

  10. A new emergency response model for MACCS. Final report

    Chanin, D.I.

    1992-01-01

    Under DOE sponsorship, as directed by the Los Alamos National Laboratory (LANL), the MACCS code (version 1.5.11.1) [Ch92] was modified to implement a series of improvements in its modeling of emergency response actions. The purpose of this effort has been to aid the Westinghouse Savannah River Company (WSRC) in its performance of the Level III analysis for the Savannah River Site (SRS) probabilistic risk analysis (PRA) of K Reactor [Wo90]. To ensure its usefulness to WSRC, and facilitate the new model's eventual merger with other MACCS enhancements, close cooperation with WSRC and the MACCS development team at Sandia National Laboratories (SNL) was maintained throughout the project. These improvements are intended to allow a greater degree of flexibility in modeling the mitigative actions of evacuation and sheltering. The emergency response model in MACCS version 1.5.11.1 was developed to support NRC analyses of consequences from severe accidents at commercial nuclear power plants. The NRC code imposes unnecessary constraints on DOE safety analyses, particularly for consequences to onsite worker populations, and it has therefore been revamped. The changes to the code have been implemented in a manner that preserves previous modeling capabilities and therefore prior analyses can be repeated with the new code

  11. Analysing and controlling the tax evasion dynamics via majority-vote model

    Lima, F W S, E-mail: fwslima@gmail.co, E-mail: wel@ufpi.edu.b [Departamento de Fisica, Universidade Federal do PiauI, 64049-550, Teresina - PI (Brazil)

    2010-09-01

    Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabasi-Albert networks, and Erdoes-Renyi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had recently been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust: it can be studied using the equilibrium dynamics of the Ising model as well as the nonequilibrium MVM, and on the various topologies cited above, giving the same behavior regardless of the dynamics or topology used.
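
    The noise-driven majority-vote update and the Zaklan enforcement rule described here can be sketched compactly. In the sketch below, the lattice size, noise level, audit probability and punishment length are illustrative choices rather than the values used in the study, and the square lattice is only one of the topologies the authors consider.

```python
# Minimal sketch of the Zaklan tax-evasion setup driven by majority-vote
# dynamics with noise q on an L x L square lattice. The audit probability
# p_audit and punishment length k are illustrative parameters.
import numpy as np

rng = np.random.default_rng(1)
L, q, p_audit, k, steps = 50, 0.10, 0.05, 10, 200

s = np.ones((L, L), dtype=int)          # +1 honest, -1 evader
punish = np.zeros((L, L), dtype=int)    # remaining punishment periods

def neighbour_sum(s):
    return (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
            np.roll(s, 1, 1) + np.roll(s, -1, 1))

evasion = []
for _ in range(steps):
    # Majority-vote update: follow the local majority with prob. 1-q,
    # oppose it with prob. q (ties resolved at random).
    maj = np.sign(neighbour_sum(s))
    maj[maj == 0] = rng.choice([-1, 1], size=(maj == 0).sum())
    flip = rng.random((L, L)) < q
    s = np.where(flip, -maj, maj)
    # Agents still under punishment are forced to stay honest.
    s[punish > 0] = 1
    punish[punish > 0] -= 1
    # Enforcement: each evader is audited with probability p_audit and,
    # if caught, remains honest for the next k periods.
    audited = (s == -1) & (rng.random((L, L)) < p_audit)
    s[audited] = 1
    punish[audited] = k
    evasion.append(np.mean(s == -1))

print(f"mean tax-evasion fraction over run: {np.mean(evasion):.3f}")
```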

  12. Analysing and controlling the tax evasion dynamics via majority-vote model

    Lima, F W S

    2010-01-01

    Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabasi-Albert networks, and Erdoes-Renyi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had recently been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust: it can be studied using the equilibrium dynamics of the Ising model as well as the nonequilibrium MVM, and on the various topologies cited above, giving the same behavior regardless of the dynamics or topology used.

  13. Modeling the frequency response of photovoltaic inverters

    Ernauli Christine Aprilia, A.; Cuk, V.; Cobben, J.F.G.; Ribeiro, P.F.; Kling, W.L.

    2012-01-01

    The increased presence of photovoltaic (PV) systems inevitably affects the power quality in the grid. This new reality demands grid power quality studies involving PV inverters. This paper proposes several frequency response models in the form of equivalent circuits. Models are based on laboratory

  14. Corporate Social Responsibility Agreements Model for Community ...

    Corporate Social Responsibility Agreements Model for Community ... their host communities with concomitant adverse effect on mining operations. ... sustainable community development an integral part of the mining business. This paper presents the evolutionary strategic models, with differing principles and action plans, ...

  15. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used, its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to four alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
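
    The comparison of two estimates described here leads to the familiar quadratic-form statistic. The sketch below is a generic numpy illustration of that computation with hypothetical item-parameter estimates and covariance matrices, not output from the diffusion models discussed in the abstract.

```python
# Minimal sketch of a Hausman-type misspecification statistic: compare two
# estimates of the same item parameters, one efficient under the model (b_e)
# and one consistent under broader conditions (b_c). Inputs are assumed to
# come from some external estimation step.
import numpy as np
from scipy.stats import chi2

def hausman(b_c, V_c, b_e, V_e):
    d = b_c - b_e
    V = V_c - V_e                        # difference of covariance matrices
    stat = float(d @ np.linalg.pinv(V) @ d)
    df = d.size
    return stat, chi2.sf(stat, df)

# Hypothetical item-difficulty estimates from two estimators.
b_c = np.array([0.52, -1.10, 0.95])
b_e = np.array([0.48, -1.02, 1.01])
V_c = np.diag([0.020, 0.025, 0.022])
V_e = np.diag([0.012, 0.015, 0.013])
stat, p = hausman(b_c, V_c, b_e, V_e)
print(f"Hausman statistic = {stat:.2f}, p = {p:.3f}")
```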

  16. Tropical cyclones in two atmospheric (re)analyses and their response in two oceanic reanalyses

    Jourdain, N.C.; Barnier, B.; Ferry, N.; Vialard, J.; Menkes, C.E.; Lengaigne, M.; Parent, L.

    composites in Fig. 7). We focus our description of the mechanisms on strong TCs in GLORYS1 to emphasize the major issues in the ocean reanalyses, and GLORYS2 is not shown because it has a very similar temperature response to GLORYS1. In the top 30 m and over...

  17. Dignity and cost-effectiveness: analysing the responsibility for decisions in medical ethics.

    Robertson, G S

    1984-01-01

    In the operation of a health care system, defining the limits of medical care is the joint responsibility of many parties including clinicians, patients, philosophers and politicians. It is suggested that changes in the potential for prolonging life make it necessary to give doctors guidance which may have to incorporate certain features of utilitarianism, individualism and patient-autonomy. PMID:6502644

  18. Temporal Analyses of the Response of Intervertebral Disc Cells and Mesenchymal Stem Cells to Nutrient Deprivation

    Sarah A. Turner

    2016-01-01

    Full Text Available Much emphasis has been placed recently on the repair of degenerate discs using implanted cells, such as disc cells or bone marrow derived mesenchymal stem cells (MSCs). This study examines the temporal response of bovine and human nucleus pulposus (NP) cells and MSCs cultured in monolayer following exposure to altered levels of glucose (0, 3.15, and 4.5 g/L) and foetal bovine serum (0, 10, and 20%) using an automated time-lapse imaging system. NP cells were also exposed to the cell death inducers, hydrogen peroxide and staurosporine, in comparison to serum starvation. We have demonstrated that human NP cells show an initial “shock” response to reduced nutrition (glucose). However, as time progresses, NP cells supplemented with serum recover with minimal evidence of cell death. Human NP cells show no evidence of proliferation in response to nutrient supplementation, whereas MSCs showed greater response to increased nutrition. When specifically inducing NP cell death with hydrogen peroxide and staurosporine, as expected, the cell number declined. These results support the concept that implanted NP cells or MSCs may be capable of survival in the nutrient-poor environment of the degenerate human disc, which has important clinical implications for the development of IVD cell therapies.

  19. Transcriptomic and proteomic analyses of the Aspergillus fumigatus hypoxia response using an oxygen-controlled fermenter

    2012-01-01

    Background Aspergillus fumigatus is a mold responsible for the majority of cases of aspergillosis in humans. To survive in the human body, A. fumigatus must adapt to microenvironments that are often characterized by low nutrient and oxygen availability. Recent research suggests that the ability of A. fumigatus and other pathogenic fungi to adapt to hypoxia contributes to their virulence. However, molecular mechanisms of A. fumigatus hypoxia adaptation are poorly understood. Thus, to better understand how A. fumigatus adapts to hypoxic microenvironments found in vivo during human fungal pathogenesis, the dynamic changes of the fungal transcriptome and proteome in hypoxia were investigated over a period of 24 hours utilizing an oxygen-controlled fermenter system. Results Significant increases in transcripts associated with iron and sterol metabolism, the cell wall, the GABA shunt, and transcriptional regulators were observed in response to hypoxia. A concomitant reduction in transcripts was observed with ribosome and terpenoid backbone biosynthesis, TCA cycle, amino acid metabolism and RNA degradation. Analysis of changes in transcription factor mRNA abundance shows that hypoxia induces significant positive and negative changes that may be important for regulating the hypoxia response in this pathogenic mold. Growth in hypoxia resulted in changes in the protein levels of several glycolytic enzymes, but these changes were not always reflected by the corresponding transcriptional profiling data. However, a good correlation overall (R2 = 0.2, p proteomics datasets for all time points. The lack of correlation between some transcript levels and their subsequent protein levels suggests another regulatory layer of the hypoxia response in A. fumigatus. Conclusions Taken together, our data suggest a robust cellular response that is likely regulated both at the transcriptional and post-transcriptional level in response to hypoxia by the human pathogenic mold A. fumigatus. As

  20. Experimental data and dose-response models

    Ullrich, R.L.

    1985-01-01

    Dose-response relationships for radiation carcinogenesis have been of interest to biologists, modelers, and statisticians for many years. Despite this interest, there are few instances in which there are sufficient experimental data to allow the fitting of various dose-response models. In those experimental systems for which data are available, the dose-response curves for tumor induction cannot be described by a single model. Dose-response models which have been observed following acute exposures to gamma rays include threshold, quadratic, and linear models. Data on the influence of sex, age, and environment suggest a strong role of host factors in the dose response. With decreasing dose rate the effectiveness of gamma ray irradiation tends to decrease in essentially every instance. In those cases in which the high dose rate dose response could be described by a quadratic model, the effect of dose rate is consistent with predictions based on radiation effects on the induction of initial events. Whether the underlying reason for the observed dose-rate effect is an effect on the induction of initial events or an effect on the subsequent steps in the carcinogenic process is unknown. Information on the dose response for tumor induction for high LET (linear energy transfer) radiations such as neutrons is even more limited. The observed dose and dose rate data for tumor induction following neutron exposure are complex and do not appear to be consistent with predictions based on models for the induction of initial events.
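
    To make the model families named above concrete, the following is a minimal sketch of fitting a linear-quadratic dose-response curve with a background term; the dose and incidence values are hypothetical and do not come from the experiments summarized here.

```python
# Minimal sketch of fitting a linear-quadratic dose-response model
# I(D) = c + alpha*D + beta*D^2 to tumour-incidence data. The doses and
# incidences below are hypothetical, purely for illustration.
import numpy as np
from scipy.optimize import curve_fit

def linear_quadratic(D, c, alpha, beta):
    return c + alpha * D + beta * D ** 2

dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0])          # Gy
incidence = np.array([0.02, 0.03, 0.05, 0.09, 0.21, 0.38])

params, cov = curve_fit(linear_quadratic, dose, incidence, p0=[0.01, 0.02, 0.01])
c, alpha, beta = params
print(f"background={c:.3f}, alpha={alpha:.3f}/Gy, beta={beta:.3f}/Gy^2")
```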

  1. Item Response Theory Analyses of the Parent and Teacher Ratings of the DSM-IV ADHD Rating Scale

    Gomez, Rapson

    2008-01-01

    The graded response model (GRM), which is based on item response theory (IRT), was used to evaluate the psychometric properties of the inattention and hyperactivity/impulsivity symptoms in an ADHD rating scale. To accomplish this, parents and teachers completed the DSM-IV ADHD Rating Scale (DARS; Gomez et al., "Journal of Child Psychology and…
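
    The graded response model underlying this analysis assigns each ordered rating category a probability through differences of adjacent cumulative logistic curves. The sketch below illustrates that calculation with hypothetical discrimination and threshold parameters, not the estimates obtained from the DARS data.

```python
# Minimal sketch of category probabilities under the graded response model:
# P(X >= k | theta) follows a logistic curve in theta, and category
# probabilities are differences of adjacent cumulative curves.
# The item parameters below are hypothetical.
import numpy as np

def grm_probs(theta, a, b):
    """Return P(X = 0..K) for one item with discrimination a and thresholds b."""
    b = np.asarray(b, float)
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= 1), ..., P(X >= K)
    cum = np.concatenate(([1.0], cum, [0.0]))      # pad with P(X >= 0) = 1 and P(X >= K+1) = 0
    return cum[:-1] - cum[1:]

# A 4-category symptom rating (0-3) evaluated at two trait levels.
print(grm_probs(theta=-1.0, a=1.7, b=[-0.5, 0.4, 1.3]))
print(grm_probs(theta=+1.5, a=1.7, b=[-0.5, 0.4, 1.3]))
```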

  2. Nurses' intention to leave: critically analyse the theory of reasoned action and organizational commitment model.

    Liou, Shwu-Ru

    2009-01-01

    To systematically analyse the Organizational Commitment model and Theory of Reasoned Action and determine concepts that can better explain nurses' intention to leave their job. The Organizational Commitment model and Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to nursing shortage. However, the appropriateness of applying these two models in nursing was not analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly whereas they are operationally defined in the Organizational Commitment model. Predictability of the Theory of Reasoned Action is questionable whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can be concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.

  3. Simplified Model and Response Analysis for Crankshaft of Air Compressor

    Chao-bo, Li; Jing-jun, Lou; Zhen-hai, Zhang

    2017-11-01

    The original crankshaft model is simplified to an appropriate level to balance calculation precision against calculation speed, and the finite element method is then used to analyse the vibration response of the structure. To study the simplification and stress concentration of the air compressor crankshaft, this paper compares the calculated and experimental modal frequencies of the crankshaft before and after simplification, calculates the vibration response at a reference point under the constraint conditions using the simplified model, and calculates the stress distribution of the original model. The results show that the error between the calculated and experimental modal frequencies is kept below 7%, that the constraints change the modal density of the system, and that stress concentration appears at the transition between the crank arm and the shaft, so this part of the crankshaft should be given special attention during manufacture.

  4. Radiation-induced damage analysed by luminescence methods in retrospective dosimetry and emergency response.

    Woda, Clemens; Bassinet, Céline; Trompier, François; Bortolin, Emanuela; Della Monaca, Sara; Fattibene, Paola

    2009-01-01

    The increasing risk of a mass casualty scenario following a large scale radiological accident or attack necessitates the development of appropriate dosimetric tools for emergency response. Luminescence dosimetry has been reliably applied for dose reconstruction in contaminated settlements for several decades and recent research into new materials carried close to the human body opens the possibility of estimating individual doses for accident and emergency dosimetry using the same technique. This paper reviews the luminescence research into materials useful for accident dosimetry and applications in retrospective dosimetry. The properties of the materials are critically discussed with regard to the requirements for population triage. It is concluded that electronic components found within portable electronic devices, such as e.g. mobile phones, are at present the most promising material to function as a fortuitous dosimeter in an emergency response.

  5. Comparative Proteomics Analyses of Pollination Response in Endangered Orchid Species Dendrobium Chrysanthum

    Wei Wang

    2017-11-01

    Full Text Available Pollination is a crucial stage in the plant reproductive process. The self-compatibility (SC) and self-incompatibility (SI) mechanisms determine plant genetic diversity and species survival. D. chrysanthum is a highly valued ornamental and traditional herbal orchid in Asia but has been declared endangered. Sexual reproduction in D. chrysanthum relies on the compatibility of pollination. To provide a better understanding of the mechanism of pollination, the differentially expressed proteins (DEPs) between the self-pollination (SP) and cross-pollination (CP) pistils of D. chrysanthum were investigated using a proteomic approach: two-dimensional electrophoresis (2-DE) coupled with tandem mass spectrometry. A total of 54 DEP spots were identified in the 2-DE maps between the SP and CP. Gene ontology analysis revealed an array of proteins belonging to the following functional categories: metabolic process (8.94%), response to stimulus (5.69%), biosynthetic process (4.07%), protein folding (3.25%) and transport (3.25%). Identification of these DEPs at the early response stage of pollination will hopefully provide new insights into the mechanism of the pollination response and help the conservation of this orchid species.

  6. A model finite-element to analyse the mechanical behavior of a PWR fuel rod

    Galeao, A.C.N.R.; Tanajura, C.A.S.

    1988-01-01

    A model to analyse the mechanical behavior of a PWR fuel rod is presented. Attention is focused on the phenomenon of pellet-pellet and pellet-cladding contact, taking advantage of an elastic model which includes the effects of thermal gradients, cladding internal and external pressures, swelling and initial relocation. The contact problem gives rise to a variational formulation which employs Lagrangian multipliers. An iterative scheme is constructed and the finite element method is applied to obtain the numerical solution. Some results and comments are presented to examine the performance of the model. (author)

  7. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Gareau, Alexandre

    2016-09-01

    Full Text Available Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM, certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA of equivalent constructs between dyad members (i.e. measurement equivalence/invariance; ME/I. Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.
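
    As a concrete illustration of the observed-score practice the authors caution about, the sketch below fits a basic APIM for distinguishable dyad members with two ordinary regressions on simulated data. This simplification ignores the correlation between the two members' residuals and the measurement model, which is exactly what the proposed CFA/ME-I procedure is meant to address; all variable names and effect sizes are hypothetical.

```python
# Minimal sketch of an observed-score APIM for distinguishable dyad members:
# each partner's outcome is regressed on their own predictor (actor effect)
# and on the other member's predictor (partner effect). A latent-variable
# APIM would instead be estimated in an SEM framework.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200                                   # number of dyads
x_a, x_b = rng.standard_normal(n), rng.standard_normal(n)
y_a = 0.5 * x_a + 0.2 * x_b + rng.standard_normal(n)   # actor 0.5, partner 0.2
y_b = 0.4 * x_b + 0.3 * x_a + rng.standard_normal(n)   # actor 0.4, partner 0.3

X = sm.add_constant(np.column_stack([x_a, x_b]))        # [const, x_a, x_b]
fit_a = sm.OLS(y_a, X).fit()                            # member A: actor then partner
fit_b = sm.OLS(y_b, X[:, [0, 2, 1]]).fit()              # member B: actor (x_b) then partner (x_a)
print("member A (const, actor, partner):", fit_a.params)
print("member B (const, actor, partner):", fit_b.params)
```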

  8. Tests and analyses of 1/4-scale upgraded nine-bay reinforced concrete basement models

    Woodson, S.C.

    1983-01-01

    Two nine-bay prototype structures, a flat plate and two-way slab with beams, were designed in accordance with the 1977 ACI code. A 1/4-scale model of each prototype was constructed, upgraded with timber posts, and statically tested. The development of the timber posts placement scheme was based upon yield-line analyses, punching shear evaluation, and moment-thrust interaction diagrams of the concrete slab sections. The flat plate model and the slab with beams model withstood approximate overpressures of 80 and 40 psi, respectively, indicating that required hardness may be achieved through simple upgrading techniques

  9. Beta-Poisson model for single-cell RNA-seq data analyses.

    Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Rantalainen, Mattias; Pawitan, Yudi

    2016-07-15

    Single-cell RNA-sequencing technology allows detection of gene expression at the single-cell level. One typical feature of the data is a bimodality in the cellular distribution even for highly expressed genes, primarily caused by a proportion of non-expressing cells. The standard and the over-dispersed gamma-Poisson models that are commonly used in bulk-cell RNA-sequencing are not able to capture this property. We introduce a beta-Poisson mixture model that can capture the bimodality of the single-cell gene expression distribution. We further integrate the model into the generalized linear model framework in order to perform differential expression analyses. The whole analytical procedure is called BPSC. The results from several real single-cell RNA-seq datasets indicate that ∼90% of the transcripts are well characterized by the beta-Poisson model; the model-fit from BPSC is better than the fit of the standard gamma-Poisson model in > 80% of the transcripts. Moreover, in differential expression analyses of simulated and real datasets, BPSC performs well against edgeR, a conventional method widely used in bulk-cell RNA-sequencing data, and against scde and MAST, two recent methods specifically designed for single-cell RNA-seq data. An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq data is available under GPL-3 license at https://github.com/nghiavtr/BPSC. Contact: yudi.pawitan@ki.se or mattias.rantalainen@ki.se. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
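
    As a concrete illustration of the distributional assumption, the sketch below simulates counts from a beta-Poisson model in which each cell's Poisson rate is a Beta-distributed fraction of a maximal rate. The parameters are hypothetical and the snippet is not the authors' BPSC implementation, which is the R package linked above.

```python
# Minimal sketch of sampling from a beta-Poisson model: each cell's Poisson
# rate is lam_max scaled by a Beta(alpha, beta) draw, which reproduces the
# bimodal single-cell expression pattern.
import numpy as np

rng = np.random.default_rng(3)

def rbeta_poisson(n_cells, lam_max, alpha, beta):
    p = rng.beta(alpha, beta, size=n_cells)   # per-cell expression propensity
    return rng.poisson(lam_max * p)

counts = rbeta_poisson(n_cells=500, lam_max=120.0, alpha=0.4, beta=1.5)
# With alpha < 1 the Beta density piles up near zero, so many cells show few
# or no counts while the remaining cells form a second, expressed mode.
print("fraction of zero counts:", np.mean(counts == 0))
print("mean of non-zero counts:", counts[counts > 0].mean())
```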

  10. The variants of an LOD of a 3D building model and their influence on spatial analyses

    Biljecki, Filip; Ledoux, Hugo; Stoter, Jantien; Vosselman, George

    2016-06-01

    The level of detail (LOD) of a 3D city model indicates the model's grade and usability. However, there exist multiple valid variants of each LOD. As a consequence, the LOD concept is inconclusive as an instruction for the acquisition of 3D city models. For instance, the top surface of an LOD1 block model may be modelled at the eaves of a building or at its ridge height. Such variants, which we term geometric references, are often overlooked and are usually not documented in the metadata. Furthermore, the influence of a particular geometric reference on the performance of a spatial analysis is not known. In response to this research gap, we investigate a variety of LOD1 and LOD2 geometric references that are commonly employed, and perform numerical experiments to investigate their relative difference when used as input for different spatial analyses. We consider three use cases (estimation of the area of the building envelope, building volume, and shadows cast by buildings), and compute the deviations in a Monte Carlo simulation. The experiments, carried out with procedurally generated models, indicate that two 3D models representing the same building at the same LOD, but modelled according to different geometric references, may yield substantially different results when used in a spatial analysis. The outcome of our experiments also suggests that the geometric reference may have a bigger influence than the LOD, since an LOD1 with a specific geometric reference may yield a more accurate result than when using LOD2 models.
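
    To make the effect of a geometric reference tangible, the toy Monte Carlo below compares LOD1 block volumes extruded to the eaves versus the ridge height against the true volume of simple gable-roofed buildings. The building dimensions are randomly generated for illustration only and are unrelated to the procedurally generated models used in the paper.

```python
# Toy Monte Carlo in the spirit of the paper's experiments: for randomly sized
# gable-roofed buildings, compare the volume of an LOD1 block extruded to the
# eaves height versus the ridge height against the true building volume.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
w = rng.uniform(6, 15, n)            # footprint width  (m)
l = rng.uniform(8, 20, n)            # footprint length (m)
h_eaves = rng.uniform(3, 9, n)       # eaves height     (m)
h_roof = rng.uniform(1, 4, n)        # eaves-to-ridge   (m)

true_vol = w * l * (h_eaves + 0.5 * h_roof)   # rectangular prism + triangular gable roof
vol_at_eaves = w * l * h_eaves                # LOD1 block at eaves height
vol_at_ridge = w * l * (h_eaves + h_roof)     # LOD1 block at ridge height

for name, v in [("eaves reference", vol_at_eaves), ("ridge reference", vol_at_ridge)]:
    err = 100 * (v - true_vol) / true_vol
    print(f"{name}: mean volume error {err.mean():+.1f}%")
```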

  11. Multiscale modeling of mucosal immune responses

    2015-01-01

    Computational modeling techniques are playing increasingly important roles in advancing a systems-level mechanistic understanding of biological processes. Computer simulations guide and underpin experimental and clinical efforts. This study presents the ENteric Immune Simulator (ENISI), a multiscale modeling tool for modeling mucosal immune responses. ENISI's modeling environment can simulate in silico experiments from molecular signaling pathways to tissue-level events such as tissue lesion formation. ENISI's architecture integrates multiple modeling technologies including ABM (agent-based modeling), ODE (ordinary differential equations), SDE (stochastic differential equations), and PDE (partial differential equations). This paper focuses on the implementation and developmental challenges of ENISI. A multiscale model of mucosal immune responses during colonic inflammation, including CD4+ T cell differentiation and tissue-level cell-cell interactions, was developed to illustrate the capabilities, power and scope of ENISI MSM. Background: Computational techniques are becoming increasingly powerful and there is a growing need for modeling tools for biological systems. Biological systems are inherently multiscale, from molecules to tissues and from nano-seconds to a lifespan of several years or decades. ENISI MSM integrates multiple modeling technologies to understand immunological processes from signaling pathways within cells to lesion formation at the tissue level. This paper examines and summarizes the technical details of ENISI, from its initial version to its latest cutting-edge implementation. Implementation: An object-oriented programming approach is adopted to develop a suite of tools based on ENISI. Multiple modeling technologies are integrated to visualize tissues, cells as well as proteins; furthermore, performance matching between the scales is addressed. Conclusion: We used ENISI MSM for developing predictive multiscale models of the mucosal immune system during gut

  12. Multiscale modeling of mucosal immune responses.

    Mei, Yongguo; Abedi, Vida; Carbo, Adria; Zhang, Xiaoying; Lu, Pinyi; Philipson, Casandra; Hontecillas, Raquel; Hoops, Stefan; Liles, Nathan; Bassaganya-Riera, Josep

    2015-01-01

    Computational techniques are becoming increasingly powerful and there is a growing need for modeling tools for biological systems. Biological systems are inherently multiscale, from molecules to tissues and from nano-seconds to a lifespan of several years or decades. ENISI MSM integrates multiple modeling technologies to understand immunological processes from signaling pathways within cells to lesion formation at the tissue level. This paper examines and summarizes the technical details of ENISI, from its initial version to its latest cutting-edge implementation. An object-oriented programming approach is adopted to develop a suite of tools based on ENISI. Multiple modeling technologies are integrated to visualize tissues, cells as well as proteins; furthermore, performance matching between the scales is addressed. We used ENISI MSM for developing predictive multiscale models of the mucosal immune system during gut inflammation. Our modeling predictions dissect the mechanisms by which effector CD4+ T cell responses contribute to tissue damage in the gut mucosa following immune dysregulation. Computational modeling techniques are playing increasingly important roles in advancing a systems-level mechanistic understanding of biological processes. Computer simulations guide and underpin experimental and clinical efforts. This study presents the ENteric Immune Simulator (ENISI), a multiscale modeling tool for modeling mucosal immune responses. ENISI's modeling environment can simulate in silico experiments from molecular signaling pathways to tissue-level events such as tissue lesion formation. ENISI's architecture integrates multiple modeling technologies including ABM (agent-based modeling), ODE (ordinary differential equations), SDE (stochastic differential equations), and PDE (partial differential equations). This paper focuses on the implementation and developmental challenges of ENISI. A multiscale model of mucosal immune responses during colonic inflammation, including CD4+ T

  13. Physiological and proteomic analyses of salt stress response in the halophyte Halogeton glomeratus.

    Wang, Juncheng; Meng, Yaxiong; Li, Baochun; Ma, Xiaole; Lai, Yong; Si, Erjing; Yang, Ke; Xu, Xianliang; Shang, Xunwu; Wang, Huajun; Wang, Di

    2015-04-01

    Very little is known about the adaptation mechanism of Chenopodiaceae Halogeton glomeratus, a succulent annual halophyte, under saline conditions. In this study, we investigated the morphological and physiological adaptation mechanisms of seedlings exposed to different concentrations of NaCl treatment for 21 d. Our results revealed that H. glomeratus has a robust ability to tolerate salt; its optimal growth occurs under approximately 100 mM NaCl conditions. Salt crystals were deposited in water-storage tissue under saline conditions. We speculate that osmotic adjustment may be the primary mechanism of salt tolerance in H. glomeratus, which transports toxic ions such as sodium into specific salt-storage cells and compartmentalizes them in large vacuoles to maintain the water content of tissues and the succulence of the leaves. To investigate the molecular response mechanisms to salt stress in H. glomeratus, we conducted a comparative proteomic analysis of seedling leaves that had been exposed to 200 mM NaCl for 24 h, 72 h and 7 d. Forty-nine protein spots, exhibiting significant changes in abundance after stress, were identified using matrix-assisted laser desorption ionization tandem time-of-flight mass spectrometry (MALDI-TOF/TOF MS/MS) and similarity searches across the EST database of H. glomeratus. These stress-responsive proteins were categorized into nine functional groups, such as photosynthesis, carbohydrate and energy metabolism, and stress and defence response. © 2014 The Authors. Plant, Cell & Environment published by John Wiley & Sons Ltd.

  14. Proteomic and Physiological Analyses Reveal Putrescine Responses in Roots of Cucumber Stressed by NaCl

    Yinghui Yuan

    2016-07-01

    Full Text Available Soil salinity is a major environmental constraint that threatens agricultural productivity. Different strategies have been developed to improve crop salt tolerance, among which the effects of polyamines have been well reported. To gain a better understanding of the cucumber (Cucumis sativus L.) responses to NaCl and unravel the underlying mechanism of exogenous putrescine (Put) alleviating salt-induced damage, comparative proteomic analysis was conducted on cucumber roots treated with NaCl and/or Put for 7 days. The results showed that exogenous Put restored the root growth inhibited by NaCl. Sixty-two differentially expressed proteins implicated in various biological processes were successfully identified by MALDI-TOF/TOF MS. The four largest categories included proteins involved in defense response (24.2%), protein metabolism (24.2%), carbohydrate metabolism (19.4%) and amino acid metabolism (14.5%). Exogenous Put up-regulated most identified proteins involved in carbohydrate metabolism, implying an enhancement in energy generation. Proteins involved in defense response and protein metabolism were differently regulated by Put, which indicated the roles of Put in stress resistance and proteome rearrangement. Put also increased the abundance of proteins involved in amino acid metabolism. Meanwhile, physiological analysis showed that Put could further up-regulate the levels of free amino acids in salt-stressed roots. In addition, Put also improved endogenous polyamine contents by regulating the transcription levels of key enzymes in polyamine metabolism. Taken together, these results suggest that Put may alleviate NaCl-induced growth inhibition through degradation of misfolded/damaged proteins, activation of stress defense, and the promotion of carbohydrate metabolism to generate more energy.

  15. Single toxin dose-response models revisited

    Demidenko, Eugene, E-mail: eugened@dartmouth.edu [Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH03756 (United States); Glaholt, SP, E-mail: sglaholt@indiana.edu [Indiana University, School of Public & Environmental Affairs, Bloomington, IN47405 (United States); Department of Biological Sciences, Dartmouth College, Hanover, NH03755 (United States); Kyker-Snowman, E, E-mail: ek2002@wildcats.unh.edu [Department of Natural Resources and the Environment, University of New Hampshire, Durham, NH03824 (United States); Shaw, JR, E-mail: joeshaw@indiana.edu [Indiana University, School of Public & Environmental Affairs, Bloomington, IN47405 (United States); Chen, CY, E-mail: Celia.Y.Chen@dartmouth.edu [Department of Biological Sciences, Dartmouth College, Hanover, NH03755 (United States)

    2017-01-01

    The goal of this paper is to offer a rigorous analysis of the sigmoid-shaped single toxin dose-response relationship. The toxin efficacy function is introduced and four special points, including maximum toxin efficacy and inflection points, on the dose-response curve are defined. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, and (3) concentrations between the first and second inflection points imply a high mortality rate. A probabilistic interpretation and mathematical analysis are provided for each of the four models: Hill, logit, probit, and Weibull. Two general model extensions are introduced: (1) the multi-target hit model that accounts for the existence of several vital receptors affected by the toxin, and (2) a model with nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with the binomial dependent variable as the mortality count in each experiment, contrary to the widespread nonlinear regression treating the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO4 toxin. - Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
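
    Since the abstract advocates estimating the curve as a binomial generalized linear model on mortality counts rather than a nonlinear regression on mortality rates, a minimal sketch of that approach is shown below. The concentrations and counts are hypothetical, the logit link is only one of the link functions named in the paper, and the LC50 back-calculation is a standard by-product rather than a result from the study.

```python
# Minimal sketch of the estimation approach advocated in the abstract: a
# generalized linear model with a binomial response, where the dependent
# variable is the mortality count per experiment. Doses and counts are
# hypothetical; a probit or complementary log-log link could be swapped in.
import numpy as np
import statsmodels.api as sm

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])          # toxin concentration (assumed units)
n_exposed = np.full(conc.size, 20)                          # animals per beaker (assumed)
n_dead = np.array([1, 2, 6, 11, 17, 20])

X = sm.add_constant(np.log(conc))
endog = np.column_stack([n_dead, n_exposed - n_dead])       # (successes, failures)
fit = sm.GLM(endog, X, family=sm.families.Binomial()).fit() # logit link by default
print(fit.summary())

# LC50: concentration at which the fitted mortality probability equals 0.5.
b0, b1 = fit.params
print("LC50 estimate:", np.exp(-b0 / b1))
```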

  16. Autonomous journaling response using data model LUTS

    Jaenisch, Holger; Handley, James; Albritton, Nathaniel; Whitener, David; Burnett, Randel; Caspers, Robert; Moren, Stephen; Alexander, Thomas; Maddox, William, III; Albritton, William, Jr.

    2009-04-01

    Matching journal entries to appropriate context responses can be a daunting problem, especially when there are no salient keyword matches between the entry and the proposed library of appropriate responses. We examine a real-world application for matching interactive journaling requests for guidance to an a priori established archive of sufficient multimedia responses. We show the analysis required to enable a Data Model based algorithm to group journaling entries according to intrinsic context information and type. We demonstrate a new lookup table (LUT) classifier that exploits all available data in LUT form.

  17. Transcriptomic responses of European flounder (Platichthys flesus) to model toxicants

    Williams, Tim D. [School of Biosciences, University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom)], E-mail: t.d.williams@bham.ac.uk; Diab, Amer [Institute of Aquaculture, University of Stirling, Stirling, Scotland FK9 4LA (United Kingdom); Ortega, Fernando [School of Biosciences, University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom); Sabine, Victoria S. [Institute of Aquaculture, University of Stirling, Stirling, Scotland FK9 4LA (United Kingdom); Godfrey, Rita E.; Falciani, Francesco; Chipman, J. Kevin [School of Biosciences, University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom); George, Stephen G. [Institute of Aquaculture, University of Stirling, Stirling, Scotland FK9 4LA (United Kingdom)

    2008-11-11

    The temporal transcriptomic responses in liver of Platichthys flesus to model environmental pollutants were studied over a 16-day time span after intraperitoneal injection with cadmium chloride (50 μg/kg in saline), 3-methylcholanthrene (25 mg/kg in olive oil), Aroclor 1254 (50 mg/kg in olive oil), tert-butyl-hydroperoxide (5 mg/kg in saline), Lindane (25 mg/kg in olive oil), perfluoro-octanoic acid (100 mg/kg in olive oil) and their vehicles, olive oil (1 ml/kg) or saline (0.9%). Statistical, gene ontology and supervised analysis clearly demonstrated the progression from acute effects, biological responses to and recovery from the treatments. Key biological processes disturbed by the individual treatments were characterised by gene ontology analyses and individual toxicant-responsive genes and pathways were identified by supervised analyses. Responses to the polyaromatic and chlorinated aromatic compounds showed a degree of commonality but were distinguishable and they were clearly segregated from the responses to the pro-oxidants cadmium and the organic hydroperoxide, as well as from the peroxisomal proliferator, perfluoro-octanoic acid. This study demonstrated the utility of the microarray technique in the identification of toxicant-responsive genes and in discrimination between modes of toxicant action.

  18. Transcriptomic responses of European flounder (Platichthys flesus) to model toxicants

    Williams, Tim D.; Diab, Amer; Ortega, Fernando; Sabine, Victoria S.; Godfrey, Rita E.; Falciani, Francesco; Chipman, J. Kevin; George, Stephen G.

    2008-01-01

    The temporal transcriptomic responses in liver of Platichthys flesus to model environmental pollutants were studied over a 16-day time span after intraperitoneal injection with cadmium chloride (50 μg/kg in saline), 3-methylcholanthrene (25 mg/kg in olive oil), Aroclor 1254 (50 mg/kg in olive oil), tert-butyl-hydroperoxide (5 mg/kg in saline), Lindane (25 mg/kg in olive oil), perfluoro-octanoic acid (100 mg/kg in olive oil) and their vehicles, olive oil (1 ml/kg) or saline (0.9%). Statistical, gene ontology and supervised analysis clearly demonstrated the progression from acute effects, biological responses to and recovery from the treatments. Key biological processes disturbed by the individual treatments were characterised by gene ontology analyses and individual toxicant-responsive genes and pathways were identified by supervised analyses. Responses to the polyaromatic and chlorinated aromatic compounds showed a degree of commonality but were distinguishable and they were clearly segregated from the responses to the pro-oxidants cadmium and the organic hydroperoxide, as well as from the peroxisomal proliferator, perfluoro-octanoic acid. This study demonstrated the utility of the microarray technique in the identification of toxicant-responsive genes and in discrimination between modes of toxicant action

  19. Quantifying and Analysing Neighbourhood Characteristics Supporting Urban Land-Use Modelling

    Hansen, Henning Sten

    2009-01-01

    Land-use modelling and spatial scenarios have gained increased attention as a means to meet the challenge of reducing uncertainty in spatial planning and decision-making. Several organisations have developed software for land-use modelling. Many of the recent modelling efforts incorporate cellular automata (CA) to accomplish spatially explicit land-use change modelling. Spatial interaction between neighbour land-uses is an important component in urban cellular automata. Nevertheless, this component is calibrated through trial-and-error estimation. The aim of the current research project has been to quantify and analyse land-use neighbourhood characteristics and impart useful information for cell-based land-use modelling. The results of our research are a major step forward, because we have estimated rules for neighbourhood interaction from actually observed land-use changes on a yearly basis...
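
    To illustrate what quantifying neighbourhood characteristics from observed change can look like in practice, the sketch below counts the land-use composition around cells that changed class between two raster snapshots. The grid, class codes and change process are synthetic and stand in for real annual land-use maps.

```python
# Minimal sketch of quantifying neighbourhood characteristics from observed
# land-use change: for cells that changed to a given class between two raster
# snapshots, summarize which land uses dominate their neighbourhood.
import numpy as np

rng = np.random.default_rng(5)
CLASSES = [0, 1, 2]          # e.g. 0 = open land, 1 = residential, 2 = industry (assumed codes)
lu_t0 = rng.integers(0, 3, size=(100, 100))
lu_t1 = lu_t0.copy()
# Synthetic "observed change": some open-land cells become residential.
changed = (lu_t0 == 0) & (rng.random((100, 100)) < 0.05)
lu_t1[changed] = 1

def neighbourhood_profile(lu, mask, radius=2):
    """Average land-use composition in a square window around the masked cells."""
    counts = np.zeros(len(CLASSES))
    rows, cols = np.where(mask)
    for r, c in zip(rows, cols):
        win = lu[max(r - radius, 0):r + radius + 1, max(c - radius, 0):c + radius + 1]
        for k in CLASSES:
            counts[k] += np.sum(win == k)
    return counts / counts.sum()

new_residential = (lu_t1 == 1) & (lu_t0 != 1)
print("composition around cells that became residential:",
      neighbourhood_profile(lu_t0, new_residential))
```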

  20. Wave response analyses of floating crane structure; Crane sen no jobu kozobutsu no haro oto

    Nobukawa, H.; Takaki, M.; Kitamura, M.; Ahou, G. [Hiroshima University, Hiroshima (Japan). Faculty of Engineering; Higashimura, M. [Fukada Salvage and Marine Works Co. Ltd., Osaka (Japan)

    1996-12-31

    Identifying the dynamic load acting on a lifted load in a floating crane moving in waves is important for preparing an operation manual for the floating crane. Analyses were made of the motions in waves of a floating crane with a lifting load of 3,600 tons, with consideration given to deformation of the crane structure, and the dynamic load acting on the lifted load was discussed. If a case that considers elastic deformation of the crane structure is compared with a case that does not when calculating the hull motions of the floating crane, the difference between them is small if the ratio of wave length λ to ship length L is about 0.5. However, if λ/L is 1.0 or 1.5, the difference grows very large; therefore, the effect of deformation of the crane structure on the hull motions of the floating crane cannot be ignored in these cases. The dynamic load acting on the lifted load when deformation of the crane structure is considered is about 5% of the lifted weight in a head-sea condition in which the wave height is 2 m and λ/L is 1.5. By contrast, the estimated dynamic load when the crane structure is regarded as a rigid body is 13%, which is 2.6 times as great as the case that considers deformation of the crane structure. 3 refs., 17 figs., 1 tab.

  1. Wave response analyses of floating crane structure; Crane sen no jobu kozobutsu no haro oto

    Nobukawa, H; Takaki, M; Kitamura, M; Ahou, G [Hiroshima University, Hiroshima (Japan). Faculty of Engineering; Higashimura, M [Fukada Salvage and Marine Works Co. Ltd., Osaka (Japan)

    1997-12-31

    Identifying the dynamic load acting on a lifted load in a floating crane moving in waves is important for preparing an operation manual for the floating crane. Analyses were made of the motions in waves of a floating crane with a lifting load of 3,600 tons, with consideration given to deformation of the crane structure, and the dynamic load acting on the lifted load was discussed. If a case that considers elastic deformation of the crane structure is compared with a case that does not when calculating the hull motions of the floating crane, the difference between them is small if the ratio of wave length λ to ship length L is about 0.5. However, if λ/L is 1.0 or 1.5, the difference grows very large; therefore, the effect of deformation of the crane structure on the hull motions of the floating crane cannot be ignored in these cases. The dynamic load acting on the lifted load when deformation of the crane structure is considered is about 5% of the lifted weight in a head-sea condition in which the wave height is 2 m and λ/L is 1.5. By contrast, the estimated dynamic load when the crane structure is regarded as a rigid body is 13%, which is 2.6 times as great as the case that considers deformation of the crane structure. 3 refs., 17 figs., 1 tab.

  2. Comparative analyses of bicyclists and motorcyclists in vehicle collisions focusing on head impact responses.

    Wang, Xinghua; Peng, Yong; Yi, Shengen

    2017-11-01

    To investigate the differences in head impact responses between bicyclists and motorcyclists in vehicle collisions. A series of vehicle-bicycle and vehicle-motorcycle lateral impact simulations on four vehicle types at seven vehicle speeds (30, 35, 40, 45, 50, 55 and 60 km/h) and three two-wheeler moving speeds (5, 7.5 and 10 km/h for bicycle, 10, 12.5 and 15 km/h for motorcycle) were established based on PC-Crash software. To further explore the differences, additional impact scenarios with other initial conditions, such as impact angle (0, π/3, 2π/3 and π) and impact position (left, middle and right part of the vehicle front-end), were also simulated. Extensive comparisons were then made with regard to average head peak linear acceleration, average head impact speed, average head peak angular acceleration, average head peak angular speed and head injury severity. The results showed prominent differences in kinematics and body posture between bicyclists and motorcyclists even under the same impact conditions. The variation of bicyclist head impact responses with changing impact conditions differed markedly from that of motorcyclists. The average head peak linear acceleration, average head impact speed and average head peak angular acceleration values were higher for motorcyclists than for bicyclists in most cases, while the bicyclists received greater average head peak angular speed values. The head injuries of motorcyclists also worsened faster with increased vehicle speed. The results may provide a deeper understanding of two-wheeler safety and contribute to improving public health outcomes affected by road traffic accidents.

  3. Modular 3-D solid finite element model for fatigue analyses of a PWR coolant system

    Garrido, Oriol Costa; Cizelj, Leon; Simonovski, Igor

    2012-01-01

    Highlights: ► A 3-D model of a reactor coolant system for fatigue usage assessment. ► The performed simulations are a heat transfer and stress analyses. ► The main results are the expected ranges of fatigue loadings. - Abstract: The extension of operational licenses of second generation pressurized water reactor (PWR) nuclear power plants depends to a large extent on the analyses of fatigue usage of the reactor coolant pressure boundary. The reliable estimation of the fatigue usage requires detailed thermal and stress analyses of the affected components. Analyses, based upon the in-service transient loads should be compared to the loads analyzed at the design stage. The thermal and stress transients can be efficiently analyzed using the finite element method. This requires that a 3-D solid model of a given system is discretized with finite elements (FE). The FE mesh density is crucial for both the accuracy and the cost of the analysis. The main goal of the paper is to propose a set of computational tools which assist a user in a deployment of modular spatial FE model of main components of a typical reactor coolant system, e.g., pipes, pressure vessels and pumps. The modularity ensures that the components can be analyzed individually or in a system. Also, individual components can be meshed with different mesh densities, as required by the specifics of the particular transient studied. For optimal accuracy, all components are meshed with hexahedral elements with quadratic interpolation. The performance of the model is demonstrated with simulations performed with a complete two-loop PWR coolant system (RCS). Heat transfer analysis and stress analysis for a complete loading and unloading cycle of the RCS are performed. The main results include expected ranges of fatigue loading for the pipe lines and coolant pump components under the given conditions.

  4. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  5. Generic uncertainty model for DETRA for environmental consequence analyses. Application and sample outputs

    Suolanen, V.; Ilvonen, M.

    1998-10-01

    The computer model DETRA applies a dynamic compartment modelling approach. The compartment structure of each considered application can be tailored individually. This flexible modelling method makes it possible to consider the transfer of radionuclides in various cases: the aquatic environment and related food chains, the terrestrial environment, food chains in general and food stuffs, body burden analyses of humans, etc. In an earlier study on this subject, the user interface of the DETRA code was modernized. This new interface works in a Windows environment and the usability of the code has been improved. The objective of this study has been to further develop and diversify the user interface so that probabilistic uncertainty analyses can also be performed with DETRA. The most common probability distributions are available: uniform, truncated Gaussian and triangular. The corresponding logarithmic distributions are also available. All input data related to a considered case can be varied, although this option is seldom needed. The calculated output values can be selected as monitored values at certain simulation time points defined by the user. The results of a sensitivity run are immediately available after simulation as graphical presentations. These outcomes are distributions generated for varied parameters, density functions of monitored parameters and complementary cumulative distribution functions (CCDF). An application considered in connection with this work was the estimation of contamination of milk caused by radioactive deposition of Cs (10 kBq(Cs-137)/m²). The multi-sequence calculation model applied consisted of a pasture modelling part and a dormant season modelling part. These two sequences were linked periodically to simulate the realistic practice of caring for domestic animals in Finland. The most important parameters were varied in this exercise. The diversification of the DETRA user interface carried out here seems to provide an easily
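
    The sampling-and-CCDF workflow described above follows a general Monte Carlo pattern. The sketch below reproduces that pattern with the three distribution types named in the text and a placeholder milk-transfer function standing in for an actual DETRA compartment run, so all parameter names and values are illustrative assumptions.

```python
# Minimal sketch of the sampling-and-CCDF pattern described above: inputs are
# drawn from uniform, triangular and truncated-Gaussian distributions and
# pushed through a placeholder transfer function (standing in for a DETRA
# compartment run). All parameter names and values are illustrative.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(6)
n = 5000

transfer_coeff = rng.uniform(0.002, 0.01, n)            # feed-to-milk coefficient, d/L (assumed)
feed_intake = rng.triangular(8.0, 12.0, 16.0, n)        # kg/d (assumed)
weathering_half_life = truncnorm.rvs(-2, 2, loc=14, scale=3, size=n, random_state=rng)  # d (assumed)

deposition = 10.0                                       # kBq/m2, the Cs-137 deposition in the example

def milk_concentration(dep, tc, intake, t_half):
    """Placeholder transfer model, not the DETRA compartment chain."""
    return dep * tc * intake * t_half / 14.0

out = milk_concentration(deposition, transfer_coeff, feed_intake, weathering_half_life)

# CCDF of the monitored output: plotting ccdf against x gives exceedance probabilities.
x = np.sort(out)
ccdf = 1.0 - np.arange(1, n + 1) / n
print("median:", np.median(out), " 95th percentile:", np.quantile(out, 0.95))
```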

  6. Model tests and numerical analyses on horizontal impedance functions of inclined single piles embedded in cohesionless soil

    Goit, Chandra Shekhar; Saitoh, Masato

    2013-03-01

    Horizontal impedance functions of inclined single piles are measured experimentally for model soil-pile systems with both the effects of local soil nonlinearity and resonant characteristics. Two practical pile inclinations of 5° and 10° in addition to a vertical pile embedded in cohesionless soil and subjected to lateral harmonic pile head loadings for a wide range of frequencies are considered. Results obtained with low-to-high amplitudes of lateral loading on model soil-pile systems encased in a laminar shear box show that the local nonlinearities have a profound impact on the horizontal impedance functions of piles. Horizontal impedance functions of inclined piles are found to be smaller than those of the vertical pile, and the values decrease as the angle of pile inclination increases. Distinct values of horizontal impedance functions are obtained for the 'positive' and 'negative' cycles of harmonic loading, leading to asymmetric force-displacement relationships for the inclined piles. Validation of these experimental results is carried out through three-dimensional nonlinear finite element analyses, and the results from the numerical models are in good agreement with the experimental data. Sensitivity analyses conducted on the numerical models suggest that the consideration of local nonlinearity in the vicinity of the soil-pile interface influences the response of the soil-pile systems.

  7. Diagnostics for Linear Models With Functional Responses

    Xu, Hongquan; Shen, Qing

    2005-01-01

    Linear models where the response is a function and the predictors are vectors are useful in analyzing data from designed experiments and other situations with functional observations. Residual analysis and diagnostics are considered for such models. Studentized residuals are defined and their properties are studied. Chi-square quantile-quantile plots are proposed to check the assumption of Gaussian error process and outliers. Jackknife residuals and an associated test are proposed to det...

  8. Spectroscopic analyses of chemical adaptation processes within microalgal biomass in response to changing environments

    Vogt, Frank, E-mail: fvogt@utk.edu; White, Lauren

    2015-03-31

    Highlights: • Microalgae transform large quantities of inorganics into biomass. • Microalgae interact with their growing environment and adapt their chemical composition. • Sequestration capabilities are dependent on cells’ chemical environments. • We develop a chemometric hard-modeling approach to describe these chemical adaptation dynamics. • This methodology will enable studies of microalgal compound sequestration. - Abstract: Via photosynthesis, marine phytoplankton transforms large quantities of inorganic compounds into biomass. This has considerable environmental impacts as microalgae contribute, for instance, to counter-balancing anthropogenic releases of the greenhouse gas CO₂. On the other hand, high concentrations of nitrogen compounds in an ecosystem can lead to harmful algae blooms. In previous investigations it was found that the chemical composition of microalgal biomass is strongly dependent on the nutrient availability. Therefore, it is expected that algae’s sequestration capabilities and productivity are also determined by the cells’ chemical environments. For investigating this hypothesis, novel analytical methodologies are required which are capable of monitoring live cells exposed to chemically shifting environments, followed by chemometric modeling of their chemical adaptation dynamics. FTIR-ATR experiments have been developed for acquiring spectroscopic time series of live Dunaliella parva cultures adapting to different nutrient situations. Comparing experimental data from acclimated cultures to those exposed to a chemically shifted nutrient situation reveals insights into which analyte groups participate in modifications of microalgal biomass and on what time scales. For a chemometric description of these processes, a data model has been deduced which explains the chemical adaptation dynamics explicitly rather than empirically. First results show that this approach is feasible and derives information about the chemical biomass

  10. BWR Mark III containment analyses using a GOTHIC 8.0 3D model

    Jimenez, Gonzalo; Serrano, César; Lopez-Alonso, Emma; Molina, M del Carmen; Calvo, Daniel; García, Javier; Queral, César; Zuriaga, J. Vicente; González, Montserrat

    2015-01-01

    Highlights: • The development of a 3D GOTHIC code model of BWR Mark-III containment is described. • Suppression pool modelling based on the POOLEX STB-20 and STB-16 experimental tests. • LOCA and SBO transients simulated to verify the behaviour of the 3D GOTHIC model. • Comparison between the 3D GOTHIC model and MAAP4.07 model is conducted. • Accurate reproduction of pre-severe-accident conditions with the 3D GOTHIC model. - Abstract: The purpose of this study is to establish a detailed three-dimensional model of Cofrentes NPP BWR/6 Mark III containment building using the containment code GOTHIC 8.0. This paper presents the model construction, the phenomenology tests conducted and the transients selected for the model evaluation. In order to study the proper settings for the model in the suppression pool, two experiments conducted with the experimental installation POOLEX have been simulated, making it possible to obtain a proper behaviour of the model under different suppression pool phenomenology. In the transient analyses, a Loss of Coolant Accident (LOCA) and a Station Blackout (SBO) transient have been simulated. The main results of the simulations of those transients were qualitatively compared with the results obtained from simulations with the MAAP 4.07 Cofrentes NPP model, used by the plant for simulating severe accidents. From this comparison, a verification of the model in terms of pressurization, asymmetric discharges and high pressure release was obtained. The complete model has proved able to adequately simulate the thermal hydraulic phenomena which occur in the containment during accident sequences.

  11. Determination of the spatial response of neutron based analysers using a Monte Carlo based method

    Tickner, James

    2000-01-01

    One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection are calculated analytically, which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal.

  12. Linear time delay methods and stability analyses of the human spine. Effects of neuromuscular reflex response.

    Franklin, Timothy C; Granata, Kevin P; Madigan, Michael L; Hendricks, Scott L

    2008-08-01

    Linear stability methods were applied to a biomechanical model of the human musculoskeletal spine to investigate effects of reflex gain and reflex delay on stability. Equations of motion represented a dynamic 18 degrees-of-freedom rigid-body model with time-delayed reflexes. Optimal muscle activation levels were identified by minimizing metabolic power with the constraints of equilibrium and stability with zero reflex time delay. Muscle activation levels and associated muscle forces were used to find the delay margin, i.e., the maximum reflex delay for which the system was stable. Results demonstrated that stiffness due to antagonistic co-contraction necessary for stability declined with increased proportional reflex gain. Reflex delay limited the maximum acceptable proportional reflex gain, i.e., long reflex delay required smaller maximum reflex gain to avoid instability. As differential reflex gain increased, there was a small increase in acceptable reflex delay. However, differential reflex gain with values near intrinsic damping caused the delay margin to approach zero. Forward-dynamic simulations of the fully nonlinear time-delayed system verified the linear results. The linear methods accurately found the delay margin below which the nonlinear system was asymptotically stable. These methods may aid future investigations in the role of reflexes in musculoskeletal stability.

  13. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  14. Mesoscale Modelling of the Response of Aluminas

    Bourne, N. K.

    2006-01-01

    The response of polycrystalline alumina to shock is not well addressed. There are several operating mechanisms that are only hypothesized, which results in models that are empirical. A similar state of affairs in reactive flow modelling led to the development of mesoscale representations of the flow to illuminate operating mechanisms. In this spirit, a similar effort is undertaken for a polycrystalline alumina. Simulations are conducted to observe operating mechanisms at the micron scale. A method is then developed to extend the simulations to meet the response at the continuum level, where measurements are made. The approach is validated by comparison with continuum experiments. The method and results are presented, and some of the operating mechanisms are illuminated by the observed response.

  15. Lawyer Proliferation and the Social Responsibility Model.

    Wines, William A.

    1989-01-01

    Drawing on the model of social responsibility that colleges of business have been teaching, the boom in lawyer education is examined. It is argued that law schools are irresponsible in overselling the benefits of law school graduation, creating a surplus of lawyers whose abilities could be used as well elsewhere. (MSE)

  16. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Cai-Jun Wu

    2015-01-01

    Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of the return of spontaneous circulation (ROSC) in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed to analyze gender, the time of preparation, the amplitude spectrum area (AMSA) from the beginning of CPR and the pH at the beginning of CPR. A receiver-operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of the animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on the time of preparation, AMSA at the beginning of CPR and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773∼0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%). Conclusions: The time of preparation, AMSA and the pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
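
    A minimal, synthetic-data sketch of the ROC analysis reported above is given below: a continuous predictor (a stand-in for AMSA at the start of CPR) is scored against a binary ROSC outcome, and a cut-off is picked with the Youden index, which is an assumption since the record does not state the criterion used.

```python
# Synthetic-data sketch of an ROC analysis like the one described above.
# scikit-learn's roc_curve/roc_auc_score are used; the Youden index is assumed
# as the cut-off criterion. No values here come from the study itself.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
rosc = np.repeat([1, 0], [25, 23])             # synthetic outcomes (1 = ROSC), 48 animals
amsa = rng.normal(12.0 + 6.0 * rosc, 3.0)      # synthetic predictor, higher when ROSC occurs

auc = roc_auc_score(rosc, amsa)
fpr, tpr, thresholds = roc_curve(rosc, amsa)
best = np.argmax(tpr - fpr)                    # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.3f}")
print(f"optimal cut-off = {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```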

  17. Growth Modeling with Non-Ignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial

    Muthén, Bengt; Asparouhov, Tihomir; Hunter, Aimee; Leuchter, Andrew

    2011-01-01

    This paper uses a general latent variable framework to study a series of models for non-ignorable missingness due to dropout. Non-ignorable missing data modeling acknowledges that missingness may depend on not only covariates and observed outcomes at previous time points as with the standard missing at random (MAR) assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework using the Mplus program. A flexible new model uses an extended pattern-mixture approach where missingness is a function of latent dropout classes in combination with growth mixture modeling using latent trajectory classes. A new selection model allows not only an influence of the outcomes on missingness, but allows this influence to vary across latent trajectory classes. Recommendations are given for choosing models. The missing data models are applied to longitudinal data from STAR*D, the largest antidepressant clinical trial in the U.S. to date. Despite the importance of this trial, STAR*D growth model analyses using non-ignorable missing data techniques have not been explored until now. The STAR*D data are shown to feature distinct trajectory classes, including a low class corresponding to substantial improvement in depression, a minority class with a U-shaped curve corresponding to transient improvement, and a high class corresponding to no improvement. The analyses provide a new way to assess drug efficiency in the presence of dropout. PMID:21381817

  18. Item response theory analysis of the life orientation test-revised: age and gender differential item functioning analyses.

    Steca, Patrizia; Monzani, Dario; Greco, Andrea; Chiesi, Francesca; Primi, Caterina

    2015-06-01

    This study is aimed at testing the measurement properties of the Life Orientation Test-Revised (LOT-R) for the assessment of dispositional optimism by employing item response theory (IRT) analyses. The LOT-R was administered to a large sample of 2,862 Italian adults. First, confirmatory factor analyses demonstrated the theoretical conceptualization of the construct measured by the LOT-R as a single bipolar dimension. Subsequently, IRT analyses for polytomous, ordered response category data were applied to investigate the items' properties. The equivalence of the items across gender and age was assessed by analyzing differential item functioning. Discrimination and severity parameters indicated that all items were able to distinguish people with different levels of optimism and adequately covered the spectrum of the latent trait. Additionally, the LOT-R appears to be gender invariant and, with minor exceptions, age invariant. Results provided evidence that the LOT-R is a reliable and valid measure of dispositional optimism. © The Author(s) 2014.

  19. NUMERICAL MODELLING AS NON-DESTRUCTIVE METHOD FOR THE ANALYSES AND DIAGNOSIS OF STONE STRUCTURES: MODELS AND POSSIBILITIES

    Nataša Štambuk-Cvitanović

    1999-12-01

    Assuming the necessity of analysis, diagnosis and preservation of existing valuable stone masonry structures and ancient monuments in today's European urban cores, numerical modelling becomes an efficient tool for the investigation of structural behaviour. It should be supported by experimentally found input data and taken as part of a general combined approach, particularly non-destructive techniques on the structure/model within it. For structures or details which may require more complex analyses, three numerical models based upon the finite element technique are suggested: (1) a standard linear model; (2) a linear model with contact (interface) elements; and (3) a non-linear elasto-plastic and orthotropic model. The applicability of these models depends upon the accuracy of the approach or the type of the problem, and will be presented on some characteristic samples.

  20. Modeling the mechanical response of PBX 9501

    Ragaswamy, Partha [Los Alamos National Laboratory; Lewis, Matthew W [Los Alamos National Laboratory; Liu, Cheng [Los Alamos National Laboratory; Thompson, Darla G [Los Alamos National Laboratory

    2010-01-01

    An engineering overview of the mechanical response of Plastic-Bonded eXplosives (PBXs), specifically PBX 9501, will be provided with emphasis on observed mechanisms associated with different types of mechanical testing. Mechanical tests in the form of uniaxial tension, compression, cyclic loading, creep (compression and tension), and Hopkinson bar show strain rate and temperature dependence. A range of mechanical behavior is observed which includes small strain recoverable response in the form of viscoelasticity; change in stiffness and softening beyond peak strength due to damage in the form of microcracks, debonding, void formation and the growth of existing voids; inelastic response in the form of irrecoverable strain, as shown in cyclic tests; and viscoelastic creep combined with plastic response, as demonstrated in creep and recovery tests. The main focus of this paper is to elucidate the challenges and issues involved in modeling the mechanical behavior of PBXs for simulating thermo-mechanical responses in engineering components. Examples of validation of a constitutive material model based on a few of the observed mechanisms will be demonstrated against three-point bending, split Hopkinson pressure bar and Brazilian disk geometry.

  1. Quantitative Model for Economic Analyses of Information Security Investment in an Enterprise Information System

    Bojanc Rok

    2012-11-01

    The paper presents a mathematical model for the optimal security-technology investment evaluation and decision-making processes based on the quantitative analysis of security risks and digital asset assessments in an enterprise. The model makes use of the quantitative analysis of different security measures that counteract individual risks by identifying the information system processes in an enterprise and the potential threats. The model comprises the target security levels for all identified business processes and the probability of a security accident together with the possible loss the enterprise may suffer. The selection of security technology is based on the efficiency of selected security measures. Economic metrics are applied for the efficiency assessment and comparative analysis of different protection technologies. Unlike the existing models for evaluation of the security investment, the proposed model allows direct comparison and quantitative assessment of different security measures. The model allows deep analyses and computations providing quantitative assessments of different options for investments, which translate into recommendations facilitating the selection of the best solution and the decision-making thereof. The model was tested using empirical examples with data from a real business environment.
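
    The record does not give the model's equations; as generic background on the kind of economic metric such an evaluation might use, the sketch below computes annualized loss expectancy (ALE) and a simple return on security investment (ROSI) with hypothetical figures. It is not the paper's actual model.

```python
# Generic, illustrative security-investment metrics (not the paper's exact model):
# annualized loss expectancy (ALE) before and after a control, and the resulting
# return on security investment (ROSI). All figures are hypothetical.
def ale(asset_value, exposure_factor, annual_rate_of_occurrence):
    """ALE = single loss expectancy (asset value * exposure factor) * annual frequency."""
    return asset_value * exposure_factor * annual_rate_of_occurrence

def rosi(ale_before, ale_after, annual_control_cost):
    """Relative benefit of a control: (risk reduction - control cost) / control cost."""
    return (ale_before - ale_after - annual_control_cost) / annual_control_cost

ale_before = ale(asset_value=500_000, exposure_factor=0.4, annual_rate_of_occurrence=0.5)
ale_after  = ale(asset_value=500_000, exposure_factor=0.4, annual_rate_of_occurrence=0.1)

for cost in (10_000, 40_000, 80_000):   # candidate controls with different annual costs
    print(f"control cost {cost:>6}: ROSI = {rosi(ale_before, ale_after, cost):+.2f}")
```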

  2. Extending and Applying Spartan to Perform Temporal Sensitivity Analyses for Predicting Changes in Influential Biological Pathways in Computational Models.

    Alden, Kieran; Timmis, Jon; Andrews, Paul S; Veiga-Fernandes, Henrique; Coles, Mark

    2017-01-01

    Through integrating real-time imaging, computational modelling, and statistical analysis approaches, previous work has suggested that the induction of and response to cell adhesion factors is the key initiating pathway in early lymphoid tissue development, in contrast to the previously accepted view that the process is triggered by chemokine-mediated cell recruitment. These model-derived hypotheses were developed using spartan, an open-source sensitivity analysis toolkit designed to establish and understand the relationship between a computational model and the biological system that the model captures. Here, we extend the functionality available in spartan to permit the production of statistical analyses that contrast the behavior exhibited by a computational model at various simulated time-points, enabling a temporal analysis that could suggest whether the influence of biological mechanisms changes over time. We exemplify this extended functionality by using the computational model of lymphoid tissue development as a time-lapse tool. By generating results at twelve-hour intervals, we show how the extensions to spartan have been used to suggest that lymphoid tissue development could be biphasic, and predict the time-point when a switch in the influence of biological mechanisms might occur.

  3. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
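
    A minimal sketch of the conversion-model idea on synthetic data follows: a linear regression (here on log-transformed concentrations, an assumption) relates ΣPCB from a reduced congener set to ΣPCB from the full 209-congener analysis on paired samples, and the fit can then convert new reduced-set measurements. Data and coefficients are illustrative, not the study's measurements.

```python
# Minimal sketch of the conversion-model approach described above, on synthetic data:
# regress full-congener SumPCB on reduced-set SumPCB for paired samples, then apply
# the fitted model to convert new reduced-set measurements. Log transformation is an
# assumption made here for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 60
sum_209 = rng.lognormal(mean=3.0, sigma=0.8, size=n)        # "true" SumPCB (all 209 congeners)
sum_119 = 0.93 * sum_209 * rng.normal(1.0, 0.05, size=n)    # reduced set capturing ~93% of total

# Fit the conversion on log-transformed concentrations
slope, intercept = np.polyfit(np.log(sum_119), np.log(sum_209), deg=1)

def convert(sum_reduced):
    """Estimate full-congener SumPCB from a reduced-congener SumPCB."""
    return np.exp(intercept + slope * np.log(sum_reduced))

pred = convert(sum_119)
rpd = 100 * np.abs(pred - sum_209) / ((pred + sum_209) / 2)  # relative percent difference
print(f"slope={slope:.3f}, intercept={intercept:.3f}, mean RPD={rpd.mean():.1f}%")
```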

  4. NGC1300 dynamics - II. The response models

    Kalapotharakos, C.; Patsis, P. A.; Grosbøl, P.

    2010-10-01

    We study the stellar response in a spectrum of potentials describing the barred spiral galaxy NGC1300. These potentials have been presented in a previous paper and correspond to three different assumptions as regards the geometry of the galaxy. For each potential we consider a wide range of Ωp pattern speed values. Our goal is to discover the geometries and the Ωp supporting specific morphological features of NGC1300. For this purpose we use the method of response models. In order to compare the images of NGC1300 with the density maps of our models, we define a new index which is a generalization of the Hausdorff distance. This index helps us to find out quantitatively which cases reproduce specific features of NGC1300 in an objective way. Furthermore, we construct alternative models following a Schwarzschild-type technique. By this method we vary the weights of the various energy levels, and thus the orbital contribution of each energy, in order to minimize the differences between the response density and that deduced from the surface density of the galaxy, under certain assumptions. We find that the models corresponding to Ωp ~ 16 and 22 km s⁻¹ kpc⁻¹ are able to reproduce efficiently certain morphological features of NGC1300, with each one having its advantages and drawbacks. Based on observations collected at the European Southern Observatory, Chile: programme ESO 69.A-0021. E-mail: ckalapot@phys.uoa.gr (CK); patsis@academyofathens.gr (PAP); pgrosbol@eso.org (PG)
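
    For background, the ordinary symmetric Hausdorff distance that the paper's comparison index generalizes can be computed as below; the point sets are synthetic stand-ins for thresholded density-map and image pixels, not the actual NGC1300 data.

```python
# Background sketch: the ordinary (symmetric) Hausdorff distance between two point
# sets, i.e. the quantity that the paper's comparison index generalizes. The point
# sets here are synthetic stand-ins for pixels above a surface-density threshold.
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (N,2) and b (M,2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances (N, M)
    return max(d.min(axis=1).max(),   # directed distance a -> b
               d.min(axis=0).max())   # directed distance b -> a

rng = np.random.default_rng(3)
model_pixels = rng.uniform(0, 100, size=(400, 2))    # stand-in for a response-model density map
image_pixels = rng.uniform(0, 100, size=(350, 2))    # stand-in for the galaxy image
print(f"Hausdorff distance = {hausdorff(model_pixels, image_pixels):.2f}")
```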

  5. Population-expression models of immune response

    Stromberg, Sean P; Antia, Rustom; Nemenman, Ilya

    2013-01-01

    The immune response to a pathogen has two basic features. The first is the expansion of a few pathogen-specific cells to form a population large enough to control the pathogen. The second is the process of differentiation of cells from an initial naive phenotype to an effector phenotype which controls the pathogen, and subsequently to a memory phenotype that is maintained and responsible for long-term protection. The expansion and the differentiation have been considered largely independently. Changes in cell populations are typically described using ecologically based ordinary differential equation models. In contrast, differentiation of single cells is studied within systems biology and is frequently modeled by considering changes in gene and protein expression in individual cells. Recent advances in experimental systems biology make available for the first time data to allow the coupling of population and high dimensional expression data of immune cells during infections. Here we describe and develop population-expression models which integrate these two processes into systems biology on the multicellular level. When translated into mathematical equations, these models result in non-conservative, non-local advection-diffusion equations. We describe situations where the population-expression approach can make correct inference from data while previous modeling approaches based on common simplifying assumptions would fail. We also explore how model reduction techniques can be used to build population-expression models, minimizing the complexity of the model while keeping the essential features of the system. While we consider problems in immunology in this paper, we expect population-expression models to be more broadly applicable. (paper)

  6. A chip-level modeling approach for rail span collapse and survivability analyses

    Marvis, D.G.; Alexander, D.R.; Dinger, G.L.

    1989-01-01

    A general semiautomated analysis technique has been developed for analyzing rail span collapse and survivability of VLSI microcircuits in high ionizing dose rate radiation environments. Hierarchical macrocell modeling permits analyses at the chip level and interactive graphical postprocessing provides a rapid visualization of voltage, current and power distributions over an entire VLSIC. The technique is demonstrated for a 16k CMOS/SOI SRAM and a CMOS/SOS 8-bit multiplier. The authors also present an efficient method to treat memory arrays as well as a three-dimensional integration technique to compute sapphire photoconduction from the design layout.

  7. Analyses and testing of model prestressed concrete reactor vessels with built-in planes of weakness

    Dawson, P.; Paton, A.A.; Fleischer, C.C.

    1990-01-01

    This paper describes the design, construction, analyses and testing of two small scale, single cavity prestressed concrete reactor vessel models, one without planes of weakness and one with planes of weakness immediately behind the cavity liner. This work was carried out to extend a previous study which had suggested the likely feasibility of constructing regions of prestressed concrete reactor vessels and biological shields, which become activated, using easily removable blocks, separated by a suitable membrane. The paper describes the results obtained and concludes that the planes of weakness concept could offer a means of facilitating the dismantling of activated regions of prestressed concrete reactor vessels, biological shields and similar types of structure. (author)

  8. Modelling cladding response to changing conditions

    Tulkki, Ville; Ikonen, Timo [VTT Technical Research Centre of Finland ltd (Finland)

    2016-11-15

    The cladding of the nuclear fuel is subjected to varying conditions during fuel reactor life. Load drops and reversals can be modelled by taking cladding viscoelastic behaviour into account. Viscoelastic contribution to the deformation of metals is usually considered small enough to be ignored, and in many applications it merely contributes to the primary part of the creep curve. With nuclear fuel cladding the high temperature and irradiation as well as the need to analyse the variable load all emphasise the need to also inspect the viscoelasticity of the cladding.

  9. Integrated tokamak modelling with the fast-ion Fokker–Planck solver adapted for transient analyses

    Toma, M; Hamamatsu, K; Hayashi, N; Honda, M; Ide, S

    2015-01-01

    Integrated tokamak modelling that enables the simulation of an entire discharge period is indispensable for designing advanced tokamak plasmas. For this purpose, we extend the integrated code TOPICS to make it more suitable for transient analyses in the fast-ion part. The fast-ion Fokker–Planck solver is integrated into TOPICS at the same level as the bulk transport solver so that the time evolutions of the fast ions and the bulk plasma are consistent with each other as well as with the equilibrium magnetic field. The fast-ion solver simultaneously handles neutral beam-injected ions and alpha particles. Parallelisation of the fast-ion solver, in addition to its computational lightness owing to a dimensional reduction in the phase space, enables transient analyses for long periods on the order of tens of seconds. The fast-ion Fokker–Planck calculation is compared with, and confirmed to be in good agreement with, an orbit-following Monte Carlo calculation. The integrated code is applied to ramp-up simulations for JT-60SA and ITER to confirm its capability and effectiveness in transient analyses. In the integrated simulations, the coupled evolution of the fast ions, plasma profiles, and equilibrium magnetic fields is presented. In addition, the electric acceleration effect on fast ions is shown and discussed. (paper)

  10. Analysing the Effects of Flood-Resilience Technologies in Urban Areas Using a Synthetic Model Approach

    Reinhard Schinke

    2016-11-01

    Flood protection systems with their spatial effects play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and often exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches to analyse and to integrate such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodical steps in order to calculate flood damage to buildings considering the effects of building-related FReT and to analyse the area-related reduction of flood risks using geo-information systems (GIS) with high spatial resolution. It includes a civil-engineering-based investigation of characteristic building constructions, including a selection and combination of appropriate FReT, as a basis for the derivation of synthetic depth-damage functions. Depending on the real exposure and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is shown in a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide useful information for advising individuals at risk, supporting the selection and implementation of FReT.

  11. Modeling of Dynamic Responses in Building Insulation

    Anna Antonyová

    2015-10-01

    In this research a measurement system was developed for monitoring humidity and temperature in the cavity between the wall and the insulating material in the building envelope. This new technology does not disturb the insulating material during testing. The measurement system can also be applied to insulation fixed ten or twenty years earlier and sufficiently reveals the quality of the insulation. A mathematical model is proposed to characterize the dynamic responses in the cavity between the wall and the building insulation as influenced by weather conditions. These dynamic responses are manifested as a delay of both humidity and temperature changes in the cavity when compared with the changes in the ambient surroundings of the building. The process is then modeled through numerical methods and statistical analysis of the experimental data obtained using the new system of measurement.
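
    The record does not give the model's form; one simple way to represent the delayed cavity response it describes is a first-order lag toward the ambient signal, as sketched below with a hypothetical time constant. This is a simplifying assumption, not the authors' actual model.

```python
# Illustrative first-order-lag sketch of the delayed cavity response described above:
# the cavity temperature relaxes toward the ambient temperature with time constant tau.
# The time constant and the synthetic ambient signal are assumptions for illustration.
import numpy as np

def cavity_response(ambient, tau_hours, dt_hours=1.0, initial=None):
    """Integrate dT_cavity/dt = (T_ambient - T_cavity) / tau with explicit Euler."""
    cavity = np.empty_like(ambient, dtype=float)
    cavity[0] = ambient[0] if initial is None else initial
    for k in range(1, len(ambient)):
        cavity[k] = cavity[k - 1] + dt_hours / tau_hours * (ambient[k - 1] - cavity[k - 1])
    return cavity

hours = np.arange(0, 72, 1.0)
ambient_temp = 10 + 8 * np.sin(2 * np.pi * hours / 24)   # synthetic daily temperature cycle
cavity_temp = cavity_response(ambient_temp, tau_hours=6.0)

last_day = slice(-24, None)                               # compare peaks once transients decay
lag = (hours[last_day][np.argmax(cavity_temp[last_day])]
       - hours[last_day][np.argmax(ambient_temp[last_day])])
print(f"cavity peak lags the ambient peak by about {lag:.0f} h")
```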

  12. Modeling response variation for radiometric calorimeters

    Mayer, R.L. II.

    1986-01-01

    Radiometric calorimeters are widely used in the DOE complex for accountability measurements of plutonium and tritium. Proper characterization of response variation for these instruments is, therefore, vital for accurate assessment of measurement control as well as for propagation of error calculations. This is not difficult for instruments used to measure items within a narrow range of power values; however, when a single instrument is used to measure items over a wide range of power values, improper estimates of uncertainty can result since traditional error models for radiometric calorimeters assume that uncertainty is not a function of sample power. This paper describes methods which can be used to accurately estimate random response variation for calorimeters used to measure items over a wide range of sample powers. The model is applicable to the two most common modes of calorimeter operation: heater replacement and servo control. 5 refs., 4 figs., 1 tab

  13. Predicting Footbridge Response using Stochastic Load Models

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing so...... decisions need to be made in terms of statistical distributions of walking parameters and in terms of the parameters describing the statistical distributions. The paper explores how sensitive computations of bridge response are to some of the decisions to be made in this respect. This is useful...
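
    A minimal Monte Carlo sketch of the stochastic approach follows: walking parameters are drawn from assumed distributions and the steady-state acceleration of a single (hypothetical) bridge mode is evaluated for each pedestrian. None of the distributions or bridge properties are taken from the paper.

```python
# Monte Carlo sketch of the stochastic-load idea discussed above: step frequency,
# pedestrian mass and dynamic load factor are sampled, and the steady-state modal
# acceleration of one bridge mode is computed per pedestrian. All numbers are
# hypothetical assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Assumed walking-parameter distributions
step_freq = rng.normal(1.8, 0.1, n)            # Hz, pacing frequency
mass      = rng.normal(75.0, 15.0, n)          # kg, pedestrian mass
dlf       = rng.normal(0.40, 0.05, n)          # first-harmonic dynamic load factor

# Hypothetical footbridge mode
f_n, zeta, modal_mass = 2.0, 0.005, 40_000.0   # natural frequency (Hz), damping ratio, kg

r = step_freq / f_n                            # frequency ratio
force = dlf * mass * 9.81                      # N, harmonic load amplitude
accel = (force / modal_mass) * r**2 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

print(f"mean peak acceleration   : {accel.mean():.3f} m/s^2")
print(f"95th percentile response : {np.quantile(accel, 0.95):.3f} m/s^2")
```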

  14. Evaluation properties of the French version of the OUT-PATSAT35 satisfaction with care questionnaire according to classical and item response theory analyses.

    Panouillères, M; Anota, A; Nguyen, T V; Brédart, A; Bosset, J F; Monnier, A; Mercier, M; Hardouin, J B

    2014-09-01

    The present study investigates the measurement properties of the French version of the OUT-PATSAT35 questionnaire, which evaluates outpatients' satisfaction with care in oncology, using classical test theory (CTT) and item response theory (IRT). This cross-sectional multicenter study includes 692 patients who completed the questionnaire at the end of their ambulatory treatment. CTT analyses tested the main psychometric properties (convergent and divergent validity, and internal consistency). IRT analyses were conducted separately for each OUT-PATSAT35 domain (the doctors, the nurses or the radiation therapists and the services/organization) by models from the Rasch family. We examined the fit of the data to the model expectations and tested whether the model assumptions of unidimensionality, monotonicity and local independence were respected. A total of 605 (87.4%) respondents, with a mean age of 64 years (range 29-88), were analyzed. Internal consistency for all scales separately and for the three main domains was good (Cronbach's α 0.74-0.98). IRT analyses were performed with the partial credit model. No disordered thresholds of polytomous items were found. Each domain showed high reliability but fitted poorly to the Rasch models. Three items in particular, the item about "promptness" in the doctors' domain and the items about "accessibility" and "environment" in the services/organization domain, showed the greatest lack of fit. An acceptable fit to the Rasch model can be obtained by dropping these items. Most of the local dependence concerned items about "information provided" in each domain. A major deviation from unidimensionality was found in the nurses' domain. CTT showed good psychometric properties of the OUT-PATSAT35. However, the Rasch analysis revealed some misfitting and redundant items. Taking the above problems into consideration, it could be interesting to refine the questionnaire in a future study.
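
    Since the IRT analyses use the partial credit model, the small function below evaluates PCM category probabilities for one polytomous item as a function of the latent trait; the threshold values are illustrative, not OUT-PATSAT35 estimates.

```python
# Small illustration of the partial credit model (PCM) used in the IRT analyses:
# category probabilities for one polytomous item given the latent trait theta.
# The thresholds below are illustrative, not parameters estimated in the study.
import numpy as np

def pcm_probabilities(theta, thresholds):
    """Return P(X = 0..K | theta) for a PCM item with K ordered step thresholds."""
    theta = np.atleast_1d(theta).astype(float)
    steps = theta[:, None] - np.asarray(thresholds)[None, :]        # (n_theta, K)
    # Category 0 corresponds to an empty sum of steps, hence the column of zeros.
    numerators = np.exp(np.concatenate(
        [np.zeros((len(theta), 1)), np.cumsum(steps, axis=1)], axis=1))
    return numerators / numerators.sum(axis=1, keepdims=True)       # (n_theta, K+1)

thresholds = [-1.5, -0.2, 1.0, 2.1]        # illustrative delta_j for a 5-category item
for theta in (-2.0, 0.0, 2.0):
    probs = pcm_probabilities(theta, thresholds)[0]
    print(f"theta={theta:+.1f}: " + "  ".join(f"P{k}={p:.2f}" for k, p in enumerate(probs)))
```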

  15. Modelling the propagation of social response during a disease outbreak.

    Fast, Shannon M; González, Marta C; Wilson, James M; Markuzon, Natasha

    2015-03-06

    Epidemic trajectories and associated social responses vary widely between populations, with severe reactions sometimes observed. When confronted with fatal or novel pathogens, people exhibit a variety of behaviours from anxiety to hoarding of medical supplies, overwhelming medical infrastructure and rioting. We developed a coupled network approach to understanding and predicting social response. We couple the disease spread and panic spread processes and model them through local interactions between agents. The social contagion process depends on the prevalence of the disease, its perceived risk and a global media signal. We verify the model by analysing the spread of disease and social response during the 2009 H1N1 outbreak in Mexico City and 2003 severe acute respiratory syndrome and 2009 H1N1 outbreaks in Hong Kong, accurately predicting population-level behaviour. This kind of empirically validated model is critical to exploring strategies for public health intervention, increasing our ability to anticipate the response to infectious disease outbreaks. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  16. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate, Pitavastatin, is used as a probe on freshly isolated animal and human hepatocytes. Candidate pharmacokinetic non-linear compartmental models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available. Copyright © 2013. Published by Elsevier Ireland Ltd.

  17. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Yuan Liu

    2016-10-01

    The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) from a dynamic systems perspective. To depict children's growth patterns, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status would have a higher probability of transitioning into the higher ability group.

  18. Cutting Edge PBPK Models and Analyses: Providing the Basis for Future Modeling Efforts and Bridges to Emerging Toxicology Paradigms

    Jane C. Caldwell

    2012-01-01

    Physiologically based pharmacokinetic (PBPK) models are used for predictions of internal or target dose from environmental and pharmacologic chemical exposures. Their use in human risk assessment is dependent on the nature of the databases (animal or human) used to develop and test them, and includes extrapolations across species and experimental paradigms, and determination of variability of response within human populations. Integration of state-of-the-science PBPK modeling with emerging computational toxicology models is critical for extrapolation between in vitro exposures, in vivo physiologic exposure, whole organism responses, and long-term health outcomes. This special issue contains papers that can provide the basis for future modeling efforts and provide bridges to emerging toxicology paradigms. In this overview paper, we present an overview of the field and an introduction to these papers that includes discussions of model development, best practices, risk-assessment applications of PBPK models, and limitations and bridges of modeling approaches for future applications. Specifically, issues addressed include: (a) increased understanding of human variability in pharmacokinetics and pharmacodynamics in the population, (b) exploration of mode of action (MOA) hypotheses, (c) application of biological modeling in the risk assessment of individual chemicals and chemical mixtures, and (d) identification and discussion of uncertainties in the modeling process.

  19. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper

    2009-01-01

    an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D²) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D² is the percentage that the between-trial variability constitutes of the sum of the between... and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D² ≥ I², for all meta-analyses. CONCLUSION: We conclude that D² seems a better alternative than I² to consider model variation in any random...
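
    A sketch of the diversity measure as described above (the relative variance reduction when moving from a random-effects to a fixed-effect pooled estimate) is given below on synthetic trial data; the DerSimonian-Laird estimator for the between-trial variance and the use of 1/(1 − D²) as the information-size adjusting factor are assumptions made here for illustration.

```python
# Sketch of the diversity measure D^2 described above: the relative reduction in the
# variance of the pooled estimate when a random-effects meta-analysis is changed to a
# fixed-effect one. DerSimonian-Laird tau^2 and the 1/(1 - D^2) adjusting factor are
# assumptions for illustration; the data are synthetic effect estimates and variances.
import numpy as np

y = np.array([-0.60, -0.10, -0.75, 0.20, -0.35])   # trial effect estimates (e.g., log RR)
v = np.array([0.04, 0.02, 0.06, 0.03, 0.05])       # within-trial variances
k = len(y)

w = 1.0 / v                                         # fixed-effect (inverse-variance) weights
pooled_fixed = np.sum(w * y) / np.sum(w)
var_fixed = 1.0 / np.sum(w)

Q = np.sum(w * (y - pooled_fixed) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DerSimonian-Laird

w_re = 1.0 / (v + tau2)                             # random-effects weights
var_random = 1.0 / np.sum(w_re)

D2 = 1.0 - var_fixed / var_random                   # diversity
I2 = max(0.0, (Q - (k - 1)) / Q)                    # inconsistency
print(f"D^2 = {D2:.2f}, I^2 = {I2:.2f} (D^2 >= I^2)")
print(f"assumed information-size adjusting factor 1/(1 - D^2) = {1.0 / (1.0 - D2):.2f}")
```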

  20. What is needed to eliminate new pediatric HIV infections: The contribution of model-based analyses

    Doherty, Katie; Ciaranello, Andrea

    2013-01-01

    Purpose of Review Computer simulation models can identify key clinical, operational, and economic interventions that will be needed to achieve the elimination of new pediatric HIV infections. In this review, we summarize recent findings from model-based analyses of strategies for prevention of mother-to-child HIV transmission (MTCT). Recent Findings In order to achieve elimination of MTCT (eMTCT), model-based studies suggest that scale-up of services will be needed in several domains: uptake of services and retention in care (the PMTCT “cascade”), interventions to prevent HIV infections in women and reduce unintended pregnancies (the “four-pronged approach”), efforts to support medication adherence through long periods of pregnancy and breastfeeding, and strategies to make breastfeeding safer and/or shorter. Models also project the economic resources that will be needed to achieve these goals in the most efficient ways to allocate limited resources for eMTCT. Results suggest that currently recommended PMTCT regimens (WHO Option A, Option B, and Option B+) will be cost-effective in most settings. Summary Model-based results can guide future implementation science, by highlighting areas in which additional data are needed to make informed decisions and by outlining critical interventions that will be necessary in order to eliminate new pediatric HIV infections. PMID:23743788

  1. Control designs and stability analyses for Helly’s car-following model

    Rosas-Jaimes, Oscar A.; Quezada-Téllez, Luis A.; Fernández-Anaya, Guillermo

    Car-following is an approach to understanding traffic behavior restricted to pairs of cars, identifying a “leader” moving in front of a “follower”, which is assumed not to overtake the leader. From the first attempts to formulate the way in which individual cars are affected on a road through these models, linear differential equations were suggested by authors like Pipes or Helly. These expressions represent such phenomena quite well, even though they have been superseded by other more recent and accurate models. However, in this paper we show that those early formulations have some properties that are not fully reported, presenting the different ways in which they can be expressed and analyzing their stability behaviors. Pipes’ model can be extended to what is known as Helly’s model, which is viewed as a more precise model for emulating this microscopic approach to traffic. Once some convenient forms of expression have been established, two control designs are suggested herein. These regulation schemes are also complemented with their respective stability analyses, which reflect some important properties with implications for real driving. It is significant that these linear designs can be very easy to understand and to implement, including those important features related to safety and comfort.
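
    For reference, the sketch below is a discrete-time simulation of Helly's linear car-following law, in which the follower's acceleration responds, after a reaction delay, to the velocity difference and to the deviation from a speed-dependent desired spacing; the parameter values are illustrative only, not those analysed in the paper.

```python
# Discrete-time sketch of Helly's linear car-following law with a reaction delay and a
# speed-dependent desired spacing. All parameter values are illustrative assumptions.
import numpy as np

dt, T = 0.1, 60.0                     # s, time step and horizon
steps = int(T / dt)
tau = 1.0                             # s, reaction delay
lag = int(tau / dt)
C1, C2 = 0.5, 0.1                     # 1/s and 1/s^2, sensitivity coefficients
alpha, beta = 5.0, 1.5                # m and s, desired-spacing law D = alpha + beta * v_f

t = np.arange(steps) * dt
v_lead = np.where(t < 20, 20.0, 15.0)                                # leader brakes at t = 20 s
x_lead = np.concatenate([[100.0], 100.0 + np.cumsum(v_lead[:-1] * dt)])

x_f = np.zeros(steps)
v_f = np.zeros(steps)
x_f[0], v_f[0] = 60.0, 20.0           # follower starts 40 m behind at the same speed
for k in range(steps - 1):
    d = max(0, k - lag)               # delayed index (clamped during the first tau seconds)
    spacing = x_lead[d] - x_f[d]
    desired = alpha + beta * v_f[d]
    a = C1 * (v_lead[d] - v_f[d]) + C2 * (spacing - desired)
    v_f[k + 1] = v_f[k] + a * dt
    x_f[k + 1] = x_f[k] + v_f[k] * dt

print(f"final follower speed : {v_f[-1]:.2f} m/s (leader: {v_lead[-1]:.2f} m/s)")
print(f"final spacing        : {x_lead[-1] - x_f[-1]:.2f} m "
      f"(desired: {alpha + beta * v_f[-1]:.2f} m)")
```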

  2. Development of steady-state model for MSPT and detailed analyses of receiver

    Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi

    2016-05-01

    The molten salt parabolic trough system (MSPT) uses molten salt as the heat transfer fluid (HTF) instead of synthetic oil. The MSPT demonstration plant was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model was designed to calculate the concentrated solar power and heat loss using ray tracing of incident solar light and finite element modeling of thermal energy transferred into the medium. This report describes the verification of the model using test data from the demonstration plant, detailed analyses of the relation between flow rate and temperature difference on the metal tube of the receiver, and the effect of defocus angle on the concentrated power rate, for solar collector assembly (SCA) development. The model is accurate to within 2.0% systematic error and 4.2% random error. The relationships between flow rate and temperature difference on the metal tube and the effect of defocus angle on the concentrated power rate are shown.

  3. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery that maintains the logic circuits controlling the opening and/or closure of valves in the RCIC system, in order to control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. It is generally considered in almost all existing station black-out (SBO) accident analyses that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where it is assumed that the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike the traditional SBO simulations where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and the pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  4. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    Tappen, J. J.; Wasiolek, M. A.; Wu, D. W.; Schmitt, J. F.; Smith, A. J.

    2002-01-01

    The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e. that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations in which the NRC adopted standard will permit the continued improvement and refinement of biosphere modeling and analyses activities in support of assessment activities

  6. A diagnostic tree model for polytomous responses with multiple strategies.

    Ma, Wenchao

    2018-04-23

    Constructed-response items have been shown to be appropriate for cognitively diagnostic assessments because students' problem-solving procedures can be observed, providing direct evidence for making inferences about their proficiency. However, multiple strategies used by students make item scoring and psychometric analyses challenging. This study introduces the so-called two-digit scoring scheme into diagnostic assessments to record both students' partial credits and their strategies. This study also proposes a diagnostic tree model (DTM) by integrating the cognitive diagnosis models with the tree model to analyse the items scored using the two-digit rubrics. Both convergent and divergent tree structures are considered to accommodate various scoring rules. The MMLE/EM algorithm is used for item parameter estimation of the DTM, and has been shown to provide good parameter recovery under varied conditions in a simulation study. A set of data from TIMSS 2007 mathematics assessment is analysed to illustrate the use of the two-digit scoring scheme and the DTM. © 2018 The British Psychological Society.

  7. Modeling listeners' emotional response to music.

    Eerola, Tuomas

    2012-10-01

    An overview of the computational prediction of emotional responses to music is presented. Communication of emotions by music has received a great deal of attention during the last years and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by the music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlation design. The construction of the computational model is divided into four separate phases, with a different focus for evaluation. These phases include the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models are able to explain the dominant part of the listeners' self-reports of the emotions expressed by music and the models show potential to generalize over different genres within Western music. Possible applications of the computational models of emotions are discussed. Copyright © 2012 Cognitive Science Society, Inc.

  8. Challenges of Analysing Gene-Environment Interactions in Mouse Models of Schizophrenia

    Peter L. Oliver

    2011-01-01

    Full Text Available The modelling of neuropsychiatric disease using the mouse has provided a wealth of information regarding the relationship between specific genetic lesions and behavioural endophenotypes. However, it is becoming increasingly apparent that synergy between genetic and nongenetic factors is a key feature of these disorders that must also be taken into account. With the inherent limitations of retrospective human studies, experiments in mice have begun to tackle this complex association, combining well-established behavioural paradigms and quantitative neuropathology with a range of environmental insults. The conclusions from this work have been varied, due in part to a lack of standardised methodology, although most have illustrated that phenotypes related to disorders such as schizophrenia are consistently modified. Far fewer studies, however, have attempted to generate a “two-hit” model, whereby the consequences of a pathogenic mutation are analysed in combination with environmental manipulation such as prenatal stress. This significant, yet relatively new, approach is beginning to produce valuable new models of neuropsychiatric disease. Focussing on prenatal and perinatal stress models of schizophrenia, this review discusses the current progress in this field, and highlights important issues regarding the interpretation and comparative analysis of such complex behavioural data.

  9. Correlation of Klebsiella pneumoniae comparative genetic analyses with virulence profiles in a murine respiratory disease model.

    Ramy A Fodah

    Full Text Available Klebsiella pneumoniae is a bacterial pathogen of worldwide importance and a significant contributor to multiple disease presentations associated with both nosocomial and community acquired disease. ATCC 43816 is a well-studied K. pneumoniae strain which is capable of causing an acute respiratory disease in surrogate animal models. In this study, we performed sequencing of the ATCC 43816 genome to support future efforts characterizing genetic elements required for disease. Furthermore, we performed comparative genetic analyses to the previously sequenced genomes from NTUH-K2044 and MGH 78578 to gain an understanding of the conservation of known virulence determinants amongst the three strains. We found that ATCC 43816 and NTUH-K2044 both possess the known virulence determinant for yersiniabactin, as well as a Type 4 secretion system (T4SS), CRISPR system, and an acetonin catabolism locus, all absent from MGH 78578. While both NTUH-K2044 and MGH 78578 are clinical isolates, little is known about the disease potential of these strains in cell culture and animal models. Thus, we also performed functional analyses in the murine macrophage cell lines RAW264.7 and J774A.1 and found that MGH 78578 (K52 serotype) was internalized at higher levels than ATCC 43816 (K2) and NTUH-K2044 (K1), consistent with previous characterization of the antiphagocytic properties of K1 and K2 serotype capsules. We also examined the three K. pneumoniae strains in a novel BALB/c respiratory disease model and found that ATCC 43816 and NTUH-K2044 are highly virulent (LD50<100 CFU) while MGH 78578 is relatively avirulent.

  10. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 3

    Gwaltney, R.C.; Bolt, S.E.; Corum, J.M.; Bryson, J.W.

    1975-06-01

    The third in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: the experimental data provide design information directly applicable to nozzles in cylindrical vessels; and the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 3 had a 10 in. OD and the nozzle had a 1.29 in. OD, giving a d0/D0 ratio of 0.129. The OD/thickness ratios for the cylinder and the nozzle were 50 and 7.68, respectively. Thirteen separate loading cases were analyzed. In each, one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for all the loadings were obtained using 158 three-gage strain rosettes located on the inner and outer surfaces. The loading cases were also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  11. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 4

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-06-01

    The last in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models in the series are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: (1) the experimental data provide design information directly applicable to nozzles in cylindrical vessels, and (2) the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 4 had an outside diameter of 10 in., and the nozzle had an outside diameter of 1.29 in., giving a d0/D0 ratio of 0.129. The OD/thickness ratios were 50 and 20.2 for the cylinder and nozzle, respectively. Thirteen separate loading cases were analyzed. For each loading condition one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for each of the 13 loadings were obtained using 157 three-gage strain rosettes located on the inner and outer surfaces. Each of the 13 loading cases was also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  12. Uncertainty and sensitivity analyses for age-dependent unavailability model integrating test and maintenance

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Highlights: ► Application of an analytical unavailability model integrating T and M, ageing, and test strategy. ► Ageing data uncertainty propagation to the system level assessed via Monte Carlo simulation. ► Uncertainty impact grows with the extension of the surveillance test interval. ► Calculated system unavailability dependence on two different sensitivity-study ageing databases. ► System unavailability sensitivity insights regarding specific groups of BEs as test intervals extend. - Abstract: Interest in operational lifetime extension of existing nuclear power plants is growing. Consequently, plant life management programs, considering safety component ageing, are being developed and employed. Ageing represents a gradual degradation of the physical properties and functional performance of components, implying their reduced availability. Analyses conducted in support of nuclear power plant lifetime extension are based upon component ageing management programs. On the other hand, the large uncertainties of the ageing parameters, as well as the uncertainties associated with most reliability data collections, are widely acknowledged. This paper addresses uncertainty and sensitivity analyses conducted using a previously developed age-dependent unavailability model, integrating the effects of test and maintenance activities, for a selected stand-by safety system in a nuclear power plant. The most important problem is the lack of data concerning the effects of ageing, as well as the relatively high uncertainty associated with these data, which would be needed for more detailed modelling of ageing. A standard Monte Carlo simulation was coded for the purpose of this paper and used to assess the propagation of component ageing parameter uncertainty to the system level. The results from the uncertainty analysis indicate the extent to which the uncertainty of the selected
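
    As a toy illustration of the Monte Carlo propagation described above, the sketch below samples an uncertain linear ageing rate, evaluates a simplified test-interval-averaged standby unavailability U(T) ≈ ρ + λ0·T/2 + a·T²/6 (one common textbook approximation for a linearly increasing standby failure rate, used here only for illustration), and shows how the spread grows as the surveillance test interval is extended. All parameter values and distributions are hypothetical, not the databases used in the paper.

```python
# Illustrative Monte Carlo propagation of ageing-rate uncertainty to a
# simplified standby-component unavailability averaged over a test interval T:
#   U(T) ~= rho + lam0*T/2 + a*T**2/6   (linear ageing rate lam(t) = lam0 + a*t)
import numpy as np

rng = np.random.default_rng(0)
rho = 1.0e-3                     # per-demand failure probability (placeholder)
lam0 = 1.0e-6                    # base standby failure rate [1/h] (placeholder)
a_median, a_gsd = 1.0e-10, 3.0   # ageing rate [1/h^2]: lognormal median and geometric std dev

n = 100_000
ageing = rng.lognormal(mean=np.log(a_median), sigma=np.log(a_gsd), size=n)

for T in (730.0, 2190.0, 4380.0):       # test intervals in hours (1, 3, 6 months)
    U = rho + lam0 * T / 2.0 + ageing * T**2 / 6.0
    p5, p95 = np.percentile(U, [5, 95])
    print(f"T = {T:6.0f} h  mean U = {U.mean():.3e}  5th-95th pct: {p5:.3e} - {p95:.3e}")
```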

  13. Quantitative analyses at baseline and interim PET evaluation for response assessment and outcome definition in patients with malignant pleural mesothelioma

    Lopci, Egesta; Chiti, Arturo [Humanitas Research Hospital, Nuclear Medicine Department, Rozzano, Milan (Italy); Zucali, Paolo Andrea; Perrino, Matteo; Gianoncelli, Letizia; Lorenzi, Elena; Gemelli, Maria; Santoro, Armando [Humanitas Research Hospital, Oncology, Rozzano (Italy); Ceresoli, Giovanni Luca [Humanitas Gavazzeni, Oncology, Bergamo (Italy); Giordano, Laura [Humanitas Research Hospital, Biostatistics, Rozzano (Italy)

    2015-04-01

    Quantitative analyses on FDG PET for response assessment are increasingly used in clinical studies, particularly with respect to tumours in which radiological assessment is challenging and complete metabolic response is rarely achieved after treatment. A typical example is malignant pleural mesothelioma (MPM), an aggressive tumour originating from mesothelial cells of the pleura. We present our results concerning the use of semiquantitative and quantitative parameters, evaluated at the baseline and interim PET examinations, for the prediction of treatment response and disease outcome in patients with MPM. We retrospectively analysed data derived from 131 patients (88 men, 43 women; mean age 66 years) with MPM who were referred to our institution for treatment between May 2004 and July 2013. Patients were investigated using FDG PET at baseline and after two cycles of pemetrexed-based chemotherapy. Responses were determined using modified RECIST criteria based on the best CT response after treatment. Disease control rate, progression-free survival (PFS) and overall survival (OS) were calculated for the whole population and were correlated with semiquantitative and quantitative parameters evaluated at the baseline and interim PET examinations; these included SUVmax, total lesion glycolysis (TLG), percentage change in SUVmax (ΔSUVmax) and percentage change in TLG (ΔTLG). Disease control was achieved in 84.7 % of the patients, and median PFS and OS for the entire cohort were 7.2 and 14.3 months, respectively. The log-rank test showed a statistically significant difference in PFS between patients with radiological progression and those with partial response (PR) or stable disease (SD) (1.8 vs. 8.6 months, p < 0.001). Baseline SUVmax and TLG showed a statistically significant correlation with PFS and OS (p < 0.001). In the entire population, both ΔSUVmax and ΔTLG were correlated with disease control based on best CT response (p < 0

  14. Influence of tyre-road contact model on vehicle vibration response

    Múčka, Peter; Gagnon, Louis

    2015-09-01

    The influence of the tyre-road contact model on the simulated vertical vibration response was analysed. Three contact models were compared: the tyre-road point contact model, the moving-averaged profile and the tyre-enveloping model. In total, 1600 real asphalt concrete and Portland cement concrete longitudinal road profiles were processed. A linear planar automobile model with 12 degrees of freedom (DOF) was used. Five vibration responses, serving as measures of ride comfort, ride safety and dynamic cargo load, were investigated. The results were calculated as a function of vibration response, vehicle velocity, road quality and road surface type. Marked differences in the dynamic tyre forces and negligible differences in the ride comfort quantities were observed among the tyre-road contact models. The seat acceleration response for the three contact models and a 331-DOF multibody model of a truck semi-trailer was compared with the measured response for a known profile of a test section.
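
    To make the contrast between a point-contact model and a moving-averaged profile concrete, here is a minimal quarter-car (2-DOF) simulation driven by a synthetic road profile, once with the raw profile and once with the profile averaged over an assumed tyre contact length. The vehicle model, parameter values, and contact length are generic textbook placeholders, not the 12-DOF planar model or the 331-DOF truck model used in the paper.

```python
# Quarter-car (sprung + unsprung mass) response to a road profile, comparing
# point contact with a moving-average ("enveloped") profile input.
import numpy as np

ms, mu = 400.0, 40.0        # sprung / unsprung mass [kg] (placeholders)
ks, cs = 2.0e4, 1.5e3       # suspension stiffness [N/m] and damping [N s/m]
kt = 2.0e5                  # tyre stiffness [N/m]

v, L = 20.0, 0.15           # vehicle speed [m/s], assumed tyre contact length [m]
dt, tend = 1e-3, 10.0
t = np.arange(0.0, tend, dt)
x_road = v * t

rng = np.random.default_rng(1)
# Synthetic rough profile: smoothed white noise sampled every 1 cm.
xp = np.arange(0.0, x_road[-1] + 1.0, 0.01)
zp = np.convolve(rng.normal(0, 0.004, xp.size), np.ones(50) / 50, mode="same")

def simulate(profile):
    zs = zu = vs = vu = 0.0
    acc, ftyre = [], []
    for zr in profile:
        fs = ks * (zu - zs) + cs * (vu - vs)   # suspension force on the sprung mass
        ft = kt * (zr - zu)                    # dynamic tyre force
        a_s, a_u = fs / ms, (ft - fs) / mu
        vs += a_s * dt; vu += a_u * dt         # semi-implicit Euler integration
        zs += vs * dt;  zu += vu * dt
        acc.append(a_s); ftyre.append(ft)
    rms = lambda x: float(np.sqrt(np.mean(np.square(x))))
    return rms(acc), rms(ftyre)

point = np.interp(x_road, xp, zp)                         # point-contact input
window = max(1, int(round(L / 0.01)))
zp_avg = np.convolve(zp, np.ones(window) / window, mode="same")
averaged = np.interp(x_road, xp, zp_avg)                  # moving-averaged input

for name, prof in (("point contact", point), ("moving average", averaged)):
    a_rms, f_rms = simulate(prof)
    print(f"{name:15s} sprung-mass acc RMS = {a_rms:.3f} m/s^2, tyre force RMS = {f_rms:.0f} N")
```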

  15. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Schmidt, Hanns-Maximilian; Wiens, Marcus; Schultmann, Frank

    2015-04-01

    The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event, direct costs occur immediately. However, a disturbance on the local level (e.g. parts of a city or industries along a river bank) might also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even to an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these indirect impacts, one has to gain insight into the directly hit economic structure before being able to calculate the side effects. In particular, any simulation intended for near real-time forensic disaster analyses needs to be based on data that is rapidly available or easily computed. Therefore, we investigated commonly used or recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available that can be used, although such data would provide detailed figures concerning economic interrelations between different industry sectors. In the case of highly developed countries, such as Germany, we focus on models for regionalizing the nationwide input-output table, which is usually available from the national statistical office. However, when it comes to developing countries (e.g. South-East Asia), data quality and availability are usually much poorer. In this case, other sources need to be found for a proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters. Our poster presentation provides a literature review and a summary of potential models that seem to be useful for this specific task
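
    A frequently discussed, data-light way to regionalize a national input-output table is the simple location quotient (SLQ) approach: national technical coefficients are scaled down where a region is under-represented in a supplying sector. The sketch below shows that mechanic on a tiny made-up three-sector example; the sector names and numbers are placeholders, not the indicator-based model proposed in the abstract.

```python
# Simple location quotient (SLQ) regionalization of national technical coefficients.
# a_region[i, j] = a_nation[i, j] * min(1, SLQ_i), with
# SLQ_i = (regional employment share of sector i) / (national employment share of sector i)
import numpy as np

sectors = ["agriculture", "manufacturing", "services"]          # placeholder sectors
a_nation = np.array([[0.05, 0.10, 0.02],                        # national technical
                     [0.15, 0.20, 0.10],                        # coefficient matrix
                     [0.10, 0.15, 0.25]])

emp_nation = np.array([2.0, 10.0, 30.0])    # employment by sector, national (millions)
emp_region = np.array([0.3, 0.5, 2.0])      # employment by sector, regional (millions)

slq = (emp_region / emp_region.sum()) / (emp_nation / emp_nation.sum())
scale = np.minimum(1.0, slq)                 # only scale down under-represented supplying sectors
a_region = a_nation * scale[:, None]         # row i = supplying sector i

for name, s, row in zip(sectors, slq, a_region):
    print(f"{name:13s} SLQ = {s:.2f}  regional coefficients: {np.round(row, 3)}")
```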

  16. Why do they not answer and do they really learn? A case study in analysing student response flows in introductory physics using an audience response system

    Jääskeläinen, Markku; Lagerkvist, Andreas

    2017-01-01

    In this paper we investigate teaching with a classroom response system in introductory physics with emphasis on two issues. First, we discuss retention between question rounds and the reasons why students avoid answering the question a second time. A question with a declining response rate was followed by a question addressing the students' reasons for not answering. We find that there appear to be several reasons for the observed decline, and that the students need to be reminded. We argue that small drops are unimportant as the process appears to work despite the drops. Second, we discuss the dynamics of learning in a concept sequence in electromagnetism, where a majority of the students, despite poor statistics in a first round, manage to answer a follow-up question correctly. In addition, we analyse the response times for both situations to connect with research on student reasoning in situations with misconception-like answers. From the combination of the answer flows and response-time behaviours we find it plausible that conceptual learning occurred during the discussion phase. (paper)

  17. IMPROVEMENTS IN HANFORD TRANSURANIC (TRU) PROGRAM UTILIZING SYSTEMS MODELING AND ANALYSES

    UYTIOCO EM

    2007-01-01

    Hanford's Transuranic (TRU) Program is responsible for certifying contact-handled (CH) TRU waste and shipping the certified waste to the Waste Isolation Pilot Plant (WIPP). Hanford's CH TRU waste includes material that is in retrievable storage as well as above ground storage, and newly generated waste. Certifying a typical container entails retrieving and then characterizing it (Real-Time Radiography, Non-Destructive Assay, and Head Space Gas Sampling), validating records (data review and reconciliation), and designating the container for a payload. The certified payload is then shipped to WIPP. Systems modeling and analysis techniques were applied to Hanford's TRU Program to help streamline the certification process and increase shipping rates

  18. Water requirements of short rotation poplar coppice: Experimental and modelling analyses across Europe

    Fischer, Milan; Zenone, T.; Trnka, Miroslav; Orság, Matěj; Montagnani, L.; Ward, E. J.; Tripathi, Abishek; Hlavinka, Petr; Seufert, G.; Žalud, Zdeněk; King, J.; Ceulemans, R.

    2018-01-01

    Vol. 250, March 2018, pp. 343-360. ISSN 0168-1923. R&D Projects: GA MŠk(CZ) LO1415. Institutional support: RVO:86652079. Keywords: energy-balance closure * dual crop coefficient * radiation use efficiency * simulate yield response * below-ground carbon * vs. 2nd rotation * flux data * biomass production * forest model * stand-scale * Bioenergy * Bowen ratio and energy balance * Crop coefficient * Eddy covariance * Evapotranspiration * Water balance. Subject RIV: GC - Agronomy. OECD field: Agriculture. Impact factor: 3.887, year: 2016

  19. Model Predictive Control based on Finite Impulse Response Models

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR-based robust MPC as well as to estimate the maximum performance improvements by robust MPC.
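
    For readers unfamiliar with FIR-based predictive control, the sketch below shows the prediction step such a controller relies on: future outputs are a convolution of truncated impulse-response coefficients with past and future inputs, plus a constant output-disturbance estimate taken as the current measurement minus the model output. The coefficients, horizon, and values are made-up placeholders, not the controller of the paper.

```python
# Minimal FIR prediction with a constant output-disturbance correction,
# the basic building block of an l2 FIR model predictive controller.
import numpy as np

h = np.array([0.0, 0.08, 0.15, 0.12, 0.09, 0.06, 0.04, 0.02])  # impulse-response coefficients (placeholder)
n = h.size

def fir_output(u_last_n):
    """Model output from the last n inputs (oldest first): y_k = sum_i h[i] * u[k-i]."""
    return float(np.dot(h, u_last_n[::-1]))

u_past = np.ones(n)                          # past inputs, oldest first
y_meas = 0.62                                # current plant measurement (placeholder)
d_hat = y_meas - fir_output(u_past)          # constant output-disturbance estimate

horizon = 5
u_future = np.array([1.2, 1.2, 1.0, 1.0, 1.0])   # candidate future input sequence
u_all = np.concatenate([u_past, u_future])

# Predicted outputs over the horizon: bias-corrected FIR convolution.
y_pred = [fir_output(u_all[j:j + n]) + d_hat for j in range(1, horizon + 1)]
print(np.round(y_pred, 3))
```

    An actual controller would stack these predictions into matrix form and solve a constrained, regularized least-squares problem for the future input moves.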

  20. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Young-Chan Noh

    2016-07-01

    Full Text Available Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems and from reanalysis fields from the European Centre for Medium-Range Weather Forecasts (ECMWF) were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January–December 2012. The motivation was to examine the overall performance of data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated good agreement for temperature amongst the datasets, while less agreement was found for relative humidity. A comparison of the UM outputs from the UKMO and KMA revealed that they are similar to each other. The introduction of the new version of the UM into the KMA in May 2012 resulted in an improved analysis performance, particularly for the moisture field. On the other hand, ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of the relative humidity for the three models were larger for more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.

  1. Application of a weighted spatial probability model in GIS to analyse landslides in Penang Island, Malaysia

    Samy Ismail Elmahdy

    2016-01-01

    Full Text Available In the current study, Penang Island, one of several mountainous areas in Malaysia that is often subjected to landslide hazard, was chosen for further investigation. A multi-criteria evaluation combined with a weighted spatial probability approach and a model builder was applied to map and analyse landslides on Penang Island. A set of automated algorithms was used to construct new essential geological and morphometric thematic maps from remote sensing data. The maps were ranked using the weighted spatial probability model based on their contribution to the landslide hazard. The results showed that sites at an elevation of 100–300 m, with steep slopes of 10°–37° and slope direction (aspect) in the E and SE directions, were areas of very high and high probability of landslide occurrence; the total areas were 21.393 km² (11.84%) and 58.690 km² (32.48%), respectively. The obtained map was verified by comparing variogram models of the mapped and the observed landslide locations, and showed a strong correlation with the locations of occurred landslides, indicating that the proposed method can successfully predict the unpredictable landslide hazard. The method is time and cost effective and can be used as a reference by geological and geotechnical engineers.
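
    The weighted spatial probability mapping described above boils down to a weighted overlay: each ranked thematic raster is multiplied by its weight and the results are summed into a susceptibility surface. Below is a minimal numpy sketch of that overlay on tiny synthetic rasters; the layer names, ranks, weights, and class breaks are placeholders, not the values derived for Penang Island.

```python
# Weighted overlay of ranked thematic rasters into a landslide-susceptibility surface.
import numpy as np

rng = np.random.default_rng(7)
shape = (5, 5)                                   # tiny synthetic study area

# Each thematic layer is already ranked 1 (low contribution) .. 5 (high contribution).
layers = {
    "elevation": rng.integers(1, 6, shape),
    "slope":     rng.integers(1, 6, shape),
    "aspect":    rng.integers(1, 6, shape),
    "lithology": rng.integers(1, 6, shape),
}
weights = {"elevation": 0.30, "slope": 0.35, "aspect": 0.20, "lithology": 0.15}

susceptibility = sum(weights[name] * layers[name].astype(float) for name in layers)

# Classify into qualitative hazard classes for mapping.
classes = np.digitize(susceptibility, bins=[2.0, 3.0, 4.0])   # 0 = low .. 3 = very high
labels = np.array(["low", "moderate", "high", "very high"])
print(labels[classes])
```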

  2. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Sung-Chien Lin

    2014-07-01

    Full Text Available In this study, we used topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of the topics in the field, data on the papers published in the Journal of Informetrics and Scientometrics from 2007 to 2013 were retrieved from the Web of Science database as input to the topic modeling approach. The results of this study show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from previous studies, the topics generated in this study are consistent with the results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field showed increasing stability. Both core journals paid broad attention to all of the topics in the field of Informetrics; the Journal of Informetrics put particular emphasis on the construction and applications of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
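
    As a small illustration of the workflow the abstract describes — fitting topic models with different numbers of topics and keeping the one with the lowest perplexity — here is a sketch using scikit-learn's LatentDirichletAllocation on a toy corpus. The documents and candidate topic counts are placeholders; the study itself used papers from the two journals.

```python
# Choose the number of LDA topics by perplexity (smaller is better).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "citation analysis of journal impact and h-index indicators",
    "co-authorship networks and research collaboration patterns",
    "bibliometric indicators for evaluating university productivity",
    "topic models for mapping scientific fields and their evolution",
    "altmetrics and social media mentions of scholarly articles",
    "citation distributions and field normalization of indicators",
]

X = CountVectorizer(stop_words="english").fit_transform(docs)

best_k, best_perplexity = None, float("inf")
for k in (2, 3, 4, 5):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    p = lda.perplexity(X)          # ideally computed on held-out documents
    print(f"k = {k}: perplexity = {p:.1f}")
    if p < best_perplexity:
        best_k, best_perplexity = k, p

print("selected number of topics:", best_k)
```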

  3. Developing a system dynamics model to analyse environmental problem in construction site

    Haron, Fatin Fasehah; Hawari, Nurul Nazihah

    2017-11-01

    This study aims to develop a system dynamics model of a construction site to analyse the impact of environmental problems. Construction sites may cause damage to the environment and interfere with the daily lives of residents. A proper environmental management system must be used to reduce pollution, enhance bio-diversity, conserve water, respect people and their local environment, measure performance, and set targets for the environment and sustainability. This study investigates the damaging impacts that normally occur during the construction stage. Environmental problems cause costly mistakes in project implementation, either because of the environmental damage that is likely to arise during project implementation, or because of modifications that may be required subsequently in order to make the action environmentally acceptable. Thus, findings from this study have helped in significantly reducing the damaging impact on the environment and improving the environmental management system performance at the construction site.

  4. Development and application of model RAIA uranium on-line analyser

    Dong Yanwu; Song Yufen; Zhu Yaokun; Cong Peiyuan; Cui Songru

    1999-01-01

    The working principle, structure, adjustment and application of the model RAIA on-line analyser are reported. The performance of this instrument is reliable. For an identical sample, the signal fluctuation during four months of continuous monitoring is less than ±1%. An appropriate sample cell length is chosen according to the required measurement range. The precision of the measurement process is better than 1% at 100 g/L U. The detection limit is 50 mg/L. The uranium concentration in the process stream can be displayed automatically and printed at any time. The analyser outputs a 4-20 mA current signal proportional to the uranium concentration. This is a significant step towards continuous process control and computer-based management.

  5. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.

  6. A model for analysing factors which may influence quality management procedures in higher education

    Cătălin MAICAN

    2015-12-01

    Full Text Available In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students' perception of the teachers' activity in terms of the quality of the teaching process, the relationship with the students, and the assistance provided for learning. The present paper aims at creating a combined model for evaluation based on Data Mining statistical methods: starting from the findings revealed by the evaluations teachers performed of students, and using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students' grades; these subjects were subsequently evaluated by students. The results of these analyses allowed the formulation of measures for enhancing the quality of the evaluation process.

  7. An application of the 'Bayesian cohort model' to nuclear power plant cost analyses

    Ono, Kenji; Nakamura, Takashi

    2002-01-01

    We have developed a new method for identifying the effects of calendar year, plant age and commercial operation starting year on the costs and performance of nuclear power plants, and have also developed an analysis system running on personal computers. The method extends the Bayesian cohort model for time series social survey data proposed by one of the authors. The proposed method was shown to separate the above three effects more properly than traditional methods such as taking simple means by time domain. Analyses of US nuclear plant cost and performance data using the proposed method suggest that many of the US plants spent a relatively long time and much capital on modifications at ages of about 10 to 20 years, but that, after those ages, they performed fairly well with lower and stabilized O&M and additional capital costs. (author)

  8. Continuous spatial modelling to analyse planning and economic consequences of offshore wind energy

    Moeller, Bernd

    2011-01-01

    Offshore wind resources appear abundant, but technological, economic and planning issues significantly reduce the theoretical potential. While massive investments are anticipated and planners and developers are scouting for viable locations and consider risk and impact, few studies simultaneously address potentials and costs together with the consequences of proposed planning in an analytical and continuous manner and for larger areas at once. Consequences may be investments short of efficiency and equity, and failed planning routines. A spatial resource economic model for the Danish offshore waters is presented, used to analyse area constraints, technological risks, priorities for development and opportunity costs of maintaining competing area uses. The SCREAM-offshore wind model (Spatially Continuous Resource Economic Analysis Model) uses raster-based geographical information systems (GIS) and considers numerous geographical factors, technology and cost data as well as planning information. Novel elements are weighted visibility analysis and geographically recorded shipping movements as variable constraints. A number of scenarios have been described, which include restrictions of using offshore areas, as well as alternative uses such as conservation and tourism. The results comprise maps, tables and cost-supply curves for further resource economic assessment and policy analysis. A discussion of parameter variations exposes uncertainties of technology development, environmental protection as well as competing area uses and illustrates how such models might assist in ameliorating public planning, while procuring decision bases for the political process. The method can be adapted to different research questions, and is largely applicable in other parts of the world. - Research Highlights: → A model for the spatially continuous evaluation of offshore wind resources. → Assessment of spatial constraints, costs and resources for each location. → Planning tool for

  9. Comparative modeling analyses of Cs-137 fate in the rivers impacted by Chernobyl and Fukushima accidents

    Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)

    2014-07-01

    The consequences of the two largest nuclear accidents of recent decades - at the Chernobyl Nuclear Power Plant (ChNPP) (1986) and at the Fukushima Daiichi NPP (FDNPP) (2011) - clearly demonstrated that radioactive contamination of water bodies in the vicinity of an NPP and along the waterways from it (the river-reservoir system after the Chernobyl accident, and rivers and coastal marine waters after the Fukushima accident) has in both cases been one of the main sources of public concern about the accident consequences. The higher weight of water contamination in the public perception of the accident consequences, compared with the actual fraction of doses delivered via aquatic pathways relative to other dose components, is a specific feature of the public perception of environmental contamination. This psychological phenomenon, confirmed after both accidents, provides supplementary arguments that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accident radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watershed, together with data from new monitoring studies in Japan of the Abukuma River (the largest in the region, with a watershed area of 5400 km²), Kuchibuto River, Uta River, Niita River, Natsui River and Same River, as well as studies on the specifics of 'water-sediment' ¹³⁷Cs exchange in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and to increase the predictive power of the modeling technologies. The results of the modeling studies are applied for more accurate prediction of water/sediment radionuclide contamination of rivers and reservoirs in Fukushima Prefecture and for comparative analyses of the efficiency of the post-accident measures to diminish the contamination of the water bodies.

  10. Modeling Freight Ocean Rail and Truck Transportation Flows to Support Policy Analyses

    Gearhart, Jared Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wang, Hao [Cornell Univ., Ithaca, NY (United States); Nozick, Linda Karen [Cornell Univ., Ithaca, NY (United States); Xu, Ningxiong [Cornell Univ., Ithaca, NY (United States)

    2017-11-01

    Freight transportation represents about 9.5% of GDP, is responsible for about 8% of greenhouse gas emissions, and supports the import and export of about $3.6 trillion in international trade; hence it is important that our national freight transportation system is designed and operated efficiently and embodies user fees and other policies that balance costs and environmental consequences. This paper therefore develops a mathematical model to estimate international and domestic freight flows across ocean, rail and truck modes, which can be used to study the impacts of changes in our infrastructure as well as the imposition of new user fees and changes in operating policies. This model is applied to two case studies: (1) a disruption of the maritime ports at Los Angeles/Long Beach similar to the impacts that would be felt in an earthquake; and (2) implementation of new user fees at the California ports.

  11. Response of subassembly model with internals

    Kennedy, J.M.; Belytschko, T.

    1977-01-01

    Analytical tools have been developed and validated against controlled sets of experiments, providing a reasonably good understanding of the response of an accident and/or single subassembly in an LMFBR. They have been subjected to a variety of loadings and boundary environments. Some large subassembly cluster experiments have been performed; however, little analytical work has accompanied them because of the lack of suitable analytical tools. Reported are analytical approaches to: (1) the development of more sophisticated models for the subassembly internals, that is, the fuel pins and coolant; and (2) the development of models for representing three-dimensional effects in subassemblies adjacent to the accident subassembly. These analytical developments will provide practical capabilities for economical three-dimensional analysis not previously available

  12. Modeling of Cardiovascular Response to Weightlessness

    Sharp, M. Keith

    1999-01-01

    pressure and, to a limited extent, in extravascular and pericardial hydrostatic pressure were investigated. A complete hydraulic model of the cardiovascular system was built and flown aboard the NASA KC-135, and a computer model was developed and tested in simulated microgravity. Results obtained with these models have confirmed that a simple lack of hydrostatic pressure within an artificial ventricle causes a decrease in stroke volume. When combined with the acute increase in ventricular pressure associated with the elimination of hydrostatic pressure within the vasculature and the resultant cephalad fluid shift with the models in the upright position, however, stroke volume increased in the models. Imposition of a decreased pericardial pressure in the computer model and in a simplified hydraulic model increased stroke volume. Physiologic regional fluid shifting was also demonstrated by the models. The unifying parameter characterizing the cardiac response was diastolic ventricular transmural pressure (DVΔP). The elimination of intraventricular hydrostatic pressure in 0-G decreased DVΔP and stroke volume, while the elimination of intravascular hydrostatic pressure increased DVΔP and stroke volume in the upright posture, but reduced DVΔP and stroke volume in the launch posture. The release of gravity on the chest wall and its associated influence on intrathoracic pressure, simulated by a drop in extraventricular pressure, increased DVΔP and stroke volume.

  13. Response of subassembly model with internals

    Kennedy, J.M.; Belytschko, T.

    1977-01-01

    For the purpose of predicting the structural response in such accident environments, a program STRAW has been developed. This is a finite element program which can treat the structure-fluid system consisting of the coolant and the subassembly walls. Both material nonlinearities due to elastic-plastic response and geometric nonlinearities due to large displacements can be treated. The energy source can be represented either by a pressure-time history or an equation of state. Because of the lack of any simplifying symmetry in the geometry of the subassembly, the program uses a quasi-three-dimensional model. The cross section of the accident hexcan and the adjacent hexcan are modelled by a two-dimensional finite element mesh which represents the hexcan walls by flexural elements and the internals by two-dimensional continuum elements. This mesh is coupled to a series of one-dimensional elements which represent the axial flow of the coolant and the longitudinal stiffness of the fuel pins and hexcan. The latter is of importance in the adjacent hexcan, for its lateral displacement is resisted entirely by this flexural behavior and its inertia. The adequacy of such quasi-three-dimensional models has been examined by comparing the STRAW results against an almost complete three-dimensional analysis performed with the REXCAT program. In this program, the accident hexcan is represented in a true three-dimensional sense by plate-shell elements, whereas the internals are represented as axisymmetric. These comparisons indicate that the quasi-three-dimensional approach employed in STRAW is valid for a large range of pressure-time histories; the fidelity of this model suffers primarily when pressure reaches a peak over a very short time, such as 5-10 microseconds

  14. Ovine model for studying pulmonary immune responses

    Joel, D.D.; Chanana, A.D.

    1984-01-01

    Anatomical features of the sheep lung make it an excellent model for studying pulmonary immunity. Four specific lung segments were identified which drain exclusively to three separate lymph nodes. One of these segments, the dorsal basal segment of the right lung, is drained by the caudal mediastinal lymph node (CMLN). Cannulation of the efferent lymph duct of the CMLN along with highly localized intrabronchial instillation of antigen provides a functional unit with which to study factors involved in development of pulmonary immune responses. Following intrabronchial immunization there was an increased output of lymphoblasts and specific antibody-forming cells in efferent CMLN lymph. Continuous divergence of efferent lymph eliminated the serum antibody response but did not totally eliminate the appearance of specific antibody in fluid obtained by bronchoalveolar lavage. In these studies localized immunization of the right cranial lobe served as a control. Efferent lymphoblasts produced in response to intrabronchial antigen were labeled with ¹²⁵I-iododeoxyuridine and their migrational patterns and tissue distribution compared to lymphoblasts obtained from the thoracic duct. The results indicated that pulmonary immunoblasts tend to relocate in lung tissue and reappear with a higher specific activity in pulmonary lymph than in thoracic duct lymph. The reverse was observed with labeled intestinal lymphoblasts. 35 references, 2 figures, 3 tables

  15. Ovine model for studying pulmonary immune responses

    Joel, D.D.; Chanana, A.D.

    1984-11-25

    Anatomical features of the sheep lung make it an excellent model for studying pulmonary immunity. Four specific lung segments were identified which drain exclusively to three separate lymph nodes. One of these segments, the dorsal basal segment of the right lung, is drained by the caudal mediastinal lymph node (CMLN). Cannulation of the efferent lymph duct of the CMLN along with highly localized intrabronchial instillation of antigen provides a functional unit with which to study factors involved in development of pulmonary immune responses. Following intrabronchial immunization there was an increased output of lymphoblasts and specific antibody-forming cells in efferent CMLN lymph. Continuous divergence of efferent lymph eliminated the serum antibody response but did not totally eliminate the appearance of specific antibody in fluid obtained by bronchoalveolar lavage. In these studies localized immunization of the right cranial lobe served as a control. Efferent lymphoblasts produced in response to intrabronchial antigen were labeled with ¹²⁵I-iododeoxyuridine and their migrational patterns and tissue distribution compared to lymphoblasts obtained from the thoracic duct. The results indicated that pulmonary immunoblasts tend to relocate in lung tissue and reappear with a higher specific activity in pulmonary lymph than in thoracic duct lymph. The reverse was observed with labeled intestinal lymphoblasts. 35 references, 2 figures, 3 tables.

  16. Modeling Acequia Irrigation Systems Using System Dynamics: Model Development, Evaluation, and Sensitivity Analyses to Investigate Effects of Socio-Economic and Biophysical Feedbacks

    Benjamin L. Turner

    2016-10-01

    Full Text Available Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., the shared responsibility of local residents for maintaining traditional irrigation policies and upholding cultural and spiritual observances) embedded within the community structure influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a Dynamic Hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, cumulative income effect on time in agriculture, land use preference due to time allocation, community demographic effect, effect of employment on participation, and farm size effect as key determinants of system behavior and response. Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per

  17. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Hall, Carlton Raden

    A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection, and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(λ) (m⁻¹), diffuse backscatter b(λ) (m⁻¹), beam attenuation α(λ) (m⁻¹), and beam-to-diffuse conversion c(λ) (m⁻¹) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled on Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high sensitivity narrow bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index (LVCI), was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle-adjusted leaf

  18. Using Response Times to Assess Learning Progress: A Joint Model for Responses and Response Times

    Wang, Shiyu; Zhang, Susu; Douglas, Jeff; Culpepper, Steven

    2018-01-01

    Analyzing students' growth remains an important topic in educational research. Most recently, Diagnostic Classification Models (DCMs) have been used to track skill acquisition in a longitudinal fashion, with the purpose of providing an estimate of students' learning trajectories in terms of the change of fine-grained skills over time. Response time…

  19. Grid Integration of Aggregated Demand Response, Part 2: Modeling Demand Response in a Production Cost Model

    Hummon, Marissa [National Renewable Energy Lab. (NREL), Golden, CO (United States); Palchak, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Denholm, Paul [National Renewable Energy Lab. (NREL), Golden, CO (United States); Jorgenson, Jennie [National Renewable Energy Lab. (NREL), Golden, CO (United States); Olsen, Daniel J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kiliccote, Sila [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Matson, Nance [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sohn, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rose, Cody [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dudley, Junqiao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Goli, Sasank [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ma, Ookie [U.S. Dept. of Energy, Washington, DC (United States)

    2013-12-01

    This report is one of a series stemming from the U.S. Department of Energy (DOE) Demand Response and Energy Storage Integration Study. This study is a multi-national-laboratory effort to assess the potential value of demand response (DR) and energy storage to electricity systems with different penetration levels of variable renewable resources and to improve our understanding of associated markets and institutions. This report implements DR resources in the commercial production cost model PLEXOS.

  20. Prediction Models for Dynamic Demand Response

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only at fixed time intervals and on weekdays predetermined by static policies, but also during changing decision periods and on weekends, to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent System Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
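
    The "simple averaging" baseline the abstract refers to can be stated in a few lines: predict consumption in each 15-minute slot as the average of the same slot on the most recent similar days (workdays averaged with workdays, weekend days with weekend days). The sketch below shows that baseline on synthetic data; the load shape, day counts, and error metric are placeholders, not the study's dataset or evaluation protocol.

```python
# Simple averaging baseline for very-short-term (15-min) consumption prediction:
# the forecast for each slot is the mean of the same slot over the last k similar days.
import numpy as np

rng = np.random.default_rng(3)
n_days, slots = 28, 96                      # 4 weeks of 15-minute data
base = 2.0 + np.sin(np.linspace(0, 2 * np.pi, slots)) ** 2   # daily load shape (placeholder)

is_weekend = np.array([(d % 7) in (5, 6) for d in range(n_days)])
load = np.vstack([base * (0.7 if w else 1.0) + rng.normal(0, 0.1, slots)
                  for w in is_weekend])     # synthetic consumption in kW

def predict_day(day, history=5):
    """Average the same-type (workday/weekend) days immediately preceding `day`."""
    same_type = [d for d in range(day) if is_weekend[d] == is_weekend[day]]
    return load[same_type[-history:]].mean(axis=0)

# Evaluate with MAPE on the final week.
errors = []
for day in range(21, 28):
    pred = predict_day(day)
    errors.append(np.mean(np.abs(pred - load[day]) / load[day]))
print(f"mean absolute percentage error over last week: {100 * np.mean(errors):.1f}%")
```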

  1. GSHR, a Web-Based Platform Provides Gene Set-Level Analyses of Hormone Responses in Arabidopsis

    Xiaojuan Ran

    2018-01-01

    Full Text Available Phytohormones regulate diverse aspects of plant growth and environmental responses. Recent high-throughput technologies have enabled more comprehensive profiling of genes regulated by different hormones. However, these omics data generally result in large gene lists that make it challenging to interpret the data and extract insights into biological significance. With the rapid accumulation of these large-scale experiments, especially the transcriptomic data available in public databases, a means of using this information to explore the transcriptional networks is needed. Different platforms have different architectures and designs, and even similar studies using the same platform may obtain data with large variances because of the highly dynamic and flexible effects of plant hormones; this makes it difficult to make comparisons across different studies and platforms. Here, we present a web server providing gene set-level analyses of Arabidopsis thaliana hormone responses. GSHR collected 333 RNA-seq and 1,205 microarray datasets from the Gene Expression Omnibus, characterizing transcriptomic changes in Arabidopsis in response to phytohormones including abscisic acid, auxin, brassinosteroids, cytokinins, ethylene, gibberellins, jasmonic acid, salicylic acid, and strigolactones. These data were further processed and organized into 1,368 gene sets regulated by different hormones or hormone-related factors. By comparing input gene lists to these gene sets, GSHR helps to identify gene sets from the input gene list regulated by different phytohormones or related factors. Together, GSHR links prior information regarding transcriptomic changes induced by hormones and related factors to newly generated data and facilitates cross-study and cross-platform comparisons; this helps with the mining of biologically significant information from large-scale datasets. The GSHR is freely available at http://bioinfo.sibs.ac.cn/GSHR/.
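
    Comparing an input gene list against curated hormone-response gene sets, as the platform described above does, is commonly implemented as an over-representation test: given the overlap between the list and a gene set, a hypergeometric tail probability indicates whether the overlap is larger than expected by chance. The sketch below shows that test with scipy on made-up numbers; the gene counts are placeholders and this is not necessarily the statistic GSHR itself uses.

```python
# Over-representation (hypergeometric) test of an input gene list against one gene set.
from scipy.stats import hypergeom

universe = 27000        # genes in the background (e.g., annotated Arabidopsis genes; placeholder)
set_size = 400          # genes in the hormone-response gene set (placeholder)
list_size = 250         # genes in the user's input list (placeholder)
overlap = 18            # genes shared between the list and the gene set (placeholder)

# P(X >= overlap) when drawing list_size genes from a universe containing set_size "successes".
p_value = hypergeom.sf(overlap - 1, universe, set_size, list_size)

expected = list_size * set_size / universe
print(f"expected overlap by chance ~ {expected:.1f}, observed {overlap}, p = {p_value:.2e}")
```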

  2. Atmospheric dispersion modeling: Challenges of the Fukushima Daiichi response

    Sugiyama, Gayle [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nasstrom, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pobanz, Brenda [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Foster, Kevin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Simpson, Matthew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vogt, Phil [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Aluzzi, Fernando [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Homann, Steve [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-05-01

    In this research, the U.S. Department of Energy’s (DOE) National Atmospheric Release Advisory Center (NARAC) provided a wide range of predictions and analyses as part of the response to the Fukushima Daiichi Nuclear Power Plant accident including: daily Japanese weather forecasts and atmospheric transport predictions to inform planning for field monitoring operations and to provide U.S. government agencies with ongoing situational awareness of meteorological conditions; estimates of possible dose in Japan based on hypothetical U.S. Nuclear Regulatory Commission scenarios of potential radionuclide releases to support protective action planning for U.S. citizens; predictions of possible plume arrival times and dose levels at U.S. locations; and source estimation and plume model refinement based on atmospheric dispersion modeling and available monitoring data.

  3. The Value of Response Times in Item Response Modeling

    Molenaar, Dylan

    2015-01-01

    A new and very interesting approach to the analysis of responses and response times is proposed by Goldhammer (this issue). In his approach, differences in the speed-ability compromise within respondents are considered to confound the differences in ability between respondents. These confounding effects of speed on the inferences about ability can…

  4. Modeling and analysing storage systems in agricultural biomass supply chain for cellulosic ethanol production

    Ebadian, Mahmood; Sowlati, Taraneh; Sokhansanj, Shahab; Townley-Smith, Lawrence; Stumborg, Mark

    2013-01-01

    Highlights: ► Studied the agricultural biomass supply chain for cellulosic ethanol production. ► Evaluated the impact of storage systems on different supply chain actors. ► Developed a combined simulation/optimization model to evaluate storage systems. ► Compared two satellite storage systems with roadside storage in terms of costs and emitted CO2. ► SS would lead to a more cost-efficient supply chain compared to roadside storage. -- Abstract: In this paper, a combined simulation/optimization model is developed to better understand and evaluate the impact of the storage systems on the costs incurred by each actor in the agricultural biomass supply chain, including farmers, hauling contractors and the cellulosic ethanol plant. The optimization model prescribes the optimum number and location of farms and storages. It also determines the supply radius, the number of farms required to secure the annual supply of biomass, and the assignment of farms to storage locations. Given the specific design of the supply chain determined by the optimization model, the simulation model determines the number of required machines for each operation, their daily working schedules and utilization rates, along with the capacities of storages. To evaluate the impact of the storage systems on the delivered costs, three storage systems are modeled and compared: a roadside storage (RS) system and two satellite storage (SS) systems, namely SS with fixed hauling distance (SF) and SS with variable hauling distance (SV). In all storage systems, it is assumed that the loading equipment is dedicated to storage locations. The results obtained from a real case study provide detailed cost figures for each storage system, since the developed model analyses the supply chain on an hourly basis and considers the time-dependence and stochasticity of the supply chain. Comparison of the storage systems shows that SV would outperform SF and RS by reducing the total delivered cost by 8% and 6%, respectively.
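
    As a rough illustration of the optimization layer described above (assigning farms to storage locations subject to supply and capacity limits), the sketch below solves a small transportation-style linear program with scipy. The farm supplies, storage capacities and unit hauling costs are invented numbers; the paper's combined simulation/optimization model is far more detailed.

    ```python
    # Toy farm-to-storage assignment as a transportation LP (invented data, not the
    # authors' model): minimize total hauling cost subject to supply and capacity limits.
    import numpy as np
    from scipy.optimize import linprog

    supply = np.array([120.0, 80.0, 150.0])          # tonnes available at 3 farms
    capacity = np.array([200.0, 180.0])              # tonnes storable at 2 storage sites
    cost = np.array([[4.0, 7.0],                     # hauling cost per tonne, farm i -> storage j
                     [6.0, 3.0],
                     [5.0, 5.0]])

    n_farms, n_stores = cost.shape
    c = cost.flatten()                               # decision variables x[i, j], flattened row-wise

    # Each farm ships out exactly its supply.
    A_eq = np.zeros((n_farms, n_farms * n_stores))
    for i in range(n_farms):
        A_eq[i, i * n_stores:(i + 1) * n_stores] = 1.0
    b_eq = supply

    # Each storage site receives no more than its capacity.
    A_ub = np.zeros((n_stores, n_farms * n_stores))
    for j in range(n_stores):
        A_ub[j, j::n_stores] = 1.0
    b_ub = capacity

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    print(res.x.reshape(n_farms, n_stores), res.fun)
    ```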

  5. Analysing the Severity and Frequency of Traffic Crashes in Riyadh City Using Statistical Models

    Saleh Altwaijri

    2012-12-01

    Full Text Available Traffic crashes in Riyadh city cause losses in the form of deaths, injuries and property damage, in addition to the pain and social tragedy affecting families of the victims. In 2005, a total of 47,341 injury traffic crashes occurred in Riyadh city (19% of the total KSA crashes), and 9% of those crashes were severe. Road safety in Riyadh city may have been adversely affected by: high car ownership, migration of people to Riyadh city, daily trips reaching about 6 million, high income levels, the low cost of petrol, drivers of many different nationalities, young drivers, and tremendous population growth, which creates a high level of mobility and transport activity in the city. The primary objective of this paper is therefore to explore the factors affecting the severity and frequency of road crashes in Riyadh city using appropriate statistical models, with the aim of establishing effective safety policies ready to be implemented to reduce the severity and frequency of road crashes in Riyadh city. Crash data for Riyadh city were collected from the Higher Commission for the Development of Riyadh (HCDR) for a period of five years from 1425H to 1429H (roughly corresponding to 2004-2008). Crash data were classified into three categories: fatal, serious-injury and slight-injury. Two nominal response models were developed and applied to the injury-related crash data: a standard multinomial logit model (MNL) and a mixed logit model. Owing to a severe underreporting problem for slight-injury crashes, binary and mixed binary logistic regression models were also estimated for two categories of severity: fatal and serious crashes. For frequency, two count models, such as Negative Binomial (NB) models, were employed, and the unit of analysis was the 168 HAIs (wards) in Riyadh city. Ward-level crash data are disaggregated by severity of the crash (such as fatal and serious-injury crashes). The results from both the multinomial and binary response models are found to be fairly consistent but
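
    A minimal sketch of the two model families named in this record — a multinomial logit for crash severity and a negative binomial for ward-level crash counts — is given below using statsmodels. The variable names and the simulated data are placeholders, not the Riyadh dataset, and the specifications are illustrative rather than the authors' fitted models.

    ```python
    # Sketch of the two model families used in the paper, fitted to simulated data
    # (the Riyadh crash records are not reproduced here).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500

    # --- Severity: multinomial logit (0 = slight, 1 = serious, 2 = fatal) ---
    X = pd.DataFrame({"speed": rng.normal(80, 15, n), "night": rng.integers(0, 2, n)})
    utility = 0.03 * (X["speed"] - 80) + 0.5 * X["night"]
    severity = np.clip(np.round(utility + rng.normal(0, 1, n)), 0, 2).astype(int)
    mnl = sm.MNLogit(severity, sm.add_constant(X)).fit(disp=False)
    print(mnl.summary())

    # --- Frequency: negative binomial counts per ward (168 wards) ---
    wards = pd.DataFrame({"traffic_volume": rng.normal(10, 2, 168),
                          "road_km": rng.normal(25, 5, 168)})
    mu = np.exp(0.5 + 0.08 * wards["traffic_volume"] + 0.02 * wards["road_km"])
    counts = rng.poisson(rng.gamma(shape=2.0, scale=mu / 2.0))   # overdispersed counts
    nb = sm.NegativeBinomial(counts, sm.add_constant(wards)).fit(disp=False)
    print(nb.summary())
    ```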

  6. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Rallapalli, Varsha H.

    2016-01-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRenv) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRenv has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRenv. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRenv computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRenv in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
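
    The central quantity here, a modulation-band envelope SNR, can be illustrated with a simplified acoustic-domain calculation: extract Hilbert envelopes of a speech-like signal and of noise, take their modulation spectra, and form a power ratio in a few modulation bands. This is a didactic sketch only, not the sEPSM nor the neural shuffled-correlogram analysis described above; the signals and band edges are invented.

    ```python
    # Simplified envelope-SNR sketch (acoustic domain, invented signals); the actual
    # sEPSM and the neural correlogram-based SNRenv computation are more involved.
    import numpy as np
    from scipy.signal import hilbert

    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    speech_like = np.sin(2 * np.pi * 1000 * t) * (1 + 0.8 * np.sin(2 * np.pi * 4 * t))  # 4 Hz modulated tone
    noise = np.random.default_rng(1).normal(0, 0.5, t.size)

    def modulation_power(x, f_lo, f_hi):
        """Envelope power of x within a modulation-frequency band [f_lo, f_hi) Hz."""
        env = np.abs(hilbert(x))
        env = env - env.mean()                      # remove DC before taking the spectrum
        spec = np.abs(np.fft.rfft(env)) ** 2
        freqs = np.fft.rfftfreq(env.size, 1 / fs)
        band = (freqs >= f_lo) & (freqs < f_hi)
        return spec[band].sum()

    for f_lo, f_hi in [(1, 2), (2, 4), (4, 8), (8, 16)]:   # a few modulation bands
        p_sn = modulation_power(speech_like + noise, f_lo, f_hi)
        p_n = modulation_power(noise, f_lo, f_hi)
        snr_env = max(p_sn - p_n, 0.0) / p_n        # crude per-band SNRenv estimate
        print(f"{f_lo}-{f_hi} Hz: SNRenv = {snr_env:.2f}")
    ```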

  7. A model for asymmetric ballooning and analyses of ballooning behaviour of single rods with probabilistic methods

    Keusenhoff, J.G.; Schubert, J.D.; Chakraborty, A.K.

    1985-01-01

    Plastic deformation behaviour of Zircaloy cladding has been extensively examined in the past and is best described by a model for asymmetric deformation. Slight displacement between the pellet and cladding will always exist, and this leads to the formation of azimuthal temperature differences. The ballooning process is strongly temperature dependent and, as a result of the built-up temperature differences, differing deformation behaviours develop along the circumference of the cladding. The calculated ballooning of the cladding is mainly influenced by its temperature, the applied burst criterion and the parameters used in the deformation model. All these influencing parameters possess uncertainties. In order to quantify these uncertainties and to estimate distribution functions of important parameters such as temperature and deformation, the response surface method was applied. For a hot rod, the calculated standard deviation of the cladding temperature amounts to 50 K. From this high value, the large influence of the external cooling conditions on the deformation and burst behaviour of the cladding can be inferred. In an additional statistical examination, the parameters of the deformation and burst models were included and their influence on the deformation of the rod was studied. (author)
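
    The response-surface step mentioned above — fitting a low-order polynomial to code outputs and propagating input uncertainties through it — can be sketched as follows. The "code" here is a made-up algebraic stand-in for a cladding-temperature calculation, and the input distributions are purely illustrative.

    ```python
    # Sketch of the response surface method: fit a quadratic surrogate to a handful of
    # "code runs" and propagate input uncertainty through it by Monte Carlo.
    # The underlying model and the input distributions are invented stand-ins.
    import numpy as np

    rng = np.random.default_rng(42)

    def expensive_code(heat_flux, coolant_temp):
        """Stand-in for a cladding-temperature calculation (not a real fuel-rod code)."""
        return 600.0 + 0.8 * heat_flux + 0.5 * coolant_temp + 0.002 * heat_flux * coolant_temp

    # Design points (a small grid design) and corresponding "code" outputs.
    hf = np.linspace(80, 120, 5)
    ct = np.linspace(280, 320, 5)
    HF, CT = np.meshgrid(hf, ct)
    T = expensive_code(HF, CT).ravel()

    # Quadratic response surface: T ~ b0 + b1*hf + b2*ct + b3*hf*ct + b4*hf^2 + b5*ct^2
    X = np.column_stack([np.ones(T.size), HF.ravel(), CT.ravel(),
                         (HF * CT).ravel(), HF.ravel() ** 2, CT.ravel() ** 2])
    beta, *_ = np.linalg.lstsq(X, T, rcond=None)

    # Propagate assumed input uncertainties through the cheap surrogate.
    hf_s = rng.normal(100, 8, 100_000)
    ct_s = rng.normal(300, 10, 100_000)
    Xs = np.column_stack([np.ones(hf_s.size), hf_s, ct_s, hf_s * ct_s, hf_s ** 2, ct_s ** 2])
    T_s = Xs @ beta
    print(f"mean T = {T_s.mean():.1f} K, std T = {T_s.std():.1f} K")
    ```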

  8. Analyses of Potential Predictive Markers and Response to Targeted Therapy in Patients with Advanced Clear-cell Renal Cell Carcinoma

    Yan Song

    2015-01-01

    Full Text Available Background: Vascular endothelial growth factor-targeted agents are standard treatments in advanced clear-cell renal cell carcinoma (ccRCC), but biomarkers of activity are lacking. The aim of this study was to investigate Von Hippel-Lindau (VHL) gene status, vascular endothelial growth factor receptor (VEGFR) or stem cell factor receptor (KIT) expression, and their relationships with the characteristics and clinical outcome of advanced ccRCC. Methods: A total of 59 patients who received targeted treatment with sunitinib or pazopanib were evaluated at the Cancer Hospital and Institute, Chinese Academy of Medical Sciences, between January 2010 and November 2012. Paraffin-embedded tumor samples were collected, and the status of the VHL gene and the expression of VEGFR and KIT were determined by VHL sequence analysis and immunohistochemistry. Clinical-pathological features were collected, and efficacy outcomes such as response rate, median progression-free survival (PFS) and overall survival (OS) were calculated and then compared based on expression status. The Chi-square test, the Kaplan-Meier method, and the log-rank test were used for statistical analyses. Results: Of 59 patients, objective responses were observed in 28 patients (47.5%). The median PFS was 13.8 months and median OS was 39.9 months. There was an improved PFS in patients with the following clinical features: male gender, number of metastatic sites 2 or less, VEGFR-2 positive or KIT positive. Eleven patients (18.6%) had evidence of VHL mutation, with an objective response rate of 45.5%, which showed no difference from that of patients without VHL mutation (47.9%). VHL mutation status did not correlate with either overall response rate (P = 0.938) or PFS (P = 0.277). The PFS was 17.6 months and 22.2 months in VEGFR-2 positive patients and KIT positive patients, respectively, which was significantly longer than that of VEGFR-2 or KIT negative patients (P = 0.026 and P = 0.043). Conclusion
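
    The survival comparisons reported above (Kaplan-Meier curves with a log-rank test for PFS by marker status) follow a standard recipe; a minimal sketch with the lifelines package and simulated data is shown below. Column names and numbers are placeholders, not the study data.

    ```python
    # Kaplan-Meier estimate and log-rank test by marker status, on simulated data
    # (placeholder for the VEGFR-2/KIT comparisons in the study).
    import numpy as np
    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(7)
    n = 59
    df = pd.DataFrame({
        "marker_positive": rng.integers(0, 2, n),
        "pfs_months": rng.exponential(14, n),
        "progressed": rng.integers(0, 2, n),      # 1 = event observed, 0 = censored
    })

    km = KaplanMeierFitter()
    for flag, group in df.groupby("marker_positive"):
        km.fit(group["pfs_months"], event_observed=group["progressed"],
               label=f"marker={flag}")
        print(flag, km.median_survival_time_)

    pos, neg = df[df.marker_positive == 1], df[df.marker_positive == 0]
    result = logrank_test(pos["pfs_months"], neg["pfs_months"],
                          event_observed_A=pos["progressed"],
                          event_observed_B=neg["progressed"])
    print(result.p_value)
    ```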

  9. Constitutive modeling of shock response of PTFE

    Brown, Eric N [Los Alamos National Laboratory; Reanyansky, Anatoly D [DSTO, AUSTRALIA; Bourne, Neil K [AWE, UK; Millett, Jeremy C F [AWE, UK

    2009-01-01

    PTFE (polytetrafluoroethylene) is a complex material that attracts the attention of shock physics researchers because it has amorphous and crystalline components. In turn, the crystalline component has four known phases, with a high-pressure transition to phase III. At the same time, as has recently been studied using spectrometry, the crystalline region grows with load. Stress and velocity shock-wave profiles acquired recently with embedded gauges demonstrate features that may be related to impedance mismatches between regions subjected to transitions that produce density and modulus variations. We consider the above-mentioned amorphous-to-crystalline transition and the high-pressure Phase II-to-III transition as possible candidates for the analysis. The present work utilizes a multi-phase rate-sensitive model to describe the shock response of the PTFE material. One-dimensional experimental shock wave profiles are compared with profiles calculated with kinetics describing the transitions. The objective of this study is to understand the role of the various transitions in the shock response of PTFE.

  10. Systems-wide analyses of mucosal immune responses to Helicobacter pylori at the interface between pathogenicity and symbiosis

    Kronsteiner, Barbara; Bassaganya-Riera, Josep; Philipson, Casandra; Viladomiu, Monica; Carbo, Adria; Abedi, Vida; Hontecillas, Raquel

    2016-01-01

    Abstract Helicobacter pylori is the dominant member of the gastric microbiota in over half of the human population, of which 5–15% develop gastritis or gastric malignancies. Immune responses to H. pylori are characterized by mixed T helper cell, cytotoxic T cell and NK cell responses. The presence of Tregs is essential for the control of gastritis, and together with regulatory CX3CR1+ mononuclear phagocytes and immune-evasion strategies they enable life-long persistence of H. pylori. This H. pylori-induced regulatory environment might contribute to its cross-protective effect in inflammatory bowel disease and obesity. Here we review host-microbe interactions, the development of pro- and anti-inflammatory immune responses, and how the latter contribute to H. pylori's role as a beneficial member of the gut microbiota. Furthermore, we present the integration of existing and new data into a computational/mathematical model and its use for the investigation of immunological mechanisms underlying the initiation, progression and outcomes of H. pylori infection. PMID:26939848

  11. A Mathematical Model of Cardiovascular Response to Dynamic Exercise

    Magosso, E

    2001-01-01

    A mathematical model of cardiovascular response to dynamic exercise is presented. The model includes the pulsating heart, the systemic and pulmonary circulation, a functional description of muscle...

  12. Microsegregation in multicomponent alloy analysed by quantitative phase-field model

    Ohno, M; Takaki, T; Shibuta, Y

    2015-01-01

    Microsegregation behaviour in a ternary alloy system has been analysed by means of quantitative phase-field (Q-PF) simulations, with particular attention directed at the influence of the tie-line shift stemming from the different liquid diffusivities of the solute elements. The Q-PF model developed for non-isothermal solidification in multicomponent alloys with non-zero solid diffusivities was applied to the analysis of microsegregation in a ternary alloy consisting of fast- and slow-diffusing solute elements. The accuracy of the Q-PF simulation was first verified by performing a convergence test of the segregation ratio with respect to the interface thickness. From one-dimensional analysis, it was found that the microsegregation of the slow-diffusing element is reduced due to the tie-line shift. In two-dimensional simulations, refinement of the microstructure, viz. a decrease of the secondary arm spacing, occurs at low cooling rates due to the formation of a diffusion layer of the slow-diffusing element. This yields reductions in the degree of microsegregation for both the fast- and slow-diffusing elements. Importantly, over a wide range of cooling rates, the degree of microsegregation of the slow-diffusing element is always lower than that of the fast-diffusing element, which is entirely ascribable to the influence of the tie-line shift. (paper)

  13. Modelling and Analysing Access Control Policies in XACML 3.0

    Ramli, Carroline Dewi Puspa Kencana

    (c.f. GM03,Mos05,Ris13) and manual analysis of the overall effect and consequences of a large XACML policy set is a very daunting and time-consuming task. In this thesis we address the problem of understanding the semantics of access control policy language XACML, in particular XACML version 3.0....... The main focus of this thesis is modelling and analysing access control policies in XACML 3.0. There are two main contributions in this thesis. First, we study and formalise XACML 3.0, in particular the Policy Decision Point (PDP). The concrete syntax of XACML is based on the XML format, while its standard...... semantics is described normatively using natural language. The use of English text in standardisation leads to the risk of misinterpretation and ambiguity. In order to avoid this drawback, we define an abstract syntax of XACML 3.0 and a formal XACML semantics. Second, we propose a logic-based XACML analysis...

  14. TIDALLY HEATED TERRESTRIAL EXOPLANETS: VISCOELASTIC RESPONSE MODELS

    Henning, Wade G.; O'Connell, Richard J.; Sasselov, Dimitar D.

    2009-01-01

    Tidal friction in exoplanet systems, driven by orbits that allow for durable nonzero eccentricities at short heliocentric periods, can generate internal heating far in excess of the conditions observed in our own solar system. Secular perturbations or a notional 2:1 resonance between a hot Earth and hot Jupiter can be used as a baseline to consider the thermal evolution of convecting bodies subject to strong viscoelastic tidal heating. We compare results first from simple models using a fixed Quality factor and Love number, and then for three different viscoelastic rheologies: the Maxwell body, the Standard Anelastic Solid (SAS), and the Burgers body. The SAS and Burgers models are shown to alter the potential for extreme tidal heating by introducing the possibility of new equilibria and multiple response peaks. We find that tidal heating tends to exceed radionuclide heating at periods below 10-30 days, and exceed insolation only below 1-2 days. Extreme cases produce enough tidal heat to initiate global-scale partial melting, and an analysis of tidal limiting mechanisms such as advective cooling for earthlike planets is discussed. To explore long-term behaviors, we map equilibrium points between convective heat loss and tidal heat input as functions of eccentricity. For the periods and magnitudes discussed, we show that tidal heating, if significant, is generally detrimental to the width of habitable zones.
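
    For the fixed-Q, fixed-Love-number baseline mentioned above, the commonly used expression for eccentricity-tide heating of a synchronously rotating planet is dE/dt = (21/2) (k2/Q) G M*^2 n R^5 e^2 / a^6, with n the mean motion. The sketch below evaluates it for illustrative parameter values chosen here (solar-mass host, Earth-sized planet, 0.05 AU orbit); the viscoelastic (Maxwell, SAS, Burgers) responses studied in the paper effectively replace the constant k2/Q with a frequency- and temperature-dependent quantity.

    ```python
    # Fixed-Q eccentricity-tide heating (standard small-e formula; illustrative values only).
    import numpy as np

    G = 6.674e-11                 # m^3 kg^-1 s^-2
    M_star = 1.989e30             # kg, solar-mass host (assumption)
    R_p = 6.371e6                 # m, Earth-sized planet (assumption)
    k2_over_Q = 0.3 / 100.0       # assumed Love number k2 = 0.3 and Q = 100
    a = 0.05 * 1.496e11           # m, 0.05 AU orbit (assumption)
    e = 0.01                      # orbital eccentricity

    n = np.sqrt(G * M_star / a**3)                        # mean motion, rad/s
    E_dot = 10.5 * k2_over_Q * G * M_star**2 * R_p**5 * n * e**2 / a**6
    print(f"tidal heating = {E_dot:.2e} W")
    ```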

  15. Mortality from non‐malignant respiratory diseases among people with silicosis in Hong Kong: exposure–response analyses for exposure to silica dust

    Tse, L A; Yu, I T S; Leung, C C; Tam, W; Wong, T W

    2007-01-01

    Objectives To examine the exposure–response relationships between various indices of exposure to silica dust and the mortality from non‐malignant respiratory diseases (NMRDs) or chronic obstructive pulmonary diseases (COPDs) among a cohort of workers with silicosis in Hong Kong. Methods The concentrations of respirable silica dust were assigned to each industry and job task according to historical industrial hygiene measurements documented previously in Hong Kong. Exposure indices included cumulative dust exposure (CDE) and mean dust concentration (MDC). Penalised smoothing spline models were used as a preliminary step to detect outliers and guide further analyses. Multiple Cox's proportional hazard models were used to estimate the dust effects on the risk of mortality from NMRDs or COPDs after truncating the highest exposures. Results 371 of the 853 (43.49%) deaths occurring among 2789 workers with silicosis during 1981–99 were from NMRDs, and 101 (27.22%) NMRDs were COPDs. Multiple Cox's proportional hazard models showed that CDE (p = 0.009) and MDC (pcaisson workers and among those ever employed in other occupations with high exposure to silica dust. No exposure–response relationship was observed for surface construction workers with low exposures. A clear upward trend for both NMRDs and COPDs mortality was found with increasing severity of radiological silicosis. Conclusion This study documented an exposure–response relationship between exposure to silica dust and the risk of death from NMRDs or COPDs among workers with silicosis, except for surface construction workers with low exposures. The risk of mortality from NMRDs increased significantly with the progression of International Labor Organization categories, independent of dust effects. PMID:16973737

  16. Mortality from non-malignant respiratory diseases among people with silicosis in Hong Kong: exposure-response analyses for exposure to silica dust.

    Tse, L A; Yu, I T S; Leung, C C; Tam, W; Wong, T W

    2007-02-01

    To examine the exposure-response relationships between various indices of exposure to silica dust and the mortality from non-malignant respiratory diseases (NMRDs) or chronic obstructive pulmonary diseases (COPDs) among a cohort of workers with silicosis in Hong Kong. The concentrations of respirable silica dust were assigned to each industry and job task according to historical industrial hygiene measurements documented previously in Hong Kong. Exposure indices included cumulative dust exposure (CDE) and mean dust concentration (MDC). Penalised smoothing spline models were used as a preliminary step to detect outliers and guide further analyses. Multiple Cox's proportional hazard models were used to estimate the dust effects on the risk of mortality from NMRDs or COPDs after truncating the highest exposures. 371 of the 853 (43.49%) deaths occurring among 2789 workers with silicosis during 1981-99 were from NMRDs, and 101 (27.22%) NMRDs were COPDs. Multiple Cox's proportional hazard models showed that CDE (p = 0.009) and MDC (pcaisson workers and among those ever employed in other occupations with high exposure to silica dust. No exposure-response relationship was observed for surface construction workers with low exposures. A clear upward trend for both NMRDs and COPDs mortality was found with increasing severity of radiological silicosis. This study documented an exposure-response relationship between exposure to silica dust and the risk of death from NMRDs or COPDs among workers with silicosis, except for surface construction workers with low exposures. The risk of mortality from NMRDs increased significantly with the progression of International Labor Organization categories, independent of dust effects.
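
    A minimal sketch of the Cox proportional hazards step described in this record — regressing respiratory mortality on a cumulative exposure index — is given below with the lifelines package. The column names, the simulated exposures and the censoring scheme are placeholders, not the silicosis cohort data.

    ```python
    # Cox proportional hazards sketch for an exposure-response analysis on simulated
    # data (placeholder for the cohort's CDE/MDC exposure indices).
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(3)
    n = 2789
    cde = rng.lognormal(mean=1.0, sigma=0.6, size=n)          # cumulative dust exposure (arbitrary units)
    age = rng.normal(55, 8, n)
    hazard = 0.02 * np.exp(0.15 * (cde - cde.mean()) + 0.04 * (age - 55))
    time_to_event = rng.exponential(1.0 / hazard)
    follow_up = np.minimum(time_to_event, 19.0)               # administrative censoring at 19 years

    df = pd.DataFrame({
        "years": follow_up,
        "died_nmrd": (time_to_event <= 19.0).astype(int),
        "cde": cde,
        "age_at_entry": age,
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="years", event_col="died_nmrd")
    cph.print_summary()
    ```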

  17. Investigation on the Cyclic Response of Superelastic Shape Memory Alloy (SMA) Slit Damper Devices Simulated by Quasi-Static Finite Element (FE) Analyses

    Jong Wan Hu

    2014-02-01

    Full Text Available In this paper, the superelastic shape memory alloy (SMA) slit damper system, as an alternative design approach for steel structures, is evaluated with respect to inelastic behavior simulated by refined finite element (FE) analyses. Although the steel slit dampers conventionally used for aseismic design are able to dissipate a considerable amount of energy generated by the plastic yielding of the base materials, large permanent deformation may occur in the entire structure. After strong seismic events, extra damage repair costs are required to restore the original configuration and to replace defective devices with new ones. Innovative slit dampers fabricated from superelastic SMAs, which automatically recover their initial configuration upon the removal of stress without heat treatment, are introduced with a view toward mitigating the problem of permanent deformation. The cyclically tested FE models are calibrated to experimental results for the purpose of predicting accurate behavior. This study also focuses on the material constitutive model, which is able to reproduce the inherent behavior of superelastic SMA materials by taking the phase transformation between austenite and martensite into consideration. The responses of SMA slit dampers are compared to those of steel slit dampers. Axial stress and strain components are also investigated in the FE models under cyclic loading in an effort to validate the adequacy of the FE modeling and then to compare the two slit damper systems. It is shown that SMA slit dampers exhibit many structural advantages in terms of ultimate strength, moderate energy dissipation and recentering capability.

  18. Insight into the potential for DNA idiotypic fusion vaccines designed for patients by analysing xenogeneic anti-idiotypic antibody responses

    Forconi, Francesco; King, Catherine A; Sahota, Surinder S; Kennaway, Christopher K; Russell, Nigel H; Stevenson, Freda K

    2002-01-01

    DNA vaccines induce immune responses against encoded proteins, and have clear potential for cancer vaccines. For B-cell tumours, idiotypic (Id) immunoglobulin encoded by the variable region genes provides a target antigen. When assembled as single chain Fv (scFv), and fused to an immunoenhancing sequence from tetanus toxin (TT), DNA fusion vaccines induce anti-Id antibodies. In lymphoma models, these antibodies have a critical role in mediating protection. For application to patients with lymphoma, two questions arise: first, whether pre-existing antibody against TT affects induction of anti-scFv antibodies; second, whether individual human scFv fusion sequences are able to fold consistently to generate antibodies able to recognize private conformational Id determinants expressed by tumour cells. Using xenogeneic vaccination with scFv sequences from four patients, we have shown that pre-existing anti-TT immunity slows, but does not prevent, anti-Id antibody responses. To determine folding, we have monitored the ability of nine DNAscFv–FrC patients' vaccines to induce xenogeneic anti-Id antibodies. Antibodies were induced in all cases, and were strikingly specific for each patient's immunoglobulin with little cross-reactivity between patients, even when similar VH or VL genes were involved. Blocking experiments with human serum confirmed reactivity against private determinants in 26–97% of total antibody. Both immunoglobulin G1 (IgG1) and IgG2a subclasses were present at 1·3 : 1–15 : 1 consistent with a T helper 2-dominated response. Xenogeneic vaccination provides a simple route for testing individual patients' DNAscFv–FrC fusion vaccines, and offers a strategy for production of anti-Id antibodies. The findings underpin the approach of DNA idiotypic fusion vaccination for patients with B-cell tumours. PMID:12225361

  19. VOC composition of current motor vehicle fuels and vapors, and collinearity analyses for receptor modeling.

    Chin, Jo-Yu; Batterman, Stuart A

    2012-03-01

    The formulation of motor vehicle fuels can alter the magnitude and composition of evaporative and exhaust emissions occurring throughout the fuel cycle. Information regarding the volatile organic compound (VOC) composition of motor fuels other than gasoline is scarce, especially for bioethanol and biodiesel blends. This study examines the liquid and vapor (headspace) composition of four contemporary and commercially available fuels: gasoline, E85, ultra-low sulfur diesel (ULSD), and B20 (20% soy-biodiesel and 80% ULSD). The composition of gasoline and E85, in both the neat fuel and the headspace vapor, was dominated by aromatics and n-heptane. Despite its low gasoline content, E85 vapor contained higher concentrations of several VOCs than gasoline vapor, likely due to adjustments in its formulation. Temperature changes produced greater changes in the partial pressures of 17 VOCs in E85 than in gasoline, and large shifts in the VOC composition. B20 and ULSD were dominated by C9 to C16 n-alkanes and low levels of aromatics, and the two fuels had similar headspace vapor compositions and concentrations. While the headspace composition predicted using vapor-liquid equilibrium theory was closely correlated with measurements, E85 vapor concentrations were underpredicted. Based on variance decomposition analyses, the VOC compositions of gasoline and diesel fuels and their vapors were distinct, but B20 and ULSD fuels and vapors were highly collinear. These results can be used to estimate fuel-related emissions and exposures, particularly in receptor models that apportion emission sources, and the collinearity analysis suggests that gasoline- and diesel-related emissions can be distinguished. Copyright © 2011 Elsevier Ltd. All rights reserved.
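
    The collinearity question raised for receptor modeling — whether, for example, B20 and ULSD profiles are too similar to separate as source factors — is typically probed with condition numbers or variance inflation factors. A generic numpy sketch is below; the species-by-source matrix is invented, not the measured fuel and vapor profiles.

    ```python
    # Generic collinearity diagnostics (condition number and VIFs) for a matrix of
    # candidate source profiles; the numbers are invented, not the measured data.
    import numpy as np

    # Rows = VOC species, columns = candidate source profiles (e.g. gasoline, E85, ULSD, B20).
    profiles = np.array([
        [0.30, 0.05, 0.01, 0.01],
        [0.25, 0.08, 0.02, 0.02],
        [0.10, 0.60, 0.01, 0.01],
        [0.02, 0.01, 0.45, 0.43],
        [0.01, 0.01, 0.38, 0.37],
    ])

    # Condition number of the column-scaled profile matrix: large values flag collinearity.
    scaled = profiles / np.linalg.norm(profiles, axis=0)
    print("condition number:", np.linalg.cond(scaled))

    # Variance inflation factor for each profile: regress it on the others.
    for j in range(profiles.shape[1]):
        y = profiles[:, j]
        X = np.delete(profiles, j, axis=1)
        X = np.column_stack([np.ones(X.shape[0]), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1.0 - resid.var() / y.var()
        print(f"profile {j}: VIF = {1.0 / max(1.0 - r2, 1e-12):.1f}")
    ```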

  20. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    Kolkman, M. J.; Kok, M.; van der Veen, A.

    , uncertainty and disagreement) can be positioned in the framework, as can the communities of knowledge construction and valuation involved in the solution of these problems (core science, applied science, professional consultancy, and “post-normal” science). Mental model maps, this research hypothesises, are suitable for analysing the above aspects of the problem. This hypothesis is tested for the case of the Zwolle storm surge barrier. Analysis can aid integration between disciplines and participation of public stakeholders, and can stimulate learning processes. Mental model mapping is recommended to visualise the use of knowledge, to analyse difficulties in the problem-solving process, and to aid information transfer and communication. Mental model mapping helps scientists to shape their new, post-normal responsibilities in a manner that complies with integrity when dealing with unstructured problems in complex, multifunctional systems.

  1. Researches on modeling of nuclear power plants for dynamic response analysis

    Watabe, M.; Fukuzawa, R.; Chiba, O.; Toritani, T.

    1983-01-01

    The authors tried to establish a rational and economical model for the response to the vertical component of ground motion, considering dynamic soil-structure interaction effects and the flexibility of the mat foundation. Three types of models were introduced. 1) Finite element model. Two cases of response analyses under harmonic excitation were performed with the finite element model, in which the mat foundation was treated as a rigid body and as an elastic body. The dynamic soil-structure interaction effects were evaluated on the condition that the soil was a semi-infinite elastic medium. 2) Sophisticated mass-spring-dashpot model. Two cases of response analyses under harmonic excitation were performed to simulate the dynamic characteristics of the finite element models mentioned above using the sophisticated mass-spring-dashpot model, in which the dynamic soil-structure interaction effects were evaluated with the same procedure applied to the finite element model. 3) Simplified mass-spring-dashpot model. Three types of simplified mass-spring-dashpot models were introduced, in which the dynamic soil-structure interaction effects were simplified. Response analyses under harmonic excitation and earthquake ground motions were performed in order to establish the rational and economical model. (orig./HP)
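
    The simplified mass-spring-dashpot idealization described above can be illustrated with a small frequency-domain calculation: assemble lumped mass, damping and stiffness matrices for a structure mass resting on a soil spring-dashpot and solve (-w^2 M + i w C + K) x = f for harmonic excitation. All numerical values are invented; the paper's models additionally represent mat flexibility and frequency-dependent soil impedance.

    ```python
    # Harmonic response of a 2-DOF lumped mass-spring-dashpot idealization
    # (structure mass on a soil spring/dashpot); parameter values are invented.
    import numpy as np

    m_structure, m_foundation = 2.0e6, 1.0e6          # kg
    k_structure, k_soil = 4.0e9, 2.0e9                # N/m
    c_structure, c_soil = 2.0e7, 8.0e7                # N s/m (soil dashpot stands in for radiation damping)

    M = np.diag([m_structure, m_foundation])
    K = np.array([[k_structure, -k_structure],
                  [-k_structure, k_structure + k_soil]])
    C = np.array([[c_structure, -c_structure],
                  [-c_structure, c_structure + c_soil]])
    f = np.array([0.0, 1.0e6])                        # harmonic force applied at the foundation, N

    for freq_hz in (1.0, 2.0, 5.0, 10.0):
        w = 2.0 * np.pi * freq_hz
        x = np.linalg.solve(-w**2 * M + 1j * w * C + K, f)
        print(f"{freq_hz:5.1f} Hz: |structure| = {abs(x[0]):.3e} m, |foundation| = {abs(x[1]):.3e} m")
    ```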

  2. Three-dimensional finite element model for flexible pavement analyses based field modulus measurements

    Lacey, G.; Thenoux, G.; Rodriguez-Roa, F.

    2008-01-01

    In accordance with the present development of empirical-mechanistic tools, this paper presents an alternative to traditional analysis methods for flexible pavements, using a three-dimensional finite element formulation based on a linear-elastic perfectly-plastic Drucker-Prager model for the granular soil layers and a linear-elastic stress-strain law for the asphalt layer. From the sensitivity analysis performed, it was found that variations of ±4° in the internal friction angle of the granular soil layers did not significantly affect the analyzed pavement response. On the other hand, a null dilation angle is conservatively proposed for design purposes. The use of a Light Falling Weight Deflectometer is also proposed as an effective and practical tool for on-site elastic modulus determination of granular soil layers. However, the stiffness value obtained from the tested layer should be corrected when the measured peak deflection and the peak force do not occur at the same time. In addition, some practical observations are given to achieve successful field measurements. The importance of using a 3D FE analysis to predict the maximum tensile strain at the bottom of the asphalt layer (related to pavement fatigue) and the maximum vertical compressive strain transmitted to the top of the granular soil layers (related to rutting) is also shown. (author)

  3. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input

  4. Process of Integrating Screening and Detailed Risk-based Modeling Analyses to Ensure Consistent and Scientifically Defensible Results

    Buck, John W.; McDonald, John P.; Taira, Randal Y.

    2002-01-01

    To support cleanup and closure of these tanks, modeling is performed to understand and predict potential impacts to human health and the environment. Pacific Northwest National Laboratory developed a screening tool for the United States Department of Energy, Office of River Protection that estimates the long-term human health risk, from a strategic planning perspective, posed by potential tank releases to the environment. This tool is being conditioned to more detailed model analyses to ensure consistency between studies and to provide scientific defensibility. Once the conditioning is complete, the system will be used to screen alternative cleanup and closure strategies. The integration of screening and detailed models provides consistent analyses, efficiencies in resources, and positive feedback between the various modeling groups. This approach of conditioning a screening methodology to more detailed analyses provides decision-makers with timely and defensible information and increases confidence in the results on the part of clients, regulators, and stakeholders

  5. A simple beam model to analyse the durability of adhesively bonded tile floorings in presence of shrinkage

    S. de Miranda

    2014-07-01

    Full Text Available A simple beam model for the evaluation of tile debonding due to substrate shrinkage is presented. The tile-adhesive-substrate package is modeled as an Euler-Bernoulli beam lying on a two-layer elastic foundation. An effective discrete model for inter-tile grouting is introduced with the aim of modelling workmanship defects due to partially filled groutings. The model is validated using the results of a 2D FE model. Different defect configurations and adhesive typologies are analysed, focusing attention on the prediction of normal stresses in the adhesive layer under the assumption of Mode I failure of the adhesive.
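
    A compact way to see how a tile-adhesive-substrate package behaves as a beam on an elastic foundation is a finite-difference solve of EI w'''' + k w = q. The sketch below uses a single-layer (Winkler) foundation and invented stiffness and load values, so it is only a simplified cousin of the two-layer foundation and discrete grouting model in the paper.

    ```python
    # Finite-difference solve of an Euler-Bernoulli beam on a Winkler foundation,
    # EI w'''' + k w = q, with pinned ends. Stiffnesses and load are invented values;
    # the paper uses a two-layer foundation and a discrete grouting model.
    import numpy as np

    L = 2.0            # m, length of the tile strip considered (assumed)
    EI = 5.0e3         # N m^2, bending stiffness of the tile-adhesive package (assumed)
    k = 2.0e7          # N/m^2, foundation modulus standing in for adhesive + substrate (assumed)
    q = -1.0e3         # N/m, distributed load standing in for the shrinkage-induced action (assumed)

    n = 201
    h = L / (n - 1)
    w = np.zeros(n)

    # Interior equations: (w[i-2] - 4w[i-1] + 6w[i] - 4w[i+1] + w[i+2]) * EI/h^4 + k*w[i] = q
    A = np.zeros((n - 2, n - 2))
    b = np.full(n - 2, q)
    for row, i in enumerate(range(1, n - 1)):
        for offset, coef in zip((-2, -1, 0, 1, 2), (1.0, -4.0, 6.0, -4.0, 1.0)):
            j = i + offset
            if j in (0, n - 1):
                continue                                  # pinned ends: w = 0 there
            elif j == -1:
                A[row, 0] += -coef * EI / h**4            # ghost node: w[-1] = -w[1] (w'' = 0 at x = 0)
            elif j == n:
                A[row, n - 3] += -coef * EI / h**4        # ghost node at the right end
            else:
                A[row, j - 1] += coef * EI / h**4
        A[row, i - 1] += k                                # Winkler foundation term

    w[1:-1] = np.linalg.solve(A, b)
    print(f"max deflection = {w.min():.3e} m at x = {np.argmin(w) * h:.2f} m")
    ```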

  6. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  7. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-01-01

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1

  8. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters.

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-09-11

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  9. Short-term pressure and temperature MSLB response analyses for large dry containment of the Maanshan nuclear power station

    Dai, Liang-Che, E-mail: lcdai@iner.gov.tw; Chen, Yen-Shu; Yuann, Yng-Ruey

    2014-12-15

    Highlights: • The GOTHIC code is used for the PWR dry containment pressure and temperature analysis. • Boundary conditions are hot standby and 102% power main steam line break accidents. • Containment pressure and temperature responses of GOTHIC are similar with FSAR. • The capability of the developed model to perform licensing calculation is assessed. - Abstract: Units 1 and 2 of the Maanshan nuclear power station are the typical Westinghouse three-loop PWR (pressurized water reactor) with large dry containments. In this study, the containment analysis program GOTHIC is adopted for the dry containment pressure and temperature analysis. Free air space and sump of the PWR dry containment are individually modeled as control volumes. The containment spray system and fan cooler unit are also considered in the GOTHIC model. The blowdown mass and energy data of the main steam line break (hot standby condition and various reactor thermal power levels) are tabulated in the Maanshan Final Safety Analysis Report (FSAR) 6.2 which could be used as the boundary conditions for the containment model. The calculated containment pressure and temperature behaviors of the selected cases are in good agreement with the FSAR results. In this study, hot standby and 102% reactor thermal power main steam line break accidents are selected. The calculated peak containment pressure is 323.50 kPag (46.92 psig) for hot standby MSLB, which is a little higher than the FSAR value of 311.92 kPag (45.24 psig). But it is still below the design value of 413.69 kPag (60 psig). The calculated peak vapor temperature inside the containment is 187.0 °C (368.59 F) for 102% reactor thermal power MSLB, which is lower than the FSAR result of 194.42 °C (381.95 F). The effects of the containment spray system and fan cooler units could be clearly observed in the GOTHIC analysis. The calculated containment pressure and temperature behaviors of the selected cases are in good agreement with the FSAR

  10. Short-term pressure and temperature MSLB response analyses for large dry containment of the Maanshan nuclear power station

    Dai, Liang-Che; Chen, Yen-Shu; Yuann, Yng-Ruey

    2014-01-01

    Highlights: • The GOTHIC code is used for the PWR dry containment pressure and temperature analysis. • Boundary conditions are hot standby and 102% power main steam line break accidents. • Containment pressure and temperature responses of GOTHIC are similar with FSAR. • The capability of the developed model to perform licensing calculation is assessed. - Abstract: Units 1 and 2 of the Maanshan nuclear power station are the typical Westinghouse three-loop PWR (pressurized water reactor) with large dry containments. In this study, the containment analysis program GOTHIC is adopted for the dry containment pressure and temperature analysis. Free air space and sump of the PWR dry containment are individually modeled as control volumes. The containment spray system and fan cooler unit are also considered in the GOTHIC model. The blowdown mass and energy data of the main steam line break (hot standby condition and various reactor thermal power levels) are tabulated in the Maanshan Final Safety Analysis Report (FSAR) 6.2 which could be used as the boundary conditions for the containment model. The calculated containment pressure and temperature behaviors of the selected cases are in good agreement with the FSAR results. In this study, hot standby and 102% reactor thermal power main steam line break accidents are selected. The calculated peak containment pressure is 323.50 kPag (46.92 psig) for hot standby MSLB, which is a little higher than the FSAR value of 311.92 kPag (45.24 psig). But it is still below the design value of 413.69 kPag (60 psig). The calculated peak vapor temperature inside the containment is 187.0 °C (368.59 F) for 102% reactor thermal power MSLB, which is lower than the FSAR result of 194.42 °C (381.95 F). The effects of the containment spray system and fan cooler units could be clearly observed in the GOTHIC analysis. The calculated containment pressure and temperature behaviors of the selected cases are in good agreement with the FSAR

  11. Modeling Rabbit Responses to Single and Multiple Aerosol ...

    Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specifying dose-response functions and the time between exposure and the time-to-death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
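
    The baseline model structure described above — an exponential dose-response for the probability of death combined with a Weibull time-to-death distribution — can be sketched as a simple simulator. The parameter values below are arbitrary placeholders, not the fitted rabbit-model estimates.

    ```python
    # Sketch of the baseline survival-model structure: exponential dose-response for
    # mortality plus a Weibull time-to-death distribution. Parameters are placeholders,
    # not the fitted values from the rabbit datasets.
    import numpy as np

    rng = np.random.default_rng(11)

    def p_death(dose, k=2.0e-5):
        """Exponential dose-response: probability of death after an inhaled dose (spores)."""
        return 1.0 - np.exp(-k * dose)

    def sample_ttd(n, shape=2.2, scale=4.0):
        """Weibull time-to-death in days for animals that die (placeholder parameters)."""
        return scale * rng.weibull(shape, size=n)

    dose = 1.0e5                      # hypothetical aerosol dose, spores
    n_animals = 1000
    dies = rng.random(n_animals) < p_death(dose)
    ttd = sample_ttd(dies.sum())

    print(f"P(death | dose={dose:.0e}) = {p_death(dose):.2f}")
    print(f"simulated mortality = {dies.mean():.2f}, median TTD = {np.median(ttd):.1f} d")
    ```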

  12. Lichen Parmelia sulcata time response model to environmental elemental availability

    Reis, M.A.; Alves, L.C.; Freitas, M.C.; Os, B. van; Wolterbeek, H.Th.

    2000-01-01

    Transplants of the lichen Parmelia sulcata collected in an area previously identified as non-polluted were placed at six stations, five of which were near power plants and the other in an area expected to be a remote station. Together with the lichen transplants, two total-deposition collection buckets and an aerosol sampler were installed. Two lichens were recollected from each station every month. At the same time, the water collection buckets were replaced by new ones. The aerosol sampler filter was replaced every week, with collection effective only for 10 minutes out of every two hours; at the remote station, aerosol filters were replaced only once a month, with the same collection rate. Each station was run for a period of one year. Both lichens and aerosol filters were analysed by PIXE and INAA at ITN. Total deposition samples were dried under an infrared lamp, and afterwards acid digested and analysed by ICP-MS at the National Geological Survey of The Netherlands. Data for the three types of samples were then produced for a total of 16 elements. In this work we used the data set thus obtained to test a model for the time response of the lichen Parmelia sulcata to a new environment. (author)
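
    One common way to formalise a transplant's time response to a new environment is a first-order uptake-release balance, dC/dt = k_u * E(t) - k_e * C, where E(t) is the ambient elemental availability (e.g. from the aerosol and deposition samplers) and C the lichen concentration. The sketch below integrates that toy model with assumed rate constants and an assumed availability series; it illustrates the modelling idea rather than the specific model tested in the paper.

    ```python
    # Toy first-order uptake/release model for a lichen transplant responding to a new
    # environment: dC/dt = k_u * E(t) - k_e * C. Rate constants and the availability
    # series are assumed, not values fitted in the study.
    import numpy as np

    k_u = 0.05      # uptake rate constant, 1/day (assumed)
    k_e = 0.01      # release (clearance) rate constant, 1/day (assumed)
    dt = 1.0        # days
    days = 365

    E = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(days) / 365.0)   # seasonal availability (arbitrary units)
    C = np.empty(days)
    C[0] = 2.0                                                     # initial concentration from the clean site

    for t in range(1, days):
        C[t] = C[t - 1] + dt * (k_u * E[t - 1] - k_e * C[t - 1])   # explicit Euler step

    print(f"start {C[0]:.2f}, after 2 months {C[60]:.2f}, after 1 year {C[-1]:.2f} (arbitrary units)")
    ```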

  13. The care of Filipino juvenile offenders in residential facilities evaluated using the risk-need-responsivity model

    Spruit, A.; Wissink, I.B.; Stams, G.J.J.M.

    According to the risk-need-responsivity model of offender assessment and rehabilitation, treatment should target specific factors that are related to re-offending. This study evaluates the residential care of Filipino juvenile offenders using the risk-need-responsivity model. Risk analyses and

  14. Response margins investigation of piping dynamic analyses using the independent support motion method and PVRC [Pressure Vessel Research Committee] damping

    Bezler, P.; Wang, Y.K.; Reich, M.

    1988-03-01

    An evaluation of Independent Support Motion (ISM) response spectrum methods of analysis coupled with the Pressure Vessel Research Committee (PVRC) recommendation for damping, to compute the dynamic component of the seismic response of piping systems, was completed. Response estimates for five piping/structural systems were developed using fourteen variants of the ISM response spectrum method, the Uniform Support Motions response spectrum method and the ISM time history analysis method, all based on the PVRC recommendations for damping. The ISM/PVRC calculational procedures were found to exhibit orderly characteristics with levels of conservatism comparable to those obtained with the ISM/uniform damping procedures. Using the ISM/PVRC response spectrum method with absolute combination between group contributions provided consistently conservative results while using the ISM/PVRC response spectrum method with square root sum of squares combination between group contributions provided estimates of response which were deemed to be acceptable

  15. A Conceptual Model for Analysing Management Development in the UK Hospitality Industry

    Watson, Sandra

    2007-01-01

    This paper presents a conceptual, contingent model of management development. It explains the nature of the UK hospitality industry and its potential influence on MD practices, prior to exploring dimensions and relationships in the model. The embryonic model is presented as a model that can enhance our understanding of the complexities of the…

  16. Inverse analyses of effective diffusion parameters relevant for a two-phase moisture model of cementitious materials

    Addassi, Mouadh; Johannesson, Björn; Wadsö, Lars

    2018-01-01

    Here we present an inverse analysis approach to determining the two-phase moisture transport properties relevant to concrete durability modeling. The proposed moisture transport model was based on a continuum approach with two truly separate equations for the liquid and gas phases being connected...... test, and (iv) capillary suction test. Mass change over time, as obtained from the drying test, the two different cup test intervals and the capillary suction test, was used to obtain the effective diffusion parameters using the proposed inverse analysis approach. The moisture properties obtained

  17. Response Modelling of Bitumen, Bituminous Mastic and Mortar

    Woldekidan, M.F.

    2011-01-01

    This research focuses on testing and modelling the viscoelastic response of bituminous binders. The main goal is to find an appropriate response model for bituminous binders. The desired model should allow implementation into numerical environments such as ABAQUS. On the basis of such numerical

  18. Bayes factor covariance testing in item response models

    Fox, J.P.; Mulder, J.; Sinharay, Sandip

    2017-01-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning

  19. Bayes Factor Covariance Testing in Item Response Models

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-01-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning

  20. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. In contrast, proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, shows convergence problems. The random-effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification, together with convergence robustness, should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.

  1. Results of radiotherapy in craniopharyngiomas analysed by the linear quadratic model

    Guerkaynak, M. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Oezyar, E. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Zorlu, F. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Akyol, F.H. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Lale Atahan, I. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey)

    1994-12-31

    In 23 craniopharyngioma patients treated by limited surgery and external radiotherapy, the results concerning local control were analysed using the linear-quadratic formula. A biologically effective dose (BED) of 55 Gy, calculated with a time factor and an α/β value of 10 Gy, seemed to be adequate for local control. (orig.).

  2. From intermediate to final behavioral endpoints : Modeling cognitions in (cost-)effectiveness analyses in health promotion

    Prenger, Hendrikje Cornelia

    2012-01-01

    Cost-effectiveness analyses (CEAs) are considered an increasingly important tool in health promotion and psychology. In health promotion, adequate effectiveness data for innovative interventions are often lacking. In the case of many promising interventions, the available data are inadequate for CEAs due

  3. Analyses, algorithms, and computations for models of high-temperature superconductivity. Final report

    Du, Q.

    1997-01-01

    Under the sponsorship of the Department of Energy, the authors have achieved significant progress in the modeling, analysis, and computation of superconducting phenomena. The work so far has focused on mesoscale models as typified by the celebrated Ginzburg-Landau equations; these models are intermediate between the microscopic models (that can be used to understand the basic structure of superconductors and of the atomic and sub-atomic behavior of these materials) and the macroscale, or homogenized, models (that can be of use for the design of devices). The models they have considered include a time-dependent Ginzburg-Landau model, a variable-thickness thin film model, models for high values of the Ginzburg-Landau parameter, models that account for normal inclusions and fluctuations and Josephson effects, and the anisotropic Ginzburg-Landau and Lawrence-Doniach models for layered superconductors, including those with high critical temperatures. In each case, they have developed or refined the models, derived rigorous mathematical results that enhance the state of understanding of the models and their solutions, and developed, analyzed, and implemented finite element algorithms for the approximate solution of the model equations

  4. Analyses, algorithms, and computations for models of high-temperature superconductivity. Final technical report

    Gunzburger, M.D.; Peterson, J.S.

    1998-01-01

    Under the sponsorship of the Department of Energy, the authors have achieved significant progress in the modeling, analysis, and computation of superconducting phenomena. Their work has focused on mesoscale models as typified by the celebrated Ginzburg-Landau equations; these models are intermediate between the microscopic models (that can be used to understand the basic structure of superconductors and of the atomic and sub-atomic behavior of these materials) and the macroscale, or homogenized, models (that can be of use for the design of devices). The models the authors have considered include a time-dependent Ginzburg-Landau model, a variable-thickness thin film model, models for high values of the Ginzburg-Landau parameter, models that account for normal inclusions and fluctuations and Josephson effects, and the anisotropic Ginzburg-Landau and Lawrence-Doniach models for layered superconductors, including those with high critical temperatures. In each case, they have developed or refined the models, derived rigorous mathematical results that enhance the state of understanding of the models and their solutions, and developed, analyzed, and implemented finite element algorithms for the approximate solution of the model equations

  5. Bio-economic farm modelling to analyse agricultural land productivity in Rwanda

    Bidogeza, J.C.

    2011-01-01

    Keywords: Rwanda; farm household typology; sustainable technology adoption; multivariate analysis;
    land degradation; food security; bioeconomic model; crop simulation models; organic fertiliser; inorganic fertiliser; policy incentives

    In Rwanda, land degradation contributes to the

  6. Scalable Coupling of Multiscale AEH and PARADYN Analyses for Impact Modeling

    Valisetty, Rama R; Chung, Peter W; Namburu, Raju R

    2005-01-01

    .... An asymptotic expansion homogenization (AEH)-based microstructural model available for modeling microstructural aspects of modern armor materials is coupled with PARADYN, a parallel explicit Lagrangian finite-element code...

  7. Corporate Social Responsibility Agreements Model for Community ...

    Michael

    2016-06-01

    Jun 1, 2016 ... aspect of Corporate Social Responsibility (CSR), to the extent that often .... intentions and implemented some community development projects, the .... Environmental Protection Agency, Police and civil society to solicit their ...

  8. Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability

    Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos

    2016-01-01

    We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R² = 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  9. Development of CFD fire models for deterministic analyses of the cable issues in the nuclear power plant

    Lin, C.-H.; Ferng, Y.-M.; Pei, B.-S.

    2009-01-01

    Additional fire barriers for electrical cables are required for the nuclear power plants (NPPs) in Taiwan due to the separation requirements of Appendix R to 10 CFR Part 50. Risk-informed fire analysis (RIFA) may provide a viable method to resolve these fire barrier issues. However, it is necessary to perform fire scenario analyses so that RIFA can quantitatively determine the risk related to the fire barrier wrap. CFD fire models are therefore proposed in this paper to support RIFA in resolving these issues. Three typical fire scenarios are selected to assess the present CFD models. Compared with the experimental data and other models' simulations, the present calculated results show reasonable agreement, indicating that the present CFD fire models can provide the quantitative information needed for RIFA analyses to relax the cable wrap requirements for NPPs.

  10. The application of model with lumped parameters for transient condition analyses of NPP

    Stankovic, B.; Stevanovic, V.

    1985-01-01

    The transient behaviour of NPP Krsko during a stuck-open pressurizer spray valve accident has been simulated by a lumped-parameter model of the PWR coolant system components, developed at the Faculty of Mechanical Engineering, University of Belgrade. The basic structure of the physical model consists of elementary volumes, which are characterised by process and state parameters, and junctions, which are characterised by geometrical and flow parameters. The process parameters obtained by the model RESI show qualitative agreement with the measured values, to the degree in which the actions of the engineered reactor safety systems and the emergency core cooling system are adequately modelled, in spite of the elementary physical model structure, the modelling of thermal processes in the reactor core only, and the equilibrium treatment of the pressurizer and steam generator. The pressurizer pressure and liquid level predicted by the non-equilibrium pressurizer model SOP show good agreement until the HIPS (high pressure pumps) are activated. (author)

  11. Conducting requirements analyses for research using routinely collected health data: a model driven approach.

    de Lusignan, Simon; Cashman, Josephine; Poh, Norman; Michalakidis, Georgios; Mason, Aaron; Desombre, Terry; Krause, Paul

    2012-01-01

    Medical research increasingly requires the linkage of data from different sources. Conducting a requirements analysis for a new application is an established part of software engineering, but rarely reported in the biomedical literature; and no generic approaches have been published as to how to link heterogeneous health data. Literature review was followed by a consensus process to define how requirements for research using multiple data sources might be modeled. We have developed a requirements analysis method: i-ScheDULEs. The first components of the modeling process are indexing and creating a rich picture of the research study. Secondly, we developed a series of reference models of progressive complexity: data flow diagrams (DFDs) to define data requirements; unified modeling language (UML) use case diagrams to capture study-specific and governance requirements; and finally, business process models, using business process modeling notation (BPMN). These requirements and their associated models should become part of research study protocols.

  12. Pathway models for analysing and managing the introduction of alien plant pests—an overview and categorization

    J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and effectiveness of management options....

  13. Tectonic and erosion-driven uplift for the Gamburtsev Mountains: a preliminary combined landscape analyses and flexural modelling approach

    Ferraccioli, Fausto; Anderson, Lester; Jamieson, Stewart; Bell, Robin; Rose, Kathryn; Jordan, Tom; Finn, Carol; Damaske, Detlef

    2013-04-01

    whether the modern Gamburtsevs may have been uplifted solely in response to significant changes in Cenozoic erosion patterns during the early stages of East Antarctic ice sheet formation that were superimposed upon an old remnant Permian-age rift flank. To address this question we combine results from: i) analyses of the subglacial landscape of the GSM that include valley network, hypsometry and geomorphic studies of the fluvial and glacial features identified within the range (Rose et al., 2013, EPSL, in review) with ii) preliminary flexural models of peak uplift caused by the isostatic responses to fluvial and glacial valley incision processes, both within the range and in the adjacent Lambert Glacier region. We also include in our geophysical relief and isostatic model calculations considerations of the major change in erosion rates since Oligocene times and the total amount of incision estimated for the Lambert Glacier system using the values proposed by Tochlin et al. (2012). Our models yield new estimates of peak uplift and regional lowering for continuous and broken-plate approximations that can also be used to assess the range of pre-incision elevation of the "Gamburtsev plateau". Our modelling outputs were also calibrated against the present-day elevations of up to 1500 m a.s.l. of uplifted Oligocene-early Miocene glacial-marine sediments in the Lambert Glacier (Hambrey et al., 2000, Geology).

  14. Economical analyses of build-operate-transfer model in establishing alternative power plants

    Yumurtaci, Zehra [Yildiz Technical University, Department of Mechanical Engineering, Y.T.U. Mak. Fak. Mak. Muh. Bolumu, Besiktas, 34349 Istanbul (Turkey)]. E-mail: zyumur@yildiz.edu.tr; Erdem, Hasan Hueseyin [Yildiz Technical University, Department of Mechanical Engineering, Y.T.U. Mak. Fak. Mak. Muh. Bolumu, Besiktas, 34349 Istanbul (Turkey)

    2007-01-15

    The most widely employed method to meet the increasing electricity demand is building new power plants. The most important issue in building new power plants is to find financial funds. Various models are employed, especially in developing countries, in order to overcome this problem and to find a financial source. One of these models is the build-operate-transfer (BOT) model. In this model, the investor raises all the funds for mandatory expenses and provides financing, builds the plant and, after a certain plant operation period, transfers the plant to the national power organization. In this model, the objective is to decrease the burden of power plants on the state budget. The most important issue in the BOT model is the dependence of the unit electricity cost on the transfer period. In this study, the model giving the unit electricity cost as a function of the transfer period for plants established according to the BOT model has been discussed. Unit electricity investment cost and unit electricity cost in relation to transfer period for plant types have been determined. Furthermore, unit electricity cost change depending on load factor, which is one of the parameters affecting annual electricity production, has been determined, and the results have been analyzed. This method can be employed for comparing the production costs of different plants that are planned to be established according to the BOT model, or it can be employed to determine the appropriateness of the BOT model.

  15. Economical analyses of build-operate-transfer model in establishing alternative power plants

    Yumurtaci, Zehra; Erdem, Hasan Hueseyin

    2007-01-01

    The most widely employed method to meet the increasing electricity demand is building new power plants. The most important issue in building new power plants is to find financial funds. Various models are employed, especially in developing countries, in order to overcome this problem and to find a financial source. One of these models is the build-operate-transfer (BOT) model. In this model, the investor raises all the funds for mandatory expenses and provides financing, builds the plant and, after a certain plant operation period, transfers the plant to the national power organization. In this model, the objective is to decrease the burden of power plants on the state budget. The most important issue in the BOT model is the dependence of the unit electricity cost on the transfer period. In this study, the model giving the unit electricity cost as a function of the transfer period for plants established according to the BOT model has been discussed. Unit electricity investment cost and unit electricity cost in relation to transfer period for plant types have been determined. Furthermore, unit electricity cost change depending on load factor, which is one of the parameters affecting annual electricity production, has been determined, and the results have been analyzed. This method can be employed for comparing the production costs of different plants that are planned to be established according to the BOT model, or it can be employed to determine the appropriateness of the BOT model.

  16. RETRAN nonequilibrium two-phase flow model for operational transient analyses

    Paulsen, M.P.; Hughes, E.D.

    1982-01-01

    The field balance equations, flow-field models, and equation of state for a nonequilibrium two-phase flow model for RETRAN are given. The differential field balance model equations are: (1) conservation of mixture mass; (2) conservation of vapor mass; (3) balance of mixture momentum; (4) a dynamic-slip model for the velocity difference; and (5) conservation of mixture energy. The equation of state is formulated such that the liquid phase may be subcooled, saturated, or superheated. The vapor phase is constrained to be at the saturation state. The dynamic-slip model includes wall-to-phase and interphase momentum exchanges. A mechanistic vapor generation model is used to describe vapor production under bulk subcooling conditions. The speed of sound for the mixture under nonequilibrium conditions is obtained from the equation of state formulation. The steady-state and transient solution methods are described
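
    As an illustration of the kind of field balance equations listed above, a generic one-dimensional form of the mixture-mass and vapor-mass equations is sketched below; this is a textbook formulation given for orientation, not the exact RETRAN equation set. Here \(\alpha\) is the vapor volume fraction, \(\rho_g\) and \(\rho_l\) the phase densities, \(u_m\) and \(u_g\) the mixture and vapor velocities, and \(\Gamma_g\) the vapor generation rate.

    \[
    \frac{\partial \rho_m}{\partial t} + \frac{\partial (\rho_m u_m)}{\partial x} = 0,
    \qquad
    \frac{\partial (\alpha \rho_g)}{\partial t} + \frac{\partial (\alpha \rho_g u_g)}{\partial x} = \Gamma_g,
    \qquad
    \rho_m = \alpha \rho_g + (1 - \alpha)\rho_l .
    \]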

  17. The modeling of response indicators of integrated water resources ...

    models were used to model and predict the relationship between water resources mobilization WRM and response variables in the ... to the fast growing demand of urban and rural populations ... Meteorological Organization (WMO). They fall.

  18. Response of subassembly model with internals

    Kennedy, J.M.; Belytschko, T.

    1977-01-01

    In safety analysis at the subassembly level, the following aspects of subassembly response are of concern: (1) the structural integrity of the subassembly within which the accident occurs; (2) the structural integrity of adjacent subassemblies, particularly the maintenance of sufficient cross sectional area for flow of the coolant; and (3) prevention of damage to fuel pins in the adjacent subassembly, for this could lead to additional energy release and thus the propagation of the accident. For the purpose of predicting the structural response in such accident environments, a program STRAW has been developed. This is a finite element program which can treat the structure-fluid system consisting of the coolant and the subassembly walls. Both material nonlinearities due to elastic-plastic response and geometric nonlinearities due to large displacements can be treated. The energy source can be represented either by a pressure-time history or an equation of state. (Auth.)

  19. Microarray and growth analyses identify differences and similarities of early corn response to weeds, shade, and nitrogen stress

    Weed interference with crop growth is often attributed to water, nutrient, or light competition; however, specific physiological responses to these stresses are not well described. This study’s objective was to compare growth, yield, and gene expression responses of corn to nitrogen (N), low light (...

  20. Complementary modelling approaches for analysing several effects of privatization on electricity investment

    Bunn, D.W.; Vlahos, K. [London Business School (United Kingdom); Larsen, E.R. [Bologna Univ. (Italy)

    1997-11-01

    This chapter examines two modelling approaches, optimisation and system dynamics, for describing the effects of the privatisation of the UK electric supply industry. Modelling the transfer of ownership effects is discussed, and the implications of the rate of return, tax and debt are considered. The modelling of the competitive effects is addressed, and the effects of market structure, risk and uncertainty, and strategic competition are explored in detail. (UK)

  1. Uncertainty analyses of the calibrated parameter values of a water quality model

    Rode, M.; Suhr, U.; Lindenschmidt, K.-E.

    2003-04-01

    For river basin management, water quality models are increasingly used for the analysis and evaluation of different management measures. However, substantial uncertainties exist in parameter values depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented, which considers the impact of available model calibration data and the variance of input variables. The investigation was conducted based on four extensive flow-time-related longitudinal surveys in the River Elbe in the years 1996 to 1999 with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model and uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis-Menten/Monod kinetics, which are used in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow-time-related measuring surveys very good agreement between model calculation and measured values can be obtained. If these parameters are applied to deviating boundary conditions, substantial errors in model calculation can occur. These uncertainties can be decreased with an increased calibration database. More reliable model parameters can be identified, which supply reasonable results for broader boundary conditions. The extension of the application of the parameter set to a wider range of water quality conditions leads to a slight reduction of the model precision for the specific water quality situation. Moreover, the investigations show that highly variable water quality variables like the algal biomass always allow a smaller forecast accuracy than variables with lower coefficients of variation, such as nitrate.
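
    The Michaelis-Menten/Monod kinetics mentioned above take the standard saturating form (a generic statement, not the specific QSIM parameterisation):

    \[
    \mu(S) = \mu_{\max}\,\frac{S}{K_S + S},
    \]

    where \(\mu_{\max}\) is the maximum growth rate, S the limiting nutrient concentration and \(K_S\) the half-saturation constant; these two parameters are typical of the quantities whose calibrated values carry the uncertainty discussed in the paper.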

  2. Multi-wheat-model ensemble responses to interannual climatic variability

    Ruane, A C; Hudson, N I; Asseng, S

    2016-01-01

    We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  3. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-01-01

    Objectives To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting 2 university hospitals in south-eastern Spain from April to October 2013. Design A correlational descriptive study. Participants A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. Results The differences in obstetrical results were (biomedical model/humanised model): onset of labour (spontaneous 66/137, augmentation 70/1, p=0.0005), pain relief (epidural 172/132, no pain relief 9/40, p=0.0005), mode of delivery (normal vaginal 140/165, instrumental 48/23, p=0.004), length of labour (0–4 hours 69/93, >4 hours 133/108, p=0.011), condition of perineum (intact perineum or tear 94/178, episiotomy 100/24, p=0.0005). The total questionnaire score (100) gave a mean (M) of 78.33 and SD of 8.46 in the biomedical model of care and an M of 82.01 and SD of 7.97 in the humanised model of care (p=0.0005). In the analysis of the results per items, statistical differences were found in 8 of the 9 subscales. The highest scores were reached in the humanised model of maternity care. Conclusions The humanised model of maternity care offers better obstetrical outcomes and women's satisfaction scores during the labour, birth and immediate postnatal period than does the biomedical model. PMID:27566632

  4. A note on monotonicity of item response functions for ordered polytomous item response theory models.

    Kang, Hyeon-Ah; Su, Ya-Hui; Chang, Hua-Hua

    2018-03-08

    A monotone relationship between a true score (τ) and a latent trait level (θ) has been a key assumption for many psychometric applications. The monotonicity property in dichotomous response models is evident as a result of a transformation via a test characteristic curve. Monotonicity in polytomous models, in contrast, is not immediately obvious because item response functions are determined by a set of response category curves, which are conceivably non-monotonic in θ. The purpose of the present note is to demonstrate strict monotonicity in ordered polytomous item response models. Five models that are widely used in operational assessments are considered for proof: the generalized partial credit model (Muraki, 1992, Applied Psychological Measurement, 16, 159), the nominal model (Bock, 1972, Psychometrika, 37, 29), the partial credit model (Masters, 1982, Psychometrika, 47, 147), the rating scale model (Andrich, 1978, Psychometrika, 43, 561), and the graded response model (Samejima, 1972, A general model for free-response data (Psychometric Monograph no. 18). Psychometric Society, Richmond). The study asserts that the item response functions in these models strictly increase in θ and thus there exists strict monotonicity between τ and θ under certain specified conditions. This conclusion validates the practice of customarily using τ in place of θ in applied settings and provides theoretical grounds for one-to-one transformations between the two scales. © 2018 The British Psychological Society.
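
    For readers who want the graded response model case written out, a standard-notation sketch (assuming a positive discrimination parameter \(a_i\)) is:

    \[
    P(X_i \ge k \mid \theta) = \frac{1}{1 + \exp\{-a_i(\theta - b_{ik})\}},
    \qquad
    E(X_i \mid \theta) = \sum_{k=1}^{m_i} P(X_i \ge k \mid \theta),
    \]

    so each cumulative category curve, and hence the expected item score, is strictly increasing in \(\theta\) whenever \(a_i > 0\); summing over items then gives a strictly monotone true score \(\tau(\theta)\).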

  5. Solving scheduling problems by untimed model checking. The clinical chemical analyser case study

    Margaria, T.; Wijs, Anton J.; Massink, M.; van de Pol, Jan Cornelis; Bortnik, Elena M.

    2009-01-01

    In this article, we show how scheduling problems can be modelled in untimed process algebra, by using special tick actions. A minimal-cost trace leading to a particular action, is one that minimises the number of tick steps. As a result, we can use any (timed or untimed) model checking tool to find

  6. A dynamic bivariate Poisson model for analysing and forecasting match results in the English Premier League

    Koopman, S.J.; Lit, R.

    2015-01-01

    Summary: We develop a statistical model for the analysis and forecasting of football match results which assumes a bivariate Poisson distribution with intensity coefficients that change stochastically over time. The dynamic model is a novelty in the statistical time series analysis of match results
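
    A static skeleton of such a model can be written as follows (illustrative notation, simplified relative to the paper's exact specification): for a match between home team i and away team j in week t,

    \[
    (X_t, Y_t) \sim \mathrm{BP}(\lambda_{x,t}, \lambda_{y,t}, \gamma),
    \qquad
    \log \lambda_{x,t} = \delta + \alpha_{i,t} - \beta_{j,t},
    \qquad
    \log \lambda_{y,t} = \alpha_{j,t} - \beta_{i,t},
    \]

    where \(\gamma\) is the dependence parameter of the bivariate Poisson distribution, \(\delta\) a home advantage, and the attack and defence strengths \(\alpha_{\cdot,t}\), \(\beta_{\cdot,t}\) are allowed to evolve stochastically over time, for example as autoregressions \(\alpha_{i,t+1} = \phi\,\alpha_{i,t} + \eta_{i,t}\).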

  7. Integrated freight network model : a GIS-based platform for transportation analyses.

    2015-01-01

    The models currently used to examine the behavior of transportation systems are usually mode-specific. That is, they focus on a single mode (i.e. railways, highways, or waterways). The lack of integration limits the usefulness of models to analyze the...

  8. A laboratory-calibrated model of coho salmon growth with utility for ecological analyses

    Manhard, Christopher V.; Som, Nicholas A.; Perry, Russell W.; Plumb, John M.

    2018-01-01

    We conducted a meta-analysis of laboratory- and hatchery-based growth data to estimate broadly applicable parameters of mass- and temperature-dependent growth of juvenile coho salmon (Oncorhynchus kisutch). Following studies of other salmonid species, we incorporated the Ratkowsky growth model into an allometric model and fit this model to growth observations from eight studies spanning ten different populations. To account for changes in growth patterns with food availability, we reparameterized the Ratkowsky model to scale several of its parameters relative to ration. The resulting model was robust across a wide range of ration allocations and experimental conditions, accounting for 99% of the variation in final body mass. We fit this model to growth data from coho salmon inhabiting tributaries and constructed ponds in the Klamath Basin by estimating habitat-specific indices of food availability. The model produced evidence that constructed ponds provided higher food availability than natural tributaries. Because of their simplicity (only mass and temperature are required as inputs) and robustness, ration-varying Ratkowsky models have utility as an ecological tool for capturing growth in freshwater fish populations.
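
    A minimal sketch of the kind of model described above is given below, embedding a Ratkowsky-type (square-root) temperature response in an allometric mass term and scaling growth by relative ration; the function name, parameter names and parameter values are illustrative placeholders, not the fitted values from the study.

      import numpy as np

      def ratkowsky_growth(mass_g, temp_c, ration=1.0,
                           d=0.3, b=0.03, t_low=1.0, t_high=24.0, g=0.3):
          """Daily specific growth sketch: an allometric mass term times a
          Ratkowsky-type temperature response, scaled by relative ration.
          All parameter values are illustrative placeholders."""
          # Ratkowsky (square-root) temperature response, clipped to zero outside its range
          sqrt_rate = b * (temp_c - t_low) * (1.0 - np.exp(g * (temp_c - t_high)))
          temp_term = np.clip(sqrt_rate, 0.0, None) ** 2
          # Allometric scaling: specific growth declines with body mass
          return ration * temp_term * mass_g ** (-d)

      # Example: grow a 1 g juvenile for 30 days at 12 degrees C on full ration
      mass = 1.0
      for _ in range(30):
          mass += mass * ratkowsky_growth(mass, 12.0)
      print(round(mass, 2))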

  9. Transport of nutrients from land to sea: Global modeling approaches and uncertainty analyses

    Beusen, A.H.W.

    2014-01-01

    This thesis presents four examples of global models developed as part of the Integrated Model to Assess the Global Environment (IMAGE). They describe different components of global biogeochemical cycles of the nutrients nitrogen (N), phosphorus (P) and silicon (Si), with a focus on approaches to

  10. Comparative Analyses of MIRT Models and Software (BMIRT and flexMIRT)

    Yavuz, Guler; Hambleton, Ronald K.

    2017-01-01

    Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…

  11. Analysing surface runoff and erosion responses to different land uses from the NE of Iberian Peninsula through rainfall simulation

    Regüés, David; Arnáez, José; Badía, David; Cerdà, Artemi; Echeverría, María Teresa; Gispert, María; Lana-Renault, Noemí; Lasanta, Teodoro; León, Javier; Nadal-Romero, Estela; Pardini, Giovanni

    2014-05-01

    Rainfall simulation experiments are being used by soil scientists, geomorphologists, and hydrologists to study runoff generation and erosion processes. The use of different apparatus with different rainfall intensities and sizes of the wetted area contributes to determining the most vulnerable soils and land uses (Cerdá, 1998; Cerdà et al., 2009; Nadal-Romero et al., 2011; Martínez-Murillo et al., 2013; León et al., 2014). This research aims to determine the land uses that yield more sediments and water and to identify the factors that control the differences. The information from 152 rainfall simulation experiments was jointly analysed. Experiments were done on 17 land uses (natural forest, tree plantation, burned forest, scrub, meadows, crops and badlands), with contrasting exposure (north-south) and varying vegetation cover type and/or density. These situations were selected from four geographic contexts (NE of Catalonia, high and medium lands of the Ebro valley and the southern range of the central Pyrenees) with significant altitude variations, between 90 and 1000 meters above sea level, which represent the heterogeneity of the Mediterranean climate. The use of similar rainfall simulation apparatus, with the same spray nozzle, spraying components and plot size, favours the comparison of the results. A wide spectrum of precipitation intensities was applied, in order to reach surface runoff generation in all cases. Results showed significant differences in runoff amounts and erosion rates, which were mainly associated with land uses, even more than with precipitation differences. The runoff coefficient shows an inverse exponential relationship with rainfall intensity, which is the opposite of what would previously have been expected (Ziadat and Taimeh, 2013). This can only be explained by land use characteristics, because a direct relationship between runoff generation intensity and soil degradation conditions, with respect to vegetation cover features and density, was observed. In fact, even though

  12. Hierarchical Bayes Models for Response Time Data

    Craigmile, Peter F.; Peruggia, Mario; Van Zandt, Trisha

    2010-01-01

    Human response time (RT) data are widely used in experimental psychology to evaluate theories of mental processing. Typically, the data constitute the times taken by a subject to react to a succession of stimuli under varying experimental conditions. Because of the sequential nature of the experiments there are trends (due to learning, fatigue,…

  13. Modelling Flexible Pavement Response and Performance

    Ullidtz, Per

    This textbook is primarily concerned with models for predicting the future condition of flexible pavements, as a function of traffic loading, climate, materials, etc., using analytical-empirical methods.

  14. Classification of scalar and dyadic nonlocal optical response models

    Wubs, Martijn

    2015-01-01

    Nonlocal optical response is one of the emerging effects on the nanoscale for particles made of metals or doped semiconductors. Here we classify and compare both scalar and tensorial nonlocal response models. In the latter case the nonlocality can stem from either the longitudinal response...

  15. A Bivariate Generalized Linear Item Response Theory Modeling Framework to the Analysis of Responses and Response Times.

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-01-01

    A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.
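
    The cross-relation idea can be sketched in equations (illustrative notation, not the paper's exact parameterization): a logistic measurement model for the responses, a lognormal measurement model for the response times, and a link between the two latent variables,

    \[
    P(X_{pi} = 1 \mid \theta_p) = \frac{1}{1 + \exp\{-(\alpha_i \theta_p + \beta_i)\}},
    \qquad
    \log T_{pi} = \nu_i - \lambda_i \tau_p + \varepsilon_{pi}, \quad \varepsilon_{pi} \sim N(0, \sigma_i^2),
    \qquad
    \tau_p = f(\theta_p) + \delta_p,
    \]

    where f is linear (possibly with an interaction term) for ability tests and curvilinear for personality tests.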

  16. Development and preliminary analyses of material balance evaluation model in nuclear fuel cycle

    Matsumura, Tetsuo

    1994-01-01

    A material balance evaluation model for the nuclear fuel cycle has been developed using the ORIGEN-2 code as its basic engine. The model has the following features: it can treat more than 1000 nuclides, including minor actinides and fission products, and it provides flexible modeling and graph output on an engineering workstation. I made a preliminary calculation of the effect of LWR fuel high burnup (reloading fuel average burnup of 60 GWd/t) on the nuclear fuel cycle. The preliminary calculation shows that LWR fuel high burnup has a considerable effect on the Japanese Pu balance problem. (author)

  17. Analysing the distribution of synaptic vesicles using a spatial point process model

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions.
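
    A concrete example of the model class referred to above is the inhomogeneous Strauss process (given here as a representative form; the exact interaction function used in the study may differ), whose Papangelou conditional intensity is

    \[
    \lambda(u \mid \mathbf{x}) = \beta(u)\,\gamma^{\,t(u,\mathbf{x})}, \qquad 0 \le \gamma < 1,
    \]

    where \(\beta(u)\) is the spatially varying trend, \(t(u,\mathbf{x})\) counts the vesicles of the pattern \(\mathbf{x}\) within interaction range r of the location u, and \(\gamma < 1\) makes tightly packed vesicles less likely than under an inhomogeneous Poisson model.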

  18. Neural Network-Based Model for Landslide Susceptibility and Soil Longitudinal Profile Analyses

    Farrokhzad, F.; Barari, Amin; Choobbasti, A. J.

    2011-01-01

    The purpose of this study was to create an empirical model for assessing the landslide risk potential at Savadkouh Azad University, which is located in the rural surroundings of Savadkouh, about 5 km from the city of Pol-Sefid in northern Iran. The soil longitudinal profile of the city of Babol, located 25 km from the Caspian Sea, also was predicted with an artificial neural network (ANN). A multilayer perceptron neural network model was applied to the landslide area and was used to analyze specific elements in the study area that contributed to previous landsliding events. The ANN models were ... studies in landslide susceptibility zonation.
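
    As a sketch of the type of multilayer perceptron model described above, the snippet below fits a small classifier to hypothetical terrain attributes (slope, aspect, elevation, distance to drainage); the feature names, network size and data are illustrative only and do not reproduce the study's model.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      # Hypothetical training data: rows are map cells, columns are terrain attributes
      X = rng.normal(size=(500, 4))            # slope, aspect, elevation, distance to drainage
      y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # 1 = landslide

      model = make_pipeline(
          StandardScaler(),
          MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
      )
      model.fit(X, y)
      # Susceptibility for a new cell (probability of the landslide class)
      print(model.predict_proba([[1.2, -0.3, 0.8, -1.0]])[0, 1])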

  19. Modeling adaptive and non-adaptive responses to environmental change

    Coulson, Tim; Kendall, Bruce E; Barthold, Julia A.

    2017-01-01

    , with plastic responses being either adaptive or non-adaptive. We develop an approach that links quantitative genetic theory with data-driven structured models to allow prediction of population responses to environmental change via plasticity and adaptive evolution. After introducing general new theory, we construct a number of example models to demonstrate that evolutionary responses to environmental change over the short-term will be considerably slower than plastic responses, and that the rate of adaptive evolution to a new environment depends upon whether plastic responses are adaptive or non-adaptive. Parameterization of the models we develop requires information on genetic and phenotypic variation and demography that will not always be available, meaning that simpler models will often be required to predict responses to environmental change. We consequently develop a method to examine whether the full...

  20. Investigating the LGBTQ Responsive Model for Supervision of Group Work

    Luke, Melissa; Goodrich, Kristopher M.

    2013-01-01

    This article reports an investigation of the LGBTQ Responsive Model for Supervision of Group Work, a trans-theoretical supervisory framework to address the needs of lesbian, gay, bisexual, transgender, and questioning (LGBTQ) persons (Goodrich & Luke, 2011). Findings partially supported applicability of the LGBTQ Responsive Model for Supervision…

  1. Projective Item Response Model for Test-Independent Measurement

    Ip, Edward Hak-Sing; Chen, Shyh-Huei

    2012-01-01

    The problem of fitting unidimensional item-response models to potentially multidimensional data has been extensively studied. The focus of this article is on response data that contains a major dimension of interest but that may also contain minor nuisance dimensions. Because fitting a unidimensional model to multidimensional data results in…

  2. A Box-Cox normal model for response times

    Klein Entink, R.H.; Fox, J.P.; Linden, W.J. van der

    2009-01-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box–Cox transformations for response
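
    The Box-Cox family referred to above generalizes the log-transform in the standard way (the lognormal model is recovered as \(\lambda \to 0\)); an illustrative normal measurement model for the transformed response time of person p on item i is

    \[
    t^{(\lambda)} =
    \begin{cases}
    \dfrac{t^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[1ex]
    \log t, & \lambda = 0,
    \end{cases}
    \qquad
    t^{(\lambda)}_{pi} \sim N\!\left(\nu_i - \tau_p,\ \sigma_i^2\right),
    \]

    where \(\nu_i\) is an item time intensity and \(\tau_p\) a person speed parameter; the parameterization shown is indicative rather than the paper's exact model.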

  3. The Response of Performance to Merger Strategy in Indonesian Banking Industry: Analyses on Bank Mandiri, Bank Danamon, and Bank Permata

    Murti Lestari

    2010-05-01

    Full Text Available This study analyzes the responses of the performances of Bank Mandiri, Bank Danamon, and Bank Permata to merger strategy. This paper harnesses the quantitative approach with the structural break analysis method and impulse response function. The plausible findings indicate that the merger of Bank Permata produces a better performance response in comparison to the consolidation of Bank Mandiri and the merger of Bank Danamon. The merger of Bank Permata does not result in performance shocks, and the structural break does not prevail either. On the other hand, the consolidation of Bank Mandiri and the merger of Bank Danamon result in structural breaks, particularly in the spread performance. In order to return to the stable position, the mergers of Bank Mandiri and Bank Danamon require a longer time than does the merger of Bank Permata. This research indicates that for large banks, mergers and acquisitions (retaining one existing bank) will deliver a better performance response than will consolidations (no existing bank). Keywords: impulse response function; merger; structural break

  4. Effects of foundation modeling on dynamic response of a soil- structure system

    Chen, J.C.; Tabatabaie, M.

    1996-07-01

    This paper presents the results of our investigation to evaluate the effectiveness of different foundation modeling techniques used in soil-structure interaction analyses. The study involved analysis of three different modeling techniques applied to two different foundation configurations (one with a circular and one with a square shape). The results of dynamic response of a typical nuclear power plant structure supported on such foundations are presented

  5. On the spatio-temporal and energy-dependent response of riometer absorption to electron precipitation: drift-time and conjunction analyses in realistic electric and magnetic fields

    Kellerman, Adam; Shprits, Yuri; Makarevich, Roman; Donovan, Eric; Zhu, Hui

    2017-04-01

    Riometers are low-cost passive radiowave instruments located in both northern and southern hemispheres that are capable of operating during quiet and disturbed conditions. Many instruments have been operating continuously for multiple solar cycles, making them a useful tool for long-term statistical studies and for real-time analysis and forecasting of space weather. Here we present recent and new analyses of the relationship between the riometer-measured cosmic noise absorption and electron precipitation into the D-region and lower E-region ionosphere. We utilize two techniques: a drift-time analysis in realistic electric and magnetic field models, where a particle is traced from one location to another, and the energy determined by the time delay between similar observations; and a conjunction analysis, where we directly compare precipitated fluxes from THEMIS and Van Allen Probes with the riometer absorption. In both cases we present a statistical analysis of the response of riometer absorption to electron precipitation as a function of MLAT, MLT, and geomagnetic conditions.

  6. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach ... on differences of statistical measures in section and the same measures in between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling...

  7. Modeling of Control Costs, Emissions, and Control Retrofits for Cost Effectiveness and Feasibility Analyses

    Learn about EPA’s use of the Integrated Planning Model (IPM) to develop estimates of SO2 and NOx emission control costs, projections of future emissions, and projections of capacity of future control retrofits, assuming controls on EGUs.

  8. The PIN gene family in cotton (Gossypium hirsutum): genome-wide identification and gene expression analyses during root development and abiotic stress responses.

    He, Peng; Zhao, Peng; Wang, Limin; Zhang, Yuzhou; Wang, Xiaosi; Xiao, Hui; Yu, Jianing; Xiao, Guanghui

    2017-07-03

    Cell elongation and expansion are significant contributors to plant growth and morphogenesis, and are often regulated by environmental cues and endogenous hormones. Auxin is one of the most important phytohormones involved in the regulation of plant growth and development and plays key roles in plant cell expansion and elongation. Cotton fiber cells are a model system for studying cell elongation due to their large size. Cotton is also the world's most utilized crop for the production of natural fibers for textile and garment industries, and targeted expression of the IAA biosynthetic gene iaaM increased cotton fiber initiation. Polar auxin transport, mediated by PIN and AUX/LAX proteins, plays a central role in the control of auxin distribution. However, very limited information is available about PIN-FORMED (PIN) efflux carriers in cotton. In this study, 17 PIN-FORMED (PIN) efflux carrier family members were identified in the Gossypium hirsutum (G. hirsutum) genome. We found that PIN1-3 and PIN2 genes originating from the At subgenome were highly expressed in roots. Additionally, evaluation of gene expression patterns indicated that PIN genes are differentially induced by various abiotic stresses. Furthermore, we found that the majority of cotton PIN genes, which contain auxin-responsive elements (AuxREs) and salicylic acid (SA)-responsive elements in their promoter regions, were significantly up-regulated by exogenous hormone treatment. Our results provide a comprehensive analysis of the PIN gene family in G. hirsutum, including phylogenetic relationships, chromosomal locations, and gene expression and gene duplication analyses. This study sheds light on the precise roles of PIN genes in cotton root development and in adaptation to stress responses.

  9. Assessment applicability of selected models of multiple discriminant analyses to forecast financial situation of Polish wood sector enterprises

    Adamowicz Krzysztof

    2017-03-01

    Full Text Available In the last three decades forecasting bankruptcy of enterprises has been an important and difficult problem, used as an impulse for many research projects (Ribeiro et al. 2012). At present many methods of bankruptcy prediction are available. In view of the specific character of economic activity in individual sectors, specialised methods adapted to a given branch of industry are being used increasingly often. For this reason an important scientific problem is related to the indication of an appropriate model or group of models to prepare forecasts for a given branch of industry. Thus research has been conducted to select an appropriate model of Multiple Discriminant Analysis (MDA), best adapted to forecasting changes in the wood industry. This study analyses 10 prediction models popular in Poland. Effectiveness of the model proposed by Jagiełło, developed for all industrial enterprises, may be labelled accidental. That model is not adapted to predict financial changes in wood sector companies in Poland.
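
    The sketch below shows the general form of such a discriminant model on hypothetical financial ratios (liquidity, profitability, leverage); it is a generic linear discriminant analysis given for illustration, not any of the ten Polish models evaluated in the paper, and all data are simulated.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(1)
      # Hypothetical financial ratios for solvent (0) and distressed (1) firms
      ratios_solvent = rng.normal([1.8, 0.12, 0.45], 0.3, size=(60, 3))
      ratios_distressed = rng.normal([0.9, -0.05, 0.80], 0.3, size=(40, 3))
      X = np.vstack([ratios_solvent, ratios_distressed])
      y = np.array([0] * 60 + [1] * 40)

      mda = LinearDiscriminantAnalysis()
      mda.fit(X, y)
      # Discriminant function coefficients and a prediction for a new firm
      print(mda.coef_, mda.intercept_)
      print(mda.predict([[1.2, 0.02, 0.70]]))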

  10. Numerical tools for musical instruments acoustics: analysing nonlinear physical models using continuation of periodic solutions

    Karkar , Sami; Vergez , Christophe; Cochelin , Bruno

    2012-01-01

    International audience; We propose a new approach based on numerical continuation and bifurcation analysis for the study of physical models of instruments that produce self-sustained oscillation. Numerical continuation consists in following how a given solution of a set of equations is modified when one (or several) parameters of these equations are allowed to vary. Several physical models (clarinet, saxophone, and violin) are formulated as nonlinear dynamical systems, whose periodic solution...
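
    The idea of numerical continuation can be illustrated on a much simpler problem than a full instrument model: the sketch below follows a branch of equilibria of a scalar equation f(x, mu) = 0 by stepping the parameter and correcting with Newton iterations (natural-parameter continuation; the authors' work relies on more sophisticated harmonic-balance and pseudo-arclength machinery, and the equation here is a toy example).

      import numpy as np

      def f(x, mu):
          # Toy equilibrium equation: a cubic with a parameter-dependent forcing term
          return x**3 - x + mu

      def dfdx(x, mu):
          return 3 * x**2 - 1

      def continue_branch(x0, mus, tol=1e-10, max_newton=20):
          """Follow the solution branch x(mu) of f(x, mu) = 0 by natural continuation:
          use the previous solution as the predictor, then correct with Newton."""
          branch = []
          x = x0
          for mu in mus:
              for _ in range(max_newton):
                  step = f(x, mu) / dfdx(x, mu)
                  x -= step
                  if abs(step) < tol:
                      break
              branch.append((mu, x))
          return branch

      # Follow the left branch from mu = 1.8 down to mu = -0.3, starting near x = -1.5
      branch = continue_branch(x0=-1.5, mus=np.linspace(1.8, -0.3, 43))
      print(branch[0], branch[-1])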

  11. Aquatic emergency response model at the Savannah River Plant

    Hayes, D.W.

    1987-01-01

    The Savannah River Plant emergency response plans include a stream/river emergency response model to predict travel times, maximum concentrations, and concentration distributions as a function of time at selected downstream/river locations from each of the major SRP installations. The menu driven model can be operated from any of the terminals that are linked to the real-time computer monitoring system for emergency response
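
    Travel-time and peak-concentration estimates of the kind described above are commonly based on the one-dimensional advection-dispersion solution for an instantaneous release; a generic form (not necessarily the exact formulation implemented in the SRP model) is

    \[
    C(x,t) = \frac{M}{A\sqrt{4\pi D_L t}}\,
    \exp\!\left(-\frac{(x - U t)^2}{4 D_L t}\right) e^{-k t},
    \]

    where M is the released mass, A the stream cross-sectional area, U the mean velocity, \(D_L\) the longitudinal dispersion coefficient and k a first-order decay constant; the concentration peak at a downstream location x arrives near \(t \approx x/U\).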

  12. Analysing stratified medicine business models and value systems: innovation-regulation interactions.

    Mittra, James; Tait, Joyce

    2012-09-15

    Stratified medicine offers both opportunities and challenges to the conventional business models that drive pharmaceutical R&D. Given the increasingly unsustainable blockbuster model of drug development, due in part to maturing product pipelines, alongside increasing demands from regulators, healthcare providers and patients for higher standards of safety, efficacy and cost-effectiveness of new therapies, stratified medicine promises a range of benefits to pharmaceutical and diagnostic firms as well as healthcare providers and patients. However, the transition from 'blockbusters' to what might now be termed 'niche-busters' will require the adoption of new, innovative business models, the identification of different and perhaps novel types of value along the R&D pathway, and a smarter approach to regulation to facilitate innovation in this area. In this paper we apply the Innogen Centre's interdisciplinary ALSIS methodology, which we have developed for the analysis of life science innovation systems in contexts where the value creation process is lengthy, expensive and highly uncertain, to this emerging field of stratified medicine. In doing so, we consider the complex collaboration, timing, coordination and regulatory interactions that shape business models, value chains and value systems relevant to stratified medicine. More specifically, we explore in some depth two convergence models for co-development of a therapy and diagnostic before market authorisation, highlighting the regulatory requirements and policy initiatives within the broader value system environment that have a key role in determining the probable success and sustainability of these models. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes

    Marc Carreras

    2016-08-01

    Full Text Available Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia, Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were fitted, and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals.
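
    A minimal sketch of the two candidate specifications named in the results is given below, using statsmodels on simulated cost data; the diagnostic steps of the Manning and Mullahy algorithm (log-scale residual kurtosis, Park-type tests for heteroscedasticity) are only hinted at in comments, and all variable names and values are illustrative.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n = 2000
      # Hypothetical individual-level data: sex, age and a morbidity-burden score
      df = pd.DataFrame({
          "female": rng.integers(0, 2, n),
          "age": rng.integers(0, 95, n),
          "morbidity": rng.gamma(2.0, 1.0, n),
      })
      df["cost"] = np.exp(4 + 0.01 * df.age + 0.5 * df.morbidity + rng.normal(0, 0.8, n))

      # Candidate 1: linear model on the log of costs (retransformation needed for predictions)
      ols_log = smf.ols("np.log(cost) ~ female + age + morbidity", data=df).fit()

      # Candidate 2: generalized linear model with a log link (here with a gamma variance function)
      glm_log = smf.glm("cost ~ female + age + morbidity", data=df,
                        family=sm.families.Gamma(link=sm.families.links.Log())).fit()

      # Model choice would follow the algorithm's diagnostics, e.g. kurtosis of the
      # log-scale residuals and a Park-type test on the GLM residuals.
      print(ols_log.params, glm_log.params, sep="\n")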

  14. Computer Modeling of Thoracic Response to Blast

    1988-01-01

    ... be solved at reasonable cost. ... In order to determine if the gas content of the sheep ... ries were compared with data. Two extreme cases had the rumen filled with ... is that sheep have large, multiple stomachs that have a considerable air content. It ... intrathoracic pressure responses for subjects wearing ballistic ... spatial and temporal distribution of the load can be reasonably predicted by the ... to the approximate location where intrathoracic pressure meas...

  15. Modelling household responses to energy efficiency interventions ...

    2010-11-01

    Nov 1, 2010 ... to interventions aimed at reducing energy consumption (specifically the use of .... 4 A system dynamics model of electricity consumption ...... to base comparisons on overly detailed quantitative predictions of behaviour.

  16. Incorporating Response Times in Item Response Theory Models of Reading Comprehension Fluency

    Su, Shiyang

    2017-01-01

    With online assessment becoming mainstream and the recording of response times becoming straightforward, the importance of response times as a measure of psychological constructs has been recognized, and the literature on modeling response times has been growing over the last few decades. Previous studies have tried to formulate models and theories to…

  17. Plasma equilibrium response modelling and validation on JT-60U

    Lister, J.B.; Sharma, A.; Limebeer, D.J.N.; Wainwright, J.P.; Nakamura, Y.; Yoshino, R.

    2002-01-01

    A systematic procedure to identify the plasma equilibrium response to the poloidal field coil voltages has been applied to the JT-60U tokamak. The required response was predicted with a high accuracy by a state-space model derived from first principles. The ab initio derivation of linearized plasma equilibrium response models is re-examined using an approach standard in analytical mechanics. A symmetric formulation is naturally obtained, removing a previous weakness in such models. RZIP, a rigid current distribution model, is re-derived using this approach and is compared with the new experimental plasma equilibrium response data obtained from Ohmic and neutral beam injection discharges in the JT-60U tokamak. In order to remove any bias from the comparison between modelled and measured plasma responses, the electromagnetic response model without plasma was first carefully tuned against experimental data, using a parametric approach, for which different cost functions for quantifying model agreement were explored. This approach additionally provides new indications of the accuracy to which various plasma parameters are known, and to the ordering of physical effects. Having taken these precautions when tuning the plasmaless model, an empirical estimate of the plasma self-inductance, the plasma resistance and its radial derivative could be established and compared with initial assumptions. Off-line tuning of the JT-60U controller is presented as an example of the improvements which might be obtained by using such a model of the plasma equilibrium response. (author)

  18. Oil spill models for emergency response

    Hodgins, D.O.

    1997-01-01

    The need for, and the nature of an oil spill model, were discussed. Modern oil spill models were shown to provide rapid and accurate input of information about a marine spill, as well as to provide powerful visualization methods for displaying output data. Marine oil spill models are designed to answer five questions: (1) where will the oil go in 2, 3, 6, 12, and 24 hours, (2) how fast will it move, (3) how big will the slick get, (4) how much will end up on shore and where, and (5) how do the oil properties change. The models are able to provide timely and accurate results by using reasonably complete algorithms for the physics and chemistry governing oil slick evolution that take advantage of computer visualization methods for displaying output data. These models have been made possible through new technologies which have increased access to environmental data on winds, currents and satellite imaging of slicks. Spill modelling is also evolving by taking advantage of the Internet for both acquisition of input data and dissemination of results. 5 figs

  19. Explanatory models for ecological response surfaces

    Jager, H.I.; Overton, W.S.

    1991-01-01

    Understanding the spatial organization of ecological systems is a fundamental part of ecosystem study. While discovering the causal relationships of this organization is an important goal, our purpose of spatial description on a regional scale is best met by use of explanatory variables that are somewhat removed from the mechanistic causal level. Regional level understanding is best obtained from explanatory variables that reflect spatial gradients at the regional scale and from categorical variables that describe the discrete constituents of (statistical) populations, such as lakes. In this paper, we use a regression model to predict lake acid neutralizing capacity (ANC) based on environmental predictor variables over a large region. These predictions are used to produce model-based population estimates. Two key features of our modeling approach are that it honors the spatial context and the design of the sample data. The spatial context of the data is brought into the analysis of model residuals through the interpretation of residual maps and semivariograms. The sampling design is taken into account by including stratification variables from the design in the model. This ensures that the model applies to a real population of lakes (the target population), rather than whatever hypothetical population the sample is a random sample of.

  20. Scenario sensitivity analyses performed on the PRESTO-EPA LLW risk assessment models

    Bandrowski, M.S.

    1988-01-01

    The US Environmental Protection Agency (EPA) is currently developing standards for the land disposal of low-level radioactive waste. As part of the standard development, EPA has performed risk assessments using the PRESTO-EPA codes. A program of sensitivity analysis was conducted on the PRESTO-EPA codes, consisting of single parameter sensitivity analysis and scenario sensitivity analysis. The results of the single parameter sensitivity analysis were discussed at the 1987 DOE LLW Management Conference. Specific scenario sensitivity analyses have been completed and evaluated. Scenario assumptions that were analyzed include: site location, disposal method, form of waste, waste volume, analysis time horizon, critical radionuclides, use of buffer zones, and global health effects

  1. An Immune-inspired Adaptive Automated Intrusion Response System Model

    Ling-xi Peng

    2012-09-01

    Full Text Available An immune-inspired adaptive automated intrusion response system model, named as , is proposed. The descriptions of self, non-self, immunocyte, memory detector, mature detector and immature detector of the network transactions, and the real-time network danger evaluation equations are given. Then, the automated response policies are adaptively performed or adjusted according to the real-time network danger. Thus, the model not only accurately evaluates network attacks, but also greatly reduces response times and response costs.

  2. Comprehensive analyses of ventricular myocyte models identify targets exhibiting favorable rate dependence.

    Megan A Cummins

    2014-03-01

    Full Text Available Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of
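
    The population-based screening idea described above can be sketched in a few lines: scale a set of model parameters randomly across a population of virtual myocytes, record action potential duration (APD) at a slow and a fast rate, and regress the log APD on the log scale factors at each rate; a perturbation is forward rate dependent when its prolonging effect is larger at the fast rate. The "model" below is a deliberately crude surrogate written for this sketch, not one of the 13 ventricular myocyte models analysed in the study.

      import numpy as np

      rng = np.random.default_rng(3)
      n_cells, n_params = 600, 4          # population size and number of scaled conductances

      # Lognormal scale factors applied to each parameter in each virtual myocyte
      scales = np.exp(rng.normal(0.0, 0.2, size=(n_cells, n_params)))

      def apd_surrogate(s, rate_hz):
          """Crude stand-in for a myocyte model: returns APD in ms as a simple
          function of the parameter scale factors, with rate-dependent weights."""
          base = 300.0 if rate_hz < 1 else 220.0
          w = np.array([-40.0, 25.0, -15.0, 5.0]) * (1.0 if rate_hz < 1 else 1.4)
          return base + s @ w

      # Regress log APD on log scale factors at each pacing rate (regression-based screening)
      for rate in (0.2, 2.0):
          apd = apd_surrogate(scales, rate)
          X = np.column_stack([np.ones(n_cells), np.log(scales)])
          coefs, *_ = np.linalg.lstsq(X, np.log(apd), rcond=None)
          print(rate, "Hz sensitivities:", np.round(coefs[1:], 3))
      # A parameter whose positive sensitivity is larger at 2 Hz than at 0.2 Hz
      # would be a candidate forward-rate-dependent target in this toy setting.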

  3. Thermo-mechanical analyses and model validation in the HAW test field. Final report

    Heijdra, J J; Broerse, J; Prij, J

    1995-01-01

    An overview is given of the thermo-mechanical analysis work done for the design of the High Active Waste experiment and for the purpose of validation of the models used through comparison with experiments. A brief treatise is given on the problems of validation of models used for the prediction of physical behaviour which cannot be determined with experiments. The analysis work encompasses investigations into the initial state of stress in the field, the constitutive relations, the temperature rise, and the pressure on the liner tubes inserted in the field to guarantee the retrievability of the radioactive sources used for the experiment. The measurements of temperatures, deformations, and stresses are described and an evaluation is given of the comparison of measured and calculated data. An attempt has been made to qualify or even quantify the discrepancies, if any, between measurements and calculations. It was found that the model for the temperature calculations performed adequately. For the stresses, the general tendency was good; however, large discrepancies exist, mainly due to inaccuracies in the measurements. For the deformations, again, the general tendency of the model predictions was in accordance with the measurements. However, from the evaluation it appears that in spite of the efforts to estimate the correct initial rock pressure at the location of the experiment, this pressure has been underestimated. The evaluation has contributed to a considerable increase in confidence in the models and gives no reason to question the constitutive model for rock salt. However, due to the quality of the measurements of the stress and the relatively short period of the experiments, no quantitatively firm support for the constitutive model was acquired. Collections of graphs giving the measured and calculated data are attached as appendices. (orig.).

  5. Classifying Multi-Model Wheat Yield Impact Response Surfaces Showing Sensitivity to Temperature and Precipitation Change

    Fronzek, Stefan; Pirttioja, Nina; Carter, Timothy R.; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; Ruiz-Ramos, Margarita; Tao, Fulu; Trnka, Miroslav; Acutis, Marco

    2017-01-01

    Crop growth simulation models can differ greatly in their treatment of key processes and hence in their response to environmental conditions. Here, we used an ensemble of 26 process-based wheat models applied at sites across a European transect to compare their sensitivity to changes in temperature (minus 2 to plus 9 degrees Centigrade) and precipitation (minus 50 to plus 50 percent). Model results were analysed by plotting them as impact response surfaces (IRSs), classifying the IRS patterns of individual model simulations, describing these classes and analysing factors that may explain the major differences in model responses. The model ensemble was used to simulate yields of winter and spring wheat at four sites in Finland, Germany and Spain. Results were plotted as IRSs that show changes in yields relative to the baseline with respect to temperature and precipitation. IRSs of 30-year means and selected extreme years were classified using two approaches describing their pattern. The expert diagnostic approach (EDA) combines two aspects of IRS patterns: location of the maximum yield (nine classes) and strength of the yield response with respect to climate (four classes), resulting in a total of 36 combined classes defined using criteria pre-specified by experts. The statistical diagnostic approach (SDA) groups IRSs by comparing their pattern and magnitude, without attempting to interpret these features. It applies a hierarchical clustering method, grouping response patterns using a distance metric that combines the spatial correlation and Euclidian distance between IRS pairs. The two approaches were used to investigate whether different patterns of yield response could be related to different properties of the crop models, specifically their genealogy, calibration and process description. Although no single model property across a large model ensemble was found to explain the integrated yield response to temperature and precipitation perturbations, the
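
    A minimal sketch of the statistical diagnostic approach (SDA) can make the clustering step concrete: build one impact response surface per model over a temperature by precipitation grid, define a pairwise distance that mixes (1 - correlation) with Euclidean distance, and cut a hierarchical tree into groups. The toy yield surfaces, the equal weighting of the two distance terms and the cluster count below are illustrative assumptions, not the ensemble results.

```python
# Sketch of clustering impact response surfaces (IRSs) with a distance metric
# that combines spatial correlation and Euclidean distance. The yield functions
# are toy surrogates, not output of the 26 wheat models.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

dT = np.linspace(-2, 9, 12)            # temperature change, deg C
dP = np.linspace(-50, 50, 11)          # precipitation change, percent
T, P = np.meshgrid(dT, dP)

def toy_irs(t_opt, p_sens):
    """Hypothetical relative yield-change surface."""
    return -0.5 * (T - t_opt) ** 2 + p_sens * P / 100.0

surfaces = [toy_irs(t, s).ravel() for t, s in
            [(0, 0.3), (1, 0.2), (4, 0.5), (5, 0.6), (-1, 0.1)]]

def irs_distance(a, b, w=0.5):
    corr = np.corrcoef(a, b)[0, 1]
    eucl = np.linalg.norm(a - b) / np.sqrt(a.size)
    return w * (1.0 - corr) + (1.0 - w) * eucl

n = len(surfaces)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = irs_distance(surfaces[i], surfaces[j])

clusters = fcluster(linkage(squareform(D), method="average"),
                    t=2, criterion="maxclust")
print("IRS cluster labels:", clusters)
```

    Swapping simulated yield anomalies for the toy surfaces would reproduce the SDA workflow on real ensemble output.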

  6. Comprehensive transcriptome analyses correlated with untargeted metabolome reveal differentially expressed pathways in response to cell wall alterations.

    Reem, Nathan T; Chen, Han-Yi; Hur, Manhoi; Zhao, Xuefeng; Wurtele, Eve Syrkin; Li, Xu; Li, Ling; Zabotina, Olga

    2018-03-01

    This research provides new insights into plant response to cell wall perturbations through correlation of transcriptome and metabolome datasets obtained from transgenic plants expressing cell wall-modifying enzymes. Plants respond to changes in their cell walls in order to protect themselves from pathogens and other stresses. Cell wall modifications in Arabidopsis thaliana have profound effects on gene expression and defense response, but the cell signaling mechanisms underlying these responses are not well understood. Three transgenic Arabidopsis lines, two with reduced cell wall acetylation (AnAXE and AnRAE) and one with reduced feruloylation (AnFAE), were used in this study to investigate the plant responses to cell wall modifications. RNA-Seq in combination with untargeted metabolome was employed to assess differential gene expression and metabolite abundance. RNA-Seq results were correlated with metabolite abundances to determine the pathways involved in response to cell wall modifications introduced in each line. The resulting pathway enrichments revealed the deacetylation events in AnAXE and AnRAE plants induced similar responses, notably, upregulation of aromatic amino acid biosynthesis and changes in regulation of primary metabolic pathways that supply substrates to specialized metabolism, particularly those related to defense responses. In contrast, genes and metabolites of lipid biosynthetic pathways and peroxidases involved in lignin polymerization were downregulated in AnFAE plants. These results elucidate how primary metabolism responds to extracellular stimuli. Combining the transcriptomics and metabolomics datasets increased the power of pathway prediction, and demonstrated the complexity of pathways involved in cell wall-mediated signaling.
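
    The transcriptome-metabolome correlation step can be sketched with two small data frames: one of normalized transcript abundances and one of metabolite intensities measured on the same samples. Everything below (sample, gene and metabolite names, and the random values) is a hypothetical placeholder; only the correlation pattern mirrors the kind of analysis described.

```python
# Minimal sketch of correlating transcript abundance with metabolite abundance
# across samples, as in an RNA-Seq / untargeted-metabolome integration.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
samples = [f"line_{i}" for i in range(12)]
genes = pd.DataFrame(rng.normal(size=(12, 5)), index=samples,
                     columns=[f"gene_{g}" for g in range(5)])
metabolites = pd.DataFrame(rng.normal(size=(12, 3)), index=samples,
                           columns=["metab_A", "metab_B", "metab_C"])

# Spearman correlation of every gene with every metabolite across samples.
rho, pval = spearmanr(genes, metabolites)
k = genes.shape[1]
gene_vs_metab = pd.DataFrame(rho[:k, k:], index=genes.columns,
                             columns=metabolites.columns)
print(gene_vs_metab.round(2))
```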

  7. Analysing bifurcations encountered in numerical modelling of current transfer to cathodes of dc glow and arc discharges

    Almeida, P G C; Benilov, M S; Cunha, M D; Faria, M J

    2009-01-01

    Bifurcations and/or their consequences are frequently encountered in numerical modelling of current transfer to cathodes of gas discharges, also in apparently simple situations, and a failure to recognize and properly analyse a bifurcation may create difficulties in the modelling and hinder the understanding of numerical results and the underlying physics. This work is concerned with analysis of bifurcations that have been encountered in the modelling of steady-state current transfer to cathodes of glow and arc discharges. All basic types of steady-state bifurcations (fold, transcritical, pitchfork) have been identified and analysed. The analysis provides explanations to many results obtained in numerical modelling. In particular, it is shown that dramatic changes in patterns of current transfer to cathodes of both glow and arc discharges, described by numerical modelling, occur through perturbed transcritical bifurcations of first- and second-order contact. The analysis elucidates the reason why the mode of glow discharge associated with the falling section of the current-voltage characteristic in the solution of von Engel and Steenbeck seems not to appear in 2D numerical modelling and the subnormal and normal modes appear instead. A similar effect has been identified in numerical modelling of arc cathodes and explained.

  8. Modeling of in-vessel fission product release including fuel morphology effects for severe accident analyses

    Suh, K.Y.

    1989-10-01

    A new in-vessel fission product release model has been developed and implemented to perform best-estimate calculations of realistic source terms including fuel morphology effects. The proposed bulk mass transfer correlation determines the product of fission product release and equiaxed grain size as a function of the inverse fuel temperature. The model accounts for the fuel-cladding interaction over the temperature range between 770 K and 3000 K in the steam environment. A separate driver has been developed for the in-vessel thermal hydraulic and fission product behavior models that were developed by the Department of Energy for the Modular Accident Analysis Package (MAAP). Calculational results of these models have been compared to the results of the Power Burst Facility Severe Fuel Damage tests. The code predictions utilizing the mass transfer correlation agreed with the experimentally determined fractional release rates during the course of the heatup, power hold, and cooldown phases of the high temperature transients. Compared to such conventional literature correlations as the steam oxidation model and the NUREG-0956 correlation, the mass transfer correlation resulted in lower and less rapid releases in closer agreement with the on-line and grab sample data from the Severe Fuel Damage tests. The proposed mass transfer correlation can be applied for best-estimate calculations of fission product release from the UO2 fuel in both nominal and severe accident conditions. 15 refs., 10 figs., 2 tabs.
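
    The correlation described above gives the product of the fractional release rate and the equiaxed grain size as a function of inverse fuel temperature, which is an Arrhenius-type form. The sketch below illustrates that functional shape only; the constants A and B are hypothetical and are not the fitted values from this work.

```python
# Sketch of an Arrhenius-type bulk mass-transfer correlation of the general
# form k*d = A * exp(-B / T): the fractional release rate k follows from the
# grain size d and the fuel temperature T. A and B are hypothetical constants.
import numpy as np

A = 1.0e3      # hypothetical pre-exponential factor, um/s
B = 3.0e4      # hypothetical activation temperature, K

def fractional_release_rate(T_kelvin, grain_size_um):
    """Release rate implied by the correlation k*d = A*exp(-B/T)."""
    return A * np.exp(-B / np.asarray(T_kelvin, dtype=float)) / grain_size_um

for T in (1500.0, 2000.0, 2500.0, 3000.0):
    print(f"T = {T:.0f} K -> k = {fractional_release_rate(T, 10.0):.3e} 1/s")
```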

  9. Modelling and simulation of complex sociotechnical systems: envisioning and analysing work environments

    Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter

    2015-01-01

    Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227

  10. Analysing local-level responses to migration and urban health in Hillbrow: the Johannesburg Migrant Health Forum

    Jo Vearey

    2017-07-01

    Full Text Available Abstract Johannesburg is home to a diverse migrant population and a range of urban health challenges. Locally informed and implemented responses to migration and health that are sensitive to the particular needs of diverse migrant groups are urgently required. In the absence of a coordinated response to migration and health in the city, the Johannesburg Migrant Health Forum (MHF) – an unfunded informal working group of civil society actors – was established in 2008. We assess the impact, contributions and challenges of the MHF on the development of local-level responses to migration and urban health in Johannesburg to date. In this Commentary, we draw on data from participant observation in MHF meetings and activities, a review of core MHF documents, and semi-structured interviews conducted with 15 MHF members. The MHF is contributing to the development of local-level migration and health responses in Johannesburg in three key ways: (1) tracking poor quality or denial of public services to migrants; (2) diverse organisational membership linking the policy process with community experiences; and (3) improving service delivery to migrant clients through participation of diverse service providers and civil society organisations in the Forum. Our findings indicate that the MHF has a vital role to play in supporting the development of appropriate local responses to migration and health in a context of continued – and increasing – migration, and against the backdrop of rising anti-immigrant sentiments.

  11. [Unfolding item response model using best-worst scaling].

    Ikehara, Kazuya

    2015-02-01

    In attitude measurement and sensory tests, the unfolding model is typically used. In this model, response probability is formulated in terms of the distance between the person and the stimulus. In this study, we proposed an unfolding item response model using best-worst scaling (BWU model), in which a person chooses the best and worst stimulus among repeatedly presented subsets of stimuli. We also formulated an unfolding model using best scaling (BU model), and compared the accuracy of estimates between the BU and BWU models. A simulation experiment showed that the BWU model performed much better than the BU model in terms of bias and root mean square errors of estimates. With reference to Usami (2011), the proposed models were applied to actual data to measure attitudes toward tardiness. Results indicated high similarity between stimuli estimates generated with the proposed models and those of Usami (2011).
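
    The unfolding idea, choice probabilities driven by the distance between the person and each stimulus, can be sketched for a best-worst task as follows. The squared-distance kernel and the pairwise normalization are illustrative assumptions, not the exact BWU specification estimated in the paper.

```python
# Sketch of an unfolding-type best-worst choice rule: a respondent at location
# theta tends to pick as "best" the stimulus closest to theta and as "worst"
# the stimulus farthest from theta. The kernel below is an assumption.
import numpy as np
from itertools import permutations

def best_worst_probs(theta, stimuli):
    """Probability of each ordered (best, worst) pair among presented stimuli."""
    stimuli = np.asarray(stimuli, dtype=float)
    d2 = (theta - stimuli) ** 2
    probs = {}
    for b, w in permutations(range(len(stimuli)), 2):
        probs[(b, w)] = np.exp(-d2[b]) * np.exp(d2[w])
    z = sum(probs.values())
    return {pair: p / z for pair, p in probs.items()}

# Respondent near 0.5 judging stimuli located at -1, 0.4 and 2 on the latent scale.
for pair, p in best_worst_probs(0.5, [-1.0, 0.4, 2.0]).items():
    print(f"best={pair[0]} worst={pair[1]}: {p:.3f}")
```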

  12. GSEVM v.2: MCMC software to analyse genetically structured environmental variance models

    Ibáñez-Escriche, N; Garcia, M; Sorensen, D

    2010-01-01

    This note provides a description of software that allows the user to fit Bayesian genetically structured variance models using Markov chain Monte Carlo (MCMC). The gsevm v.2 program was written in Fortran 90. The DOS and Unix executable programs, the user's guide, and some example files are freely available for research purposes at http://www.bdporc.irta.es/estudis.jsp. The main feature of the program is to compute Monte Carlo estimates of marginal posterior distributions of parameters of interest. The program is quite flexible, allowing the user to fit a variety of linear models at the level of the mean...

  13. Analysing Amazonian forest productivity using a new individual and trait-based model (TFS v.1)

    Fyllas, N. M.; Gloor, E.; Mercado, L. M.; Sitch, S.; Quesada, C. A.; Domingues, T. F.; Galbraith, D. R.; Torre-Lezama, A.; Vilanova, E.; Ramírez-Angulo, H.; Higuchi, N.; Neill, D. A.; Silveira, M.; Ferreira, L.; Aymard C., G. A.; Malhi, Y.; Phillips, O. L.; Lloyd, J.

    2014-07-01

    Repeated long-term censuses have revealed large-scale spatial patterns in Amazon basin forest structure and dynamism, with some forests in the west of the basin having rates of aboveground biomass production and tree recruitment up to twice as high as those of forests in the east. Possible causes for this variation could be the climatic and edaphic gradients across the basin and/or the spatial distribution of tree species composition. To help understand the causes of this variation, a new individual-based model of tropical forest growth, designed to take full advantage of the forest census data available from the Amazonian Forest Inventory Network (RAINFOR), has been developed. The model allows for within-stand variations in tree size distribution and key functional traits and between-stand differences in climate and soil physical and chemical properties. It runs at the stand level with four functional traits - leaf dry mass per area (Ma), leaf nitrogen (NL) and phosphorus (PL) content and wood density (DW) varying from tree to tree - in a way that replicates the observed continua found within each stand. We first applied the model to validate canopy-level water fluxes at three eddy covariance flux measurement sites. For all three sites the canopy-level water fluxes were adequately simulated. We then applied the model at seven plots, where intensive measurements of carbon allocation are available. Tree-by-tree multi-annual growth rates generally agreed well with observations for small trees, but with deviations identified for larger trees. At the stand level, simulations at 40 plots were used to explore the influence of climate and soil nutrient availability on the gross (ΠG) and net (ΠN) primary production rates as well as the carbon use efficiency (CU). Simulated ΠG, ΠN and CU were not associated with temperature. On the other hand, all three measures of stand level productivity were positively related to both mean annual precipitation and soil nutrient status.

  14. Influence of the Human Skin Tumor Type in Photodynamic Therapy Analysed by a Predictive Model

    I. Salas-García

    2012-01-01

    Full Text Available Photodynamic Therapy (PDT) modeling allows the prediction of the treatment results depending on the lesion properties, the photosensitizer distribution, or the optical source characteristics. We employ a predictive PDT model and apply it to different skin tumors. It takes into account optical radiation distribution, a nonhomogeneous topical photosensitizer spatio-temporal distribution, and the time-dependent photochemical interaction. The predicted singlet oxygen molecular concentrations with varying optical irradiance are compared and could be directly related to the necrosis area. The results show a strong dependence on the particular lesion. This suggests the need to design optimal PDT treatment protocols adapted to the specific patient and lesion.

  15. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Sung-Chien Lin

    2014-01-01

    In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics from 2007 to 2013 were retrieved from the database of the Web of Science as input of the approach of topic modeling. The results ...
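
    The core of the approach is standard latent Dirichlet allocation (LDA) over a document-term matrix. A minimal sketch with a toy corpus (the five one-line documents below are placeholders, not records from the two journals) shows the fitting and topic-inspection steps.

```python
# Minimal sketch of the topic-modeling step: fit latent Dirichlet allocation
# (LDA) to a small toy corpus and print the top terms per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "citation analysis of journal impact factors",
    "h-index and author level citation metrics",
    "co-authorship networks and research collaboration",
    "collaboration networks across institutions and countries",
    "mapping science with co-citation and bibliographic coupling",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```

    On the real corpus the documents would be titles or abstracts, the vocabulary would be much larger, and the number of topics would be chosen by model comparison rather than fixed at two.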

  16. Studies of the Earth Energy Budget and Water Cycle Using Satellite Observations and Model Analyses

    Campbell, G. G.; VonderHarr, T. H.; Randel, D. L.; Kidder, S. Q.

    1997-01-01

    During this research period we have utilized the ERBE data set in comparisons to surface properties and water vapor observations in the atmosphere. A relationship between cloudiness and surface temperature anomalies was found. This same relationship was found in a general circulation model, verifying the model. The attempt to construct a homogeneous time series from Nimbus 6, Nimbus 7 and ERBE data is not complete because we are still waiting for the ERBE reanalysis to be completed. It will be difficult to merge the Nimbus 6 data in because its observations occurred when the average weather was different than the other periods, so regression adjustments are not effective.

  17. Modeling human papillomavirus and cervical cancer in the United States for analyses of screening and vaccination

    Ortendahl Jesse

    2007-10-01

    Full Text Available Abstract Background To provide quantitative insight into current U.S. policy choices for cervical cancer prevention, we developed a model of human papillomavirus (HPV) and cervical cancer, explicitly incorporating uncertainty about the natural history of disease. Methods We developed a stochastic microsimulation of cervical cancer that distinguishes different HPV types by their incidence, clearance, persistence, and progression. Input parameter sets were sampled randomly from uniform distributions, and simulations undertaken with each set. Through systematic reviews and formal data synthesis, we established multiple epidemiologic targets for model calibration, including age-specific prevalence of HPV by type, age-specific prevalence of cervical intraepithelial neoplasia (CIN), HPV type distribution within CIN and cancer, and age-specific cancer incidence. For each set of sampled input parameters, likelihood-based goodness-of-fit (GOF) scores were computed based on comparisons between model-predicted outcomes and calibration targets. Using 50 randomly resampled, good-fitting parameter sets, we assessed the external consistency and face validity of the model, comparing predicted screening outcomes to independent data. To illustrate the advantage of this approach in reflecting parameter uncertainty, we used the 50 sets to project the distribution of health outcomes in U.S. women under different cervical cancer prevention strategies. Results Approximately 200 good-fitting parameter sets were identified from 1,000,000 simulated sets. Modeled screening outcomes were externally consistent with results from multiple independent data sources. Based on 50 good-fitting parameter sets, the expected reductions in lifetime risk of cancer with annual or biennial screening were 76% (range across 50 sets: 69–82%) and 69% (60–77%), respectively. The reduction from vaccination alone was 75%, although it ranged from 60% to 88%, reflecting considerable parameter
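
    The calibration scheme, sample parameter sets from uniform priors, score each set against epidemiologic targets with a likelihood-based goodness-of-fit, and retain the good-fitting sets, can be sketched as follows. The two-parameter "model", the targets and the retention rule are toy placeholders standing in for the full microsimulation.

```python
# Sketch of likelihood-based calibration by random sampling: draw parameters
# from uniform priors, score predictions against targets, keep the best sets.
import numpy as np

rng = np.random.default_rng(42)

# Toy calibration targets: observed prevalences for three age groups, with SEs.
target_mean = np.array([0.20, 0.12, 0.05])
target_se = np.array([0.02, 0.015, 0.01])

n_draws = 100_000
progression = rng.uniform(0.01, 0.5, n_draws)   # hypothetical parameter 1
clearance = rng.uniform(0.2, 2.0, n_draws)      # hypothetical parameter 2

# Toy "model": predicted prevalence in each age group from the two parameters.
ratio = progression / (progression + clearance)
preds = np.column_stack([ratio, 0.6 * ratio, 0.25 * ratio])

# Gaussian log-likelihood goodness-of-fit score for each sampled parameter set.
gof = -0.5 * np.sum(((preds - target_mean) / target_se) ** 2, axis=1)

best = np.argsort(gof)[-50:]                     # retain the 50 best-fitting sets
print("retained progression range:",
      progression[best].min(), progression[best].max())
```

    Projections under different screening or vaccination strategies would then be run once per retained parameter set, so the spread of outcomes reflects the calibrated parameter uncertainty.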

  18. Empirical analyses of a choice model that captures ordering among attribute values

    Mabit, Stefan Lindhard

    2017-01-01

    an alternative additionally because it has the highest price. In this paper, we specify a discrete choice model that takes into account the ordering of attribute values across alternatives. This model is used to investigate the effect of attribute value ordering in three case studies related to alternative-fuel vehicles, mode choice, and route choice. In our application to choices among alternative-fuel vehicles, we see that especially the price coefficient is sensitive to changes in ordering. The ordering effect is also found in the applications to mode and route choice data where both travel time and cost...

  19. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.
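
    The random-effects summaries discussed above (the random-effects mean and the wider predictive distribution for a new setting) can be sketched with a DerSimonian-Laird estimate of the between-study variance. The study effects and standard errors below are illustrative numbers, not the sepsis network meta-analysis data.

```python
# Sketch of a random-effects meta-analysis: DerSimonian-Laird tau^2, the
# random-effects mean, and a predictive interval for a new study/setting.
import numpy as np

y = np.array([-0.30, -0.10, -0.45, 0.05, -0.25])   # study treatment effects
se = np.array([0.15, 0.20, 0.25, 0.18, 0.22])      # their standard errors

w = 1.0 / se**2
mu_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fixed) ** 2)
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1.0 / (se**2 + tau2)
mu_re = np.sum(w_star * y) / np.sum(w_star)
se_mu = np.sqrt(1.0 / np.sum(w_star))
se_pred = np.sqrt(1.0 / np.sum(w_star) + tau2)      # predictive SD for a new study

print(f"tau^2 = {tau2:.3f}, RE mean = {mu_re:.3f} (SE {se_mu:.3f})")
print(f"95% predictive interval: {mu_re - 1.96*se_pred:.3f} to {mu_re + 1.96*se_pred:.3f}")
```

    Which summary feeds the cost-effectiveness model (random-effects mean, predictive distribution, or a covariate-adjusted estimate from a meta-regression) is exactly the choice the paper argues should depend on the decision setting.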

  20. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  1. An LP-model to analyse economic and ecological sustainability on Dutch dairy farms: model presentation and application for experimental farm "de Marke"

    Calker, van K.J.; Berentsen, P.B.M.; Boer, de I.J.M.; Giesen, G.W.J.; Huirne, R.B.M.

    2004-01-01

    Farm level modelling can be used to determine how farm management adjustments and environmental policy affect different sustainability indicators. In this paper indicators were included in a dairy farm LP (linear programming)-model to analyse the effects of environmental policy and management

  2. Numerical Modeling of Ophthalmic Response to Space

    Nelson, E. S.; Myers, J. G.; Mulugeta, L.; Vera, J.; Raykin, J.; Feola, A.; Gleason, R.; Samuels, B.; Ethier, C. R.

    2015-01-01

    To investigate ophthalmic changes in spaceflight, we would like to predict the impact of blood dysregulation and elevated intracranial pressure (ICP) on Intraocular Pressure (IOP). Unlike other physiological systems, there are very few lumped parameter models of the eye. The eye model described here is novel in its inclusion of the human choroid and retrobulbar subarachnoid space (rSAS), which are key elements in investigating the impact of increased ICP and ocular blood volume. Some ingenuity was required in modeling the blood and rSAS compartments due to the lack of quantitative data on essential hydrodynamic quantities, such as net choroidal volume and blood flowrate, inlet and exit pressures, and material properties, such as compliances between compartments.

  3. Item response analysis on an examination in anesthesiology for medical students in Taiwan: A comparison of one- and two-parameter logistic models

    Yu-Feng Huang

    2013-06-01

    Conclusion: Item response models are useful for medical test analyses and provide valuable information about model comparisons and the identification of differential items, beyond test reliability, item difficulty, and examinee ability.

  4. Spent fuel waste disposal: analyses of model uncertainty in the MICADO project

    Grambow, B.; Ferry, C.; Casas, I.; Bruno, J.; Quinones, J.; Johnson, L.

    2010-01-01

    The objective was to find out whether international research has now provided sufficiently reliable models to assess the corrosion behavior of spent fuel in groundwater and by this to contribute to answering the question whether the highly radioactive used fuel from nuclear reactors can be disposed of safely in a geological repository. Principal project results are described in the paper

  5. Analyses of gust fronts by means of limited area NWP model outputs

    Kašpar, Marek

    67-68, - (2003), pp. 559-572 ISSN 0169-8095 R&D Projects: GA ČR GA205/00/1451 Institutional research plan: CEZ:AV0Z3042911 Keywords: gust front * limited area NWP model * output Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 1.012, year: 2003

  6. Analysing outsourcing policies in an asset management context : A six-stage model

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

    Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, which confuses the discussion on the degree of outsourcing. This paper presents a six-stage model that facilitates

  7. Cyclodextrin--piroxicam inclusion complexes: analyses by mass spectrometry and molecular modelling

    Gallagher, Richard T.; Ball, Christopher P.; Gatehouse, Deborah R.; Gates, Paul J.; Lobell, Mario; Derrick, Peter J.

    1997-11-01

    Mass spectrometry has been used to investigate the nature of non-covalent complexes formed between the anti-inflammatory drug piroxicam and α-, β- and γ-cyclodextrins. Energies of these complexes have been calculated by means of molecular modelling. There is a correlation between peak intensities in the mass spectra and the calculated energies.

  8. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…

  9. Wave modelling for the North Indian Ocean using MSMR analysed winds

    Vethamony, P.; Sudheesh, K.; Rupali, S.P.; Babu, M.T.; Jayakumar, S.; Saran, A; Basu, S.K.; Kumar, R.; Sarkar, A

    prediction when NCMRWF winds blended with MSMR winds are utilised in the wave model. A comparison between buoy and TOPEX wave heights of May 2000 at 4 buoy locations provides a good match, showing the merit of using altimeter data, wherever it is difficult...

  10. Automated analyses of model-driven artifacts : obtaining insights into industrial application of MDE

    Mengerink, J.G.M.; Serebrenik, A.; Schiffelers, R.R.H.; van den Brand, M.G.J.

    2017-01-01

    Over the past years, there has been an increase in the application of model-driven engineering in industry. As in traditional software engineering, understanding how technologies are actually used in practice is essential for developing good tooling and decision-making processes.

  11. Bayesian Analysis of Multidimensional Item Response Theory Models: A Discussion and Illustration of Three Response Style Models

    Leventhal, Brian C.; Stone, Clement A.

    2018-01-01

    Interest in Bayesian analysis of item response theory (IRT) models has grown tremendously due to the appeal of the paradigm among psychometricians, advantages of these methods when analyzing complex models, and availability of general-purpose software. Possible models include models which reflect multidimensionality due to designed test structure,…

  12. Using species abundance distribution models and diversity indices for biogeographical analyses

    Fattorini, Simone; Rigal, François; Cardoso, Pedro; Borges, Paulo A. V.

    2016-01-01

    We examine whether Species Abundance Distribution models (SADs) and diversity indices can describe how species colonization status influences species community assembly on oceanic islands. Our hypothesis is that, because of the lack of source-sink dynamics at the archipelago scale, Single Island Endemics (SIEs), i.e. endemic species restricted to only one island, should be represented by few rare species and consequently have abundance patterns that differ from those of more widespread species. To test our hypothesis, we used arthropod data from the Azorean archipelago (North Atlantic). We divided the species into three colonization categories: SIEs, archipelagic endemics (AZEs, present in at least two islands) and native non-endemics (NATs). For each category, we modelled rank-abundance plots using both the geometric series and the Gambin model, a measure of distributional amplitude. We also calculated Shannon entropy and Buzas and Gibson's evenness. We show that the slopes of the regression lines modelling SADs were significantly higher for SIEs, which indicates a relative predominance of a few highly abundant species and a lack of rare species, which also depresses diversity indices. This may be a consequence of two factors: (i) some forest specialist SIEs may be at an advantage over other, less adapted species; (ii) the entire populations of SIEs are by definition concentrated on a single island, without the possibility of inter-island source-sink dynamics; hence all populations must have a minimum number of individuals to survive natural, often unpredictable, fluctuations. These findings are supported by higher values of the α parameter of the Gambin model for SIEs. In contrast, AZEs and NATs had lower regression slopes, lower α but higher diversity indices, resulting from their widespread distribution over several islands. We conclude that these differences in the SAD models and diversity indices demonstrate that the study of these metrics is useful for
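
    The descriptors used for each colonization category can be sketched directly: the slope of log abundance against rank (which is linear for a geometric series), Shannon entropy and Buzas and Gibson's evenness. The abundance vector below is an illustrative placeholder, not the Azorean arthropod data.

```python
# Sketch of the rank-abundance slope and diversity indices for one assemblage.
import numpy as np

abundances = np.array([120, 60, 30, 14, 7, 4, 2, 1], dtype=float)
abundances = np.sort(abundances)[::-1]
ranks = np.arange(1, abundances.size + 1)

# A geometric series is linear on a log-abundance vs. rank plot; the slope
# summarizes how steeply a few species dominate the assemblage.
slope, intercept = np.polyfit(ranks, np.log(abundances), 1)

p = abundances / abundances.sum()
shannon = -np.sum(p * np.log(p))          # Shannon entropy H'
evenness = np.exp(shannon) / p.size       # Buzas and Gibson's evenness e^H / S

print(f"rank-abundance slope = {slope:.3f}")
print(f"Shannon H' = {shannon:.3f}, Buzas-Gibson evenness = {evenness:.3f}")
```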

  13. Categorical regression dose-response modeling

    The goal of this training is to provide participants with training on the use of the U.S. EPA’s Categorical Regression software (CatReg) and its application to risk assessment. Categorical regression fits mathematical models to toxicity data that have been assigned ord...

  14. Stochastic Load Models and Footbridge Response

    Pedersen, Lars; Frier, Christian

    2015-01-01

    Pedestrians may cause vibrations in footbridges and these vibrations may potentially be annoying. This calls for predictions of footbridge vibration levels and the paper considers a stochastic approach to modeling the action of pedestrians assuming walking parameters such as step frequency, pedes...

  15. Classical genetic analyses of responses to nicotine and ethanol in crosses derived from long- and short-sleep mice.

    de Fiebre, C M; Collins, A C

    1992-04-01

    A classical (Mendelian) genetic analysis of responses to ethanol and nicotine was conducted in crosses derived from mouse lines which were selectively bred for differential duration of loss of the righting response (sleep-time) after ethanol. Dose-response curves for these mice, the long- and short-sleep mouse lines, as well as the derived F1, F2 and backcross (F1 x long-sleep and F1 x short-sleep) generations were generated for several measures of nicotine and ethanol sensitivity. Ethanol sensitivity was assessed using the sleep-time measure. Nicotine sensitivity was tested using a battery of behavioral and physiological tests which included measures of seizure activity, respiration rate, acoustic startle response, Y-maze activities (both crossing and rearing activities), heart rate and body temperature. The inheritance of sensitivities to both of these agents appears to be polygenic and inheritance can be explained primarily by additive genetic effects with some epistasis. Sensitivity to the ethanol sleep-time measure was genetically correlated with sensitivity to both nicotine-induced hypothermia and seizures; the correlation was greater between sleep-time and hypothermia. These data indicate that there is overlap in the genetic regulation of sensitivity to both ethanol and nicotine as measured by some, but not all, tests.

  16. The use of differential item functioning analyses to identify cultural differences in responses to the EORTC QLQ-C30

    Scott, N. W.; Fayers, P. M.; Aaronson, N. K.; Bottomley, A.; de Graeff, A.; Groenvold, M.; Koller, M.; Petersen, M. A.; Sprangers, M. A. G.

    2007-01-01

    INTRODUCTION: The European Organisation for Research and Treatment of Cancer (EORTC) QLQ-C30 is a widely used health-related quality of life instrument. The main aim of this study is to investigate whether there are international differences in response to the questionnaire that can be explained by

  17. Transcriptome and Cell Physiological Analyses in Different Rice Cultivars Provide New Insights Into Adaptive and Salinity Stress Responses

    Elide Formentin

    2018-03-01

    Full Text Available Salinity tolerance has been extensively investigated in recent years due to its agricultural importance. Several features, such as the regulation of ionic transporters and metabolic adjustments, have been identified as salt tolerance hallmarks. Nevertheless, due to the complexity of the trait, the results achieved to date have met with limited success in improving the salt tolerance of rice plants when tested in the field, thus suggesting that a better understanding of the tolerance mechanisms is still required. In this work, differences between two varieties of rice with contrasting salt sensitivities were revealed by the imaging of photosynthetic parameters, ion content analysis and a transcriptomic approach. The transcriptomic analysis conducted on tolerant plants supported the setting up of an adaptive program consisting of sodium distribution preferentially limited to the roots and older leaves, and of the activation of regulatory mechanisms of photosynthesis in the new leaves. As a result, plants resumed growth even under prolonged saline stress. In contrast, in the sensitive variety, RNA-seq analysis revealed a misleading response, ending in senescence and cell death. The physiological response at the cellular level was investigated by measuring the intracellular profile of H2O2 in the roots, using a fluorescent probe. In the roots of tolerant plants, a quick response was observed with an increase in H2O2 production within 5 min after salt treatment. The expression analysis of some of the genes involved in perception, signal transduction and salt stress response confirmed their early induction in the roots of tolerant plants compared to sensitive ones. By inhibiting the synthesis of apoplastic H2O2, a reduction in the expression of these genes was detected. Our results indicate that quick H2O2 signaling in the roots is part of a coordinated response that leads to adaptation instead of senescence in salt-treated rice plants.

  18. Evaluation and Improvement of Cloud and Convective Parameterizations from Analyses of ARM Observations and Models

    Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)

    2016-03-11

    Over this period the PI and his group performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; insights from long-term ARM records at Manus and Nauru.

  19. Complementary modelling approaches for analysing several effects of privatization on electricity investment

    Bunn, D.W.; Larsen, E.R.; Vlahos, K. (London Business School (United Kingdom))

    1993-10-01

    Through the impacts of higher required rates of return, debt, taxation changes and a new competitive structure for the industry, investment in electricity generating capacity in the UK has shifted towards less capital-intensive technologies. This paper reports on the use of large-scale, long-term capacity planning models, of both an optimization and system dynamics nature, to reflect these separate factors, investigate their sensitivities and to generate future scenarios for investment in the industry. Some new policy implications for the regulation of the industry become apparent, but the main focus of the paper is to develop some of the methodological changes required by the planning models to suit the privatized context. (Author)

  20. Monte Carlo analyses of the YALINA Thermal facility with SERPENT Stereolithography geometry model

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universe hierarchy, while the SERPENT model is based on Stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers, and here it has been created with the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three configurations with different numbers of fuel rods have been studied, using 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.
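
    Because the SERPENT model is built from triangulated surfaces, each facet reduces to a normal plus three vertices, which is exactly what the ASCII STL (Stereolithography) format stores. The sketch below writes such a file from a list of triangles; the single triangle and file name are placeholders, not YALINA geometry, and the actual workflow used CUBIT, MATLAB and C rather than Python.

```python
# Sketch of emitting an ASCII STL solid from triangulated surfaces: one facet
# per triangle, each with a unit normal and three vertices.
import numpy as np

def write_ascii_stl(path, name, triangles):
    """triangles: iterable of three (x, y, z) vertices per facet."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            v0, v1, v2 = (np.asarray(v, dtype=float) for v in tri)
            n = np.cross(v1 - v0, v2 - v0)
            n = n / (np.linalg.norm(n) or 1.0)
            f.write(f"  facet normal {n[0]:e} {n[1]:e} {n[2]:e}\n")
            f.write("    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# Placeholder geometry: a single triangle in the z = 0 plane.
write_ascii_stl("demo.stl", "demo",
                [[(0, 0, 0), (1, 0, 0), (0, 1, 0)]])
```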

  1. Integrate urban‐scale seismic hazard analyses with the U.S. National Seismic Hazard Model

    Moschetti, Morgan P.; Luco, Nicolas; Frankel, Arthur; Petersen, Mark D.; Aagaard, Brad T.; Baltay, Annemarie S.; Blanpied, Michael; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.; Graves, Robert; Hartzell, Stephen; Rezaeian, Sanaz; Stephenson, William J.; Wald, David J.; Williams, Robert A.; Withers, Kyle

    2018-01-01

    For more than 20 yrs, damage patterns and instrumental recordings have highlighted the influence of the local 3D geologic structure on earthquake ground motions (e.g., M 6.7 Northridge, California, Gao et al., 1996; M 6.9 Kobe, Japan, Kawase, 1996; M 6.8 Nisqually, Washington, Frankel, Carver, and Williams, 2002). Although this and other local‐scale features are critical to improving seismic hazard forecasts, historically they have not been explicitly incorporated into the U.S. National Seismic Hazard Model (NSHM, national model and maps), primarily because the necessary basin maps and methodologies were not available at the national scale. Instead,...

  2. Analysing the strength of friction stir welded dissimilar aluminium alloys using Sugeno Fuzzy model

    Barath, V. R.; Vaira Vignesh, R.; Padmanaban, R.

    2018-02-01

    Friction stir welding (FSW) is a promising solid state joining technique for aluminium alloys. In this study, FSW trials were conducted on two dissimilar plates of aluminium alloy AA2024 and AA7075 by varying the tool rotation speed (TRS) and welding speed (WS). Tensile strength (TS) of the joints was measured and a Sugeno fuzzy model was developed to interconnect the FSW process parameters with the tensile strength. From the developed model, it was observed that the optimum heat generation at a WS of 15 mm/min and a TRS of 1050 rpm resulted in dynamic recovery and dynamic recrystallization of the material. This refined the grains in the FSW zone and resulted in peak tensile strength among the tested specimens. A crest-shaped (parabolic) trend was observed in tensile strength as TRS varied from 900 rpm to 1200 rpm and WS from 10 mm/min to 20 mm/min.
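
    A zero-order Sugeno (Takagi-Sugeno-Kang) system of the kind described maps TRS and WS to tensile strength through a weighted average of rule consequents. The Gaussian membership centres, widths and consequent values below are illustrative assumptions, not the fitted model from the study.

```python
# Sketch of a zero-order Sugeno fuzzy inference of tensile strength (TS) from
# tool rotation speed (TRS) and welding speed (WS). All constants are invented.
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Rules: (TRS centre in rpm, WS centre in mm/min, constant consequent in MPa)
rules = [
    (900.0, 10.0, 310.0),
    (1050.0, 15.0, 390.0),   # near-optimal heat input in the tested range
    (1200.0, 20.0, 330.0),
]

def sugeno_ts(trs, ws, trs_sigma=80.0, ws_sigma=3.0):
    """Weighted average of rule consequents (zero-order Sugeno output)."""
    weights = np.array([gauss(trs, c1, trs_sigma) * gauss(ws, c2, ws_sigma)
                        for c1, c2, _ in rules])
    outputs = np.array([z for _, _, z in rules])
    return float(np.sum(weights * outputs) / np.sum(weights))

print(f"predicted TS at 1050 rpm, 15 mm/min: {sugeno_ts(1050, 15):.1f} MPa")
print(f"predicted TS at 950 rpm, 12 mm/min:  {sugeno_ts(950, 12):.1f} MPa")
```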

  3. Analyses of the energy-dependent single separable potential models for the NN scattering

    Ahmad, S.S.; Beghi, L.

    1981-08-01

    Starting from a systematic study of the salient features regarding the quantum-mechanical two-particle scattering off an energy-dependent (ED) single separable potential and its connection with the rank-2 energy-independent (EI) separable potential in the T-(K-) amplitude formulation, the present status of the ED single separable potential models due to Tabakin (M1), Garcilazo (M2) and Ahmad (M3) has been discussed. It turned out that the incorporation of a self-consistent optimization procedure improves considerably the results of the ¹S₀ and ³S₁ scattering phase shifts for the models (M2) and (M3) up to the CM wave number q = 2.5 fm⁻¹, although the extrapolation of the results up to q = 10 fm⁻¹ reveals that the two models follow the typical behaviour of the well-known super-soft core potentials. It has been found that a variant of (M3) - i.e. (M4) involving one more parameter - gives the phase shifts results which are generally in excellent agreement with the data up to q = 2.5 fm⁻¹ and the extrapolation of the results for the ¹S₀ case in the higher wave number range not only follows the corresponding data qualitatively but also reflects a behaviour similar to the Reid soft core and Hamada-Johnston potentials together with a good agreement with the recent [4/3] Padé fits. A brief discussion regarding the features resulting from the variations in the ED parts of all the four models under consideration and their correlations with the inverse scattering theory methodology concludes the paper. (author)

  4. Analyses of Spring Barley Evapotranspiration Rates Based on Gradient Measurements and Dual Crop Coefficient Model

    Pozníková, Gabriela; Fischer, Milan; Pohanková, Eva; Trnka, Miroslav

    2014-01-01

    Vol. 62, No. 5 (2014), pp. 1079-1086 ISSN 1211-8516 R&D Projects: GA MŠk LH12037; GA MŠk(CZ) EE2.3.20.0248 Institutional support: RVO:67179843 Keywords: evapotranspiration * dual crop coefficient model * Bowen ratio/energy balance method * transpiration * soil evaporation * spring barley Subject RIV: EH - Ecology, Behaviour OECD field: Environmental sciences (social aspects to be 5.7)

  5. Modeling and analyses of postulated UF6 release accidents in gaseous diffusion plant

    Kim, S.H.; Taleyarkhan, R.P.; Keith, K.D.; Schmidt, R.W.; Carter, J.C.; Dyer, R.H.

    1995-10-01

    Computer models have been developed to simulate the transient behavior of aerosols and vapors as a result of a postulated accident involving the release of uranium hexafluoride (UF6) into the process building of a gaseous diffusion plant. UF6 undergoes an exothermic chemical reaction with moisture (H2O) in the air to form hydrogen fluoride (HF) and radioactive uranyl fluoride (UO2F2). As part of a facility-wide safety evaluation, this study evaluated source terms consisting of UO2F2 as well as HF during a postulated UF6 release accident in a process building. In the postulated accident scenario, ∼7900 kg (17,500 lb) of hot UF6 vapor is released over a 5 min period from the process piping into the atmosphere of a large process building. UO2F2 mainly remains as airborne-solid particles (aerosols), and HF is in a vapor form. Some UO2F2 aerosols are removed from the air flow due to gravitational settling. The HF and the remaining UO2F2 are mixed with air and exhausted through the building ventilation system. The MELCOR computer code was selected for simulating aerosol and vapor transport in the process building. The MELCOR model was first used to develop a single volume representation of a process building and its results were compared with those from past lumped parameter models specifically developed for studying UF6 release accidents. Preliminary results indicate that MELCOR-predicted results (using a lumped formulation) are comparable with those from previously developed models

  6. Models and error analyses of measuring instruments in accountability systems in safeguards control

    Dattatreya, E.S.

    1977-05-01

    Essentially three types of measuring instruments are used in plutonium accountability systems: (1) the bubblers, for measuring the total volume of liquid in the holding tanks, (2) coulometers, titration apparatus and calorimeters, for measuring the concentration of plutonium; and (3) spectrometers, for measuring isotopic composition. These three classes of instruments are modeled and analyzed. Finally, the uncertainty in the estimation of total plutonium in the holding tank is determined

  7. Analysing the uncertain future of copper with three exploratory system dynamics models

    Auping, W.; Pruyt, E.; Kwakkel, J.H.

    2012-01-01

    High copper prices, the prospect of a transition to a more sustainable energy mix and increasing copper demand from emerging economies have not led to increased attention to the base metal copper in mineral scarcity discussions. The copper system is well documented, but many uncertainties exist, especially regarding the demand for copper. In order to create insight into this system's behaviour in the coming 40 years, an Exploratory System Dynamics Modelling and Analysis study was performed. T...

  8. Stability, convergence and Hopf bifurcation analyses of the classical car-following model

    Kamath, Gopal Krishna; Jagannathan, Krishna; Raina, Gaurav

    2016-01-01

    Reaction delays play an important role in determining the qualitative dynamical properties of a platoon of vehicles traversing a straight road. In this paper, we investigate the impact of delayed feedback on the dynamics of the Classical Car-Following Model (CCFM). Specifically, we analyze the CCFM in no delay, small delay and arbitrary delay regimes. First, we derive a sufficient condition for local stability of the CCFM in no-delay and small-delay regimes using. Next, we derive the necessar...

  9. Transformation of Baumgarten's aesthetics into a tool for analysing works and for modelling

    Thomsen, Bente Dahl

    2006-01-01

      Abstract: Is this the best form, or does it need further work? The aesthetic object does not possess the perfect qualities; but how do I proceed with the form? These are questions that all modellers ask themselves at some point, and with which they can grapple for days - even weeks - before the......, or convince him-/herself about its strengths. The cards also contain aesthetical reflections that may be of inspiration in the development of the form....

  10. Large-scale inverse model analyses employing fast randomized data reduction

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
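
    The data-reduction idea can be sketched on a plain linear inverse problem: multiply both the forward operator and the data by a short random "sketching" matrix and solve the much smaller compressed system. The linear operator, noise level and sketch size below are synthetic placeholders; RGA itself embeds this step inside the PCGA machinery rather than a bare least-squares solve.

```python
# Sketch of randomized data reduction for a tall linear inverse problem
# d = G m + noise: compress the observation space with a Gaussian sketching
# matrix S, then solve the reduced system S G m = S d.
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_par, n_sketch = 20000, 100, 300

G = rng.normal(size=(n_obs, n_par))
m_true = rng.normal(size=n_par)
d = G @ m_true + 0.01 * rng.normal(size=n_obs)

# Gaussian sketching matrix compresses 20000 observations down to 300.
S = rng.normal(size=(n_sketch, n_obs)) / np.sqrt(n_sketch)
G_s, d_s = S @ G, S @ d

m_full, *_ = np.linalg.lstsq(G, d, rcond=None)
m_sketch, *_ = np.linalg.lstsq(G_s, d_s, rcond=None)

print("relative error, full problem:    ",
      np.linalg.norm(m_full - m_true) / np.linalg.norm(m_true))
print("relative error, sketched problem:",
      np.linalg.norm(m_sketch - m_true) / np.linalg.norm(m_true))
```

    The point of the comparison is that the sketched solve touches a system whose size is set by the sketch (the retained information content) rather than by the raw number of observations.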

  11. Distributed organization of a brain microcircuit analysed by three-dimensional modeling: the olfactory bulb

    Michele eMigliore

    2014-04-01

    Full Text Available The functional consequences of the laminar organization observed in cortical systems cannot be easily studied using standard experimental techniques, abstract theoretical representations, or dimensionally reduced models built from scratch. To solve this problem we have developed a full implementation of an olfactory bulb microcircuit using realistic three-dimensional inputs, cell morphologies, and network connectivity. The results provide new insights into the relations between the functional properties of individual cells and the networks in which they are embedded. To our knowledge, this is the first model of the mitral-granule cell network to include a realistic representation of the experimentally-recorded complex spatial patterns elicited in the glomerular layer by natural odor stimulation. Although the olfactory bulb, due to its organization, has unique advantages with respect to other brain systems, the method is completely general, and can be integrated with more general approaches to other systems. The model makes experimentally testable predictions on distributed processing and on the differential backpropagation of somatic action potentials in each lateral dendrite following odor learning, providing a powerful three-dimensional framework for investigating the functions of brain microcircuits.

  12. Drying of mint leaves in a solar dryer and under open sun: Modelling, performance analyses

    Akpinar, E. Kavak

    2010-01-01

    In this study, the thin-layer drying characteristics of mint leaves were investigated in a solar dryer with forced convection and under open sun with natural convection, and energy and exergy analyses of the solar drying process were performed. An indirect forced convection solar dryer consisting of a solar air collector and drying cabinet was used in the experiments. The drying data were fitted to ten different mathematical models. Among the models, the Wang and Singh model was found to best explain the thin-layer drying behaviour of mint leaves for both the forced solar drying and the natural sun drying. Using the first law of thermodynamics, the energy analysis throughout the solar drying process was estimated. Exergy analysis during the solar drying process was carried out by applying the second law of thermodynamics. Energy utilization ratio (EUR) values of the drying cabinet varied between 7.826% and 46.285%. The values of exergetic efficiency were found to be in the range of 34.760-87.717%. The values of improvement potential varied between 0 and 0.017 kJ s⁻¹. Energy utilization ratio and improvement potential decreased with increasing drying time and ambient temperature, while exergetic efficiency increased.
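
    The model selection step reduces to fitting candidate thin-layer equations to the measured moisture ratio and comparing the fits. The sketch below fits the Wang and Singh form, MR = 1 + a*t + b*t^2, to synthetic drying data; the observations and initial guesses are placeholders, not the mint-leaf measurements.

```python
# Sketch of fitting the Wang and Singh thin-layer drying model to moisture
# ratio data and reporting the coefficients and R^2.
import numpy as np
from scipy.optimize import curve_fit

def wang_singh(t, a, b):
    """Wang and Singh model for the moisture ratio: MR(t) = 1 + a*t + b*t^2."""
    return 1.0 + a * t + b * t ** 2

t = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)      # drying time, min
mr = np.array([1.00, 0.78, 0.58, 0.42, 0.29, 0.20, 0.14])       # moisture ratio

params, cov = curve_fit(wang_singh, t, mr, p0=(-0.005, 1e-5))
a, b = params
resid = mr - wang_singh(t, a, b)
r2 = 1.0 - np.sum(resid**2) / np.sum((mr - mr.mean())**2)
print(f"a = {a:.4e}, b = {b:.4e}, R^2 = {r2:.4f}")
```

    In practice the same fit is repeated for each candidate model and the one with the highest R^2 (and lowest chi-square or RMSE) is retained, which is how the Wang and Singh form was selected here.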

  13. Bag-model analyses of proton-antiproton scattering and atomic bound states

    Alberg, M.A.; Freedman, R.A.; Henley, E.M.; Hwang, W.P.; Seckel, D.; Wilets, L.

    1983-01-01

    We study proton-antiproton (pp̄) scattering using the static real potential of Bryan and Phillips outside a cutoff radius r₀ and two different shapes for the imaginary potential inside a radius R*. These forms, motivated by bag models, are a one-gluon-annihilation potential and a simple geometric-overlap form. In both cases there are three adjustable parameters: the effective bag radius R*, the effective strong coupling constant αₛ*, and r₀. There is also a choice for the form of the real potential inside the cutoff radius r₀. Analysis of the pp̄ scattering data in the laboratory-momentum region 0.4-0.7 GeV/c yields an effective nucleon bag radius R* in the range 0.6-1.1 fm, with the best fit obtained for R* = 0.86 fm. Arguments are presented that the deduced value of R* is likely to be an upper bound on the isolated nucleon bag radius. The present results are consistent with the range of bag radii in current bag models. We have also used the resultant optical potential to calculate the shifts and widths of the ³S₁ and ¹S₀ atomic bound states of the pp̄ system. For both states we find upward (repulsive) shifts and widths of about 1 keV. We find no evidence for narrow, strongly bound pp̄ states in our potential model.

  14. Preliminary sensitivity analyses of corrosion models for BWIP [Basalt Waste Isolation Project] container materials

    Anantatmula, R.P.

    1984-01-01

    A preliminary sensitivity analysis was performed for the corrosion models developed for Basalt Waste Isolation Project container materials. The models describe corrosion behavior of the candidate container materials (low carbon steel and Fe9Cr1Mo), in various environments that are expected in the vicinity of the waste package, by separate equations. The present sensitivity analysis yields an uncertainty in total uniform corrosion on the basis of assumed uncertainties in the parameters comprising the corrosion equations. Based on the sample scenario and the preliminary corrosion models, the uncertainty in total uniform corrosion of low carbon steel and Fe9Cr1Mo for the 1000 yr containment period are 20% and 15%, respectively. For containment periods ≥ 1000 yr, the uncertainty in corrosion during the post-closure aqueous periods controls the uncertainty in total uniform corrosion for both low carbon steel and Fe9Cr1Mo. The key parameters controlling the corrosion behavior of candidate container materials are temperature, radiation, groundwater species, etc. Tests are planned in the Basalt Waste Isolation Project containment materials test program to determine in detail the sensitivity of corrosion to these parameters. We also plan to expand the sensitivity analysis to include sensitivity coefficients and other parameters in future studies. 6 refs., 3 figs., 9 tabs

  15. Analysing black phosphorus transistors using an analytic Schottky barrier MOSFET model.

    Penumatcha, Ashish V; Salazar, Ramon B; Appenzeller, Joerg

    2015-11-13

    Owing to the difficulties associated with substitutional doping of low-dimensional nanomaterials, most field-effect transistors built from carbon nanotubes, two-dimensional crystals and other low-dimensional channels are Schottky barrier MOSFETs (metal-oxide-semiconductor field-effect transistors). The transmission through a Schottky barrier MOSFET is dominated by the gate-dependent transmission through the Schottky barriers at the metal-to-channel interfaces. This makes the use of conventional transistor models highly inappropriate and has frequently led researchers in the past to extract incorrect intrinsic properties, for example mobility, for many novel nanomaterials. Here we propose a simple modelling approach to quantitatively describe the transfer characteristics of Schottky barrier MOSFETs from ultra-thin body materials accurately in the device off-state. In particular, after validating the model through the analysis of a set of ultra-thin silicon field-effect transistor data, we have successfully applied our approach to extract Schottky barrier heights for electrons and holes in black phosphorus devices for a large range of body thicknesses.

  16. Thermal conductivity degradation analyses of LWR MOX fuel by the quasi-two phase material model

    Kosaka, Yuji; Kurematsu, Shigeru; Kitagawa, Takaaki; Suzuki, Akihiro; Terai, Takayuki

    2012-01-01

    The temperature measurements of mixed oxide (MOX) and UO₂ fuels during irradiation suggested that the thermal conductivity degradation rate of the MOX fuel with burnup should be slower than that of the UO₂ fuel. In order to explain the difference of the degradation rates, the quasi-two phase material model is proposed to assess the thermal conductivity degradation of the MIMAS MOX fuel, which takes into account the Pu agglomerate distributions in the MOX fuel matrix as fabricated. As a result, the quasi-two phase model calculation shows a gradual increase of the difference with burnup and predicts more than 10% higher thermal conductivity values around 75 GWd/t. While these results are not fully suitable for thermal conductivity degradation models implemented by some industrial fuel manufacturers, they are consistent with the results from the irradiation tests and indicate that the inhomogeneity of Pu content in the MOX fuel can be one of the major reasons for the moderation of the thermal conductivity degradation of the MOX fuel. (author)

  17. Analysing the origin of long-range interactions in proteins using lattice models

    Unger Ron

    2009-01-01

    Full Text Available Abstract Background Long-range communication is very common in proteins but the physical basis of this phenomenon remains unclear. In order to gain insight into this problem, we decided to explore whether long-range interactions exist in lattice models of proteins. Lattice models of proteins have proven to capture some of the basic properties of real proteins and, thus, can be used for elucidating general principles of protein stability and folding. Results Using a computational version of double-mutant cycle analysis, we show that long-range interactions emerge in lattice models even though they are not an explicit input feature of these models. The coupling energy of both short- and long-range pairwise interactions is found to become more positive (destabilizing) in a linear fashion with increasing 'contact-frequency', an entropic term that corresponds to the fraction of states in the conformational ensemble of the sequence in which the pair of residues is in contact. A mathematical derivation of the linear dependence of the coupling energy on 'contact-frequency' is provided. Conclusion Our work shows how 'contact-frequency' should be taken into account in attempts to stabilize proteins by introducing (or stabilizing) contacts in the native state and/or through 'negative design' of non-native contacts.
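
    A minimal sketch of the double-mutant-cycle bookkeeping used in such analyses is given below (generic formulation; the free-energy values are hypothetical and not taken from the lattice simulations).

```python
def coupling_energy(dG_wt, dG_mut1, dG_mut2, dG_double):
    """Double-mutant-cycle coupling energy between two positions: the effect
    of mutation 1 in the wild-type background minus its effect in the
    mutant-2 background. A value of zero means the two positions contribute
    independently; a nonzero value indicates energetic coupling."""
    return (dG_mut1 - dG_wt) - (dG_double - dG_mut2)

# Hypothetical folding free energies in kcal/mol (more negative = more stable).
print(coupling_energy(dG_wt=-5.0, dG_mut1=-3.8, dG_mut2=-4.1, dG_double=-3.5))
# 0.6 kcal/mol: mutation 1 is less destabilizing once position 2 is already mutated.
```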

  18. A new compact solid-state neutral particle analyser at ASDEX Upgrade: Setup and physics modeling

    Schneider, P. A.; Blank, H.; Geiger, B.; Mank, K.; Martinov, S.; Ryter, F.; Weiland, M.; Weller, A. [Max-Planck-Institut für Plasmaphysik, Garching (Germany)

    2015-07-15

    At ASDEX Upgrade (AUG), a new compact solid-state detector has been installed to measure the energy spectrum of fast neutrals based on the principle described by Shinohara et al. [Rev. Sci. Instrum. 75, 3640 (2004)]. The diagnostic relies on the usual charge exchange of supra-thermal fast-ions with neutrals in the plasma. Therefore, the measured energy spectra directly correspond to those of confined fast-ions with a pitch angle defined by the line of sight of the detector. Experiments in AUG showed the good signal-to-noise characteristics of the detector. It is energy calibrated and can measure energies of 40-200 keV with count rates of up to 140 kcps. The detector has an active view on one of the heating beams. The heating beam increases the neutral density locally; thereby, information about the central fast-ion velocity distribution is obtained. The measured fluxes are modeled with a newly developed module for the 3D Monte Carlo code F90FIDASIM [Geiger et al., Plasma Phys. Controlled Fusion 53, 65010 (2011)]. The modeling makes it possible to distinguish between the active (beam) and passive contributions to the signal. Thereby, the birth profile of the measured fast neutrals can be reconstructed. This model reproduces the measured energy spectra with good accuracy when the passive contribution is taken into account.

  19. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 2

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-10-01

    Model 2 in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. Both the cylinder and the nozzle of model 2 had outside diameters of 10 in., giving a d₀/D₀ ratio of 1.0, and both had outside diameter/thickness ratios of 100. Sixteen separate loading cases in which one end of the cylinder was rigidly held were analyzed. An internal pressure loading, three mutually perpendicular force components, and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. In addition to these 13 loadings, 3 additional loads were applied to the nozzle (in-plane bending moment, out-of-plane bending moment, and axial force) with the free end of the cylinder restrained. The experimental stress distributions for each of the 16 loadings were obtained using 152 three-gage strain rosettes located on the inner and outer surfaces. All 16 loading cases were also analyzed theoretically using a finite-element shell analysis. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good general agreement, and it is felt that the analysis would be satisfactory for most engineering purposes. (auth)

  20. Improved bolt models for use in global analyses of storage and transportation casks subject to extra-regulatory loading

    Kalan, R.J.; Ammerman, D.J.; Gwinn, K.W.

    2004-01-01

    Transportation and storage casks subjected to extra-regulatory loadings may experience large stresses and strains in key structural components. One of the areas susceptible to these large stresses and strains is the bolted joint retaining any closure lid on an overpack or a canister. Modeling this joint accurately is necessary in evaluating the performance of the cask under extreme loading conditions. However, developing detailed models of a bolt in a large cask finite element model can dramatically increase the computational time, making the analysis prohibitive. Sandia National Laboratories used a series of calibrated, detailed, bolt finite element sub-models to develop a modified-beam bolt-model in order to examine the response of a storage cask and closure to severe accident loadings. The initial sub-models were calibrated for tension and shear loading using test data for large diameter bolts. Next, using the calibrated test model, sub-models of the actual joints were developed to obtain force-displacement curves and failure points for the bolted joint. These functions were used to develop a modified beam element representation of the bolted joint, which could be incorporated into the larger cask finite element model. This paper will address the modeling and assumptions used for the development of the initial calibration models, the joint sub-models and the modified beam model
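
    A minimal sketch of how a calibrated force-displacement curve with a failure point can stand in for a detailed bolt model is shown below; the curve values and the failure criterion are hypothetical and only illustrate the general idea behind the modified-beam / nonlinear-spring representation.

```python
import numpy as np

# Hypothetical tensile force-displacement curve for a bolted joint, of the
# kind extracted from a calibrated detailed sub-model (displacement in mm,
# force in kN); the last point is taken as the failure displacement.
disp_mm = np.array([0.0, 0.5, 1.5, 3.0, 5.0])
force_kN = np.array([0.0, 180.0, 240.0, 260.0, 270.0])
fail_disp_mm = disp_mm[-1]

def joint_force(d_mm):
    """Force carried by the equivalent bolt element (modified beam /
    nonlinear spring) at displacement d_mm; zero once the joint has failed."""
    if d_mm >= fail_disp_mm:
        return 0.0
    return float(np.interp(d_mm, disp_mm, force_kN))

for d in (0.25, 2.0, 4.0, 6.0):
    print(f"{d:4.2f} mm -> {joint_force(d):6.1f} kN")
```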

  1. Modeling of concrete response at high temperature

    Pfeiffer, P.; Marchertas, A.

    1984-01-01

    A rate-type creep law is implemented into the computer code TEMP-STRESS for high temperature concrete analysis. The disposition of temperature, pore pressure and moisture for the particular structure in question is provided as input for the thermo-mechanical code. The loss of moisture from concrete also induces material shrinkage which is accounted for in the analytical model. Examples are given to illustrate the numerical results

  2. Empirical Model Development for Predicting Shock Response on Composite Materials Subjected to Pyroshock Loading

    Gentz, Steven J.; Ordway, David O; Parsons, David S.; Garrison, Craig M.; Rodgers, C. Steven; Collins, Brian W.

    2015-01-01

    The NASA Engineering and Safety Center (NESC) received a request to develop an analysis model based on both frequency response and wave propagation analyses for predicting shock response spectrum (SRS) on composite materials subjected to pyroshock loading. The model would account for near-field environment (approx. 9 inches from the source) dominated by direct wave propagation, mid-field environment (approx. 2 feet from the source) characterized by wave propagation and structural resonances, and far-field environment dominated by lower frequency bending waves in the structure. This report documents the outcome of the assessment.

  3. Reproduction of the Yucca Mountain Project TSPA-LA Uncertainty and Sensitivity Analyses and Preliminary Upgrade of Models

    Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis; Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis

    2016-09-01

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, as documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  4. Towards an Industrial Application of Statistical Uncertainty Analysis Methods to Multi-physical Modelling and Safety Analyses

    Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe

    2013-01-01

    Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analysis capability, based on a code package consisting of the best estimate 3D neutronic (PANTHER), system thermal hydraulic (RELAP5), core sub-channel thermal hydraulic (COBRA-3C), and fuel thermal mechanic (FRAPCON/FRAPTRAN) codes. A series of methodologies have been developed to perform and to license the reactor safety analysis and core reload design, based on the deterministic bounding approach. Following the recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to the multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
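
    As an illustration of the non-parametric order statistic (Wilks) method mentioned above, the following sketch computes the minimum number of code runs for a one-sided tolerance limit; this is the textbook first-order formula, not necessarily the exact variant used in BESUAM.

```python
import math

def wilks_first_order(coverage=0.95, confidence=0.95):
    """Smallest number of random code runs N such that the largest of the N
    outputs bounds the `coverage` quantile of the output distribution with
    probability `confidence` (one-sided, first-order Wilks criterion:
    1 - coverage**N >= confidence)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_first_order())            # 59 runs for the classic 95%/95% case
print(wilks_first_order(0.99, 0.95))  # 299 runs for a 99%/95% tolerance limit
```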

  5. South and North: DIF Analyses of University-Student Responses to the Emotional Skills and Competence Questionnaire

    Bo Molander

    2011-12-01

    Full Text Available In a study of the Emotional Skills and Competence Questionnaire instrument (ESCQ; Takšić, 1998), three samples of university students from Balkan countries (Croatia, Serbia, and Slovenia) were contrasted with two samples of university students from Nordic countries (Finland and Sweden). In total, 1978 students participated. Effects of country and gender were obtained from the ESCQ total scores, as well as from the subscale scores. The subsequent analyses of item bias, that is, differential item functioning (DIF), revealed a number of DIF items in pairwise comparisons of the samples, thus creating doubts about the fairness of comparing mean scores. Further analyses of the DIF items showed, however, that most of the item curve functions were uniform, and that effect sizes were low. It was also shown that the number of DIF items depended on which countries were compared. Spearman correlations between measures of the number of DIF items and cultural values as measured by World Value Survey data were very high. Implications of these findings for future cross-cultural studies of the ESCQ instrument are discussed.

  6. Functionally unidimensional item response models for multivariate binary data

    Ip, Edward; Molenberghs, Geert; Chen, Shyh-Huei

    2013-01-01

    The problem of fitting unidimensional item response models to potentially multidimensional data has been extensively studied. The focus of this article is on response data that have a strong dimension but also contain minor nuisance dimensions. Fitting a unidimensional model to such multidimensional data is believed to result in ability estimates that represent a combination of the major and minor dimensions. We conjecture that the underlying dimension for the fitted unidimensional model, which we call the functional dimension, represents a nonlinear projection. In this article we investigate … tool. An example regarding a construct of desire for physical competency is used to illustrate the functional unidimensional approach.

  7. Response of rainbow trout transcriptome to model chemical contaminants

    Koskinen, Heikki; Pehkonen, Petri; Vehniaeinen, Eeva; Krasnov, Aleksei; Rexroad, Caird; Afanasyev, Sergey; Moelsa, Hannu; Oikari, Aimo

    2004-01-01

    We used a high-density cDNA microarray to study the responses of rainbow trout fry to sublethal ranges of β-naphthoflavone, cadmium, carbon tetrachloride, and pyrene. The differentially expressed genes were grouped by the functional categories of Gene Ontology. Significantly different responses to the studied compounds were shown by a number of classes, such as cell cycle, apoptosis, signal transduction, oxidative stress, subcellular and extracellular structures, protein biosynthesis, and modification. Cluster analysis separated responses to the contaminants at low and medium doses, whereas at high levels the adaptive reactions were masked by a general unspecific response to toxicity. We found enhanced expression of many mitochondrial proteins as well as genes involved in metabolism of metal ions and protein biosynthesis. In parallel, genes related to stress and immune response, signal transduction, and nucleotide metabolism were down-regulated. We performed computer-assisted analyses of Medline abstracts retrieved for each compound, which helped us to identify both expected and novel findings.

  8. Bayesian Dimensionality Assessment for the Multidimensional Nominal Response Model

    Javier Revuelta

    2017-06-01

    Full Text Available This article introduces Bayesian estimation and evaluation procedures for the multidimensional nominal response model. The utility of this model is to perform a nominal factor analysis of items that consist of a finite number of unordered response categories. The key aspect of the model, in comparison with the traditional factorial model, is that there is a slope for each response category on the latent dimensions, instead of slopes associated with the items. The extended parameterization of the multidimensional nominal response model requires large samples for estimation. When the sample size is moderate or small, some of these parameters may be weakly empirically identifiable and the estimation algorithm may run into difficulties. We propose a Bayesian MCMC inferential algorithm to estimate the parameters and the number of dimensions underlying the multidimensional nominal response model. Two Bayesian approaches to model evaluation were compared: discrepancy statistics (DIC, WAIC, and LOO) that provide an indication of the relative merit of different models, and the standardized generalized discrepancy measure, which requires resampling data and is computationally more involved. A simulation study was conducted to compare these two approaches, and the results show that the standardized generalized discrepancy measure can be used to reliably estimate the dimensionality of the model whereas the discrepancy statistics are questionable. The paper also includes an example with real data in the context of learning styles, in which the model is used to conduct an exploratory factor analysis of nominal data.
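
    The category-probability structure described above can be sketched as follows (a generic multidimensional nominal response parameterization with category-specific slopes and intercepts; the numerical values are hypothetical and identification constraints are omitted).

```python
import numpy as np

def nominal_category_probs(theta, slopes, intercepts):
    """Category response probabilities of a multidimensional nominal model:
    P(X = k | theta) is proportional to exp(a_k . theta + c_k), with one
    slope vector a_k and one intercept c_k per unordered category."""
    z = slopes @ theta + intercepts   # one logit per category
    z = z - z.max()                   # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical item with three categories loading on two latent dimensions.
slopes = np.array([[0.0, 0.0],
                   [1.2, -0.3],
                   [0.4, 1.1]])
intercepts = np.array([0.0, -0.5, 0.2])
print(nominal_category_probs(np.array([0.5, -1.0]), slopes, intercepts))
```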

  9. Computational and Statistical Analyses of Insertional Polymorphic Endogenous Retroviruses in a Non-Model Organism

    Le Bao

    2014-11-01

    Full Text Available Endogenous retroviruses (ERVs are a class of transposable elements found in all vertebrate genomes that contribute substantially to genomic functional and structural diversity. A host species acquires an ERV when an exogenous retrovirus infects a germ cell of an individual and becomes part of the genome inherited by viable progeny. ERVs that colonized ancestral lineages are fixed in contemporary species. However, in some extant species, ERV colonization is ongoing, which results in variation in ERV frequency in the population. To study the consequences of ERV colonization of a host genome, methods are needed to assign each ERV to a location in a species’ genome and determine which individuals have acquired each ERV by descent. Because well annotated reference genomes are not widely available for all species, de novo clustering approaches provide an alternative to reference mapping that are insensitive to differences between query and reference and that are amenable to mobile element studies in both model and non-model organisms. However, there is substantial uncertainty in both identifying ERV genomic position and assigning each unique ERV integration site to individuals in a population. We present an analysis suitable for detecting ERV integration sites in species without the need for a reference genome. Our approach is based on improved de novo clustering methods and statistical models that take the uncertainty of assignment into account and yield a probability matrix of shared ERV integration sites among individuals. We demonstrate that polymorphic integrations of a recently identified endogenous retrovirus in deer reflect contemporary relationships among individuals and populations.

  10. Diffusion model analyses of the experimental data of ¹²C + ²⁷Al, ⁴⁰Ca dissipative collisions

    SHEN Wen-qing; QIAO Wei-min; ZHU Yong-tai; ZHAN Wen-long

    1985-01-01

    Assuming that the intermediate system decays with a statistical lifetime, the general behavior of the threefold differential cross section d³σ/dZ dE dθ in the dissipative collisions of the 68 MeV ¹²C + ²⁷Al and 68.6 MeV ¹²C + ⁴⁰Ca systems is analyzed in the diffusion model framework. The lifetime of the intermediate system and the separation distance for the completely damped deep-inelastic component are obtained. The calculated results and the experimental data of the angular distributions and Wilczynski plots are compared. The probable reasons for the differences between them are briefly discussed.

  11. Domain analyses of Usher syndrome causing Clarin-1 and GPR98 protein models.

    Khan, Sehrish Haider; Javed, Muhammad Rizwan; Qasim, Muhammad; Shahzadi, Samar; Jalil, Asma; Rehman, Shahid Ur

    2014-01-01

    Usher syndrome is an autosomal recessive disorder that causes hearing loss, Retinitis Pigmentosa (RP) and vestibular dysfunction. It is a clinically and genetically heterogeneous disorder which is clinically divided into three types, i.e. type I, type II and type III. To date, there are about twelve loci and ten identified genes which are associated with Usher syndrome. A mutation in any of these genes, e.g. CDH23, CLRN1, GPR98, MYO7A, PCDH15, USH1C, USH1G, USH2A and DFNB31, can result in Usher syndrome or non-syndromic deafness. These genes provide instructions for making proteins that play important roles in normal hearing, balance and vision. Studies have shown that the protein structures of only seven genes have been determined experimentally, and there are still three genes whose structures are unavailable. These genes are Clarin-1, GPR98 and Usherin. In the absence of an experimentally determined structure, homology modeling and threading often provide a useful 3D model of a protein. Therefore, in the current study the Clarin-1 and GPR98 proteins have been analyzed for signal peptides, domains and motifs. Clarin-1 was found to lack a signal peptide and to consist of a prokar lipoprotein domain. Clarin-1 is classified within the claudin 2 superfamily and consists of twelve motifs. GPR98, in contrast, has a 29-amino-acid-long signal peptide and is classified within GPCR family 2, belonging to the Concanavalin A-like lectin/glucanase superfamily. It was found to consist of GPS and G protein receptor F2 domains and twenty-nine motifs. Their 3D structures have been predicted using the I-TASSER server. The model of Clarin-1 showed only α-helices but no β-sheets, while the model of GPR98 showed both α-helices and β-sheets. The predicted structures were then evaluated and validated by MolProbity and Ramachandran plots. The evaluation of the predicted structures showed 78.9% of Clarin-1 residues and 78.9% of GPR98 residues within favored regions. The findings of the present study have resulted in the

  12. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Stoyan Stoyanov

    2009-08-01

    Full Text Available A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for searching for the global optimum, based on various advanced strategies (heuristic, stochastic, genetic and combined), is presented. Methods based on sensitivity theory, stochastic and mixed strategies for optimization with partial knowledge about the kinetic, technical and economic parameters in optimization problems are discussed. Several approaches to multi-criteria optimization tasks are analyzed. The problems concerning optimal control of biotechnological systems are also discussed.

  13. A Conceptual Model for Analysing Collaborative Work and Products in Groupware Systems

    Duque, Rafael; Bravo, Crescencio; Ortega, Manuel

    Collaborative work using groupware systems is a dynamic process in which many tasks, in different application domains, are carried out. Currently, one of the biggest challenges in the field of CSCW (Computer-Supported Cooperative Work) research is to establish conceptual models which allow for the analysis of collaborative activities and their resulting products. In this article, we propose an ontology that conceptualizes the required elements which enable an analysis to infer a set of analysis indicators, thus evaluating both the individual and group work and the artefacts which are produced.

  14. A rate equation model of stomatal responses to vapour pressure deficit and drought

    Shanahan ST

    2002-08-01

    Full Text Available Abstract Background Stomata respond to vapour pressure deficit (D): when D increases, stomata begin to close. Closure is the result of a decline in guard cell turgor, but the link between D and turgor is poorly understood. We describe a model for stomatal responses to increasing D based upon cellular water relations. The model also incorporates the impacts of increasing levels of water stress upon stomatal responses to increasing D. Results The model successfully mimics the three phases of stomatal responses to D and also reproduces the impact of increasing plant water deficit upon stomatal responses to increasing D. As water stress developed, stomata regulated transpiration at ever decreasing values of D. Thus, stomatal sensitivity to D increased with increasing water stress. Predictions from the model concerning the impact of changes in cuticular transpiration upon stomatal responses to increasing D are shown to conform to experimental data. Sensitivity analyses of stomatal responses to various parameters of the model show that leaf thickness, the fraction of leaf volume that is air-space, and the fraction of mesophyll cell wall in contact with air have little impact upon the behaviour of the model. In contrast, changes in cuticular conductance and membrane hydraulic conductivity have significant impacts upon model behaviour. Conclusion Cuticular transpiration is an important feature of stomatal responses to D and is the cause of the three-phase response to D. Feed-forward behaviour of stomata does not explain stomatal responses to D, as feedback involving water loss from guard cells can explain these responses.

  15. Comparative proteomic analyses reveal the proteome response to short-term drought in Italian ryegrass (Lolium multiflorum).

    Ling Pan

    Full Text Available Drought is a major abiotic stress that impairs the growth and productivity of Italian ryegrass. Comparative analysis of drought-responsive proteins provides insight into the molecular mechanisms of drought tolerance in Lolium multiflorum. Using an iTRAQ-based approach, proteomic changes in tolerant and susceptible lines were examined in response to drought conditions. A total of 950 differentially accumulated proteins were found to be involved in carbohydrate metabolism, amino acid metabolism, biosynthesis of secondary metabolites, and signal transduction pathways, such as β-D-xylosidase, β-D-glucan glucohydrolase, glycerate dehydrogenase, cobalamin-independent methionine synthase, glutamine synthetase 1a, farnesyl pyrophosphate synthase, diacylglycerol, and inositol 1,4,5-trisphosphate, which might contribute to enhanced drought tolerance or adaptation in Lolium multiflorum. Interestingly, two specific metabolic pathways, arachidonic acid and inositol phosphate metabolism, including differentially accumulated proteins, were observed only in the tolerant lines. Cysteine protease cathepsin B, cysteine proteinase, lipid transfer protein and aquaporin were observed as drought-regulated proteins participating in hydrolysis and transmembrane transport. The activities of phospholipid hydroperoxide glutathione peroxidase, peroxiredoxin, dehydroascorbate reductase, peroxisomal ascorbate peroxidase and monodehydroascorbate reductase were associated with alleviating the accumulation of reactive oxygen species in stress-inducing environments. Our results showed that the drought-responsive proteins were closely related to metabolic processes including signal transduction, antioxidant defenses, hydrolysis, and transmembrane transport.

  16. Transcriptomic analyses on muscle tissues of Litopenaeus vannamei provide the first profile insight into the response to low temperature stress.

    Wen Huang

    Full Text Available The Pacific white shrimp (Litopenaeus vannamei) is an important cultured crustacean species worldwide. However, little is known about the molecular mechanisms of this species involved in the response to cold stress. In this study, four separate RNA-Seq libraries of L. vannamei were generated from 13°C stress and control temperature conditions. A total of 29,662 unigenes and 19,619 annotated genes were obtained. Three comparisons were carried out among the four libraries, from which 72 of the top 20% of differentially expressed genes were obtained, and 15 GO and 5 KEGG temperature-sensitive pathways were identified. Catalytic activity (GO: 0003824) and Metabolic pathways (ko01100) were the most annotated GO and KEGG pathways in response to cold stress, respectively. In addition, the calcium, MAPK cascade, transcription factor and serine/threonine-protein kinase signalling pathways were identified and clustered. The serine/threonine-protein kinase signalling pathway might play a more important role in cold adaptation, while the other three signalling pathways were not widely transcribed. Our results summarize the differentially expressed genes and highlight the major signalling pathways and related genes. These findings provide the first profile insight into the molecular basis of the L. vannamei response to cold stress.

  17. Leveraging First Response Time into the Knowledge Tracing Model

    Wang, Yutao; Heffernan, Neil T.

    2012-01-01

    The field of educational data mining has been using the Knowledge Tracing model, which only looks at the correctness of a student's first response, for tracking student knowledge. Recently, many other features have been studied to extend the Knowledge Tracing model to better model student knowledge. The goal of this paper is to analyze whether or not the…
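
    For context, a minimal sketch of the standard (first-response) Knowledge Tracing update that the paper builds on is shown below; the parameter values and the response sequence are hypothetical.

```python
def bkt_update(p_know, correct, p_transit=0.1, p_guess=0.2, p_slip=0.1):
    """One Bayesian Knowledge Tracing step: condition the probability that
    the student knows the skill on the correctness of the first response,
    then apply the probability of learning between opportunities."""
    if correct:
        posterior = p_know * (1.0 - p_slip) / (
            p_know * (1.0 - p_slip) + (1.0 - p_know) * p_guess)
    else:
        posterior = p_know * p_slip / (
            p_know * p_slip + (1.0 - p_know) * (1.0 - p_guess))
    return posterior + (1.0 - posterior) * p_transit

# Hypothetical sequence of first responses (True = correct on first attempt).
p = 0.3
for obs in (False, True, True, True):
    p = bkt_update(p, obs)
    print(round(p, 3))
```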

  18. Impulse-response analysis of the market share attraction model

    D. Fok (Dennis); Ph.H.B.F. Franses (Philip Hans)

    1999-01-01

    We propose a simulation-based technique to calculate impulse-response functions and their confidence intervals in a market share attraction model [MCI]. As an MCI model implies a reduced form model for the logs of relative market shares, simulation techniques have to be used to obtain

  19. Response spectrum analysis of a stochastic seismic model

    Kimura, Koji; Sakata, Masaru; Takemoto, Shinichiro.

    1990-01-01

    The stochastic response spectrum approach is presented for predicting the dynamic behavior of structures under earthquake excitation expressed by a random process, one of whose sample functions can be regarded as a recorded strong-motion earthquake accelerogram. The approach consists of modeling recorded ground motion by a random process and the root-mean-square (rms) response analysis of a single-degree-of-freedom system using the moment equations method. The stochastic response spectrum is obtained as a plot of the maximum rms response versus the natural period of the system and is compared with the conventional response spectrum. (author)
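
    For comparison with the conventional response spectrum mentioned above, the sketch below computes the peak displacement of a damped single-degree-of-freedom oscillator over a range of natural periods using a standard Newmark average-acceleration integrator; the synthetic accelerogram is an assumption, and the stochastic rms spectrum itself would instead be obtained from the moment equations.

```python
import numpy as np

def sdof_peak_response(ag, dt, period, zeta=0.05):
    """Peak relative displacement of a linear SDOF oscillator whose base is
    driven by the accelerogram `ag` (Newmark average-acceleration scheme);
    evaluating this over many periods gives a conventional response spectrum."""
    m = 1.0
    wn = 2.0 * np.pi / period
    k = m * wn ** 2
    c = 2.0 * zeta * m * wn
    beta, gamma = 0.25, 0.5
    a1 = m / (beta * dt ** 2) + gamma * c / (beta * dt)
    a2 = m / (beta * dt) + (gamma / beta - 1.0) * c
    a3 = (1.0 / (2.0 * beta) - 1.0) * m + dt * (gamma / (2.0 * beta) - 1.0) * c
    k_hat = k + a1

    u = v = 0.0
    a = -ag[0]                      # equation of motion with p = -m*ag, m = 1
    peak = 0.0
    for p_next in -ag[1:]:
        u_next = (p_next + a1 * u + a2 * v + a3 * a) / k_hat
        v_next = (gamma / (beta * dt)) * (u_next - u) \
                 + (1.0 - gamma / beta) * v + dt * (1.0 - gamma / (2.0 * beta)) * a
        a_next = (u_next - u) / (beta * dt ** 2) - v / (beta * dt) \
                 - (1.0 / (2.0 * beta) - 1.0) * a
        u, v, a = u_next, v_next, a_next
        peak = max(peak, abs(u))
    return peak

# Hypothetical ground motion: band-limited white noise standing in for a record.
rng = np.random.default_rng(1)
dt = 0.01
ag = rng.standard_normal(2000) * 0.5            # m/s^2
periods = np.linspace(0.1, 3.0, 30)
spectrum = [sdof_peak_response(ag, dt, T) for T in periods]
print(max(spectrum))
```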

  20. Integration of 3d Models and Diagnostic Analyses Through a Conservation-Oriented Information System

    Mandelli, A.; Achille, C.; Tommasi, C.; Fassi, F.

    2017-08-01

    In recent years, mature technologies for producing high quality virtual 3D replicas of Cultural Heritage (CH) artefacts have grown thanks to the progress of Information Technology (IT) tools. These methods are an efficient way to present digital models that can be used for several purposes: heritage managing, support to conservation, virtual restoration, reconstruction and colouring, art cataloguing and visual communication. The work presented is an emblematic case study oriented to preventive conservation through monitoring activities, using different acquisition methods and instruments. It was developed inside a project funded by the Lombardy Region, Italy, called "Smart Culture", which aimed to realise a platform that gives users easy access to CH artefacts, using a very famous statue as an example. The final product is a 3D reality-based model that contains a lot of information inside it, and that can be consulted through a common web browser. In the end, it was possible to define the general strategies oriented to the maintenance and the valorisation of CH artefacts, which, in this specific case, must consider the integration of different techniques and competencies to obtain a complete, accurate and continuous monitoring of the statue.

  1. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.
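
    As a minimal sketch of the regression-calibration idea reviewed above (assuming, purely for illustration, that a validation subsample with a better exposure measurement exists), simulated data can show how calibrating the modelled exposure removes most of the attenuation in the health-effect estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_val = 5000, 300

# Hypothetical setup: true exposure X, modelled (error-prone) exposure W,
# and a continuous health outcome Y with a true effect of 0.10 per unit X.
x = rng.normal(10.0, 3.0, n)
w = 2.0 + 0.8 * x + rng.normal(0.0, 2.0, n)      # model-derived surrogate
y = 1.0 + 0.10 * x + rng.normal(0.0, 1.0, n)

# Naive analysis: regress Y on W directly (attenuated effect estimate).
naive = np.polyfit(w, y, 1)[0]

# Regression calibration: learn E[X | W] on a validation subsample where the
# better exposure measurement is available, then use the calibrated exposure
# in the health model.
b, a = np.polyfit(w[:n_val], x[:n_val], 1)       # calibration slope, intercept
x_hat = a + b * w
corrected = np.polyfit(x_hat, y, 1)[0]

print(f"naive slope      = {naive:.3f}")
print(f"calibrated slope = {corrected:.3f}  (true value 0.10)")
```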

  2. Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.

    Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E

    2015-09-03

    Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Innovative three-dimensional neutronics analyses directly coupled with cad models of geometrically complex fusion systems

    Sawan, M.; Wilson, P.; El-Guebaly, L.; Henderson, D.; Sviatoslavsky, G.; Bohm, T.; Kiedrowski, B.; Ibrahim, A.; Smith, B.; Slaybaugh, R.; Tautges, T.

    2007-01-01

    Fusion systems are, in general, geometrically complex requiring detailed three-dimensional (3-D) nuclear analysis. This analysis is required to address tritium self-sufficiency, nuclear heating, radiation damage, shielding, and radiation streaming issues. To facilitate such calculations, we developed an innovative computational tool that is based on the continuous energy Monte Carlo code MCNP and permits the direct use of CAD-based solid models in the ray-tracing. This allows performing the neutronics calculations in a model that preserves the geometrical details without any simplification, eliminates possible human error in modeling the geometry for MCNP, and allows faster design iterations. In addition to improving the work flow for simulating complex 3- D geometries, it allows a richer representation of the geometry compared to the standard 2nd order polynomial representation. This newly developed tool has been successfully tested for a detailed 40 degree sector benchmark of the International Thermonuclear Experimental Reactor (ITER). The calculations included determining the poloidal variation of the neutron wall loading, flux and nuclear heating in the divertor components, nuclear heating in toroidal field coils, and radiation streaming in the mid-plane port. The tool has been applied to perform 3-D nuclear analysis for several fusion designs including the ARIES Compact Stellarator (ARIES-CS), the High Average Power Laser (HAPL) inertial fusion power plant, and ITER first wall/shield (FWS) modules. The ARIES-CS stellarator has a first wall shape and a plasma profile that varies toroidally within each field period compared to the uniform toroidal shape in tokamaks. Such variation cannot be modeled analytically in the standard MCNP code. The impact of the complex helical geometry and the non-uniform blanket and divertor on the overall tritium breeding ratio and total nuclear heating was determined. In addition, we calculated the neutron wall loading variation in

  4. Integrated expression profiling and ChIP-seq analyses of the growth inhibition response program of the androgen receptor.

    Biaoyang Lin

    2009-08-01

    Full Text Available The androgen receptor (AR) plays important roles in the development of the male phenotype and in different human diseases including prostate cancers. The AR can act either as a promoter or a tumor suppressor depending on cell types. The AR proliferative response program has been well studied, but its prohibitive response program has not yet been thoroughly studied. Previous studies found that PC3 cells expressing the wild-type AR inhibit growth and suppress invasion. We applied expression profiling to identify the response program of PC3 cells expressing the AR (PC3-AR) under different growth conditions (i.e. with or without androgens and at different concentrations of androgens) and then applied the newly developed ChIP-seq technology to identify the AR binding regions in the PC3 cancer genome. A surprising finding was that the comparison of MOCK-transfected PC3 cells with AR-transfected cells identified 3,452 differentially expressed genes (two-fold cutoff) even without the addition of androgens (i.e. in the ethanol control), suggesting a ligand-independent or extremely low-level androgen activation of the AR. ChIP-Seq analysis revealed 6,629 AR binding regions in the cancer genome of PC3 cells with an FDR (false discovery rate) cutoff of 0.05. About 22.4% (638 of 2,849) can be mapped to within 2 kb of a transcription start site (TSS). Three novel AR binding motifs were identified in the AR binding regions of PC3-AR cells, and two of them share a core consensus sequence CGAGCTCTTC, which together mapped to 27.3% of AR binding regions (1,808/6,629). In contrast, only about 2.9% (190/6,629) of AR binding sites contain the canonical AR matrices M00481, M00447 and M00962 (from the Transfac database), which are derived mostly from AR proliferative responsive genes in androgen dependent cells. In addition, we identified four top-ranking co-occupancy transcription factors in the AR binding regions, which include TEF1 (Transcriptional enhancer factor

  5. Fitting Diffusion Item Response Theory Models for Responses and Response Times Using the R Package diffIRT

    Dylan Molenaar

    2015-08-01

    Full Text Available In the psychometric literature, item response theory models have been proposed that explicitly take into account the decision process underlying the responses of subjects to psychometric test items. Application of these models is, however, hampered by the absence of general and flexible software to fit them. In this paper, we present diffIRT, an R package that can be used to fit item response theory models that are based on a diffusion process. We discuss parameter estimation and model fit assessment, show the viability of the package in a simulation study, and illustrate the use of the package with two datasets pertaining to extraversion and mental rotation. In addition, we illustrate how the package can be used to fit the traditional diffusion model (as originally developed in experimental psychology) to data.

  6. ANALYSES OF GENETIC VARIABILITY IN LENTINULA EDODES THROUGH MYCELIA RESPONSES TO DIFFERENT ABIOTIC CONDITIONS AND RAPD MOLECULAR MARKERS

    Maki Cristina Sayuri

    2001-01-01

    Full Text Available The growth of thirty-four Lentinula edodes strains submitted to different mycelial cultivation conditions (pH and temperature) was evaluated, and strain variability was assessed by RAPD molecular markers. The growth at three pH values (5, 6 and 7) and four different temperatures (16, 25, 28 and 37ºC) was measured using the in vitro mycelial development rate and water retention as parameters. Mycelial cultivation was successful at all pH values tested, while the ideal temperature for mycelial cultivation ranged between 25 and 28ºC. The water content was lower in strains grown at 37ºC. Among the 20 OPA primers (Operon Technologies, Inc.) used for the RAPD analyses, seventeen presented good polymorphism (OPA01 to OPA05, OPA07 to OPA14, OPA17 to OPA20). The clustering based on similarity coefficients allowed the separation of the strains into two groups with different geographic origins.

  7. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in forested ecosystems

    Thomas, R. Q.; Williams, M.

    2014-12-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. Here we explore the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants using a new, simple model of ecosystem C-N cycling and interactions (ACONITE). ACONITE builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C:N, N fixation, and plant C use efficiency) based on the optimization of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state and transient ecosystem stocks and fluxes in three different forest ecosystem types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C:N differed among the three ecosystem types (temperate deciduous … database describing plant traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. A sensitivity analysis revealed that the parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C:N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C:N, while a more recently reported non-linear relationship simulated leaf C:N that compared better to the global trait database than the linear relationship. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C:N can help address challenges simulating these properties in ecosystem and Earth System models. Furthermore, the simple approach with emergent properties based on

  8. Model-based analyses to compare health and economic outcomes of cancer control: inclusion of disparities.

    Goldie, Sue J; Daniels, Norman

    2011-09-21

    Disease simulation models of the health and economic consequences of different prevention and treatment strategies can guide policy decisions about cancer control. However, models that also consider health disparities can identify strategies that improve both population health and its equitable distribution. We devised a typology of cancer disparities that considers types of inequalities among black, white, and Hispanic populations across different cancers and characteristics important for near-term policy discussions. We illustrated the typology in the specific example of cervical cancer using an existing disease simulation model calibrated to clinical, epidemiological, and cost data for the United States. We calculated average reduction in cancer incidence overall and for black, white, and Hispanic women under five different prevention strategies (Strategies A1, A2, A3, B, and C) and estimated average costs and life expectancy per woman, and the cost-effectiveness ratio for each strategy. Strategies that may provide greater aggregate health benefit than existing options may also exacerbate disparities. Combining human papillomavirus vaccination (Strategy A2) with current cervical cancer screening patterns (Strategy A1) resulted in an average reduction of 69% in cancer incidence overall but a 71.6% reduction for white women, 68.3% for black women, and 63.9% for Hispanic women. Other strategies targeting risk-based screening to racial and ethnic minorities reduced disparities among racial subgroups and resulted in more equitable distribution of benefits among subgroups (reduction in cervical cancer incidence, white vs. Hispanic women, 69.7% vs. 70.1%). Strategies that employ targeted risk-based screening and new screening algorithms, with or without vaccination (Strategies B and C), provide excellent value. The most effective strategy (Strategy C) had a cost-effectiveness ratio of $28,200 per year of life saved when compared with the same strategy without
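
    For reference, the cost-effectiveness ratios quoted above are incremental ratios of the form Δcost/Δeffect; the sketch below shows the arithmetic with hypothetical per-woman numbers, not the model's outputs.

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    health benefit (here, per life-year) of a strategy versus a comparator."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Hypothetical per-woman averages (costs in USD, effects in life-years);
# these numbers are illustrative only.
print(icer(cost_new=1150.0, effect_new=26.510,
           cost_ref=1000.0, effect_ref=26.505))   # 30000 USD per life-year saved
```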

  9. A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses

    Yondo, Raul; Andrés, Esther; Valero, Eusebio

    2018-01-01

    Full scale aerodynamic wind tunnel testing, numerical simulation of high dimensional (full-order) aerodynamic models or flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems and restrained computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic those full-order models at different values of the design variables, recent progress has witnessed the introduction, in real-time and many-query analyses, of surrogate-based approaches as rapid and cheaper-to-simulate models. In this paper, a comprehensive and state-of-the-art survey on common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraint handling, and infill and stopping criteria. Benefits, drawbacks and comparative discussions in applying those methods are described. Furthermore, the paper familiarizes the readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field and discussions on advanced sampling methodologies are presented, to give a glimpse of the various efficient possibilities for a priori sampling of the parameter space. Closing remarks focus on future perspectives, challenges and shortcomings associated with the use of surrogate models by aircraft industrial
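
    As a minimal illustration of the surrogate-based workflow surveyed here, the sketch below fits a Gaussian-process (kriging-type) surrogate to a handful of samples of a stand-in "expensive" response and then queries it cheaply; the toy function and sampling plan are assumptions, not an aerodynamic model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical expensive response (e.g., a drag-like coefficient) sampled at
# a handful of design points; in practice these would come from CFD runs or
# wind-tunnel tests chosen by a design of experiments.
def expensive_model(x):
    return np.sin(3.0 * x) + 0.3 * x ** 2

x_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)   # design-of-experiments sample
y_train = expensive_model(x_train).ravel()

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
surrogate.fit(x_train, y_train)

# Cheap many-query predictions with an uncertainty estimate at each point.
x_query = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
y_hat, y_std = surrogate.predict(x_query, return_std=True)
print(float(np.max(np.abs(y_hat - expensive_model(x_query).ravel()))))
```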

  10. Monte Carlo modeling and analyses of YALINA- booster subcritical assembly Part II: pulsed neutron source

    Talamo, A.; Gohar, M.Y.A.; Rabiti, C.

    2008-01-01

    One of the most reliable experimental methods for measuring the kinetic parameters of a subcritical assembly is the Sjoestrand method applied to the reaction rate generated from a pulsed neutron source. This study developed a new analytical methodology for characterizing the kinetic parameters of a subcritical assembly using the Sjoestrand method, which allows comparing the analytical and experimental time-dependent reaction rates and the reactivity measurements. In this methodology, the reaction rate (detector response) is calculated for a single neutron pulse using the MCNP/MCNPX computer code or any other neutron transport code that explicitly simulates the fission delayed neutrons. The calculation simulates a single neutron pulse over a long time period until the delayed neutron contribution to the reaction rate has vanished. The obtained reaction rate is then superimposed on itself, shifted in time, to simulate the repeated pulse operation until the asymptotic level of the reaction rate, set by the delayed neutrons, is achieved. The superimposition of the pulse onto itself was calculated by a simple C computer program. A parallel version of the C program, using the Message Passing Interface (MPI), is employed because of the large amount of data being processed. The new calculation methodology has shown excellent agreement with the experimental results available from the YALINA-Booster facility of Belarus. The facility has been driven by a Deuterium-Deuterium or Deuterium-Tritium pulsed neutron source and the (n,p) reaction rate has been experimentally measured by a ³He detector. The MCNP calculation has utilized the weight window and delayed neutron biasing variance reduction techniques since the detector volume is small compared to the assembly volume. Finally, this methodology was used to calculate the IAEA benchmark of the YALINA-Booster experiment.
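
    A minimal sketch of the pulse-superposition step (here in Python rather than the C/MPI post-processor described above) is shown below; the single-pulse response is a hypothetical two-exponential shape standing in for the transport-code-computed reaction rate.

```python
import numpy as np

def superimpose_pulses(single_pulse_rate, dt, pulse_period, n_pulses):
    """Build the reaction rate seen during repeated pulsing by shifting the
    simulated single-pulse response by the pulse period and summing the
    shifted copies (the superposition step of the methodology)."""
    shift = int(round(pulse_period / dt))
    total_len = len(single_pulse_rate) + shift * (n_pulses - 1)
    total = np.zeros(total_len)
    for i in range(n_pulses):
        start = i * shift
        total[start:start + len(single_pulse_rate)] += single_pulse_rate
    return total

# Hypothetical single-pulse detector response: a fast prompt decay plus a
# small long-lived delayed-neutron tail (arbitrary units).
dt = 1.0e-4                                   # s
t = np.arange(0.0, 2.0, dt)
single = 1.0e3 * np.exp(-t / 5.0e-3) + 0.5 * np.exp(-t / 10.0)

train = superimpose_pulses(single, dt, pulse_period=0.05, n_pulses=200)
# Late in the pulse train, the level between pulses approaches the asymptotic
# delayed-neutron plateau used in the Sjoestrand area-ratio analysis.
print(train[-int(0.04 / dt):].min())
```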

  11. Use of model analysis to analyse Thai students’ attitudes and approaches to physics problem solving

    Rakkapao, S.; Prasitpong, S.

    2018-03-01

    This study applies the model analysis technique to explore the distribution of Thai students' attitudes and approaches to physics problem solving and how those attitudes and approaches change as a result of different experiences in physics learning. We administered the Attitudes and Approaches to Problem Solving (AAPS) survey to over 700 Thai university students at five different levels, namely students entering science, first-year science students, and second-, third- and fourth-year physics students. We found that their inferred mental states were generally mixed. The largest gap between physics experts and all levels of the students concerned the role of equations and formulas in physics problem solving, and views towards difficult problems. Most participants at all levels believed that being able to handle the mathematics is the most important part of physics problem solving. Most students' views did not change even though they gained experience in physics learning.

  12. Statistical Analyses and Modeling of the Implementation of Agile Manufacturing Tactics in Industrial Firms

    Mohammad D. AL-Tahat

    2012-01-01

    This paper provides a review of and introduction to agile manufacturing. Tactics of agile manufacturing are mapped onto different production areas (eight latent constructs: manufacturing equipment and technology, process technology and know-how, quality and productivity improvement, production planning and control, shop floor management, product design and development, supplier relationship management, and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed and hypotheses are formulated. Feedback from 456 firms is collected using a five-point Likert-scale questionnaire, and statistical analysis is carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA results, and relationships between agile components are tested. The results of this study indicate that agile manufacturing tactics have a positive effect on the overall agility level. This conclusion can be used by manufacturing firms to manage challenges when trying to become agile.
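
    Before fitting a structural equation model of this kind, the internal consistency of each multi-item construct is typically checked; a minimal sketch of that step is shown below (the study itself used IBM SPSS and AMOS, so this NumPy version of Cronbach's alpha, with hypothetical Likert data, is illustrative only):

      import numpy as np

      def cronbach_alpha(item_scores):
          """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
          x = np.asarray(item_scores, dtype=float)
          k = x.shape[1]                           # number of items in the construct
          item_var = x.var(axis=0, ddof=1).sum()   # sum of individual item variances
          total_var = x.sum(axis=1).var(ddof=1)    # variance of the summed scale
          return (k / (k - 1)) * (1.0 - item_var / total_var)

      # Hypothetical responses to a four-item construct on a 1-5 Likert scale
      responses = np.array([[4, 5, 4, 4],
                            [3, 3, 4, 3],
                            [5, 5, 5, 4],
                            [2, 3, 2, 3],
                            [4, 4, 5, 5]])
      print(round(cronbach_alpha(responses), 2))   # values above ~0.7 are usually taken as acceptable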

  13. Analysing improvements to on-street public transport systems: a mesoscopic model approach

    Ingvardson, Jesper Bláfoss; Kornerup Jensen, Jonas; Nielsen, Otto Anker

    2017-01-01

    … and other advanced public transport systems (APTS), the attractiveness of such systems depends heavily on their implementation. In the early planning stage it is advantageous to deploy simple and transparent models to evaluate possible ways of implementation. For this purpose, the present study develops … headway time regularity and running time variability, i.e. taking into account waiting time and in-vehicle time. The approach was applied to a case study by assessing the effects of implementing segregated infrastructure and APTS elements, individually and in combination. The results showed … that the reliability of on-street public transport operations mainly depends on APTS elements, and especially holding strategies, whereas pure infrastructure improvements induced travel time reductions. The results further suggested that synergy effects can be obtained by planning on-street public transport coherently …

  14. Analysing pseudoephedrine/methamphetamine policy options in Australia using multi-criteria decision modelling.

    Manning, Matthew; Wong, Gabriel T W; Ransley, Janet; Smith, Christine

    2016-06-01

    In this paper we capture and synthesize the unique knowledge of experts so that choices regarding policy measures to address methamphetamine consumption and dependency in Australia can be strengthened. We examine perceptions of: (1) the influence of underlying factors that impact on the methamphetamine problem; (2) the importance of various models of intervention that have the potential to affect the success of policies; and (3) the efficacy of alternative pseudoephedrine policy options. We adopt a multi-criteria decision model to unpack the factors that affect decisions made by experts and examine potential variations in weights/preferences among groups. Seventy experts from five groups (academia (18.6%), government and policy (27.1%), health (18.6%), pharmaceutical (17.1%) and police (18.6%)) in Australia participated in the survey. Social characteristics are considered the most important underlying factor, prevention the most effective strategy and Project STOP the most preferred policy option with respect to reducing methamphetamine consumption and dependency in Australia. One-way repeated-measures ANOVAs indicate a statistically significant difference with regard to the influence of underlying factors (F(2.3, 144.5) = 11.256, p < 0.05) … methamphetamine consumption and dependency. Most experts support the use of preventative mechanisms to inhibit drug initiation and delay drug uptake. Compared to other policies, Project STOP (which aims to disrupt the initial diversion of pseudoephedrine) appears to be a more preferable preventative mechanism to control the production and subsequent sale and use of methamphetamine. This regulatory civil law lever engages third parties in controlling drug-related crime. The literature supports third-party partnerships as they engage experts who have knowledge and expertise with respect to prevention and harm minimization. Copyright © 2016 Elsevier B.V. All rights reserved.
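
    The aggregation step in such a multi-criteria decision model can be illustrated with a simple weighted-sum ranking (a generic sketch only, not the authors' elicitation procedure; the option names, criteria, weights and scores below are hypothetical):

      import numpy as np

      # Hypothetical expert-elicited weights for three criteria (must sum to 1)
      criteria = ["effectiveness", "cost", "feasibility"]
      weights = np.array([0.5, 0.2, 0.3])

      # Hypothetical scores (0-10) for each policy option against each criterion
      options = ["Project STOP", "Prescription-only", "Status quo"]
      scores = np.array([[8.0, 6.0, 7.0],
                         [7.0, 4.0, 5.0],
                         [3.0, 9.0, 8.0]])

      overall = scores @ weights                 # weighted-sum aggregation
      for name, value in sorted(zip(options, overall), key=lambda p: -p[1]):
          print(f"{name}: {value:.2f}")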

  15. Analyses of non-fatal accidents in an opencast mine by logistic regression model - a case study.

    Onder, Seyhan; Mutlu, Mert

    2017-09-01

    Accidents cause major damage to both workers and enterprises in the mining industry. To reduce the number of occupational accidents, these incidents should be properly registered and carefully analysed. This study examines the Aegean Lignite Enterprise (ELI) of Turkish Coal Enterprises (TKI) in Soma between 2006 and 2011, using the opencast coal mine's occupational accident records for the statistical analyses. A total of 231 occupational accidents were analysed for this study. The accident records were categorized into seven groups: area, reason, occupation, part of body, age, shift hour and lost days. The SPSS package was used for the logistic regression analyses, which predicted the probability of non-fatal accidents resulting in more or fewer than 3 lost workdays. The social facilities area of the surface installations, workshops and opencast mining areas are the areas with the highest probability of accidents with more than 3 lost workdays for non-fatal injuries, while the reasons with the highest probability for these types of accidents are transporting and manual handling. Additionally, the model was tested on accidents reported in 2012 at the ELI in Soma and estimated the probability of exposure to accidents with lost workdays correctly 70% of the time.
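
    A minimal sketch of such a model is shown below (the study used SPSS; here a logistic regression on hypothetical categorical accident records is fitted with scikit-learn purely for illustration):

      import pandas as pd
      from sklearn.compose import ColumnTransformer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import OneHotEncoder

      # Hypothetical accident records; 1 = more than 3 lost workdays
      df = pd.DataFrame({
          "area": ["opencast", "workshop", "social_facilities", "opencast", "workshop", "opencast"],
          "reason": ["transporting", "manual_handling", "fall", "machinery", "transporting", "fall"],
          "age": [25, 41, 33, 52, 29, 46],
          "gt3_lost_days": [1, 1, 0, 0, 1, 0],
      })

      # One-hot encode the categorical predictors, pass age through, then fit the logit model
      model = Pipeline([
          ("encode", ColumnTransformer(
              [("cat", OneHotEncoder(handle_unknown="ignore"), ["area", "reason"])],
              remainder="passthrough")),
          ("logit", LogisticRegression(max_iter=1000)),
      ])
      model.fit(df[["area", "reason", "age"]], df["gt3_lost_days"])
      print(model.predict_proba(df[["area", "reason", "age"]])[:, 1])  # P(>3 lost workdays)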

  16. A Generalized QMRA Beta-Poisson Dose-Response Model.

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

    Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required to cause infection, K_min, is not fixed but is a random variable following a geometric distribution with parameter 0 < r* ≤ 1. The single-hit beta-Poisson model, PI(d|α,β), is a special case of the generalized model with K_min = 1 (which implies r* = 1). The generalized beta-Poisson model is based on a conceptual model with greater detail in the dose-response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median r* estimates fall short of meeting the condition r* = 1 required by the single-hit assumption. However, for three out of four data sets the generalized model did not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose-response process may not be appropriate, but that the more complex models may be difficult to support, especially if the sample size is small. The three-parameter generalized model provides a possibility to investigate the mechanism of a dose-response process in greater detail than is possible under a single-hit model. © 2016 Society for Risk Analysis.
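
    For reference, the single-hit beta-Poisson special case mentioned above can be evaluated directly (a sketch of the standard dose-response formulas only; it does not implement the three-parameter generalized model or the ABC fitting used in the paper, and the parameter values are illustrative):

      import numpy as np
      from scipy.special import hyp1f1

      def beta_poisson_exact(dose, alpha, beta):
          """Exact single-hit beta-Poisson: P(d) = 1 - 1F1(alpha, alpha + beta, -d)."""
          return 1.0 - hyp1f1(alpha, alpha + beta, -np.asarray(dose, dtype=float))

      def beta_poisson_approx(dose, alpha, beta):
          """Widely used approximation, valid when beta >> alpha and beta >> 1."""
          return 1.0 - (1.0 + np.asarray(dose, dtype=float) / beta) ** (-alpha)

      doses = np.array([1.0, 10.0, 100.0, 1000.0])
      print(beta_poisson_exact(doses, alpha=0.3, beta=50.0))
      print(beta_poisson_approx(doses, alpha=0.3, beta=50.0))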

  17. Identification and expression analyses of WRKY genes reveal their involvement in growth and abiotic stress response in watermelon (Citrullus lanatus).

    Yang, Xiaozhen; Li, Hao; Yang, Yongchao; Wang, Yongqi; Mo, Yanling; Zhang, Ruimin; Zhang, Yong; Ma, Jianxiang; Wei, Chunhua; Zhang, Xian

    2018-01-01

    Despite the identification of WRKY family genes in numerous plant species, little is known about WRKY genes in watermelon, one of the most economically important fruit crops around the world. Here, we identified a total of 63 putative WRKY genes in watermelon and classified them into three major groups (I-III) and, within group II, five subgroups (IIa-IIe). The structure analysis indicated that ClWRKYs with different WRKY domains or motifs may play different roles by regulating their respective target genes. The expression of ClWRKYs in different tissues indicates that they are involved in the growth and development of various tissues. Furthermore, the diverse responses of ClWRKYs to drought, salt, or cold stress suggest that they positively or negatively affect plant tolerance to various abiotic stresses. In addition, the altered expression patterns of ClWRKYs in response to phytohormones such as ABA, SA, MeJA, and ETH imply complex cross-talk between ClWRKYs and plant hormone signals in regulating plant physiological and biological processes. Taken together, our findings provide valuable clues to further explore the function and regulatory mechanisms of ClWRKY genes in watermelon growth, development, and adaptation to environmental stresses.

  19. Application of Entropy and Fractal Dimension Analyses to the Pattern Recognition of Contaminated Fish Responses in Aquaculture

    Harkaitz Eguiraun

    2014-11-01

    The objective of the work was to develop a non-invasive methodology for image acquisition, processing and nonlinear trajectory analysis of the collective fish response to a stochastic event. Object detection and motion estimation were performed by an optical flow algorithm in order to detect moving fish and simultaneously eliminate background, noise and artifacts. The entropy and the fractal dimension (FD) of the trajectory followed by the centroids of the groups of fish were calculated using Shannon and permutation entropy and the Katz, Higuchi and Katz-Castiglioni FD algorithms, respectively. The methodology was tested on three case groups of European sea bass (Dicentrarchus labrax), two of which were similar (C1, control, and C2, tagged fish) and very different from the third (C3, tagged fish submerged in methylmercury-contaminated water). The results indicate that Shannon entropy and Katz-Castiglioni were the most sensitive algorithms and proved to be promising tools for the non-invasive identification and quantification of differences in fish responses. In conclusion, we believe that this methodology has the potential to be embedded in an online/real-time architecture for contaminant monitoring programs in the aquaculture industry.
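
    Two of the trajectory descriptors used above are straightforward to compute once the centroid track is available; a minimal NumPy sketch is given below (Katz fractal dimension and histogram-based Shannon entropy only, not the authors' full optical-flow pipeline or the permutation-entropy, Higuchi and Katz-Castiglioni variants; the sample track is synthetic):

      import numpy as np

      def katz_fd(trajectory):
          """Katz fractal dimension of a 2-D centroid trajectory (n_points x 2 array)."""
          xy = np.asarray(trajectory, dtype=float)
          steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
          L = steps.sum()                                    # total path length
          d = np.linalg.norm(xy - xy[0], axis=1).max()       # max distance from the start point
          n = len(steps)                                     # number of steps
          return np.log10(n) / (np.log10(n) + np.log10(d / L))

      def shannon_entropy(values, bins=16):
          """Shannon entropy (bits) of a 1-D signal via histogram discretization."""
          counts, _ = np.histogram(values, bins=bins)
          p = counts[counts > 0] / counts.sum()
          return -np.sum(p * np.log2(p))

      # Synthetic centroid track: a noisy circular path
      t = np.linspace(0.0, 4.0 * np.pi, 400)
      track = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * np.random.default_rng(0).normal(size=(400, 2))
      step_lengths = np.linalg.norm(np.diff(track, axis=0), axis=1)
      print(katz_fd(track), shannon_entropy(step_lengths))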

  20. A model national emergency response plan for radiological accidents

    1993-09-01

    The IAEA has supported several projects for the development of a national response plan for radiological emergencies. As a result, the IAEA has developed a model National Emergency Response Plan for Radiological Accidents (RAD PLAN), particularly for countries that have no nuclear power plants. This plan can be adapted for use by countries interested in developing their own national radiological emergency response plan, and the IAEA will supply the latest version of the RAD PLAN on computer diskette upon request. 2 tabs