WorldWideScience

Sample records for model incorporating multiple

  1. INCORPORATING MULTIPLE OBJECTIVES IN PLANNING MODELS OF LOW-RESOURCE FARMERS

    OpenAIRE

    Flinn, John C.; Jayasuriya, Sisira; Knight, C. Gregory

    1980-01-01

    Linear goal programming provides a means of formally incorporating the multiple goals of a household into the analysis of farming systems. Using this approach, the set of plans that comes as close as possible to achieving a set of desired goals under conditions of land and cash scarcity is derived for a Filipino tenant farmer. A challenge in making LGP models empirically operational is the accurate definition of the goals of the farm household being modelled.
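
    The goal-programming idea in this record can be illustrated with a small, dependency-free sketch. Everything below (the two crops, per-hectare coefficients, resource limits and goal weights) is invented for illustration and is not the paper's model; it only shows how a plan minimising the weighted under-achievement of goals is selected under land and cash constraints.

```python
# Hypothetical goal-programming sketch: allocate land between two crops so the
# plan comes as close as possible to income and food goals under land and cash
# scarcity. All coefficients are illustrative assumptions, not the paper's data.

LAND_HA = 3.0     # land limit (ha), assumed
CASH = 120.0      # cash available for inputs, assumed
GOALS = {"income": 500.0, "food": 300.0}
WEIGHTS = {"income": 1.0, "food": 2.0}   # food goal weighted more heavily

def deviation(rice_ha, maize_ha):
    """Weighted under-achievement of the goals for a plan, or None if infeasible."""
    income = 220.0 * rice_ha + 150.0 * maize_ha   # per-ha returns (assumed)
    food = 60.0 * rice_ha + 140.0 * maize_ha      # per-ha food output (assumed)
    cost = 50.0 * rice_ha + 30.0 * maize_ha       # per-ha input cost (assumed)
    if rice_ha + maize_ha > LAND_HA or cost > CASH:
        return None
    return (WEIGHTS["income"] * max(0.0, GOALS["income"] - income)
            + WEIGHTS["food"] * max(0.0, GOALS["food"] - food))

def best_plan():
    """Grid-search plans in 0.1 ha steps and keep the minimum-deviation one."""
    best, best_dev = None, float("inf")
    for r in range(0, 31):           # rice area in tenths of a hectare
        for m in range(0, 31 - r):   # maize area in tenths of a hectare
            dev = deviation(r / 10.0, m / 10.0)
            if dev is not None and dev < best_dev:
                best, best_dev = (r / 10.0, m / 10.0), dev
    return best, best_dev

plan, dev = best_plan()
```

    A real LGP formulation would solve this with the simplex method over explicit deviation variables; the grid search here is only to keep the sketch self-contained.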

  2. A hybrid health service accreditation program model incorporating mandated standards and continuous improvement: interview study of multiple stakeholders in Australian health care.

    Science.gov (United States)

    Greenfield, David; Hinchcliff, Reece; Hogden, Anne; Mumford, Virginia; Debono, Deborah; Pawsey, Marjorie; Westbrook, Johanna; Braithwaite, Jeffrey

    2016-07-01

    The study aim was to investigate the understandings and concerns of stakeholders regarding the evolution of health service accreditation programs in Australia. Stakeholder representatives from programs in the primary, acute and aged care sectors participated in semi-structured interviews. Across 2011-12 there were 47 group and individual interviews involving 258 participants. Interviews lasted, on average, 1 h, and were digitally recorded and transcribed. Transcriptions were analysed using textual referencing software. Four significant issues were considered to have directed the evolution of accreditation programs: altering underlying program philosophies; shifting program content focus and details; differing surveying expectations and experiences; and the influence of external contextual factors upon accreditation programs. Three accreditation program models were noted by participants: regulatory compliance; continuous quality improvement; and a hybrid model incorporating elements of these two. Respondents noted the compatibility or incommensurability of the first two models. Participation in a program was reportedly experienced as ranging on a survey continuum from "malicious compliance" to "performance audits" to "quality improvement journeys". Wider contextual factors, in particular political and community expectations and associated media reporting, were considered significant influences on the operation and evolution of programs. A hybrid accreditation model was noted to have evolved. The hybrid model promotes minimum standards and continuous quality improvement by examining the structure and processes of organisations and the outcomes of care. The hybrid model appears to be directing organisational and professional attention towards enhancing safety cultures. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Formatt: Correcting protein multiple structural alignments by incorporating sequence alignment

    Directory of Open Access Journals (Sweden)

    Daniels Noah M

    2012-10-01

    Full Text Available Abstract Background The quality of multiple protein structure alignments is usually computed and assessed based on geometric functions of the coordinates of the backbone atoms from the protein chains. These purely geometric methods do not directly utilize protein sequence similarity, and in fact, determining the proper way to incorporate sequence similarity measures into the construction and assessment of protein multiple structure alignments has proved surprisingly difficult. Results We present Formatt, a multiple structure alignment program based on Matt, a purely geometric multiple structure alignment program, that also takes sequence similarity into account when constructing alignments. We show that Formatt outperforms Matt and other popular structure alignment programs on the popular HOMSTRAD benchmark. For the SABMark twilight zone benchmark set, which captures more remote homology, Formatt and Matt outperform other programs; depending on the choice of embedded sequence aligner, Formatt produces either better sequence and structural alignments with a smaller core size than Matt, or similarly sized alignments with better sequence similarity, for a small cost in average RMSD. Conclusions Considering sequence information as well as purely geometric information seems to improve the quality of multiple structure alignments, though defining what constitutes the best alignment when sequence and structural measures suggest different alignments remains a difficult open question.
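
    The general idea of blending geometric alignment quality with sequence similarity can be sketched as follows. The weighting scheme and function names are assumptions for illustration only, not Formatt's actual objective function.

```python
from collections import Counter

def column_identity(column):
    """Fraction of the most common residue in one alignment column (gaps ignored)."""
    residues = [c for c in column if c != "-"]
    if not residues:
        return 0.0
    return Counter(residues).most_common(1)[0][1] / len(residues)

def mean_identity(alignment):
    """Average per-column identity over equal-length aligned sequences."""
    return sum(column_identity(col) for col in zip(*alignment)) / len(alignment[0])

def blended_score(geometric_score, alignment, alpha=0.8):
    """Blend a geometric score in [0, 1] with mean sequence identity.

    alpha (the geometry weight) is an assumed parameter, not Formatt's."""
    return alpha * geometric_score + (1.0 - alpha) * mean_identity(alignment)
```

    Ranking candidate alignments by such a blended score, rather than by geometry alone, is one simple way to let sequence similarity break ties between geometrically similar alignments.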

  4. Incorporating groundwater flow into the WEPP model

    Science.gov (United States)

    William Elliot; Erin Brooks; Tim Link; Sue Miller

    2010-01-01

    The Water Erosion Prediction Project (WEPP) model is a physically based hydrology and erosion model. In recent years, the hydrology prediction within the model has been improved for forest watershed modeling by incorporating shallow lateral flow into watershed runoff prediction. This has greatly improved WEPP's hydrologic performance on small watersheds with...
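
    Shallow lateral subsurface flow on a hillslope is commonly represented with a kinematic Darcy approximation. The sketch below is that generic approximation, not WEPP's exact formulation; the variable names and example values are assumptions.

```python
import math

def lateral_flow_per_unit_width(k_sat_m_per_h, sat_thickness_m, slope_fraction):
    """Kinematic Darcy lateral flow q = K_sat * D * sin(arctan(S)), in m^2/h.

    k_sat_m_per_h   : saturated hydraulic conductivity (assumed units)
    sat_thickness_m : saturated thickness above the restrictive layer
    slope_fraction  : hillslope gradient expressed as rise/run
    """
    return k_sat_m_per_h * sat_thickness_m * math.sin(math.atan(slope_fraction))
```

    On a flat slope the lateral flux vanishes and it grows with gradient, which is why the term matters most on steep forested hillslopes.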

  5. Effects of the 5E Instructional Model Incorporated with Multiple Levels of Representations on Thai Students with Different Levels of Cognitive Development

    Science.gov (United States)

    Wichaidit, Patcharee Rompayom; Wichaidit, Sittichai

    2016-01-01

    Learning chemistry may be difficult for students for several reasons, such as the abstract nature of many chemistry concepts and the fact that students may view chemistry as irrelevant to their everyday lives. Teaching chemistry in familiar contexts and the use of multiple representations are seen as effective approaches for enhancing students'…

  6. Incorporating interfacial phenomena in solidification models

    Science.gov (United States)

    Beckermann, Christoph; Wang, Chao Yang

    1994-01-01

    A general methodology is available for the incorporation of microscopic interfacial phenomena in macroscopic solidification models that include diffusion and convection. The method is derived from a formal averaging procedure and a multiphase approach, and relies on the presence of interfacial integrals in the macroscopic transport equations. In a wider engineering context, these techniques are not new, but their application in the analysis and modeling of solidification processes has largely been overlooked. This article describes the techniques and demonstrates their utility in two examples in which microscopic interfacial phenomena are of great importance.

  7. Incorporating neurophysiological concepts in mathematical thermoregulation models

    Science.gov (United States)

    Kingma, Boris R. M.; Vosselman, M. J.; Frijns, A. J. H.; van Steenhoven, A. A.; van Marken Lichtenbelt, W. D.

    2014-01-01

    Skin blood flow (SBF) is a key player in human thermoregulation during mild thermal challenges. Various numerical models of SBF regulation exist; however, none explicitly incorporates the neurophysiology of thermal reception. This study tested a new SBF model that is in line with experimental data on thermal reception and the neurophysiological pathways involved in thermoregulatory SBF control. Additionally, a numerical thermoregulation model was used as a platform to test the function of the neurophysiological SBF model for skin temperature simulation. The prediction error of the SBF model was quantified by the root-mean-squared residual (RMSR) between simulations and experimental measurement data. Measurement data consisted of SBF (abdomen, forearm, hand), core and skin temperature recordings of young males during three transient thermal challenges (one for development and two for validation). Additionally, ThermoSEM, a thermoregulation model, was used to simulate body temperatures using the new neurophysiological SBF model. The RMSR between simulated and measured mean skin temperature was used to validate the model. The neurophysiological model predicted SBF accurately, indicating that human thermoregulation models can be equipped with SBF control functions that are based on neurophysiology without loss of performance. The neurophysiological approach to modelling thermoregulation is preferable to engineering approaches because it is more in line with the underlying physiology.
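
    The fit metric used in this record, the root-mean-squared residual (RMSR) between a simulated and a measured series, is straightforward to compute:

```python
import math

def rmsr(simulated, measured):
    """Root-mean-squared residual between two equal-length series."""
    residuals = [s - m for s, m in zip(simulated, measured)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))
```

    A perfect simulation gives an RMSR of zero; larger values mean larger typical prediction errors in the units of the measured quantity.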

  8. New MoM code incorporating multiple domain basis functions

    CSIR Research Space (South Africa)

    Lysko, AA

    2011-08-01

    Full Text Available piecewise linear approximation of geometry. This often leads to an unnecessarily large number of unknowns used to model relatively small loop and spiral antennas, coils and other curved structures. This is because the program creates a dense mesh... to accelerate computation of the elements of the impedance matrix, and showed an acceleration factor exceeding an order of magnitude, subject to a high accuracy requirement. 3. On Code Functionality and Application Results The package of programs was written...

  9. Incorporating damage mechanics into explosion simulation models

    International Nuclear Information System (INIS)

    Sammis, C.G.

    1993-01-01

    The source region of an underground explosion is commonly modeled as a nested series of shells. In the innermost "hydrodynamic regime", pressures and temperatures are sufficiently high that the rock deforms as a fluid and may be described using a PVT equation of state. Just beyond the hydrodynamic regime is the "non-linear regime", in which the rock has shear strength but the deformation is nonlinear. This regime extends out to the "elastic radius", beyond which the deformation is linear. In this paper, we develop a model for the non-linear regime in crystalline source rock where the nonlinearity is mostly due to fractures. We divide the non-linear regime into a "damage regime", in which the stresses are sufficiently high to nucleate new fractures from preexisting ones, and a "crack-sliding" regime, where motion on preexisting cracks produces amplitude-dependent attenuation and other non-linear effects but no new cracks are nucleated. The boundary between these two regimes is called the "damage radius". The micromechanical damage mechanics recently developed by Ashby and Sammis (1990) is used to write an analytic expression for the damage radius in terms of the initial fracture spectrum of the source rock, and to develop an algorithm which may be used to incorporate damage mechanics into computer source models for the damage regime. Effects of water saturation and loading rate are also discussed

  10. A new achiral reagent for the incorporation of multiple amino groups into oligonucleotides

    DEFF Research Database (Denmark)

    Behrens, Carsten; Petersen, Kenneth H.; Egholm, Michael

    1995-01-01

    The synthesis of a new functionalized achiral linker reagent (10) for the incorporation of multiple primary amino groups into oligonucleotides is described. The linker reagent is compatible with conventional DNA-synthesis following the phosphoramidite methodology, and the linker can be incorporated...

  11. Incorporating direct marketing activity into latent attrition models

    NARCIS (Netherlands)

    Schweidel, David A.; Knox, George

    2013-01-01

    When defection is unobserved, latent attrition models provide useful insights about customer behavior and accurate forecasts of customer value. Yet extant models ignore direct marketing efforts. Response models incorporate the effects of direct marketing, but because they ignore latent attrition,

  12. A mathematical model for incorporating biofeedback into human postural control

    Directory of Open Access Journals (Sweden)

    Ersal Tulga

    2013-02-01

    Full Text Available Abstract Background Biofeedback of body motion can serve as a balance aid and rehabilitation tool. To date, mathematical models considering the integration of biofeedback into postural control have represented this integration as a sensory addition and limited their application to a single degree-of-freedom representation of the body. This study has two objectives: 1) to develop a scalable method for incorporating biofeedback into postural control that is independent of the model’s degrees of freedom, how it handles sensory integration, and the modeling of its postural controller; and 2) to validate this new model using multidirectional perturbation experimental results. Methods Biofeedback was modeled as an additional torque to the postural controller torque. For validation, this biofeedback modeling approach was applied to a vibrotactile biofeedback device and incorporated into a two-link multibody model with full-state-feedback control that represents the dynamics of bipedal stance. Average response trajectories of body sway and center of pressure (COP) to multidirectional surface perturbations of subjects with vestibular deficits were used for model parameterization and validation in multiple perturbation directions and for multiple display resolutions. The quality of fit was quantified using average error and cross-correlation values. Results The mean of the average errors across all tactor configurations and perturbations was 0.24° for body sway and 0.39 cm for COP. The mean of the cross-correlation value was 0.97 for both body sway and COP. Conclusions The biofeedback model developed in this study is capable of capturing experimental response trajectory shapes with low average errors and high cross-correlation values in both the anterior-posterior and medial-lateral directions for all perturbation directions and spatial resolution display configurations considered. The results validate that biofeedback can be modeled as an additional
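
    The paper's central modelling choice, biofeedback entering as an additional torque added to the postural-controller torque, can be sketched on a single-link balance model. The PD gains, dead-zone and biofeedback gain below are illustrative assumptions, not the paper's two-link parameterisation.

```python
def controller_torque(theta_rad, theta_dot, kp=900.0, kd=300.0):
    """PD postural-controller torque about the ankle (illustrative gains)."""
    return -kp * theta_rad - kd * theta_dot

def biofeedback_torque(theta_rad, dead_zone=0.01, gain=200.0):
    """Extra corrective torque once sway exceeds the display's dead-zone
    (dead_zone and gain are assumed values)."""
    if abs(theta_rad) <= dead_zone:
        return 0.0
    sign = 1.0 if theta_rad > 0 else -1.0
    return -gain * (theta_rad - sign * dead_zone)

def total_torque(theta_rad, theta_dot):
    """Biofeedback modelled as an additive torque term, as in the paper."""
    return controller_torque(theta_rad, theta_dot) + biofeedback_torque(theta_rad)
```

    Because the biofeedback term is purely additive, the same construction scales to models with any number of degrees of freedom, which is the scalability property the study emphasises.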

  13. A mathematical model for incorporating biofeedback into human postural control

    Science.gov (United States)

    2013-01-01

    Background Biofeedback of body motion can serve as a balance aid and rehabilitation tool. To date, mathematical models considering the integration of biofeedback into postural control have represented this integration as a sensory addition and limited their application to a single degree-of-freedom representation of the body. This study has two objectives: 1) to develop a scalable method for incorporating biofeedback into postural control that is independent of the model’s degrees of freedom, how it handles sensory integration, and the modeling of its postural controller; and 2) to validate this new model using multidirectional perturbation experimental results. Methods Biofeedback was modeled as an additional torque to the postural controller torque. For validation, this biofeedback modeling approach was applied to a vibrotactile biofeedback device and incorporated into a two-link multibody model with full-state-feedback control that represents the dynamics of bipedal stance. Average response trajectories of body sway and center of pressure (COP) to multidirectional surface perturbations of subjects with vestibular deficits were used for model parameterization and validation in multiple perturbation directions and for multiple display resolutions. The quality of fit was quantified using average error and cross-correlation values. Results The mean of the average errors across all tactor configurations and perturbations was 0.24° for body sway and 0.39 cm for COP. The mean of the cross-correlation value was 0.97 for both body sway and COP. Conclusions The biofeedback model developed in this study is capable of capturing experimental response trajectory shapes with low average errors and high cross-correlation values in both the anterior-posterior and medial-lateral directions for all perturbation directions and spatial resolution display configurations considered. The results validate that biofeedback can be modeled as an additional torque to the postural

  14. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  15. Geometrical model of multiple production

    International Nuclear Information System (INIS)

    Chikovani, Z.E.; Jenkovszky, L.L.; Kvaratshelia, T.M.; Struminskij, B.V.

    1988-01-01

    The relation between geometrical and KNO-scaling and their violation is studied in a geometrical model of multiple production of hadrons. Predictions concerning the behaviour of correlation coefficients at future accelerators are given

  16. A Financial Market Model Incorporating Herd Behaviour.

    Science.gov (United States)

    Wray, Christopher M; Bishop, Steven R

    2016-01-01

    Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents' accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model, corresponding to agents' accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the market
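
    A toy version of the cascade mechanism, far simpler than the paper's threshold-based pulse-coupled model, illustrates how a coupling probability drives the size of trading cascades. All parameters below are assumptions for illustration.

```python
import random

def cascade_size(n_agents, coupling_p, rng):
    """Size of an information cascade started by one signalled agent.

    Each newly triggered agent triggers each still-untriggered agent with
    probability coupling_p (a toy stand-in for the network coupling
    probability); the cascade size is the number of agents that end up trading.
    """
    triggered = {0}
    frontier = [0]
    while frontier:
        newly = []
        for _ in frontier:
            for agent in range(n_agents):
                if agent not in triggered and rng.random() < coupling_p:
                    triggered.add(agent)
                    newly.append(agent)
        frontier = newly
    return len(triggered)

rng = random.Random(42)
sizes = [cascade_size(50, 0.02, rng) for _ in range(200)]
```

    Sweeping coupling_p in such a simulation shows the transition from small, quiescent cascades to system-wide, herd-like ones that the paper analyses asymptotically.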

  17. A Financial Market Model Incorporating Herd Behaviour.

    Directory of Open Access Journals (Sweden)

    Christopher M Wray

    Full Text Available Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents' accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model, corresponding to agents' accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock

  18. A Financial Market Model Incorporating Herd Behaviour

    Science.gov (United States)

    2016-01-01

    Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents’ accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents’ accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the

  19. Longitudinal comparative evaluation of the equivalence of an integrated peer-support and clinical staffing model for residential mental health rehabilitation: a mixed methods protocol incorporating multiple stakeholder perspectives.

    Science.gov (United States)

    Parker, Stephen; Dark, Frances; Newman, Ellie; Korman, Nicole; Meurk, Carla; Siskind, Dan; Harris, Meredith

    2016-06-02

    A novel staffing model integrating peer support workers and clinical staff within a unified team is being trialled at community-based residential rehabilitation units in Australia. This mixed-methods protocol describes a longitudinal evaluation of the outcomes, expectations and experiences of care of consumers and staff at two units operating this staffing model, compared with one unit operating a traditional clinical staffing model. The study is unique with regard to its context, its longitudinal approach and its consideration of multiple stakeholder perspectives. The longitudinal mixed-methods design integrates a quantitative evaluation of the outcomes of care for consumers at three residential rehabilitation units with an applied qualitative research methodology. The quantitative component uses a prospective cohort design to explore whether equivalent outcomes are achieved through engagement at residential rehabilitation units operating integrated and clinical staffing models. Comparative data will be available from the time of admission, at discharge and for the 12-month period post-discharge from the units. Additionally, retrospective data for the 12-month period prior to admission will be used to consider changes in functioning before and after engagement with residential rehabilitation care. The primary outcome will be change in psychosocial functioning, assessed using the total score on the Health of the Nation Outcome Scales (HoNOS). Planned secondary outcomes include changes in symptomatology, disability, recovery orientation, carer quality of life, emergency department presentations, psychiatric inpatient bed days, and psychological distress and wellbeing. Planned analyses include: cohort description; hierarchical linear regression modelling of the predictors of change in HoNOS following CCU care; and descriptive comparisons of the costs associated with the two staffing models. The qualitative component utilizes a pragmatic approach to grounded theory, with

  20. False discovery rate control incorporating phylogenetic tree increases detection power in microbiome-wide multiple testing.

    Science.gov (United States)

    Xiao, Jian; Cao, Hongyuan; Chen, Jun

    2017-09-15

    Next generation sequencing technologies have enabled the study of the human microbiome through direct sequencing of microbial DNA, resulting in an enormous amount of microbiome sequencing data. One unique characteristic of microbiome data is the phylogenetic tree that relates all the bacterial species. Closely related bacterial species tend to exhibit a similar relationship with the environment or disease. Thus, incorporating phylogenetic tree information can potentially improve the detection power of microbiome-wide association studies, where hundreds or thousands of tests are conducted simultaneously to identify bacterial species associated with a phenotype of interest. Despite much progress in multiple testing procedures such as false discovery rate (FDR) control, methods that take the phylogenetic tree into account remain limited. We propose a new FDR control procedure that incorporates prior structure information and apply it to microbiome data. The proposed procedure is based on a hierarchical model, in which a structure-based prior distribution is designed to utilize the phylogenetic tree. By borrowing information from neighboring bacterial species, we are able to improve the statistical power of detecting associated bacterial species while controlling the FDR at desired levels. When the phylogenetic tree is mis-specified or non-informative, our procedure achieves power similar to traditional procedures that do not take the tree structure into account. We demonstrate the performance of our method through extensive simulations and real microbiome datasets, in which we identified far more alcohol-drinking-associated bacterial species than traditional methods. R package StructFDR is available from CRAN. chen.jun2@mayo.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
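
    For context, the structure-agnostic baseline such methods are compared against is the Benjamini-Hochberg (BH) step-up procedure. The sketch below implements only that baseline; StructFDR itself is an R package, and its tree-based prior is not reproduced here.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Indices of hypotheses rejected at FDR level alpha by the BH step-up rule."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest rank whose p-value clears its BH threshold alpha * rank / m
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])

rejected = benjamini_hochberg([0.001, 0.008, 0.04, 0.2, 0.9])
```

    Structure-aware procedures aim to reject more true signals than this baseline at the same nominal FDR by sharing evidence between phylogenetically close taxa.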

  1. Incorporating territory compression into population models

    NARCIS (Netherlands)

    Ridley, J; Komdeur, J; Sutherland, WJ; Sutherland, William J.

    The ideal despotic distribution, whereby the lifetime reproductive success a territory's owner achieves is unaffected by population density, is a mainstay of behaviour-based population models. We show that the population dynamics of an island population of Seychelles warblers (Acrocephalus

  2. Parallel Beam-Beam Simulation Incorporating Multiple Bunches and Multiple Interaction Regions

    CERN Document Server

    Jones, F W; Pieloni, T

    2007-01-01

    The simulation code COMBI has been developed to enable the study of coherent beam-beam effects in the full collision scenario of the LHC, with multiple bunches interacting at multiple crossing points over many turns. The program structure and input are conceived in a general way which allows arbitrary numbers and placements of bunches and interaction points (IP's), together with procedural options for head-on and parasitic collisions (in the strong-strong sense), beam transport, statistics gathering, harmonic analysis, and periodic output of simulation data. The scale of this problem, once we go beyond the simplest case of a pair of bunches interacting once per turn, quickly escalates into the parallel computing arena, and herein we will describe the construction of an MPI-based version of COMBI able to utilize arbitrary numbers of processors to support efficient calculation of multi-bunch multi-IP interactions and transport. Implementing the parallel version did not require extensive disruption of the basic ...

  3. Incorporation Between AHP and N-TSP for Plant Surveillance Routing with Multiple Constraints

    International Nuclear Information System (INIS)

    Djoko Hari Nugroho

    2002-01-01

    This paper examined on-line plant surveillance routing for maintenance management with multiple constraints using the TSP (Traveling Salesman Problem). The research used the N-TSP (nomadic TSP) variant; in this case, on-line surveillance can be implemented on a moving robot. The route for preventive maintenance management was determined sequentially, stage by stage, under multiple constraints: (a) the distance between components, and (b) the failure probability of components, derived using AHP (Analytical Hierarchy Process). The simulation used DURESS as a complex system. The simulation results showed that the route under the single constraint of distance between components follows the sequence 1 - 6 - 2 - 4 - 3 - 5. Routing for DURESS with multiple constraints, incorporating AHP into the TSP, showed that the first priority in the route is flow sensor FB2, with a comparison value of 0.1042. The remaining priorities are, in sequence, FB1, FA2, FA1, FA, FB, VB, VA, VA1, VA2, VB1, VB2, pump B, pump A, FR1, FR2, reservoir 2, and reservoir 1. The numerical experiments showed that the incorporation of AHP into the N-TSP successfully constructed a surveillance route with multiple constraints. (author)
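
    The combination described here, a distance-based route biased by AHP-derived failure-probability priorities, can be sketched with a greedy nearest-neighbour heuristic. The trade-off parameter and the example data below are assumptions, not the paper's DURESS values or its exact algorithm.

```python
def surveillance_route(dist, priority, start=0, beta=5.0):
    """Greedy nomadic-TSP-style route over components.

    At each step, visit the unvisited component minimising
    (distance to it) - beta * (its AHP-derived priority), so that
    high-failure-probability components are visited sooner.
    beta is an assumed distance/priority trade-off weight.
    """
    n = len(dist)
    route = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        current = route[-1]
        nxt = min(unvisited, key=lambda j: dist[current][j] - beta * priority[j])
        route.append(nxt)
        unvisited.discard(nxt)
    return route

# With uniform priorities the route follows pure distance; raising one
# component's priority pulls it earlier in the route.
plain = surveillance_route([[0, 1, 2], [1, 0, 1], [2, 1, 0]], [0.0, 0.0, 0.0])
biased = surveillance_route([[0, 1, 2], [1, 0, 1], [2, 1, 0]], [0.0, 0.0, 0.5])
```

    A greedy heuristic is only an approximation to the optimal tour, but it makes the effect of the second (priority) constraint on the visiting order easy to see.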

  4. True dose from incorporated activities. Models for internal dosimetry

    International Nuclear Information System (INIS)

    Breustedt, B.; Eschner, W.; Nosske, D.

    2012-01-01

    The assessment of doses after incorporation of radionuclides cannot rely on direct dose measurements, as is possible, for example, in external radiation fields. The only observables are activities in the body or in excretions, and models are used to calculate the doses from the measured activities. The incorporated activities and the resulting doses can vary by more than seven orders of magnitude between occupational and medical exposures; nevertheless, the models and calculations applied in both cases are similar. Because the models for the different applications were developed independently by the ICRP and MIRD, different terminologies have been used. A unified terminology is being developed. (orig.)

  5. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating..., and allows direct incorporation of high-level and qualitative plant knowledge into the model. These advantages have proven to be very appealing for industrial applications, and the practical, intuitively appealing nature of the framework is demonstrated in chapters describing applications of local methods... to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning...

  6. A New Achiral Linker Reagent for the Incorporation of Multiple Amino Groups Into Oligonucleotides

    DEFF Research Database (Denmark)

    1997-01-01

    The present invention relates to a new functionalized achiral linker reagent for incorporating multiple primary amino groups or reporter groups into oligonucleotides following the phosphoramidite methodology. It is possible to substitute any ribodeoxynucleotide, deoxynucleotide, or nucleotide...-oxyl-2,2,5,5-tetramethylpyrrolidine), TEMPO (N-oxyl-2,2,6,6-tetramethylpiperidine), dinitrophenyl, Texas Red, tetramethylrhodamine, 7-nitrobenzo-2-oxa-1-diazole (NBD), or pyrene. The present invention also relates to a solid phase support, e.g. a Controlled Pore Glass (CPG), immobilized linker reagent...

  7. Incorporating parametric uncertainty into population viability analysis models

    Science.gov (United States)

    McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.

    2011-01-01

    Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
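The two-loop structure described above can be sketched as follows. This is a minimal illustration, not the piping plover analysis: the exponential growth model, the parameter values, and the quasi-extinction threshold are assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

def pva(n_reps=1000, n_years=50, n0=100, mean_r=-0.02, se_r=0.03,
        sd_r_temporal=0.10, quasi_ext=10):
    """Two-loop population viability analysis: parametric uncertainty is
    drawn once per replicate (outer loop); temporal environmental
    variance is drawn each year (inner loop)."""
    extinct = 0
    for _ in range(n_reps):
        # replication loop: draw this replicate's 'true' mean growth rate
        r_rep = rng.normal(mean_r, se_r)
        n = float(n0)
        for _ in range(n_years):
            # time-step loop: yearly environmental variation around r_rep
            n *= np.exp(rng.normal(r_rep, sd_r_temporal))
            if n < quasi_ext:
                extinct += 1
                break
    return extinct / n_reps
```

Setting `se_r=0` collapses the model to a standard PVA that ignores parametric uncertainty, which makes the effect described in the abstract easy to reproduce numerically.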

  8. Incorporating the life course model into MCH nutrition leadership education and training programs.

    Science.gov (United States)

    Haughton, Betsy; Eppig, Kristen; Looney, Shannon M; Cunningham-Sabo, Leslie; Spear, Bonnie A; Spence, Marsha; Stang, Jamie S

    2013-01-01

    Life course perspective, social determinants of health, and health equity have been combined into one comprehensive model, the life course model (LCM), for strategic planning by US Health Resources and Services Administration's Maternal and Child Health Bureau. The purpose of this project was to describe a faculty development process; identify strategies for incorporation of the LCM into nutrition leadership education and training at the graduate and professional levels; and suggest broader implications for training, research, and practice. Nineteen representatives from 6 MCHB-funded nutrition leadership education and training programs and 10 federal partners participated in a one-day session that began with an overview of the models and concluded with guided small group discussions on how to incorporate them into maternal and child health (MCH) leadership training using obesity as an example. Written notes from group discussions were compiled and coded emergently. Content analysis determined the most salient themes about incorporating the models into training. Four major LCM-related themes emerged, three of which were about training: (1) incorporation by training grants through LCM-framed coursework and experiences for trainees, and similarly framed continuing education and skills development for professionals; (2) incorporation through collaboration with other training programs and state and community partners, and through advocacy; and (3) incorporation by others at the federal and local levels through policy, political, and prevention efforts. The fourth theme focused on anticipated challenges of incorporating the model in training. Multiple methods for incorporating the LCM into MCH training and practice are warranted. Challenges to incorporating include the need for research and related policy development.

  9. Violent Intent Modeling: Incorporating Cultural Knowledge into the Analytical Process

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Nibbs, Faith G.

    2007-08-24

    While culture has a significant effect on the appropriate interpretation of textual data, the incorporation of cultural considerations into data transformations has not been systematic. Recognizing that the successful prevention of terrorist activities could hinge on knowledge of the relevant subcultures, anthropologist and DHS intern Faith Nibbs has been addressing the need to incorporate cultural knowledge into the analytical process. In this Brown Bag she will present how cultural ideology is being used to understand how the rhetoric of group leaders influences the likelihood of their constituents to engage in violent or radicalized behavior, and how violent intent modeling can benefit from understanding that process.

  10. The relationship between alcohol taxes and binge drinking: evaluating new tax measures incorporating multiple tax and beverage types.

    Science.gov (United States)

    Xuan, Ziming; Chaloupka, Frank J; Blanchette, Jason G; Nguyen, Thien H; Heeren, Timothy C; Nelson, Toben F; Naimi, Timothy S

    2015-03-01

    U.S. studies contribute heavily to the literature about the tax elasticity of demand for alcohol, and most U.S. studies have relied upon specific excise (volume-based) taxes for beer as a proxy for alcohol taxes. The purpose of this paper was to compare this conventional alcohol tax measure with more comprehensive tax measures (incorporating multiple tax and beverage types) in analyses of the relationship between alcohol taxes and adult binge drinking prevalence in U.S. states. Data on U.S. state excise, ad valorem and sales taxes from 2001 to 2010 were obtained from the Alcohol Policy Information System and other sources. For 510 state-year strata, we developed a series of weighted tax-per-drink measures that incorporated various combinations of tax and beverage types, and related these measures to state-level adult binge drinking prevalence data from the Behavioral Risk Factor Surveillance System surveys. In analyses pooled across all years, models using the combined tax measure explained approximately 20% of state binge drinking prevalence, and documented more negative tax elasticity (-0.09, P = 0.02 versus -0.005, P = 0.63) and price elasticity (-1.40) than models using the volume-based beer tax alone. In analyses stratified by year, the R-squares for models using the beer combined tax measure were stable across the study period (P = 0.11), while the R-squares for models relying only on the volume-based tax declined. Compared with conventional volume-based tax measures, combined tax measures (i.e. those incorporating volume-based and value-based taxes) yield substantial improvement in model fit and find more negative tax elasticity and price elasticity predicting adult binge drinking prevalence in U.S. states. © 2014 Society for the Study of Addiction.
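The construction of a combined tax-per-drink measure and the elasticity estimate can be sketched as below. This is an illustrative simplification, not the paper's method: the beverage shares, prices, and the plain log-log OLS fit are assumptions introduced here, and the real analysis uses state-year panel data with covariates.

```python
import numpy as np

def combined_tax_per_drink(volume_tax, ad_valorem_rate, price, shares):
    """Weighted tax-per-drink combining volume-based taxes and
    value-based (ad valorem) taxes across beverage types, weighted by
    each beverage's consumption share (shares sum to 1)."""
    per_drink = np.asarray(volume_tax) + np.asarray(ad_valorem_rate) * np.asarray(price)
    return float(np.dot(shares, per_drink))

def elasticity(log_tax, log_prevalence):
    """OLS slope of log(prevalence) on log(tax): a crude tax
    elasticity estimate."""
    slope, _ = np.polyfit(log_tax, log_prevalence, 1)
    return float(slope)
```

A more negative slope from `elasticity` corresponds to the stronger demand response that the combined measure detects relative to a volume-only measure.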

  11. The relationship between alcohol taxes and binge drinking: evaluating new tax measures incorporating multiple tax and beverage types

    Science.gov (United States)

    Xuan, Ziming; Chaloupka, Frank J.; Blanchette, Jason G.; Nguyen, Thien H.; Heeren, Timothy C.; Nelson, Toben F.; Naimi, Timothy S.

    2015-01-01

    Aims U.S. studies contribute heavily to the literature about the tax elasticity of demand for alcohol, and most U.S. studies have relied upon specific excise (volume-based) taxes for beer as a proxy for alcohol taxes. The purpose of this paper was to compare this conventional alcohol tax measure with more comprehensive tax measures (incorporating multiple tax and beverage types) in analyses of the relationship between alcohol taxes and adult binge drinking prevalence in U.S. states. Design Data on U.S. state excise, ad valorem and sales taxes from 2001 to 2010 were obtained from the Alcohol Policy Information System and other sources. For 510 state-year strata, we developed a series of weighted tax-per-drink measures that incorporated various combinations of tax and beverage types, and related these measures to state-level adult binge drinking prevalence data from the Behavioral Risk Factor Surveillance System surveys. Findings In analyses pooled across all years, models using the combined tax measure explained approximately 20% of state binge drinking prevalence, and documented more negative tax elasticity (−0.09, P=0.02 versus −0.005, P=0.63) and price elasticity (−1.40) than models using the volume-based beer tax alone. In analyses stratified by year, the R-squares for models using the beer combined tax measure were stable across the study period (P=0.11), while the R-squares for models relying only on the volume-based tax declined. Conclusions Compared with conventional volume-based tax measures, combined tax measures (i.e. those incorporating volume-based and value-based taxes) yield substantial improvement in model fit and find more negative tax elasticity and price elasticity predicting adult binge drinking prevalence in U.S. states. PMID:25428795

  12. Incorporating nitrogen fixing cyanobacteria in the global biogeochemical model HAMOCC

    Science.gov (United States)

    Paulsen, Hanna; Ilyina, Tatiana; Six, Katharina

    2015-04-01

    Nitrogen fixation by marine diazotrophs plays a fundamental role in the oceanic nitrogen and carbon cycle as it provides a major source of 'new' nitrogen to the euphotic zone that supports biological carbon export and sequestration. Since most global biogeochemical models include nitrogen fixation only diagnostically, they are not able to capture its spatial pattern sufficiently. Here we present the incorporation of an explicit, dynamic representation of diazotrophic cyanobacteria and the corresponding nitrogen fixation in the global ocean biogeochemical model HAMOCC (Hamburg Ocean Carbon Cycle model), which is part of the Max Planck Institute for Meteorology Earth system model (MPI-ESM). The parameterization of the diazotrophic growth is thereby based on available knowledge about the cyanobacterium Trichodesmium spp., which is considered as the most significant pelagic nitrogen fixer. Evaluation against observations shows that the model successfully reproduces the main spatial distribution of cyanobacteria and nitrogen fixation, covering large parts of the tropical and subtropical oceans. Besides the role of cyanobacteria in marine biogeochemical cycles, their capacity to form extensive surface blooms induces a number of bio-physical feedback mechanisms in the Earth system. The processes driving these interactions, which are related to the alteration of heat absorption, surface albedo and momentum input by wind, are incorporated in the biogeochemical and physical model of the MPI-ESM in order to investigate their impacts on a global scale. First preliminary results will be shown.

  13. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.; Smith, C.L.; Rasmuson, D.M.

    1994-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methodology, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  14. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    1995-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  15. Analysis of Product Sampling for New Product Diffusion Incorporating Multiple-Unit Ownership

    Directory of Open Access Journals (Sweden)

    Zhineng Hu

    2014-01-01

    Multiple-unit ownership of nondurable products is an important component of sales in many product categories. Based on the Bass model, this paper develops a new model that treats multiple-unit adoption as a diffusion process under the influence of product sampling. The analysis determines the optimal dynamic sampling effort for a firm; the results demonstrate that experience sampling can accelerate the diffusion process, and that the best time to send free samples is just before the product is launched. Multiple-unit purchasing behavior increases sales and profit for a firm, but requires more samples to build product awareness. The local sensitivity analysis shows that an increase in either the external or the internal coefficient reduces the required sampling level, while the internal influence on subsequent multiple-unit adoptions has little significant effect on sampling. Using logistic regression along with linear regression, the global sensitivity analysis examines the interaction of all factors; it shows that the external influence and the multi-unit purchase rate are the two most important factors affecting the sampling level and the net present value of the new product, and presents a two-stage method to determine the sampling level.
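The diffusion-with-sampling idea can be sketched with a discrete Bass model. This is a minimal illustration, not the paper's model: treating pre-launch free samples as seeded initial adopters, and the parameter values, are assumptions introduced here, and the paper's multiple-unit adoption dynamics are omitted.

```python
def bass_diffusion(p, q, m, periods, sample_frac=0.0):
    """Discrete-time Bass model. p = external (innovation) coefficient,
    q = internal (imitation) coefficient, m = market potential.
    sample_frac seeds a fraction of the market with free samples
    before launch, treated here as initial adopters."""
    adopters = sample_frac * m
    path = []
    for _ in range(periods):
        # new adopters this period: innovation + imitation pressure
        new = (p + q * adopters / m) * (m - adopters)
        adopters += new
        path.append(adopters)
    return path
```

Comparing runs with and without `sample_frac` shows the acceleration effect: the seeded run reaches any given adoption level earlier, while both saturate below `m`.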

  16. Do Knowledge-Component Models Need to Incorporate Representational Competencies?

    Science.gov (United States)

    Rau, Martina Angela

    2017-01-01

    Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…

  17. A tactical supply chain planning model with multiple flexibility options

    DEFF Research Database (Denmark)

    Esmaeilikia, Masoud; Fahimnia, Behnam; Sarkis, Joeseph

    2016-01-01

    Supply chain flexibility is widely recognized as an approach to manage uncertainty. Uncertainty in the supply chain may arise from a number of sources such as demand and supply interruptions and lead time variability. A tactical supply chain planning model with multiple flexibility options...... incorporated in sourcing, manufacturing and logistics functions can be used for the analysis of flexibility adjustment in an existing supply chain. This paper develops such a tactical supply chain planning model incorporating a realistic range of flexibility options. A novel solution method is designed...

  18. Incorporating model parameter uncertainty into inverse treatment planning

    International Nuclear Information System (INIS)

    Lian Jun; Xing Lei

    2004-01-01

    Radiobiological treatment planning depends not only on the accuracy of the models describing the dose-response relation of different tumors and normal tissues but also on the accuracy of tissue specific radiobiological parameters in these models. Whereas the general formalism remains the same, different sets of model parameters lead to different solutions and thus critically determine the final plan. Here we describe an inverse planning formalism with inclusion of model parameter uncertainties. This is made possible by using a statistical analysis-based framework developed by our group. In this formalism, the uncertainties of model parameters, such as the parameter a that describes tissue-specific effect in the equivalent uniform dose (EUD) model, are expressed by probability density function and are included in the dose optimization process. We found that the final solution strongly depends on distribution functions of the model parameters. Considering that currently available models for computing biological effects of radiation are simplistic, and the clinical data used to derive the models are sparse and of questionable quality, the proposed technique provides us with an effective tool to minimize the effect caused by the uncertainties in a statistical sense. With the incorporation of the uncertainties, the technique has potential for us to maximally utilize the available radiobiology knowledge for better IMRT treatment
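The role of the EUD parameter a, and what it means to propagate its uncertainty, can be sketched as follows. This is an illustrative fragment only, not the authors' optimization formalism: sampling a from an assumed density and summarizing the resulting EUD spread is the idea, but the voxel doses and parameter samples here are made up.

```python
import numpy as np

def eud(doses, a):
    """Generalized equivalent uniform dose of a voxel dose array:
    EUD = (mean(d_i ** a)) ** (1/a)."""
    doses = np.asarray(doses, float)
    return float(np.mean(doses ** a) ** (1.0 / a))

def eud_with_uncertainty(doses, a_samples):
    """Distribution of EUD when the tissue parameter a is uncertain,
    represented by samples drawn from its probability density."""
    vals = np.array([eud(doses, a) for a in a_samples])
    return vals.mean(), vals.std()
```

For a uniform dose the EUD equals that dose regardless of a; for a heterogeneous dose, spread in a translates directly into spread in the planning objective, which is what the formalism folds into the optimization.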

  19. Incorporation of essential oil in alginate microparticles by multiple emulsion/ionic gelation process.

    Science.gov (United States)

    Hosseini, Seyede Marzieh; Hosseini, Hedayat; Mohammadifar, Mohammad Amin; Mortazavian, Amir Mohammad; Mohammadi, Abdorreza; Khosravi-Darani, Kianoosh; Shojaee-Aliabadi, Saeedeh; Dehghan, Solmaz; Khaksar, Ramin

    2013-11-01

    In this study, an o/w/o multiple emulsion/ionic gelation method was developed for production of alginate microparticles loaded with Satureja hortensis essential oil (SEO). It was found that the essential oil concentration has significant influence on encapsulation efficiency (EE), loading capacity (LC) and size of microparticles. The values of EE, LC and particle mean diameter were about 52-66%, 20-26%, and 47-117 μm, respectively, when the initial SEO content was 1-3% (v/v). The essential oil-loaded microparticles were porous, as displayed by scanning electron micrograph. The presence of SEO in alginate microparticles was confirmed by Fourier transform-infrared (FT-IR) spectroscopy and differential scanning calorimetry (DSC) analyses. SEO-loaded microparticles showed good antioxidant (with DPPH radical scavenging activity of 40.7-73.5%) and antibacterial properties; this effect was greatly improved when the concentration of SEO was 3% (v/v). S. aureus was found to be the most sensitive bacterium to SEO and showed the highest inhibition zone of 304.37 mm(2) in the microparticles incorporated with 3% (v/v) SEO. In vitro release studies showed an initial burst release followed by a slow release. In addition, the release of SEO from the microparticles followed Fickian diffusion with acceptable release. Copyright © 2013 Elsevier B.V. All rights reserved.
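A Fickian release profile like the one reported above is commonly checked by fitting the Korsmeyer-Peppas power law on log-log axes. The abstract does not state which model the authors fitted, so this is a generic sketch with made-up data, not their analysis; an exponent n near 0.43-0.5 is conventionally read as Fickian diffusion from spherical particles.

```python
import numpy as np

def peppas_fit(t, frac_released):
    """Fit the Korsmeyer-Peppas power law M_t/M_inf = k * t**n by
    linear regression in log-log space; returns (k, n). Valid for the
    early portion of the release curve (M_t/M_inf < ~0.6)."""
    n, log_k = np.polyfit(np.log(t), np.log(frac_released), 1)
    return float(np.exp(log_k)), float(n)
```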

  20. A statistical model for aggregating judgments by incorporating peer predictions

    OpenAIRE

    McCoy, John; Prelec, Drazen

    2017-01-01

    We propose a probabilistic model to aggregate the answers of respondents answering multiple-choice questions. The model does not assume that everyone has access to the same information, and so does not assume that the consensus answer is correct. Instead, it infers the most probable world state, even if only a minority vote for it. Each respondent is modeled as receiving a signal contingent on the actual world state, and as using this signal to both determine their own answer and predict the ...
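The full probabilistic model infers a world state from answers and predictions jointly; a simpler relative by the same authors, the "surprisingly popular" rule, captures the key intuition that a minority answer can win if it is voted for more often than the crowd predicted. The sketch below implements that simpler rule, not the paper's model, and the vote/prediction data structures are assumptions introduced here.

```python
def surprisingly_popular(votes, predictions):
    """Return the answer whose actual vote share most exceeds the
    crowd's average predicted vote share. votes: {option: count};
    predictions: list of {option: predicted share} dicts, one per
    respondent."""
    total = sum(votes.values())
    best, best_gap = None, float("-inf")
    for opt in votes:
        actual = votes[opt] / total
        predicted = sum(p[opt] for p in predictions) / len(predictions)
        gap = actual - predicted  # 'surprise' in this option's popularity
        if gap > best_gap:
            best, best_gap = opt, gap
    return best
```

In the test below, "yes" loses the majority vote 30-70 but wins because respondents predicted it would get only 20%, so its support is surprisingly high.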

  1. Two approaches to incorporate clinical data uncertainty into multiple criteria decision analysis for benefit-risk assessment of medicinal products.

    Science.gov (United States)

    Wen, Shihua; Zhang, Lanju; Yang, Bo

    2014-07-01

    The Problem formulation, Objectives, Alternatives, Consequences, Trade-offs, Uncertainties, Risk attitude, and Linked decisions (PrOACT-URL) framework and multiple criteria decision analysis (MCDA) have been recommended by the European Medicines Agency for structured benefit-risk assessment of medicinal products undergoing regulatory review. The objective of this article was to provide solutions to incorporate the uncertainty from clinical data into the MCDA model when evaluating the overall benefit-risk profiles among different treatment options. Two statistical approaches, the δ-method approach and the Monte-Carlo approach, were proposed to construct the confidence interval of the overall benefit-risk score from the MCDA model as well as other probabilistic measures for comparing the benefit-risk profiles between treatment options. Both approaches can incorporate the correlation structure between clinical parameters (criteria) in the MCDA model and are straightforward to implement. The two proposed approaches were applied to a case study to evaluate the benefit-risk profile of an add-on therapy for rheumatoid arthritis (drug X) relative to placebo. It demonstrated a straightforward way to quantify the impact of the uncertainty from clinical data to the benefit-risk assessment and enabled statistical inference on evaluating the overall benefit-risk profiles among different treatment options. The δ-method approach provides a closed form to quantify the variability of the overall benefit-risk score in the MCDA model, whereas the Monte-Carlo approach is more computationally intensive but can yield its true sampling distribution for statistical inference. The obtained confidence intervals and other probabilistic measures from the two approaches enhance the benefit-risk decision making of medicinal products. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
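The Monte-Carlo approach described above can be sketched for a linear (weighted-sum) MCDA score. This is a minimal illustration under assumed inputs: multivariate-normal clinical parameters with a given covariance, fixed criterion weights, and a quantile-based interval; the paper's partial-value transformations and the δ-method alternative are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcda_score_ci(weights, means, cov, n_draws=10000, level=0.95):
    """Monte-Carlo confidence interval for a weighted MCDA benefit-risk
    score when the criterion estimates are uncertain and correlated.
    means/cov describe the joint sampling distribution of the clinical
    parameters; weights are the MCDA criterion weights."""
    draws = rng.multivariate_normal(means, cov, size=n_draws)
    scores = draws @ np.asarray(weights)  # one overall score per draw
    lo, hi = np.quantile(scores, [(1 - level) / 2, (1 + level) / 2])
    return scores.mean(), (lo, hi)
```

Running the same draws for two treatment options also yields probabilistic comparisons, e.g. the fraction of draws in which one option's score exceeds the other's.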

  2. Multiple Indicator Stationary Time Series Models.

    Science.gov (United States)

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  3. Modeling Multiple Causes of Carcinogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Jones, T D

    1999-01-24

    An array of epidemiological results and databases on test animals indicates that risk of cancer and atherosclerosis can be up- or down-regulated by diet through a range of 200%. Other factors contribute incrementally and include the natural terrestrial environment and various human activities that jointly produce complex exposures to endotoxin-producing microorganisms, ionizing radiations, and chemicals. Ordinary personal habits and simple physical irritants have been demonstrated to affect the immune response and risk of disease. There tends to be poor statistical correlation of long-term risk with single agent exposures incurred throughout working careers. However, Agency recommendations for control of hazardous exposures to humans have been substance-specific instead of contextually realistic even though there is consistent evidence for common mechanisms of toxicological and carcinogenic action. That behavior seems to be best explained by molecular stresses from cellular oxygen metabolism and phagocytosis of antigenic invasion as well as breakdown of normal metabolic compounds associated with homeostatic- and injury-related renewal of cells. There is continually mounting evidence that marrow stroma, comprised largely of monocyte-macrophages and fibroblasts, is important to phagocytic and cytokinetic response, but the complex action of the immune process is difficult to infer from first-principle logic or biomarkers of toxic injury. The many diverse database studies all seem to implicate two important processes, i.e., the univalent reduction of molecular oxygen and breakdown of arginine, an amino acid, by hydrolysis or digestion of protein, which is attendant to normal antigen-antibody action. This behavior indicates that protection guidelines and risk coefficients should be context dependent to include reference considerations of the composite action of parameters that mediate oxygen metabolism. A logic of this type permits the realistic common-scale modeling of

  4. The crowded sea: incorporating multiple marine activities in conservation plans can significantly alter spatial priorities.

    Directory of Open Access Journals (Sweden)

    Tessa Mazor

    Successful implementation of marine conservation plans is largely inhibited by inadequate consideration of the broader social and economic context within which conservation operates. Marine waters and their biodiversity are shared by a host of stakeholders, such as commercial fishers, recreational users and offshore developers. Hence, to improve implementation success of conservation plans, we must incorporate other marine activities while explicitly examining trade-offs that may be required. In this study, we test how the inclusion of multiple marine activities can shape conservation plans. We used the entire Mediterranean territorial waters of Israel as a case study to compare four planning scenarios with increasing levels of complexity, where additional zones, threats and activities were added (e.g., commercial fisheries, hydrocarbon exploration interests, aquaculture, and shipping lanes). We applied the marine zoning decision support tool Marxan to each planning scenario and tested (a) the ability of each scenario to reach biodiversity targets, (b) the change in opportunity cost and (c) the alteration of spatial conservation priorities. We found that by including increasing numbers of marine activities and zones in the planning process, greater compromises are required to reach conservation objectives. Complex plans with more activities incurred greater opportunity cost and did not reach biodiversity targets as easily as simplified plans with fewer marine activities. We discovered that including hydrocarbon data in the planning process significantly alters spatial priorities. For the territorial waters of Israel we found that in order to protect at least 10% of the range of 166 marine biodiversity features there would be a loss of ∼15% of annual commercial fishery revenue and ∼5% of prospective hydrocarbon revenue. This case study follows an illustrated framework for adopting a transparent systematic process to balance biodiversity goals and...

  5. Incorporating modelled subglacial hydrology into inversions for basal drag

    Directory of Open Access Journals (Sweden)

    C. P. Koziol

    2017-12-01

    A key challenge in modelling coupled ice-flow–subglacial hydrology is initializing the state and parameters of the system. We address this problem by presenting a workflow for initializing these values at the start of a summer melt season. The workflow depends on running a subglacial hydrology model for the winter season, when the system is not forced by meltwater inputs, and ice velocities can be assumed constant. Key parameters of the winter run of the subglacial hydrology model are determined from an initial inversion for basal drag using a linear sliding law. The state of the subglacial hydrology model at the end of winter is incorporated into an inversion of basal drag using a non-linear sliding law which is a function of water pressure. We demonstrate this procedure in the Russell Glacier area and compare the output of the linear sliding law with two non-linear sliding laws. Additionally, we compare the modelled winter hydrological state to radar observations and find that it is in line with summer rather than winter observations.

  6. An electricity generation planning model incorporating demand response

    International Nuclear Information System (INIS)

    Choi, Dong Gu; Thomas, Valerie M.

    2012-01-01

    Energy policies that aim to reduce carbon emissions and change the mix of electricity generation sources, such as carbon cap-and-trade systems and renewable electricity standards, can affect not only the source of electricity generation, but also the price of electricity and, consequently, demand. We develop an optimization model to determine the lowest cost investment and operation plan for the generating capacity of an electric power system. The model incorporates demand response to price change. In a case study for a U.S. state, we show the price, demand, and generation mix implications of a renewable electricity standard, and of a carbon cap-and-trade policy with and without initial free allocation of carbon allowances. This study shows that both the demand moderating effects and the generation mix changing effects of the policies can be the sources of carbon emissions reductions, and also shows that the share of the sources could differ with different policy designs. The case study provides different results when demand elasticity is excluded, underscoring the importance of incorporating demand response in the evaluation of electricity generation policies. - Highlights: ► We develop an electric power system optimization model including demand elasticity. ► Both renewable electricity and carbon cap-and-trade policies can moderate demand. ► Both policies affect the generation mix, price, and demand for electricity. ► Moderated demand can be a significant source of carbon emission reduction. ► For cap-and-trade policies, initial free allowances change outcomes significantly.
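The demand-response mechanism in the abstract, where policy-driven price changes feed back into demand, can be sketched as a constant-elasticity fixed-point iteration. This is an illustration only, not the authors' optimization model: the rising-marginal-cost price function, the damping scheme, and all numbers are assumptions introduced here.

```python
def equilibrium_demand(base_demand, base_price, price_of_demand, elasticity,
                       tol=1e-9, max_iter=1000):
    """Fixed point of a constant-elasticity demand response
    D = D0 * (P / P0) ** elasticity, where price is itself a function
    of demand (e.g. rising marginal generation cost under a carbon
    price). elasticity is negative for ordinary goods."""
    d = float(base_demand)
    for _ in range(max_iter):
        p = price_of_demand(d)
        d_new = base_demand * (p / base_price) ** elasticity
        if abs(d_new - d) < tol:
            return d_new
        d = 0.5 * (d + d_new)  # damped update for stable convergence
    return d
```

With zero elasticity demand is fixed regardless of the policy-induced price; a negative elasticity moderates demand as the price rises, which is the emission-reduction channel the case study highlights.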

  7. Tantalum strength model incorporating temperature, strain rate and pressure

    Science.gov (United States)

    Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt

Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high temperature, strain rate and pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and Z machine's high pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  8. Incorporation of chemical kinetic models into process control

    International Nuclear Information System (INIS)

    Herget, C.J.; Frazer, J.W.

    1981-01-01

    An important consideration in chemical process control is to determine the precise rationing of reactant streams, particularly when a large time delay exists between the mixing of the reactants and the measurement of the product. In this paper, a method is described for incorporating chemical kinetic models into the control strategy in order to achieve optimum operating conditions. The system is first characterized by determining a reaction rate surface as a function of all input reactant concentrations over a feasible range. A nonlinear constrained optimization program is then used to determine the combination of reactants which produces the specified yield at minimum cost. This operating condition is then used to establish the nominal concentrations of the reactants. The actual operation is determined through a feedback control system employing a Smith predictor. The method is demonstrated on a laboratory bench scale enzyme reactor
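The closing sentence's control scheme, a Smith predictor bridging the long delay between reactant mixing and product measurement, can be sketched in discrete time. The first-order plant, delay length, and PI gains below are illustrative assumptions, not the paper's enzyme-reactor model.

```python
from collections import deque

a, b, delay = 0.9, 0.1, 10           # plant: y[k+1] = a*y[k] + b*u[k-delay]
kp, ki = 2.0, 0.5                    # PI gains acting on the delay-free model
setpoint = 1.0

y = 0.0                              # measured plant output
ym = 0.0                             # delay-free internal model output
u_buf = deque([0.0] * delay)         # transport delay on the plant input
ym_buf = deque([0.0] * (delay + 1))  # delayed copy of the model output
integral = 0.0

for _ in range(200):
    # Smith predictor: regulate the delay-free model, corrected by the
    # mismatch between the measured plant and the delayed model output.
    feedback = ym + (y - ym_buf[0])
    error = setpoint - feedback
    integral += error
    u = kp * error + ki * integral

    u_buf.append(u)
    y = a * y + b * u_buf.popleft()  # plant sees the input `delay` steps late
    ym = a * ym + b * u              # internal model responds immediately
    ym_buf.append(ym)
    ym_buf.popleft()

print(round(y, 3))
```

Because the controller acts on the delay-free prediction, the PI gains can be tuned as if the delay were absent; with a perfect internal model the correction term is zero and the output settles at the setpoint.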

  9. Digital terrain model generalization incorporating scale, semantic and cognitive constraints

    Science.gov (United States)

    Partsinevelos, Panagiotis; Papadogiorgaki, Maria

    2014-05-01

Cartographic generalization is a well-known process accommodating spatial data compression, visualization and comprehension under various scales. In the last few years, there have been several international attempts to construct tangible GIS systems, forming real 3D surfaces using a vast number of mechanical parts along a matrix formation (i.e., bars, pistons, vacuums). Usually, moving bars upon a structured grid push a stretching membrane, resulting in a smooth visualization of a given surface. Most of these attempts suffer in cost, accuracy, resolution and/or speed. Under this perspective, the present study proposes a surface generalization process that incorporates intrinsic constraints of tangible GIS systems, including robotic-motor movement and surface stretching limitations. The main objective is to provide optimized visualizations of 3D digital terrain models with minimum loss of information; that is, to minimize the number of pixels in a raster dataset used to define a DTM while preserving the surface information. This neighborhood type of pixel relations adheres to the basics of Self Organizing Map (SOM) artificial neural networks, which are often used for information abstraction since they are indicative of intrinsic statistical features contained in the input patterns and provide concise and characteristic representations. Nevertheless, SOM remains a black-box procedure, incapable of coping with possible particularities and semantics of the application at hand. For example, in coastal monitoring applications, the near-coast areas, surrounding mountains and lakes are more important than other features, and generalization should be "biased"-stratified to fulfill this requirement. Moreover, according to the application objectives, we extend the SOM algorithm to incorporate special types of information generalization by differentiating the underlying strategy based on topologic information of the objects included in the application. The final

  10. A stochastic MILP energy planning model incorporating power market dynamics

    International Nuclear Information System (INIS)

    Koltsaklis, Nikolaos E.; Nazos, Konstantinos

    2017-01-01

    Highlights: •Stochastic MILP model for the optimal energy planning of a power system. •Power market dynamics (offers/bids) are incorporated in the proposed model. •Monte Carlo method for capturing the uncertainty of some key parameters. •Analytical supply cost composition per power producer and activity. •Clean dark and spark spreads are calculated for each power unit. -- Abstract: This paper presents an optimization-based methodological approach to address the problem of the optimal planning of a power system at an annual level in competitive and uncertain power markets. More specifically, a stochastic mixed integer linear programming model (MILP) has been developed, combining advanced optimization techniques with Monte Carlo method in order to deal with uncertainty issues. The main focus of the proposed framework is the dynamic formulation of the strategy followed by all market participants in volatile market conditions, as well as detailed economic assessment of the power system’s operation. The applicability of the proposed approach has been tested on a real case study of the interconnected Greek power system, quantifying in detail all the relevant technical and economic aspects of the system’s operation. The proposed work identifies in the form of probability distributions the optimal power generation mix, electricity trade at a regional level, carbon footprint, as well as detailed total supply cost composition, according to the assumed market structure. The paper demonstrates that the proposed optimization approach is able to provide important insights into the appropriate energy strategies designed by market participants, as well as on the strategic long-term decisions to be made by investors and/or policy makers at a national and/or regional level, underscoring potential risks and providing appropriate price signals on critical energy projects under real market operating conditions.
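The role Monte Carlo plays here can be illustrated with a heavily simplified stand-in for the MILP: draw uncertain fuel prices, re-run a trivial merit-order dispatch for each draw, and collect the supply-cost distribution. Unit capacities, costs, and price distributions are invented for illustration.

```python
import random
import statistics

random.seed(42)
DEMAND = 100.0  # MWh to be served in the snapshot

def dispatch_cost(gas_price, coal_price):
    """Merit-order dispatch of two illustrative units; returns total cost."""
    units = [(60.0, coal_price), (80.0, gas_price)]  # (capacity, $/MWh)
    units.sort(key=lambda u: u[1])                   # cheapest first
    remaining, cost = DEMAND, 0.0
    for capacity, price in units:
        used = min(capacity, remaining)
        cost += used * price
        remaining -= used
    return cost

# Monte Carlo over uncertain fuel prices
costs = [dispatch_cost(random.gauss(45.0, 10.0), random.gauss(30.0, 5.0))
         for _ in range(5000)]
print(round(statistics.mean(costs)), round(statistics.stdev(costs)))
```

In the paper this inner step is the stochastic MILP itself, so each draw yields not just a cost but a full generation mix and trade pattern, and the collected results form the reported probability distributions.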

  11. Incorporating spatial autocorrelation into species distribution models alters forecasts of climate-mediated range shifts.

    Science.gov (United States)

    Crase, Beth; Liedloff, Adam; Vesk, Peter A; Fukuda, Yusuke; Wintle, Brendan A

    2014-08-01

Species distribution models (SDMs) are widely used to forecast changes in the spatial distributions of species and communities in response to climate change. However, spatial autocorrelation (SA) is rarely accounted for in these models, despite its ubiquity in broad-scale ecological data. While spatial autocorrelation in model residuals is known to result in biased parameter estimates and the inflation of type I errors, the influence of unmodeled SA on species' range forecasts is poorly understood. Here we quantify how accounting for SA in SDMs influences the magnitude of range shift forecasts produced by SDMs for multiple climate change scenarios. SDMs were fitted to simulated data with a known autocorrelation structure, and to field observations of three mangrove communities from northern Australia displaying strong spatial autocorrelation. Three modeling approaches were implemented: environment-only models (most frequently applied in species' range forecasts), and two approaches that incorporate SA: autologistic models and residuals autocovariate (RAC) models. Differences in forecasts among modeling approaches and climate scenarios were quantified. While all model predictions at the current time closely matched the actual current distribution of the mangrove communities, under the climate change scenarios environment-only models forecast substantially greater range shifts than models incorporating SA. Furthermore, the magnitude of these differences intensified with increasing increments of climate change across the scenarios. When models do not account for SA, forecasts of species' range shifts indicate more extreme impacts of climate change, compared to models that explicitly account for SA. Therefore, where biological or population processes induce substantial autocorrelation in the distribution of organisms, and this is not modeled, model predictions will be inaccurate. These results have global importance for conservation efforts as inaccurate
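Of the two SA-aware approaches named above, the residuals autocovariate (RAC) model is straightforward to sketch: fit an environment-only model, average its residuals over each cell's neighbourhood, and feed that average back in as an extra covariate. The grid, the stand-in "environment-only" fit, and the neighbourhood radius below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
occupancy = rng.random((20, 20)) < 0.4               # observed presences
fitted = np.full(occupancy.shape, occupancy.mean())  # env-only prediction
residuals = occupancy.astype(float) - fitted

def residual_autocovariate(res, radius=1):
    """Mean residual in the (2*radius+1)^2 window around each cell."""
    pad = np.pad(res, radius, mode="edge")
    out = np.zeros_like(res)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += pad[radius + dy:radius + dy + res.shape[0],
                       radius + dx:radius + dx + res.shape[1]]
    return out / (2 * radius + 1) ** 2

rac = residual_autocovariate(residuals)
# `rac` would now be appended to the covariates and the SDM refitted.
print(rac.shape)
```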

  12. Incorporating Context Dependency of Species Interactions in Species Distribution Models.

    Science.gov (United States)

    Lany, Nina K; Zarnetske, Phoebe L; Gouhier, Tarik C; Menge, Bruce A

    2017-07-01

    Species distribution models typically use correlative approaches that characterize the species-environment relationship using occurrence or abundance data for a single species. However, species distributions are determined by both abiotic conditions and biotic interactions with other species in the community. Therefore, climate change is expected to impact species through direct effects on their physiology and indirect effects propagated through their resources, predators, competitors, or mutualists. Furthermore, the sign and strength of species interactions can change according to abiotic conditions, resulting in context-dependent species interactions that may change across space or with climate change. Here, we incorporated the context dependency of species interactions into a dynamic species distribution model. We developed a multi-species model that uses a time-series of observational survey data to evaluate how abiotic conditions and species interactions affect the dynamics of three rocky intertidal species. The model further distinguishes between the direct effects of abiotic conditions on abundance and the indirect effects propagated through interactions with other species. We apply the model to keystone predation by the sea star Pisaster ochraceus on the mussel Mytilus californianus and the barnacle Balanus glandula in the rocky intertidal zone of the Pacific coast, USA. Our method indicated that biotic interactions between P. ochraceus and B. glandula affected B. glandula dynamics across >1000 km of coastline. Consistent with patterns from keystone predation, the growth rate of B. glandula varied according to the abundance of P. ochraceus in the previous year. The data and the model did not indicate that the strength of keystone predation by P. ochraceus varied with a mean annual upwelling index. Balanus glandula cover increased following years with high phytoplankton abundance measured as mean annual chlorophyll-a. M. californianus exhibited the same

  13. Model for Volatile Incorporation into Soils and Dust on Mars

    Science.gov (United States)

    Clark, B. C.; Yen, A.

    2006-12-01

    Martian soils with high content of compounds of sulfur and chlorine are ubiquitous on Mars, having been found at all five landing sites. Sulfate and chloride salts are implicated by a variety of evidence, but few conclusive specific identifications have been made. Discovery of jarosite and Mg-Ca sulfates in outcrops at Meridiani Planum (MER mission) and regional-scale beds of kieserite and gypsum (Mars Express mission) notwithstanding, the sulfates in soils are uncertain. Chlorides or other Cl-containing minerals have not been uniquely identified directly by any method. Viking and Pathfinder missions found trends in the elemental analytical data consistent with MgSO4, but Viking results are biased by duricrust samples and Pathfinder by soil contamination of rock surfaces. The Mars Exploration Rovers (MER) missions have taken extensive data on soils with no confirmation of trends implicating any particular cation. In our model of martian dust and soil, the S and Cl are initially incorporated by condensation or chemisorption on grains directly from gas phase molecules in the atmosphere. It is shown by modeling that the coatings thus formed cannot quantitatively explain the apparent elemental composition of these materials, and therefore involve the migration of ions and formation of microscopic weathering rinds. Original cation inventories of unweathered particles are isochemically conserved. Exposed rock surfaces should also have micro rinds, depending upon the length of time of exposure. Martian soils may therefore have unusual chemical properties when interacting with aqueous layers or infused fluids. Potential ramifications to the quantitative accuracy of x-ray fluorescence and Moessbauer spectroscopy on unprocessed samples are also assessed.

  14. Multiplicity Control in Structural Equation Modeling

    Science.gov (United States)

    Cribbie, Robert A.

    2007-01-01

Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…

  15. Medicare capitation model, functional status, and multiple comorbidities: model accuracy

    Science.gov (United States)

    Noyes, Katia; Liu, Hangsheng; Temkin-Greener, Helena

    2012-01-01

    Objective This study examined financial implications of CMS-Hierarchical Condition Categories (HCC) risk-adjustment model on Medicare payments for individuals with comorbid chronic conditions. Study Design The study used 1992-2000 data from the Medicare Current Beneficiary Survey and corresponding Medicare claims. The pairs of comorbidities were formed based on the prior evidence about possible synergy between these conditions and activities of daily living (ADL) deficiencies and included heart disease and cancer, lung disease and cancer, stroke and hypertension, stroke and arthritis, congestive heart failure (CHF) and osteoporosis, diabetes and coronary artery disease, CHF and dementia. Methods For each beneficiary, we calculated the actual Medicare cost ratio as the ratio of the individual’s annualized costs to the mean annual Medicare cost of all people in the study. The actual Medicare cost ratios, by ADLs, were compared to the HCC ratios under the CMS-HCC payment model. Using multivariate regression models, we tested whether having the identified pairs of comorbidities affects the accuracy of CMS-HCC model predictions. Results The CMS-HCC model underpredicted Medicare capitation payments for patients with hypertension, lung disease, congestive heart failure and dementia. The difference between the actual costs and predicted payments was partially explained by beneficiary functional status and less than optimal adjustment for these chronic conditions. Conclusions Information about beneficiary functional status should be incorporated in reimbursement models since underpaying providers for caring for population with multiple comorbidities may provide severe disincentives for managed care plans to enroll such individuals and to appropriately manage their complex and costly conditions. PMID:18837646
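The actual-cost-ratio construction described above amounts to a one-line computation per beneficiary; a minimal sketch with invented numbers:

```python
# Each beneficiary's annualized cost divided by the mean annual cost of
# the study population, set against the CMS-HCC predicted ratio.
# All numbers below are invented for illustration.
costs = [4000.0, 12000.0, 8000.0, 24000.0]  # annualized Medicare costs ($)
hcc_ratios = [0.6, 1.1, 0.8, 1.5]           # model-predicted cost ratios

mean_cost = sum(costs) / len(costs)
actual_ratios = [c / mean_cost for c in costs]
# positive values = the CMS-HCC model underpredicts for that beneficiary
underprediction = [a - h for a, h in zip(actual_ratios, hcc_ratios)]
print([round(r, 2) for r in actual_ratios])
```

The paper's regression step then tests whether these gaps are systematically related to comorbidity pairs and ADL deficiencies.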

  16. Incorporating Multiple Energy Relay Dyes in Liquid Dye-Sensitized Solar Cells

    KAUST Repository

    Yum, Jun-Ho; Hardin, Brian E.; Hoke, Eric T.; Baranoff, Etienne; Zakeeruddin, Shaik M.; Nazeeruddin, Mohammad K.; Torres, Tomas; McGehee, Michael D.; Grä tzel, Michael

    2011-01-01

    Panchromatic response is essential to increase the light-harvesting efficiency in solar conversion systems. Herein we show increased light harvesting from using multiple energy relay dyes inside dye-sensitized solar cells. Additional photoresponse

  17. A model for diagnosing and explaining multiple disorders.

    Science.gov (United States)

    Jamieson, P W

    1991-08-01

    The ability to diagnose multiple interacting disorders and explain them in a coherent causal framework has only partially been achieved in medical expert systems. This paper proposes a causal model for diagnosing and explaining multiple disorders whose key elements are: physician-directed hypotheses generation, object-oriented knowledge representation, and novel explanation heuristics. The heuristics modify and link the explanations to make the physician aware of diagnostic complexities. A computer program incorporating the model currently is in use for diagnosing peripheral nerve and muscle disorders. The program successfully diagnoses and explains interactions between diseases in terms of underlying pathophysiologic concepts. The model offers a new architecture for medical domains where reasoning from first principles is difficult but explanation of disease interactions is crucial for the system's operation.

  18. Design of Xen Hybrid Multiple Police Model

    Science.gov (United States)

    Sun, Lei; Lin, Renhao; Zhu, Xianwei

    2017-10-01

Virtualization technology has attracted more and more attention. As a popular open-source virtualization tool, Xen is used increasingly often, and XSM, the Xen security model, has also drawn widespread concern. XSM does not establish a safety status classification, and by treating the virtual machine as the managed object it makes Dom0 a unique administrative domain, which violates the principle of least privilege. To address these issues, we design a hybrid multiple-policy model named SV_HMPMD that organically integrates several single security policy models, including DTE, RBAC, and BLP. It can fulfill the confidentiality and integrity requirements of a security model and apply different granularities to different domains. To improve the practicability of BLP, the model introduces multi-level security labels; to divide privileges in detail, we combine DTE with RBAC; and to avoid excessive privilege, we limit the privileges of Dom0.

  19. A multiple-location model for natural gas forward curves

    International Nuclear Information System (INIS)

    Buffington, J.C.

    1999-06-01

    This thesis presents an approach for financial modelling of natural gas in which connections between locations are incorporated and the complexities of forward curves in natural gas are considered. Apart from electricity, natural gas is the most volatile commodity traded. Its price is often dependent on the weather and price shocks can be felt across several geographic locations. This modelling approach incorporates multiple risk factors that correspond to various locations. One of the objectives was to determine if the model could be used for closed-form option prices. It was suggested that an adequate model for natural gas must consider 3 statistical properties: volatility term structure, backwardation and contango, and stochastic basis. Data from gas forward prices at Chicago, NYMEX and AECO were empirically tested to better understand these 3 statistical properties at each location and to verify if the proposed model truly incorporates these properties. In addition, this study examined the time series property of the difference of two locations (the basis) and determines that these empirical properties are consistent with the model properties. Closed-form option solutions were also developed for call options of forward contracts and call options on forward basis. The options were calibrated and compared to other models. The proposed model is capable of pricing options, but the prices derived did not pass the test of economic reasonableness. However, the model was able to capture the effect of transportation as well as aspects of seasonality which is a benefit over other existing models. It was determined that modifications will be needed regarding the estimation of the convenience yields. 57 refs., 2 tabs., 7 figs., 1 append
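A two-location toy model (an assumption-laden stand-in, not the thesis model) shows the kind of stochastic basis behaviour described: correlated mean-reverting log prices at two hubs, with the basis as their difference.

```python
import math
import random

random.seed(1)
kappa, sigma, rho, dt = 2.0, 0.5, 0.8, 1 / 252  # illustrative parameters
mu_a, mu_b = math.log(3.0), math.log(2.8)       # long-run log price levels
x_a, x_b = mu_a, mu_b

basis = []
for _ in range(252):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    e_b = rho * z1 + math.sqrt(1 - rho**2) * z2  # correlated location shock
    x_a += kappa * (mu_a - x_a) * dt + sigma * math.sqrt(dt) * z1
    x_b += kappa * (mu_b - x_b) * dt + sigma * math.sqrt(dt) * e_b
    basis.append(math.exp(x_a) - math.exp(x_b))  # stochastic basis

mean_basis = sum(basis) / len(basis)
print(round(mean_basis, 2))
```

Raising rho tightens the simulated basis, mimicking better-connected locations; the thesis adds seasonality, transportation effects, and closed-form option pricing on top of joint dynamics of this kind.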

  20. Incorporating Environmental Justice into Second Generation Indices of Multiple Deprivation: Lessons from the UK and Progress Internationally

    Directory of Open Access Journals (Sweden)

    Jon Fairburn

    2016-07-01

    Full Text Available Second generation area-based indices of multiple deprivation have been extensively used in the UK over the last 15 years. They resulted from significant developments in political, technical, and conceptual spheres for deprivation data. We review the parallel development of environmental justice research and how and when environmental data was incorporated into these indices. We explain the transfer of these methods from the UK to Germany and assess the progress internationally in developing such indices. Finally, we illustrate how billions of pounds in the UK was allocated by using these tools to tackle neighbourhood deprivation and environmental justice to address the determinants of health.

  1. Incorporating Multiple Energy Relay Dyes in Liquid Dye-Sensitized Solar Cells

    KAUST Repository

    Yum, Jun-Ho

    2011-01-05

Panchromatic response is essential to increase the light-harvesting efficiency in solar conversion systems. Herein we show increased light harvesting from using multiple energy relay dyes inside dye-sensitized solar cells. Additional photoresponse from 400-590 nm matching the optical window of the zinc phthalocyanine sensitizer was observed due to Förster resonance energy transfer (FRET) from the two energy relay dyes to the sensitizing dye. The complementary absorption spectra of the energy relay dyes and high excitation transfer efficiencies result in a 35% increase in photovoltaic performance. © 2011 Wiley-VCH Verlag GmbH & Co. KGaA.

  2. More Than Just Accuracy: A Novel Method to Incorporate Multiple Test Attributes in Evaluating Diagnostic Tests Including Point of Care Tests.

    Science.gov (United States)

    Thompson, Matthew; Weigl, Bernhard; Fitzpatrick, Annette; Ide, Nicole

    2016-01-01

    Current frameworks for evaluating diagnostic tests are constrained by a focus on diagnostic accuracy, and assume that all aspects of the testing process and test attributes are discrete and equally important. Determining the balance between the benefits and harms associated with new or existing tests has been overlooked. Yet, this is critically important information for stakeholders involved in developing, testing, and implementing tests. This is particularly important for point of care tests (POCTs) where tradeoffs exist between numerous aspects of the testing process and test attributes. We developed a new model that multiple stakeholders (e.g., clinicians, patients, researchers, test developers, industry, regulators, and health care funders) can use to visualize the multiple attributes of tests, the interactions that occur between these attributes, and their impacts on health outcomes. We use multiple examples to illustrate interactions between test attributes (test availability, test experience, and test results) and outcomes, including several POCTs. The model could be used to prioritize research and development efforts, and inform regulatory submissions for new diagnostics. It could potentially provide a way to incorporate the relative weights that various subgroups or clinical settings might place on different test attributes. Our model provides a novel way that multiple stakeholders can use to visualize test attributes, their interactions, and impacts on individual and population outcomes. We anticipate that this will facilitate more informed decision making around diagnostic tests.

  3. Multiple model cardinalized probability hypothesis density filter

    Science.gov (United States)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  4. Predictive performance models and multiple task performance

    Science.gov (United States)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  5. Improved accuracy of multiple ncRNA alignment by incorporating structural information into a MAFFT-based framework

    Directory of Open Access Journals (Sweden)

    Toh Hiroyuki

    2008-04-01

Full Text Available Abstract Background Structural alignment of RNAs is becoming important, since the discovery of functional non-coding RNAs (ncRNAs). Recent studies, mainly based on various approximations of the Sankoff algorithm, have resulted in considerable improvement in the accuracy of pairwise structural alignment. In contrast, for the cases with more than two sequences, the practical merit of structural alignment remains unclear as compared to traditional sequence-based methods, although the importance of multiple structural alignment is widely recognized. Results We took a different approach from a straightforward extension of the Sankoff algorithm to the multiple alignments from the viewpoints of accuracy and time complexity. As a new option of the MAFFT alignment program, we developed a multiple RNA alignment framework, X-INS-i, which builds a multiple alignment with an iterative method incorporating structural information through two components: (1) pairwise structural alignments by an external pairwise alignment method such as SCARNA or LaRA and (2) a new objective function, Four-way Consistency, derived from the base-pairing probability of every sub-aligned group at every multiple alignment stage. Conclusion The BRAliBASE benchmark showed that X-INS-i outperforms other methods currently available in the sum-of-pairs score (SPS) criterion. As a basis for predicting common secondary structure, the accuracy of the present method is comparable to or rather higher than those of the current leading methods such as RNA Sampler. The X-INS-i framework can be used for building a multiple RNA alignment from any combination of algorithms for pairwise RNA alignment and base-pairing probability. The source code is available at the webpage found in the Availability and requirements section.

  6. A MULTI-RESOLUTION FUSION MODEL INCORPORATING COLOR AND ELEVATION FOR SEMANTIC SEGMENTATION

    Directory of Open Access Journals (Sweden)

    W. Zhang

    2017-05-01

Full Text Available In recent years, the developments for Fully Convolutional Networks (FCN) have led to great improvements for semantic segmentation in various applications including fused remote sensing data. There is, however, a lack of an in-depth study inside FCN models which would lead to an understanding of the contribution of individual layers to specific classes and their sensitivity to different types of input data. In this paper, we address this problem and propose a fusion model incorporating infrared imagery and Digital Surface Models (DSM) for semantic segmentation. The goal is to utilize heterogeneous data more accurately and effectively in a single model instead of assembling multiple models. First, the contribution and sensitivity of layers concerning the given classes are quantified by means of their recall in FCN. The contribution of different modalities on the pixel-wise prediction is then analyzed based on visualization. Finally, an optimized scheme for the fusion of layers with color and elevation information into a single FCN model is derived based on the analysis. Experiments are performed on the ISPRS Vaihingen 2D Semantic Labeling dataset. Comprehensive evaluations demonstrate the potential of the proposed approach.

  7. Benefits of incorporating spatial organisation of catchments for a semi-distributed hydrological model

    Science.gov (United States)

    Schumann, Andreas; Oppel, Henning

    2017-04-01

To represent the hydrological behaviour of catchments, a model should reproduce the hydrologically most relevant catchment characteristics. These are heterogeneously distributed within a watershed but are often interrelated and subject to a certain spatial organisation. Since common models are mostly based on fundamental assumptions about hydrological processes, the reduction of variance of catchment properties as well as the incorporation of the spatial organisation of the catchment is desirable. We have developed a method that combines the idea of the width-function, used for determination of the geomorphologic unit hydrograph, with information about soil or topography. With this method we are able to assess the spatial organisation of selected catchment characteristics. An algorithm was developed that structures a watershed into sub-basins and other spatial units to minimise its heterogeneity. The outcomes of this algorithm are used for the spatial setup of a semi-distributed model. Since the spatial organisation of a catchment is not bound to a single characteristic, we have to embed information on multiple catchment properties. For this purpose we applied a fuzzy-based method to combine the spatial setups for multiple single characteristics into a unified, optimal spatial differentiation. Utilizing this method, we are able to propose a spatial structure for a semi-distributed hydrological model, comprising the definition of sub-basins and a zonal classification within each sub-basin. Besides the improved spatial structuring, the performed analysis ameliorates modelling in another way. The spatial variability of catchment characteristics, reflected in the minimised heterogeneity of the zones, can be accounted for in a parameter-constrained calibration scheme. In a case study, both options were used to explore the benefits of incorporating the spatial organisation and the derived parameter constraints for the parametrisation of an HBV-96 model.
We use two benchmark
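The width-function idea referenced above can be sketched directly: histogram each cell's flow distance to the outlet, then attach a second catchment property per distance class, which is the kind of combination the method builds on. The synthetic catchment below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(11)
flow_dist = rng.uniform(0.0, 20.0, 1000)  # km to outlet, one value per cell
soil_index = rng.random(1000)             # e.g. a storage-capacity proxy

bins = np.arange(0, 22, 2)                     # 2 km distance classes
width, _ = np.histogram(flow_dist, bins=bins)  # classic width function
cls = np.digitize(flow_dist, bins) - 1         # distance class per cell
soil_profile = np.array([soil_index[cls == k].mean()
                         for k in range(len(width))])
print(width.sum(), soil_profile.shape)
```

Strong variation of `soil_profile` along the distance axis would signal heterogeneity that the zoning algorithm described above should isolate into separate spatial units.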

  8. 75 FR 20265 - Airworthiness Directives; Liberty Aerospace Incorporated Model XL-2 Airplanes

    Science.gov (United States)

    2010-04-19

    ... Office, 1701 Columbia Avenue, College Park, Georgia 30337; telephone: (404) 474-5524; facsimile: (404... Airworthiness Directives; Liberty Aerospace Incorporated Model XL-2 Airplanes AGENCY: Federal Aviation...-08- 05, which applies to certain Liberty Aerospace Incorporated Model XL-2 airplanes. AD 2009-08-05...

  9. Loss given default models incorporating macroeconomic variables for credit cards

    OpenAIRE

    Crook, J.; Bellotti, T.

    2012-01-01

    Based on UK data for major retail credit cards, we build several models of Loss Given Default based on account level data, including Tobit, a decision tree model, a Beta and fractional logit transformation. We find that Ordinary Least Squares models with macroeconomic variables perform best for forecasting Loss Given Default at the account and portfolio levels on independent hold-out data sets. The inclusion of macroeconomic conditions in the model is important, since it provides a means to m...
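The best-performing setup reported, OLS on account-level and macroeconomic covariates, reduces to a standard regression; a minimal sketch on simulated data (variable names and coefficients are assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
utilisation = rng.random(n)              # account-level covariate
unemployment = rng.normal(6.0, 1.5, n)   # macroeconomic covariate (%)
# simulated Loss Given Default with known coefficients plus noise
lgd = 0.3 + 0.2 * utilisation + 0.03 * unemployment + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), utilisation, unemployment])
beta, *_ = np.linalg.lstsq(X, lgd, rcond=None)  # OLS fit
print(np.round(beta, 2))
```

Holding the account covariates fixed and varying the macro inputs then yields the stressed LGD forecasts that motivate including macroeconomic conditions in the model.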

  10. Incorporating Social System Dynamics into the Food-Energy-Water System Resilience-Sustainability Modeling Process

    Science.gov (United States)

    Givens, J.; Padowski, J.; Malek, K.; Guzman, C.; Boll, J.; Adam, J. C.; Witinok-Huber, R.

    2017-12-01

    In the face of climate change and multi-scalar governance objectives, achieving resilience of food-energy-water (FEW) systems requires interdisciplinary approaches. Through coordinated modeling and management efforts, we study "Innovations in the Food-Energy-Water Nexus (INFEWS)" through a case-study in the Columbia River Basin. Previous research on FEW system management and resilience includes some attention to social dynamics (e.g., economic, governance); however, more research is needed to better address social science perspectives. Decisions ultimately taken in this river basin would occur among stakeholders encompassing various institutional power structures including multiple U.S. states, tribal lands, and sovereign nations. The social science lens draws attention to the incompatibility between the engineering definition of resilience (i.e., return to equilibrium or a singular stable state) and the ecological and social system realities, more explicit in the ecological interpretation of resilience (i.e., the ability of a system to move into a different, possibly more resilient state). Social science perspectives include but are not limited to differing views on resilience as normative, system persistence versus transformation, and system boundary issues. To expand understanding of resilience and objectives for complex and dynamic systems, concepts related to inequality, heterogeneity, power, agency, trust, values, culture, history, conflict, and system feedbacks must be more tightly integrated into FEW research. We identify gaps in knowledge and data, and the value and complexity of incorporating social components and processes into systems models. We posit that socio-biophysical system resilience modeling would address important complex, dynamic social relationships, including non-linear dynamics of social interactions, to offer an improved understanding of sustainable management in FEW systems. 
The conceptual modeling presented in our study represents

  11. Incorporating Contagion in Portfolio Credit Risk Models Using Network Theory

    NARCIS (Netherlands)

    Anagnostou, I.; Sourabh, S.; Kandhai, D.

    2018-01-01

    Portfolio credit risk models estimate the range of potential losses due to defaults or deteriorations in credit quality. Most of these models perceive default correlation as fully captured by the dependence on a set of common underlying risk factors. In light of empirical evidence, the ability of

  12. Incorporating measurement error in n = 1 psychological autoregressive modeling

    Science.gov (United States)

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
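The bias discussed above is easy to reproduce in a minimal sketch (parameter values invented): data are generated from a latent AR(1) process, white measurement noise is added, and the naive lag-1 autocorrelation estimate of the noisy series is attenuated relative to the true autoregressive parameter.

```python
import numpy as np

# Latent AR(1) process observed with white measurement noise.
rng = np.random.default_rng(42)
phi, n = 0.6, 20000
x = np.zeros(n)
for t in range(1, n):                 # latent AR(1) dynamics
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(0, 1.0, n)         # observed = latent + measurement noise

def ar1_estimate(s):
    """Naive lag-1 autocorrelation, the usual AR(1) moment estimator."""
    s = s - s.mean()
    return float(s[1:] @ s[:-1] / (s @ s))

est_latent = ar1_estimate(x)          # recovers phi
est_observed = ar1_estimate(y)        # attenuated toward zero
print(est_latent, est_observed)
```

With these settings the noise-free estimate sits near 0.6 while the noisy one drops toward the reliability-weighted value, which is the underestimation the abstract warns about.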

  13. Application of Multiple Evaluation Models in Brazil

    Directory of Open Access Journals (Sweden)

    Rafael Victal Saliba

    2008-07-01

    Full Text Available Based on two different samples, this article tests the performance of a number of Value Drivers commonly used for evaluating companies by finance practitioners, through simple regression models of cross-section type which estimate the parameters associated with each Value Driver, denominated Market Multiples. We are able to diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on performance) through a subsequent analysis with segregation of companies in the sample by sectors. Extrapolating simple multiples evaluation standards from analysts of the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing error reduction. The results found, in spite of evidencing a certain relative and absolute superiority among the multiples, may not be generically representative, given sample limitations.
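The intercept comparison above can be sketched on synthetic data (the earnings multiple, noise level, and error metric are invented for illustration): a multiple estimated by regression through the origin versus the same cross-sectional regression with an intercept, compared by relative pricing error.

```python
import numpy as np

# Synthetic cross-section: prices are an earnings multiple with noise.
rng = np.random.default_rng(7)
earnings = rng.uniform(1, 10, 200)
price = 8.0 * earnings * np.exp(rng.normal(0, 0.2, 200))  # "true" P/E near 8

# Multiple without intercept: regression through the origin.
m = float(price @ earnings / (earnings @ earnings))

# Same regression with an intercept added.
X = np.column_stack([np.ones_like(earnings), earnings])
b, *_ = np.linalg.lstsq(X, price, rcond=None)

err_no_int = np.abs(price - m * earnings) / price
err_int = np.abs(price - X @ b) / price
print(f"median pricing error, no intercept: {np.median(err_no_int):.3f}, "
      f"with intercept: {np.median(err_int):.3f}")
```

On data generated with multiplicative noise the two formulations give similar pricing errors, which is consistent with the article's finding that adding an intercept yields no satisfactory reduction.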

  14. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    Science.gov (United States)

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
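A two-state Markov modulated Poisson process can be sketched in a few lines (the transition probabilities and state-dependent rates below are invented, and the covariate modulation of the paper is omitted): a hidden chain switches between a low-rate "dry" state and a high-rate "wet" state, and bucket tips arrive at the rate of the current state.

```python
import numpy as np

# Discrete-time (per-minute) approximation of a two-state MMPP.
rng = np.random.default_rng(3)
P = np.array([[0.95, 0.05],      # hidden-state transition matrix
              [0.10, 0.90]])
rates = np.array([0.01, 0.5])    # tips per minute in state 0 ("dry") / 1 ("wet")

n_min, state = 50_000, 0
tips, wet_minutes = 0, 0
for _ in range(n_min):
    state = rng.choice(2, p=P[state])       # evolve the hidden Markov chain
    wet_minutes += state
    tips += rng.poisson(rates[state])       # state-dependent Poisson arrivals

# Stationary distribution of this chain is pi = (2/3, 1/3), so the
# long-run tip rate should be the pi-weighted average of the two rates.
expected_rate = (2 / 3) * rates[0] + (1 / 3) * rates[1]
print(tips / n_min, expected_rate)
```

Fitting such a model by maximum likelihood, as in the paper, would replace the known `P` and `rates` with estimates, optionally letting the rates depend on covariates such as temperature or pressure.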

  15. Incorporating Responsiveness to Marketing Efforts in Brand Choice Modeling

    Directory of Open Access Journals (Sweden)

    Dennis Fok

    2014-02-01

    Full Text Available We put forward a brand choice model with unobserved heterogeneity that concerns responsiveness to marketing efforts. We introduce two latent segments of households. The first segment is assumed to respond to marketing efforts, while households in the second segment do not do so. Whether a specific household is a member of the first or the second segment at a specific purchase occasion is described by household-specific characteristics and characteristics concerning buying behavior. Households may switch between the two responsiveness states over time. When comparing the performance of our model with alternative choice models that account for various forms of heterogeneity for three different datasets, we find better face validity for our parameters. Our model also forecasts better.

  16. Modeling returns volatility: Realized GARCH incorporating realized risk measure

    Science.gov (United States)

    Jiang, Wei; Ruan, Qingsong; Li, Jianfeng; Li, Ye

    2018-06-01

    This study applies realized GARCH models, introducing several risk measures of intraday returns into the measurement equation, to model the daily volatility of E-mini S&P 500 index futures returns. Besides using the conventional realized measures (realized volatility and the realized kernel) as our benchmarks, we also use a generalized realized risk measure (realized absolute deviation) and two realized tail risk measures (realized value-at-risk and realized expected shortfall). The empirical results show that realized GARCH models using the generalized realized risk measures provide better in-sample volatility estimation and substantial improvement in out-of-sample volatility forecasting. In particular, realized expected shortfall performs best among all of the alternative realized measures. Our empirical results reveal that future volatility may be more attributable to present losses (risk measures). The results are robust to different sample estimation windows.
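The structure of a log-linear realized GARCH can be sketched by simulation (all coefficients are invented, and a generic noisy realized measure stands in for the paper's realized risk measures): the volatility equation is driven by the lagged realized measure, and a measurement equation ties that measure back to latent variance.

```python
import numpy as np

# Log-linear realized GARCH skeleton with invented coefficients.
rng = np.random.default_rng(11)
omega, beta, gamma = -0.1, 0.7, 0.25   # volatility equation
xi, phi = -0.05, 1.0                   # measurement equation
n = 3000

log_s2 = np.empty(n)   # latent log conditional variance
log_x = np.empty(n)    # log realized measure
r = np.empty(n)        # daily return
log_s2[0] = 0.0
for t in range(n):
    sigma = np.exp(0.5 * log_s2[t])
    r[t] = sigma * rng.normal()                           # return equation
    log_x[t] = xi + phi * log_s2[t] + 0.2 * rng.normal()  # measurement equation
    if t + 1 < n:                                         # volatility equation
        log_s2[t + 1] = omega + beta * log_s2[t] + gamma * log_x[t]

corr = float(np.corrcoef(log_x, log_s2)[0, 1])
print(f"corr(log realized measure, log variance) = {corr:.2f}")
```

Persistence here is beta + gamma * phi = 0.95, below one, so the process is stationary; the realized measure co-moves with the latent variance, which is what makes it informative in the measurement equation.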

  17. A neural population model incorporating dopaminergic neurotransmission during complex voluntary behaviors.

    Directory of Open Access Journals (Sweden)

    Stefan Fürtinger

    2014-11-01

    Full Text Available Assessing brain activity during complex voluntary motor behaviors that require the recruitment of multiple neural sites is a field of active research. Our current knowledge is primarily based on human brain imaging studies that have clear limitations in terms of temporal and spatial resolution. We developed a physiologically informed non-linear multi-compartment stochastic neural model to simulate functional brain activity coupled with neurotransmitter release during complex voluntary behavior, such as speech production. Due to its state-dependent modulation of neural firing, dopaminergic neurotransmission plays a key role in the organization of functional brain circuits controlling speech and language and thus has been incorporated in our neural population model. A rigorous mathematical proof establishing existence and uniqueness of solutions to the proposed model as well as a computationally efficient strategy to numerically approximate these solutions are presented. Simulated brain activity during the resting state and sentence production was analyzed using functional network connectivity, and graph theoretical techniques were employed to highlight differences between the two conditions. We demonstrate that our model successfully reproduces characteristic changes seen in empirical data between the resting state and speech production, and dopaminergic neurotransmission evokes pronounced changes in modeled functional connectivity by acting on the underlying biological stochastic neural model. Specifically, model and data networks in both speech and rest conditions share task-specific network features: both the simulated and empirical functional connectivity networks show an increase in nodal influence and segregation in speech over the resting state. These commonalities confirm that dopamine is a key neuromodulator of the functional connectome of speech control. 
Based on reproducible characteristic aspects of empirical data, we suggest a number

  18. Incorporating pushing in exclusion-process models of cell migration.

    Science.gov (United States)

    Yates, Christian A; Parker, Andrew; Baker, Ruth E

    2015-05-01

    The macroscale movement behavior of a wide range of isolated migrating cells has been well characterized experimentally. Recently, attention has turned to understanding the behavior of cells in crowded environments. In such scenarios it is possible for cells to interact, inducing neighboring cells to move in order to make room for their own movements or progeny. Although the behavior of interacting cells has been modeled extensively through volume-exclusion processes, few models, thus far, have explicitly accounted for the ability of cells to actively displace each other in order to create space for themselves. In this work we consider both on- and off-lattice volume-exclusion position-jump processes in which cells are explicitly allowed to induce movements in their near neighbors in order to create space for themselves to move or proliferate into. We refer to this behavior as pushing. From these simple individual-level representations we derive continuum partial differential equations for the average occupancy of the domain. We find that, for limited amounts of pushing, comparison between the averaged individual-level simulations and the population-level model is nearly as good as in the scenario without pushing. Interestingly, we find that, in the on-lattice case, the diffusion coefficient of the population-level model is increased by pushing, whereas, for the particular off-lattice model that we investigate, the diffusion coefficient is reduced. We conclude, therefore, that it is important to consider carefully the appropriate individual-level model to use when representing complex cell-cell interactions such as pushing.
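A bare-bones 1-D on-lattice version of the pushing mechanism can be written as follows (lattice size, density, and the pushing rule details are illustrative, not the paper's exact model): an agent whose target site is occupied may, with probability `p_push`, shove the blocking agent one site further on (if that site is free) and take its place.

```python
import numpy as np

def simulate(p_push, n_steps=10000, size=200, n_agents=60, seed=5):
    """1-D periodic exclusion process; returns (mean squared displacement,
    total number of agent relocations)."""
    rng = np.random.default_rng(seed)
    pos = rng.choice(size, n_agents, replace=False)
    occ = np.zeros(size, dtype=bool)
    occ[pos] = True
    start = pos.copy()
    n_moves = 0
    for _ in range(n_steps):
        i = rng.integers(n_agents)
        d = 2 * int(rng.integers(2)) - 1          # step left or right
        tgt = (pos[i] + d) % size
        if not occ[tgt]:                          # ordinary exclusion move
            occ[pos[i]] = False; occ[tgt] = True; pos[i] = tgt
            n_moves += 1
        elif rng.random() < p_push:               # attempt to push the blocker
            beyond = (tgt + d) % size
            if not occ[beyond]:
                j = int(np.where(pos == tgt)[0][0])   # agent being pushed
                occ[beyond] = True; occ[pos[i]] = False
                pos[j] = beyond; pos[i] = tgt
                n_moves += 2                      # both agents relocated
    disp = (pos - start + size // 2) % size - size // 2   # unwrap periodic
    return float(np.mean(disp.astype(float) ** 2)), n_moves

msd_no_push, moves_no_push = simulate(p_push=0.0)
msd_push, moves_push = simulate(p_push=1.0)
print(msd_no_push, msd_push, moves_no_push, moves_push)
```

Pushing strictly adds successful relocations at a given density, which is the microscopic origin of the increased on-lattice diffusion coefficient reported above.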

  19. Incorporating spiritual beliefs into a cognitive model of worry.

    Science.gov (United States)

    Rosmarin, David H; Pirutinsky, Steven; Auerbach, Randy P; Björgvinsson, Thröstur; Bigda-Peyton, Joseph; Andersson, Gerhard; Pargament, Kenneth I; Krumrei, Elizabeth J

    2011-07-01

    Cognitive theory and research have traditionally highlighted the relevance of the core beliefs about oneself, the world, and the future to human emotions. For some individuals, however, core beliefs may also explicitly involve spiritual themes. In this article, we propose a cognitive model of worry, in which positive/negative beliefs about the Divine affect symptoms through the mechanism of intolerance of uncertainty. Using mediation analyses, we found support for our model across two studies, in particular, with regards to negative spiritual beliefs. These findings highlight the importance of assessing for spiritual alongside secular convictions when creating cognitive-behavioral case formulations in the treatment of religious individuals. © 2011 Wiley Periodicals, Inc.

  20. Modelling toluene oxidation : Incorporation of mass transfer phenomena

    NARCIS (Netherlands)

    Hoorn, J.A.A.; van Soolingen, J.; Versteeg, G. F.

    The kinetics of the oxidation of toluene have been studied in close interaction with the gas-liquid mass transfer occurring in the reactor. Kinetic parameters for a simple model have been estimated on basis of experimental observations performed under industrial conditions. The conclusions for the

  1. Incorporating pion effects into the naive quark model

    International Nuclear Information System (INIS)

    Nogami, Y.; Ohtuska, N.

    1982-01-01

    A hybrid of the naive nonrelativistic quark model and the Chew-Low model is proposed. The pion is treated as an elementary particle which interacts with the ''bare baryon'' or ''baryon core'' via the Chew-Low interaction. The baryon core, which is the source of the pion interaction, is described by the naive nonrelativistic quark model. It turns out that the baryon-core radius has to be as large as 0.8 fm, and consequently the cutoff momentum Λ for the pion interaction is ≲ 3m_π, m_π being the pion mass. Because of this small Λ (as compared with Λ ≈ nucleon mass in the old Chew-Low model) the effects of the pion cloud are strongly suppressed. The baryon masses, baryon magnetic moments, and the nucleon charge radii can be reproduced quite well. However, we found it singularly difficult to fit the axial-vector weak decay constant g_A

  2. Making Invasion models useful for decision makers; incorporating uncertainty, knowledge gaps, and decision-making preferences

    Science.gov (United States)

    Denys Yemshanov; Frank H Koch; Mark Ducey

    2015-01-01

    Uncertainty is inherent in model-based forecasts of ecological invasions. In this chapter, we explore how the perceptions of that uncertainty can be incorporated into the pest risk assessment process. Uncertainty changes a decision maker’s perceptions of risk; therefore, the direct incorporation of uncertainty may provide a more appropriate depiction of risk. Our...

  3. Workforce scheduling: A new model incorporating human factors

    Directory of Open Access Journals (Sweden)

    Mohammed Othman

    2012-12-01

    Full Text Available Purpose: The majority of a company's improvement comes when the right workers with the right skills, behaviors and capacities are deployed appropriately throughout a company. This paper considers a workforce scheduling model including human aspects such as skills, training, workers' personalities, workers' breaks, and workers' fatigue and recovery levels. This model helps to minimize the hiring, firing, training and overtime costs, minimize the number of fired workers with high performance, minimize the break time and minimize the average worker's fatigue level.
    Design/methodology/approach: To achieve this objective, a multi-objective mixed integer programming model is developed to determine the amount of hiring, firing, training and overtime for each worker type.
    Findings: The results indicate that worker differences should be considered in workforce scheduling to generate realistic plans with minimum costs. This paper also investigates the effects of human fatigue and recovery on the performance of production systems.
    Research limitations/implications: In this research, there are some assumptions that might affect the accuracy of the model, such as the assumption of certain demand in each period and the linearity of the fatigue accumulation and recovery curves. These assumptions can be relaxed in future work.
    Originality/value: In this research, a new model integrating workers' differences into workforce scheduling is proposed. To the authors' knowledge, it is the first time the effects of important human factors such as personality, skills, and fatigue and recovery have been studied in the workforce scheduling process. This research shows that considering technical and human factors together can reduce costs in manufacturing systems and ensure the safety of the workers.

  4. Incorporating grassland management in a global vegetation model

    Science.gov (United States)

    Chang, Jinfeng; Viovy, Nicolas; Vuichard, Nicolas; Ciais, Philippe; Wang, Tao; Cozic, Anne; Lardy, Romain; Graux, Anne-Isabelle; Klumpp, Katja; Martin, Raphael; Soussana, Jean-François

    2013-04-01

    Grassland is a widespread vegetation type, covering nearly one-fifth of the world's land surface (24 million km2), and playing a significant role in the global carbon (C) cycle. Most grasslands in Europe are cultivated to feed animals, either directly by grazing or indirectly by grass harvest (cutting). A better understanding of the C fluxes from grassland ecosystems in response to climate and management requires not only field experiments but also the aid of simulation models. The ORCHIDEE process-based ecosystem model, designed for large-scale applications, treats grasslands as unmanaged, with C / water fluxes subject only to atmospheric CO2 and climate changes. Our study describes how management of grasslands is included in ORCHIDEE, and how management affects modeled grassland-atmosphere CO2 fluxes. The new model, ORCHIDEE-GM (Grassland Management), is equipped with a management module inspired by a grassland model (PaSim, version 5.0) and accounts for two grassland management practices (cutting and grazing). The evaluation of ORCHIDEE-GM against ORCHIDEE at 11 European sites equipped with eddy covariance and biometric measurements shows that ORCHIDEE-GM realistically captures the cut-induced seasonal variation in biometric variables (LAI: Leaf Area Index; AGB: Aboveground Biomass) and in CO2 fluxes (GPP: Gross Primary Productivity; TER: Total Ecosystem Respiration; and NEE: Net Ecosystem Exchange). Improvements at grazing sites are only marginal, however, which relates to the difficulty of accounting for continuous grazing disturbance and the complex animal-vegetation interactions it induces. Both NEE and GPP on monthly to annual timescales are better simulated in ORCHIDEE-GM than in ORCHIDEE without management. At some sites, the model-observation misfit in ORCHIDEE-GM is found to be more related to ill-constrained parameter values than to model structure. 
Additionally, ORCHIDEE-GM is able to simulate

  5. Incorporating Satellite Time-Series Data into Modeling

    Science.gov (United States)

    Gregg, Watson

    2008-01-01

    In situ time series observations have provided a multi-decadal view of long-term changes in ocean biology. These observations are sufficiently reliable to enable discernment of even relatively small changes, and provide continuous information on a host of variables. Their key drawback is their limited domain. Satellite observations from ocean color sensors do not suffer this drawback of domain, and simultaneously view the global oceans. This attribute lends credence to their use in global and regional model validation and data assimilation. We focus on these applications using the NASA Ocean Biogeochemical Model. The enhancement of the satellite data using data assimilation is featured, and the limitation of long-term satellite data sets is also discussed.

  6. Incorporating Contagion in Portfolio Credit Risk Models Using Network Theory

    Directory of Open Access Journals (Sweden)

    Ioannis Anagnostou

    2018-01-01

    Full Text Available Portfolio credit risk models estimate the range of potential losses due to defaults or deteriorations in credit quality. Most of these models perceive default correlation as fully captured by the dependence on a set of common underlying risk factors. In light of empirical evidence, the ability of such a conditional independence framework to accommodate the occasional default clustering has been questioned repeatedly. Thus, financial institutions have relied on stressed correlations or alternative copulas with more extreme tail dependence. In this paper, we propose a different remedy: augmenting systematic risk factors with a contagious default mechanism which affects the entire universe of credits. We construct credit stress propagation networks and calibrate contagion parameters for infectious defaults. The resulting framework is implemented on synthetic test portfolios wherein the contagion effect is shown to have a significant impact on the tails of the loss distributions.
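A toy Monte Carlo version of contagious defaults on a network can illustrate the tail effect (the network density, default probability, and contagion multiplier are invented, not the paper's calibration): each obligor defaults independently in a first round, and every default then raises the default probability of its network neighbours.

```python
import numpy as np

# Random sparse "credit stress propagation" network on 50 obligors.
rng = np.random.default_rng(9)
n, p, contagion = 50, 0.05, 4.0
A = rng.random((n, n)) < 0.08
A = np.triu(A, 1)
A = A | A.T                      # symmetric adjacency, no self-loops

def simulate_losses(n_sims, contagious):
    """Portfolio losses with unit exposures and zero recovery."""
    losses = np.empty(n_sims)
    for s in range(n_sims):
        defaulted = rng.random(n) < p                       # first round
        if contagious:
            exposed = A[defaulted].any(axis=0) & ~defaulted # neighbours of defaults
            defaulted |= exposed & (rng.random(n) < min(1.0, contagion * p))
        losses[s] = defaulted.sum()
    return losses

base = simulate_losses(5000, contagious=False)
cont = simulate_losses(5000, contagious=True)
var99 = lambda x: float(np.quantile(x, 0.99))
print(f"99% loss quantile: {var99(base):.0f} without vs "
      f"{var99(cont):.0f} with contagion")
```

Even this one-round infection mechanism fattens the loss tail relative to the independent-defaults baseline, which is the qualitative effect the paper quantifies with calibrated propagation networks.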

  7. Multiple model adaptive control with mixing

    Science.gov (United States)

    Kuipers, Matthew

    Despite the remarkable theoretical accomplishments and successful applications of adaptive control, the field is not sufficiently mature to solve challenging control problems requiring strict performance and safety guarantees. Towards addressing these issues, a novel deterministic multiple-model adaptive control approach called adaptive mixing control is proposed. In this approach, adaptation comes from a high-level system called the supervisor that mixes into feedback a number of candidate controllers, each finely tuned to a subset of the parameter space. The mixing signal, the supervisor's output, is generated by estimating the unknown parameters and, at every instant of time, calculating the contribution level of each candidate controller based on certainty equivalence. The proposed architecture provides two characteristics relevant to solving stringent, performance-driven applications. First, the full suite of linear time-invariant control tools is available. A disadvantage of conventional adaptive control is its restriction to utilizing only those control laws whose solutions can be feasibly computed in real time, such as model reference and pole-placement type controllers. Because its candidate controllers are computed offline, the proposed approach suffers no such restriction. Second, the supervisor's output is smooth and does not necessarily depend on explicit a priori knowledge of the disturbance model. These characteristics can lead to improved performance by avoiding the unnecessary switching and chattering behaviors associated with some other multiple-model adaptive control approaches. The stability and robustness properties of the adaptive scheme are analyzed. It is shown that the mean-square regulation error is of the order of the modeling error, and when the parameter estimate converges to its true value, which is guaranteed if a persistence of excitation condition is satisfied, the adaptive closed-loop system converges exponentially fast to a closed
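The supervisor-plus-mixing idea can be sketched on a scalar plant (everything here is invented for illustration: the plant x_{k+1} = a x_k + u_k, the two candidate gains, the recursive least-squares estimator, and the linear mixing weight): the unknown parameter is estimated online and two pre-computed candidate gains are blended smoothly instead of switched.

```python
import numpy as np

# Scalar plant with unknown, unstable parameter a_true.
rng = np.random.default_rng(2)
a_true = 1.3
gains = {0.8: -0.8, 1.6: -1.6}   # candidate deadbeat gains, one per regime
a_hat, P = 0.0, 100.0            # recursive least-squares state

x, u = 5.0, 0.0
traj = []
for k in range(200):
    x_next = a_true * x + u + 0.01 * rng.normal()
    # RLS update of a_hat from the regression (x_next - u) = a * x
    if abs(x) > 1e-6:
        K = P * x / (1.0 + P * x * x)
        a_hat += K * ((x_next - u) - a_hat * x)
        P *= 1.0 - K * x
    # Smooth mixing weight: 0 in the a=0.8 regime, 1 in the a=1.6 regime
    w = float(np.clip((a_hat - 0.8) / 0.8, 0.0, 1.0))
    gain = (1 - w) * gains[0.8] + w * gains[1.6]   # blended controller
    x = x_next
    u = gain * x
    traj.append(abs(x))

print(f"a_hat = {a_hat:.3f}, final |x| = {traj[-1]:.4f}")
```

Because the mixing weight varies continuously with the estimate, the applied gain moves smoothly between the candidates, avoiding the chattering that hard switching between them could cause.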

  8. Incorporation of intraocular scattering in schematic eye models

    International Nuclear Information System (INIS)

    Navarro, R.

    1985-01-01

    Beckmann's theory of scattering from rough surfaces is applied to obtain, from the experimental veiling glare functions, a diffuser that when placed at the pupil plane would produce the same scattering halo as the ocular media. This equivalent diffuser is introduced in a schematic eye model, and its influence on the point-spread function and the modulation-transfer function of the eye is analyzed

  9. Constitutive modeling of coronary artery bypass graft with incorporated torsion

    Czech Academy of Sciences Publication Activity Database

    Horný, L.; Chlup, Hynek; Žitný, R.; Adámek, T.

    2009-01-01

    Vol. 49, No. 2 (2009), pp. 273-277 ISSN 0543-5846 R&D Projects: GA ČR(CZ) GA106/08/0557 Institutional research plan: CEZ:AV0Z20760514 Keywords: coronary artery bypass graft * constitutive model * digital image correlation Subject RIV: BJ - Thermodynamics Impact factor: 0.439, year: 2009 http://web.tuke.sk/sjf-kamam/mmams2009/contents.pdf

  10. Incorporation of ice sheet models into an Earth system model: Focus on methodology of coupling

    Science.gov (United States)

    Rybak, Oleg; Volodin, Evgeny; Morozova, Polina; Nevecherja, Artiom

    2018-03-01

    Elaboration of a modern Earth system model (ESM) requires incorporation of ice sheet dynamics. Coupling of an ice sheet model (ICM) to an AOGCM is complicated by essential differences in the spatial and temporal scales of the cryospheric, atmospheric and oceanic components. To overcome this difficulty, we apply two different approaches for the incorporation of ice sheets into an ESM. Coupling of the Antarctic ice sheet model (AISM) to the AOGCM is accomplished via procedures of resampling, interpolation and assigning annually averaged values of the air surface temperature and precipitation fields generated by the AOGCM to the AISM grid points. Surface melting, which takes place mainly on the margins of the Antarctic peninsula and on ice shelves fringing the continent, is currently ignored. The AISM returns anomalies of surface topography back to the AOGCM. To couple the Greenland ice sheet model (GrISM) to the AOGCM, we use a simple buffer energy- and water-balance model (EWBM-G) to account for orographically driven precipitation and other sub-grid AOGCM-generated quantities. The output of the EWBM-G consists of surface mass balance and air surface temperature to force the GrISM, and freshwater run-off to force thermohaline circulation in the oceanic block of the AOGCM. Because the coupling procedure for the GrIS is considerably more complex than that for the AIS, the paper mostly focuses on Greenland.

  11. Incorporating H2 Dynamics and Inhibition into a Microbially Based Methanogenesis Model for Restored Wetland Sediments

    Science.gov (United States)

    Pal, David; Jaffe, Peter

    2015-04-01

    Estimates of global CH4 emissions from wetlands indicate that wetlands are the largest natural source of CH4 to the atmosphere. In this paper, we propose that there is a missing component to these models that should be addressed. CH4 is produced in wetland sediments from the microbial degradation of organic carbon through multiple fermentation steps and methanogenesis pathways. There are multiple sources of carbon for methanogenesis; in vegetated wetland sediments, microbial communities consume root exudates as a major source of organic carbon. In many methane models propionate is used as a model carbon molecule. This molecule is fermented into acetate and H2; the acetate is transformed to methane and CO2, while the H2 and CO2 are used to form an additional CH4 molecule. The hydrogenotrophic pathway thus involves the equilibrium of two dissolved gases, CH4 and H2. In an effort to limit CH4 emissions from wetlands, there has been growing interest in finding ways to limit plant transport of soil gases through root systems. Changing planted species, or genetically modifying new species of plants, may control this transport of soil gases. While this may decrease the direct emissions of methane, there is little understanding of how H2 dynamics may feed back into overall methane production. The results of an incubation study were combined with a new model of propionate degradation for methanogenesis that also examines other natural parameters (i.e. gas transport through plants). This presentation examines how we would expect this model to behave in a natural field setting with changing sulfate and carbon loading schemes. These changes can be controlled through new plant species and other management practices. Next, we compare the behavior of two variations of this model, with or without the incorporation of H2 interactions, under changing sulfate, carbon loading and root volatilization. 
Results show that while the models behave similarly there may be a discrepancy of nearly
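The propionate pathway with H2 feedback can be sketched as a small kinetic system (all rate constants, the inhibition form, and the venting parameter are invented, not the calibrated model): propionate ferments to acetate plus 3 H2, acetate feeds acetoclastic methanogenesis, H2 feeds hydrogenotrophic methanogenesis (4 H2 per CH4), fermentation is inhibited by accumulating H2, and `k_vent` mimics plant-mediated transport of H2 out of the sediment.

```python
import numpy as np  # only for consistency with the other sketches

def run(k_vent, dt=0.01, n_steps=20000):
    """Forward-Euler integration of a toy propionate -> CH4 pathway."""
    prop, ac, h2, ch4 = 1.0, 0.0, 0.0, 0.0
    k_ferm, k_ac, k_h2, K_inh = 0.5, 0.8, 2.0, 0.05
    for _ in range(n_steps):
        ferm = k_ferm * prop * K_inh / (K_inh + h2)  # H2-inhibited fermentation
        r_ac = k_ac * ac                             # acetoclastic pathway
        r_h2 = k_h2 * h2                             # hydrogenotrophic pathway
        prop += dt * (-ferm)
        ac += dt * (ferm - r_ac)
        h2 += dt * (3 * ferm - 4 * r_h2 - k_vent * h2)
        ch4 += dt * (r_ac + r_h2)
    return h2, ch4

h2_vented, ch4_vented = run(k_vent=5.0)   # strong plant-mediated H2 export
h2_closed, ch4_closed = run(k_vent=0.0)   # no venting: H2 stays in sediment
print(ch4_vented, ch4_closed)
```

With these invented rates, venting H2 diverts part of the hydrogenotrophic substrate out of the sediment and lowers the final CH4 yield below the closed-system stoichiometric limit of 1.75 CH4 per propionate, illustrating why ignoring H2 transport can bias modeled methane production.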

  12. Models of microbiome evolution incorporating host and microbial selection.

    Science.gov (United States)

    Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen

    2017-09-25

    Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. 
Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong
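A bare-bones sketch of this framework (all parameters invented, and only "trait-mediated microbial selection" implemented): each host's offspring microbiome is sampled partly from its parent and partly from an environmental pool fed by the previous host generation, with OTU fitness values weighting the sampling when selection is on.

```python
import numpy as np

rng = np.random.default_rng(13)
n_hosts, n_otus, microbiome_size, n_gens = 30, 40, 100, 200
otu_fitness = np.exp(rng.normal(0, 0.5, n_otus))   # microbial selection weights

def sample(p):
    """Draw one microbiome of fixed size from OTU probabilities p."""
    return np.bincount(rng.choice(n_otus, size=microbiome_size, p=p),
                       minlength=n_otus)

def evolve(selective, parental_contribution=0.5):
    hosts = np.array([sample(np.ones(n_otus) / n_otus) for _ in range(n_hosts)])
    env = np.ones(n_otus)                          # environmental OTU pool
    for _ in range(n_gens):
        pool = (parental_contribution * hosts / microbiome_size
                + (1 - parental_contribution) * env / env.sum())
        if selective:                              # trait-mediated selection
            pool = pool * otu_fitness
        pool = pool / pool.sum(axis=1, keepdims=True)
        hosts = np.array([sample(p) for p in pool])
        env = hosts.sum(axis=0) + 1e-6             # hosts feed next env pool
    freqs = hosts.sum(axis=0) / hosts.sum()
    freqs = freqs[freqs > 0]
    return float(-(freqs * np.log(freqs)).sum())   # Shannon diversity

div_neutral = evolve(selective=False)
div_selected = evolve(selective=True)
print(f"Shannon diversity: neutral {div_neutral:.2f}, "
      f"with microbial selection {div_selected:.2f}")
```

Compounding the fitness weights over generations drives the community toward the fittest OTUs, so diversity falls well below the neutral run, mirroring the paper's finding that microbiome diversity is strongly influenced by selection acting on microbes.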

  13. Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models

    Science.gov (United States)

    Duffy, Stephen F.

    1997-01-01

    Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective, the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve, a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for "graceful" rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of their constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is borne increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics, the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, Ni

  14. Incorporation of particle creation and annihilation into Bohm's Pilot Wave model

    Energy Technology Data Exchange (ETDEWEB)

    Sverdlov, Roman [Raman Research Institute, C.V. Raman Avenue, Sadashiva Nagar, Bangalore, Karnataka, 560080 (India)

    2011-07-08

    The purpose of this paper is to come up with a Pilot Wave model of quantum field theory that incorporates particle creation and annihilation without sacrificing determinism; this theory is subsequently coupled with gravity.

  15. INCORPORATION OF MECHANISTIC INFORMATION IN THE ARSENIC PBPK MODEL DEVELOPMENT PROCESS

    Science.gov (United States)

    INCORPORATING MECHANISTIC INSIGHTS IN A PBPK MODEL FOR ARSENICElaina M. Kenyon, Michael F. Hughes, Marina V. Evans, David J. Thomas, U.S. EPA; Miroslav Styblo, University of North Carolina; Michael Easterling, Analytical Sciences, Inc.A physiologically based phar...

  16. High-Strain Rate Failure Modeling Incorporating Shear Banding and Fracture

    Science.gov (United States)

    2017-11-22

    High Strain Rate Failure Modeling Incorporating Shear Banding and Fracture. Report as of 05-Dec-2017. Agreement Number: W911NF-13-1-0238. Organization: Columbia University. The views, opinions and/or findings contained in this report are those of...

  17. Model Pembelajaran Berbasis Penstimulasian Multiple Intelligences Siswa

    OpenAIRE

    Edy Legowo

    2017-01-01

    This paper discusses the application of multiple intelligences theory to learning in schools. The discussion begins by describing the development of the concepts of intelligence and multiple intelligences, followed by an explanation of the impact of multiple intelligences theory on education and school learning. The next section describes the implementation of multiple intelligences theory in classroom teaching practice, namely how student learning experiences facilita...

  18. The prediction of type 1 diabetes by multiple autoantibody levels and their incorporation into an autoantibody risk score in relatives of type 1 diabetic patients.

    Science.gov (United States)

    Sosenko, Jay M; Skyler, Jay S; Palmer, Jerry P; Krischer, Jeffrey P; Yu, Liping; Mahon, Jeffrey; Beam, Craig A; Boulware, David C; Rafkin, Lisa; Schatz, Desmond; Eisenbarth, George

    2013-09-01

    We assessed whether a risk score that incorporates levels of multiple islet autoantibodies could enhance the prediction of type 1 diabetes (T1D). TrialNet Natural History Study participants (n = 784) were tested for three autoantibodies (GADA, IA-2A, and mIAA) at their initial screening. Samples from those positive for at least one autoantibody were subsequently tested for ICA and ZnT8A. An autoantibody risk score (ABRS) was developed from a proportional hazards model that combined autoantibody levels from each autoantibody along with their designations of positivity and negativity. The ABRS was strongly predictive of T1D (hazard ratio [with 95% CI] 2.72 [2.23-3.31], P < 0.001). Receiver operating characteristic curve areas (with 95% CI) for the ABRS revealed good predictability (0.84 [0.78-0.90] at 2 years, 0.81 [0.74-0.89] at 3 years, P < 0.001 for both). The composite of levels from the five autoantibodies was predictive of T1D before and after an adjustment for the positivity or negativity of autoantibodies (P < 0.001). The findings were almost identical when ICA was excluded from the risk score model. The combination of the ABRS and the previously validated Diabetes Prevention Trial-Type 1 Risk Score (DPTRS) predicted T1D more accurately (0.93 [0.88-0.98] at 2 years, 0.91 [0.83-0.99] at 3 years) than either the DPTRS or the ABRS alone (P ≤ 0.01 for all comparisons). These findings show the importance of considering autoantibody levels in assessing the risk of T1D. Moreover, levels of multiple autoantibodies can be incorporated into an ABRS that accurately predicts T1D.
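
    The record above describes combining autoantibody levels and positivity flags into a single risk score via a proportional hazards model. A minimal sketch of that construction follows; the coefficient values, feature names, and example titres are hypothetical placeholders, not the fitted weights from the TrialNet study:

```python
import math

# Hypothetical coefficients standing in for fitted proportional-hazards
# weights; the abstract reports the model form, not these values.
COEF = {
    "GADA_level": 0.45, "IA2A_level": 0.60, "mIAA_level": 0.30,
    "ICA_level": 0.20, "ZnT8A_level": 0.50,
    "GADA_pos": 0.25, "IA2A_pos": 0.55, "mIAA_pos": 0.15,
    "ICA_pos": 0.20, "ZnT8A_pos": 0.35,
}

def abrs(features):
    """Linear predictor of a Cox model: sum of coefficient * covariate.
    Positivity flags are 0/1; levels are continuous titres."""
    return sum(COEF[k] * v for k, v in features.items())

def hazard_ratio(score_a, score_b):
    """Relative hazard implied by two linear predictors."""
    return math.exp(score_a - score_b)

subject = {"GADA_level": 1.2, "IA2A_level": 0.8, "mIAA_level": 0.1,
           "ICA_level": 0.0, "ZnT8A_level": 0.9,
           "GADA_pos": 1, "IA2A_pos": 1, "mIAA_pos": 0,
           "ICA_pos": 0, "ZnT8A_pos": 1}
print(round(abrs(subject), 3))  # → 2.65
```

    Exponentiating the difference of two linear predictors gives the implied hazard ratio between subjects, which is how a reported hazard ratio per unit of risk score is interpreted.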

  19. Application of multiple objective models to water resources planning and management

    International Nuclear Information System (INIS)

    North, R.M.

    1993-01-01

    Over the past 30 years, we have seen the birth and growth of multiple objective analysis from an idea without tools to one with useful applications. Models have been developed and applications have been researched to address the multiple purposes and objectives inherent in the development and management of water resources. A practical approach to multiple objective modelling incorporates macroeconomic-based policies and expectations in order to optimize the results from both engineering (structural) and management (non-structural) alternatives, while taking into account the economic and environmental trade-offs. (author). 27 refs, 4 figs, 3 tabs

  20. Developing Baltic cod recruitment models II : Incorporation of environmental variability and species interaction

    DEFF Research Database (Denmark)

    Köster, Fritz; Hinrichsen, H.H.; St. John, Michael

    2001-01-01

    We investigate whether a process-oriented approach based on the results of field, laboratory, and modelling studies can be used to develop a stock-environment-recruitment model for Central Baltic cod (Gadus morhua). Based on exploratory statistical analysis, significant variables influencing survival of early life stages and varying systematically among spawning sites were incorporated into stock-recruitment models, first for major cod spawning sites and then combined for the entire Central Baltic. Variables identified included potential egg production by the spawning stock, abiotic conditions... cod in these areas, suggesting that key biotic and abiotic processes can be successfully incorporated into recruitment models.

  1. Model Pembelajaran Berbasis Penstimulasian Multiple Intelligences Siswa

    Directory of Open Access Journals (Sweden)

    Edy Legowo

    2017-03-01

    Full Text Available This paper discusses the application of multiple intelligences theory to learning in schools. The discussion begins by describing the development of the concepts of intelligence and multiple intelligences, followed by an explanation of the impact of multiple intelligences theory on education and school learning. The next section describes the implementation of multiple intelligences theory in classroom teaching practice, namely how student learning experiences facilitated by the teacher can stimulate students' multiple intelligences. From the perspective of multiple intelligences theory, evaluation of student learning outcomes should be carried out using authentic assessment and portfolios, which better enable students to express or actualize their learning outcomes in a variety of ways according to the strengths of their particular intelligences.

  2. Multiple Temperature Model for Near Continuum Flows

    International Nuclear Information System (INIS)

    XU, Kun; Liu, Hongwei; Jiang, Jianzheng

    2007-01-01

    In the near continuum flow regime, the flow may have different translational temperatures in different directions. It is well known that for increasingly rarefied flow fields, the predictions from continuum formulation, such as the Navier-Stokes equations, lose accuracy. These inaccuracies may be partially due to the single temperature assumption in the Navier-Stokes equations. Here, based on the gas-kinetic Bhatnagar-Gross-Krook (BGK) equation, a multitranslational temperature model is proposed and used in the flow calculations. In order to fix all three translational temperatures, two constraints are additionally proposed to model the energy exchange in different directions. Based on the multiple temperature assumption, the Navier-Stokes relation between the stress and strain is replaced by the temperature relaxation term, and the Navier-Stokes assumption is recovered only in the limiting case when the flow is close to the equilibrium with the same temperature in different directions. In order to validate the current model, both the Couette and Poiseuille flows are studied in the transition flow regime
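
    The directional-temperature relaxation idea can be illustrated with a toy sketch: three translational temperatures relax toward their common mean, recovering the single-temperature (equilibrium) limit. The time step and relaxation time below are arbitrary illustrative values, not quantities from the BGK model itself:

```python
# Toy illustration of the multiple-translational-temperature idea: three
# directional temperatures relax toward their common (equilibrium) mean.
def relax_temperatures(T, dt, tau):
    """One explicit Euler step of dT_i/dt = (T_mean - T_i) / tau."""
    t_mean = sum(T) / len(T)
    return [ti + dt * (t_mean - ti) / tau for ti in T]

T = [400.0, 300.0, 260.0]          # Tx, Ty, Tz out of equilibrium
for _ in range(200):
    T = relax_temperatures(T, dt=0.01, tau=0.1)
print([round(t, 3) for t in T])    # → [320.0, 320.0, 320.0]
```

    The relaxation conserves the mean temperature (total translational energy) while driving the anisotropy to zero, which is the single-temperature Navier-Stokes limit mentioned in the abstract.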

  3. A computational model incorporating neural stem cell dynamics reproduces glioma incidence across the lifespan in the human population.

    Directory of Open Access Journals (Sweden)

    Roman Bauer

    Full Text Available Glioma is the most common form of primary brain tumor. Demographically, the risk of occurrence increases until old age. Here we present a novel computational model to reproduce the probability of glioma incidence across the lifespan. Previous mathematical models explaining glioma incidence are framed in a rather abstract way, and do not directly relate to empirical findings. To decrease this gap between theory and experimental observations, we incorporate recent data on cellular and molecular factors underlying gliomagenesis. Since evidence implicates the adult neural stem cell as the likely cell-of-origin of glioma, we have incorporated empirically-determined estimates of neural stem cell number, cell division rate, mutation rate and oncogenic potential into our model. We demonstrate that our model yields results which match actual demographic data in the human population. In particular, this model accounts for the observed peak incidence of glioma at approximately 80 years of age, without the need to assert differential susceptibility throughout the population. Overall, our model supports the hypothesis that glioma is caused by randomly-occurring oncogenic mutations within the neural stem cell population. Based on this model, we assess the influence of the (experimentally indicated) decrease in the number of neural stem cells and increase in cell division rate during aging. Our model provides multiple testable predictions, and suggests that different temporal sequences of oncogenic mutations can lead to tumorigenesis. Finally, we conclude that four or five oncogenic mutations are sufficient for the formation of glioma.
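
    The mutation-accumulation argument can be sketched with a simple closed-form calculation: if each stem-cell lineage accumulates oncogenic hits as a Poisson process, glioma probability by a given age is the chance that at least one lineage has reached the required number of hits. All parameter values below are illustrative placeholders, not the empirically-determined estimates used in the paper:

```python
import math

def poisson_tail(lam, k):
    """P[X >= k] for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

def p_glioma_by_age(age, n_cells=1e5, div_per_year=10.0, mu=1e-4, k_hits=5):
    """Probability that at least one stem cell has accumulated k_hits
    oncogenic mutations by the given age (all parameters are toy values)."""
    lam = mu * div_per_year * age            # expected hits per cell lineage
    p_cell = poisson_tail(lam, k_hits)       # one lineage transformed
    return 1.0 - (1.0 - p_cell) ** n_cells   # at least one of n_cells

# under these toy parameters, risk rises steeply late in life
print(p_glioma_by_age(40) < p_glioma_by_age(80))
```

    Even this stripped-down version reproduces the qualitative point of the abstract: a multi-hit requirement makes incidence climb sharply with age without any differential susceptibility in the population.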

  4. Incorporating wind availability into land use regression modelling of air quality in mountainous high-density urban environment.

    Science.gov (United States)

    Shi, Yuan; Lau, Kevin Ka-Lun; Ng, Edward

    2017-08-01

    Urban air quality is an important determinant of the quality of urban life. Land use regression (LUR) modelling of air quality is essential for conducting health impact assessments but is more challenging in a mountainous high-density urban scenario due to the complexities of the urban environment. In this study, a total of 21 LUR models are developed for seven air pollutants (the gaseous pollutants CO, NO2, NOx, O3 and SO2, and the particulate pollutants PM2.5 and PM10) with reference to three different time periods (summertime, wintertime and the annual average of 5-year long-term hourly monitoring data from the local air quality monitoring network) in Hong Kong. Under the mountainous high-density urban scenario, we improved the traditional LUR modelling method by incorporating wind availability information into LUR modelling based on surface geomorphometrical analysis. As a result, 269 independent variables were examined to develop the LUR models by using the "ADDRESS" independent variable selection method and stepwise multiple linear regression (MLR). Cross validation has been performed for each resultant model. The results show that wind-related variables are included in most of the resultant models as statistically significant independent variables. Compared with the traditional method, a maximum increase of 20% was achieved in the prediction performance of the annual averaged NO2 concentration level by incorporating wind-related variables into LUR model development. Copyright © 2017 Elsevier Inc. All rights reserved.
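
    The core of an LUR model is a multiple linear regression of measured concentrations on land-use predictors (here augmented with wind availability). The sketch below fits ordinary least squares via the normal equations on invented data; the predictor names, values, and recovered coefficients are illustrative only, whereas the actual study screens 269 candidate variables with stepwise selection:

```python
# Minimal land-use-regression sketch: ordinary least squares solved by
# Gaussian elimination on the normal equations (X^T X) b = X^T y.
def ols_fit(X, y):
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for i in range(p):                      # forward elimination with pivoting
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0] * p
    for i in reversed(range(p)):            # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, p))) / A[i][i]
    return coef

# columns: intercept, road density, wind availability index (invented data)
X = [[1, 2.0, 0.3], [1, 5.0, 0.1], [1, 3.0, 0.8], [1, 6.0, 0.2], [1, 1.0, 0.9]]
y = [30.0 + 4.0 * r - 10.0 * w for _, r, w in X]   # exact linear relation
print([round(c, 6) for c in ols_fit(X, y)])         # → [30.0, 4.0, -10.0]
```

    The negative coefficient on the wind-availability column mirrors the paper's finding: better ventilation is associated with lower pollutant concentrations, so including such variables changes the fitted surface.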

  5. Analysis of the residential location choice and household energy consumption behavior by incorporating multiple self-selection effects

    International Nuclear Information System (INIS)

    Yu Biying; Junyi Zhang; Fujiwara, Akimasa

    2012-01-01

    It is expected that the residential location choice and household energy consumption behavior might correlate with each other. Besides, due to the existence of self-selection effects, the observed inter-relationship between them might be the spurious result of the fact that some unobserved variables are causing both. These concerns motivate us to (1) consider residential location choice and household energy consumption behavior (for both in-home appliances and out-of-home cars) simultaneously and (2) explicitly control self-selection effects so as to capture a relatively true effect of land-use policy on household energy consumption behavior. An integrated model, termed the joint mixed Multinomial Logit-Multiple Discrete-Continuous Extreme Value model, is presented here to identify the sensitivity of household energy consumption to land use policy by considering multiple self-selection effects. The model results indicate that land-use policy does play a great role in changing Beijing residents’ energy consumption pattern, while the self-selection effects cannot be ignored when evaluating the effect of land-use policy. Based on the policy scenario design, it is found that increasing recreational facilities and bus lines in the neighborhood can greatly promote households' energy-saving behavior. Additionally, the importance of “soft policy” and package policy is also emphasized in the context of Beijing. - Highlights: ► Representing residential choice and household energy consumption behavior jointly. ► Land use policy is found effective to control the household energy use in Beijing. ► Multiple self-selection effects are posed to get the true effect of land use policy. ► Significant self-selection effects call attention to the soft policy in Beijing. ► The necessity of package policy on saving Beijing residents’ energy use is confirmed.

  6. Incorporation of the capillary hysteresis model HYSTR into the numerical code TOUGH

    International Nuclear Information System (INIS)

    Niemi, A.; Bodvarsson, G.S.; Pruess, K.

    1991-11-01

    As part of the work performed to model flow in the unsaturated zone at Yucca Mountain Nevada, a capillary hysteresis model has been developed. The computer program HYSTR has been developed to compute the hysteretic capillary pressure -- liquid saturation relationship through interpolation of tabulated data. The code can be easily incorporated into any numerical unsaturated flow simulator. A complete description of HYSTR, including a brief summary of the previous hysteresis literature, detailed description of the program, and instructions for its incorporation into a numerical simulator are given in the HYSTR user's manual (Niemi and Bodvarsson, 1991a). This report describes the incorporation of HYSTR into the numerical code TOUGH (Transport of Unsaturated Groundwater and Heat; Pruess, 1986). The changes made and procedures for the use of TOUGH for hysteresis modeling are documented
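
    The HYSTR approach described above, computing the capillary pressure-saturation relationship by interpolating tabulated data with separate drainage and wetting branches, can be sketched as follows. The table values are invented for illustration and are not the curves used at Yucca Mountain:

```python
# Sketch of the HYSTR idea: hysteretic capillary pressure Pc obtained by
# piecewise-linear interpolation within tabulated Pc(S) data, with a
# separate table per branch. Values below are illustrative placeholders.
DRAINAGE = [(0.2, 9.0), (0.4, 6.0), (0.6, 3.5), (0.8, 1.5), (1.0, 0.0)]
WETTING  = [(0.2, 7.0), (0.4, 4.0), (0.6, 2.0), (0.8, 0.8), (1.0, 0.0)]

def capillary_pressure(sat, branch):
    """Interpolate Pc at liquid saturation `sat` on the requested branch."""
    table = DRAINAGE if branch == "drainage" else WETTING
    if sat <= table[0][0]:
        return table[0][1]
    if sat >= table[-1][0]:
        return table[-1][1]
    for (s0, p0), (s1, p1) in zip(table, table[1:]):
        if s0 <= sat <= s1:
            return p0 + (p1 - p0) * (sat - s0) / (s1 - s0)

print(capillary_pressure(0.5, "drainage"))  # midpoint of (6.0, 3.5) → 4.75
print(capillary_pressure(0.5, "wetting"))   # midpoint of (4.0, 2.0) → 3.0
```

    The hysteresis is exactly the gap between the two interpolated values at the same saturation; a full implementation additionally tracks scanning curves when the flow reverses between drainage and wetting.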

  7. A minimal model for multiple epidemics and immunity spreading.

    Directory of Open Access Journals (Sweden)

    Kim Sneppen

    Full Text Available Pathogens and parasites are ubiquitous in the living world, being limited only by availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fate is coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. We here introduce a minimal model for an unlimited number of unrelated pathogens whose interaction is simplified to simple mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for emergence of new diseases. The model exhibits a rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. The resulting immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.

  8. A minimal model for multiple epidemics and immunity spreading.

    Science.gov (United States)

    Sneppen, Kim; Trusina, Ala; Jensen, Mogens H; Bornholdt, Stefan

    2010-10-18

    Pathogens and parasites are ubiquitous in the living world, being limited only by availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fate is coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. We here introduce a minimal model for an unlimited number of unrelated pathogens whose interaction is simplified to simple mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for emergence of new diseases. The model exhibits a rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. The resulting immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.
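
    A toy implementation of the model's two key ingredients, mutual exclusion (at most one infection per host) and accumulating immunity, with new diseases injected at a mutation rate, might look like the following. The lattice size, rates, and the instant-recovery rule are simplifications chosen for brevity, not the paper's exact update scheme:

```python
import random

# Hosts on a ring; each carries at most one disease at a time and keeps
# lasting immunity to every disease it has recovered from.
def simulate(n=200, steps=2000, mutation=0.002, seed=1):
    rng = random.Random(seed)
    state = [None] * n                  # current infection (disease id) or None
    immune = [set() for _ in range(n)]  # diseases each host has had
    next_id = 0
    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + rng.choice((-1, 1))) % n        # random neighbour
        if state[i] is not None:
            d = state[i]
            if state[j] is None and d not in immune[j]:
                state[j] = d                     # transmit (mutual exclusion)
            immune[i].add(d)                     # recover with immunity
            state[i] = None
        elif rng.random() < mutation:
            state[i] = next_id                   # a novel disease emerges
            next_id += 1
    return next_id, sum(len(s) for s in immune) / n

diseases, mean_immunity = simulate()
print(diseases, round(mean_immunity, 2))
```

    Running this with different `n` and `mutation` values illustrates the abstract's closing claim: the trail of immunization left in the population is governed by system size and the rate at which new diseases appear.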

  9. Comparison in the calculation of committed effective dose using the ICRP 30 and ICRP 60 models for a repeated incorporation by inhalation of I-125

    International Nuclear Information System (INIS)

    Carreno P, A.L.; Cortes C, A.; Alonso V, G.; Serrano P, F.

    2005-01-01

    In the present work, a comparison of committed effective dose calculated using the models of ICRP 30 and ICRP 60 for the analysis of internal dose due to repeated incorporation of I-125 is shown. The estimates of incorporated activity are obtained from data provided for an intercomparison exercise, from which the internal dose is subsequently determined. To estimate the initial activity incorporated through repeated intakes, it was assumed that the activity was taken in through multiple individual incorporations occurring at the midpoints of the monitoring periods. The results using the models of ICRP 30 and ICRP 60 are compared and the causes of the differences are analyzed. (Author)
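
    The midpoint-of-monitoring-period convention described above can be sketched numerically: a measured body activity is attributed to equal intakes at the midpoint of each monitoring period, corrected for retention, and converted to committed effective dose with a dose coefficient. The single-exponential retention function and all numerical values below are placeholders, not the ICRP 30 or ICRP 60 biokinetic models for I-125:

```python
import math

def retention(days, half_life=80.0):
    """Toy single-exponential whole-body retention fraction (placeholder)."""
    return math.exp(-math.log(2) * days / half_life)

def estimate_dose(measured_bq, period_days, n_periods, e50_sv_per_bq=1.4e-8):
    """Split one end-of-campaign measurement over equal intakes at each
    monitoring-period midpoint, correct for retention, apply e(50)."""
    total_days = period_days * n_periods
    midpoints = [period_days * (k + 0.5) for k in range(n_periods)]
    # retained fraction at measurement time per unit intake, summed over intakes
    retained = sum(retention(total_days - t) for t in midpoints)
    intake = measured_bq / retained * n_periods   # total activity taken in
    return intake * e50_sv_per_bq

dose = estimate_dose(measured_bq=500.0, period_days=30, n_periods=3)
print(f"{dose:.2e} Sv")
```

    Swapping in the ICRP 30 versus ICRP 60 retention functions and dose coefficients at the two marked placeholders is precisely what produces the differences the record analyzes.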

  10. Multiple Scattering Model for Optical Coherence Tomography with Rytov Approximation

    KAUST Repository

    Li, Muxingzi

    2017-01-01

    of speckles due to multiple scatterers within the coherence length, and other random noise. Motivated by the above two challenges, a multiple scattering model based on Rytov approximation and Gaussian beam optics is proposed for the OCT setup. Some previous

  11. Effect of boron incorporation on the structural and photoluminescence properties of highly-strained InxGa1-xAs/GaAs multiple quantum wells

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2013-07-01

    Full Text Available In this research, 5-period highly-strained BInGaAs/GaAs multiple quantum wells (MQWs) have been successfully grown at 480-510ºC by LP-MOCVD. Room-temperature photoluminescence (RT-PL) measurements of BInGaAs/GaAs MQWs showed the peak wavelength as long as 1.17 μm with full-width at half maximum (FWHM) of only 29.5 meV. In addition, a slight blue-shift (∼18 meV) of PL peak energy of InxGa1-xAs/GaAs MQWs was observed after boron incorporation. It has been found boron incorporation ( 40%, the positive effect of boron incorporation prevailed, i.e., boron incorporation completely suppressed the thickness undulation and led to the improvement of PL properties.

  12. Simulation of Forest Carbon Fluxes Using Model Incorporation and Data Assimilation

    OpenAIRE

    Min Yan; Xin Tian; Zengyuan Li; Erxue Chen; Xufeng Wang; Zongtao Han; Hong Sun

    2016-01-01

    This study improved simulation of forest carbon fluxes in the Changbai Mountains with a process-based model (Biome-BGC) using incorporation and data assimilation. Firstly, the original remote sensing-based MODIS MOD_17 GPP (MOD_17) model was optimized using refined input data and biome-specific parameters. The key ecophysiological parameters of the Biome-BGC model were determined through the Extended Fourier Amplitude Sensitivity Test (EFAST) sensitivity analysis. Then the optimized MOD_17 mo...

  13. Incorporating Psychological Predictors of Treatment Response into Health Economic Simulation Models: A Case Study in Type 1 Diabetes.

    Science.gov (United States)

    Kruger, Jen; Pollard, Daniel; Basarir, Hasan; Thokala, Praveen; Cooke, Debbie; Clark, Marie; Bond, Rod; Heller, Simon; Brennan, Alan

    2015-10-01

    Health economic modeling has paid limited attention to the effects that patients' psychological characteristics have on the effectiveness of treatments. This case study tests 1) the feasibility of incorporating psychological prediction models of treatment response within an economic model of type 1 diabetes, 2) the potential value of providing treatment to a subgroup of patients, and 3) the cost-effectiveness of providing treatment to a subgroup of responders defined using 5 different algorithms. Multiple linear regressions were used to investigate relationships between patients' psychological characteristics and treatment effectiveness. Two psychological prediction models were integrated with a patient-level simulation model of type 1 diabetes. Expected value of individualized care analysis was undertaken. Five different algorithms were used to provide treatment to a subgroup of predicted responders. A cost-effectiveness analysis compared using the algorithms to providing treatment to all patients. The psychological prediction models had low predictive power for treatment effectiveness. Expected value of individualized care results suggested that targeting education at responders could be of value. The cost-effectiveness analysis suggested, for all 5 algorithms, that providing structured education to a subgroup of predicted responders would not be cost-effective. The psychological prediction models tested did not have sufficient predictive power to make targeting treatment cost-effective. The psychological prediction models are simple linear models of psychological behavior. Collection of data on additional covariates could potentially increase statistical power. By collecting data on psychological variables before an intervention, we can construct predictive models of treatment response to interventions. These predictive models can be incorporated into health economic models to investigate more complex service delivery and reimbursement strategies.

  14. Double-multiple streamtube model for studying vertical-axis wind turbines

    Science.gov (United States)

    Paraschivoiu, Ion

    1988-08-01

    This work describes the present state-of-the-art in the double-multiple streamtube method for modeling the Darrieus-type vertical-axis wind turbine (VAWT). Comparisons of the analytical results with other predictions and available experimental data show a good agreement. This method, which incorporates dynamic-stall and secondary effects, can be used for generating a suitable aerodynamic-load model for structural design analysis of the Darrieus rotor.

  15. Testing for Nonuniform Differential Item Functioning with Multiple Indicator Multiple Cause Models

    Science.gov (United States)

    Woods, Carol M.; Grimm, Kevin J.

    2011-01-01

    In extant literature, multiple indicator multiple cause (MIMIC) models have been presented for identifying items that display uniform differential item functioning (DIF) only, not nonuniform DIF. This article addresses, for apparently the first time, the use of MIMIC models for testing both uniform and nonuniform DIF with categorical indicators. A…

  16. Multiplicity distributions in the dual parton model

    International Nuclear Information System (INIS)

    Batunin, A.V.; Tolstenkov, A.N.

    1985-01-01

    Multiplicity distributions are calculated by means of a new mechanism of production of hadrons in a string, which was proposed previously by the authors and takes into account explicitly the valence character of the ends of the string. It is shown that allowance for this greatly improves the description of the low-energy multiplicity distributions. At superhigh energies, the contribution of the ends of the strings becomes negligibly small, but in this case multi-Pomeron contributions must be taken into account

  17. Evolving Four Part Harmony Using a Multiple Worlds Model

    DEFF Research Database (Denmark)

    Scirea, Marco; Brown, Joseph Alexander

    2015-01-01

    This application of the Multiple Worlds Model examines a collaborative fitness model for generating four part harmonies. In this model we have multiple populations and the fitness of the individuals is based on the ability of a member from each population to work with the members of other...

  18. Incorporation of composite defects from ultrasonic NDE into CAD and FE models

    Science.gov (United States)

    Bingol, Onur Rauf; Schiefelbein, Bryan; Grandin, Robert J.; Holland, Stephen D.; Krishnamurthy, Adarsh

    2017-02-01

    Fiber-reinforced composites are widely used in the aerospace industry due to their combined properties of high strength and low weight. However, owing to their complex structure, it is difficult to assess the impact of manufacturing defects and service damage on their residual life. While ultrasonic testing (UT) is the preferred NDE method to identify the presence of defects in composites, there are no reasonable ways to model the damage and evaluate the structural integrity of composites. We have developed an automated framework to incorporate flaws and known composite damage automatically into a finite element analysis (FEA) model of composites, ultimately aiding in assessing the residual life of composites and making informed decisions regarding repairs. The framework can be used to generate a layer-by-layer 3D structural CAD model of the composite laminates replicating their manufacturing process. Outlines of structural defects, such as delaminations, are automatically detected from UT of the laminate and are incorporated into the CAD model between the appropriate layers. In addition, the framework allows for direct structural analysis of the resulting 3D CAD models with defects by automatically applying the appropriate boundary conditions. In this paper, we show a working proof-of-concept for the composite model builder with capabilities of incorporating delaminations between laminate layers and automatically preparing the CAD model for structural analysis using a FEA software.

  19. Explaining clinical behaviors using multiple theoretical models

    Directory of Open Access Journals (Sweden)

    Eccles Martin P

    2012-10-01

    the five surveys. For the predictor variables, the mean construct scores were above the mid-point on the scale with median values across the five behaviors generally being above four out of seven and the range being from 1.53 to 6.01. Across all of the theories, the highest proportion of the variance explained was always for intention and the lowest was for behavior. The Knowledge-Attitudes-Behavior Model performed poorly across all behaviors and dependent variables; CSSRM also performed poorly. For TPB, SCT, II, and LT across the five behaviors, we predicted median R2 of 25% to 42.6% for intention, 6.2% to 16% for behavioral simulation, and 2.4% to 6.3% for behavior. Conclusions We operationalized multiple theories measuring across five behaviors. Continuing challenges that emerge from our work are: better specification of behaviors, better operationalization of theories; how best to appropriately extend the range of theories; further assessment of the value of theories in different settings and groups; exploring the implications of these methods for the management of chronic diseases; and moving to experimental designs to allow an understanding of behavior change.

  20. Explaining clinical behaviors using multiple theoretical models.

    Science.gov (United States)

    Eccles, Martin P; Grimshaw, Jeremy M; MacLennan, Graeme; Bonetti, Debbie; Glidewell, Liz; Pitts, Nigel B; Steen, Nick; Thomas, Ruth; Walker, Anne; Johnston, Marie

    2012-10-17

    , the mean construct scores were above the mid-point on the scale with median values across the five behaviors generally being above four out of seven and the range being from 1.53 to 6.01. Across all of the theories, the highest proportion of the variance explained was always for intention and the lowest was for behavior. The Knowledge-Attitudes-Behavior Model performed poorly across all behaviors and dependent variables; CSSRM also performed poorly. For TPB, SCT, II, and LT across the five behaviors, we predicted median R2 of 25% to 42.6% for intention, 6.2% to 16% for behavioral simulation, and 2.4% to 6.3% for behavior. We operationalized multiple theories measuring across five behaviors. Continuing challenges that emerge from our work are: better specification of behaviors, better operationalization of theories; how best to appropriately extend the range of theories; further assessment of the value of theories in different settings and groups; exploring the implications of these methods for the management of chronic diseases; and moving to experimental designs to allow an understanding of behavior change.

  1. Incorporating Social Anxiety Into a Model of College Problem Drinking: Replication and Extension

    OpenAIRE

    Ham, Lindsay S.; Hope, Debra A.

    2006-01-01

    Although research has found an association between social anxiety and alcohol use in noncollege samples, results have been mixed for college samples. College students face many novel social situations in which they may drink to reduce social anxiety. In the current study, the authors tested a model of college problem drinking, incorporating social anxiety and related psychosocial variables among 228 undergraduate volunteers. According to structural equation modeling (SEM) results, social anxi...

  2. PWR plant operator training used full scope simulator incorporated MAAP model

    International Nuclear Information System (INIS)

    Matsumoto, Y.; Tabuchi, T.; Yamashita, T.; Komatsu, Y.; Tsubouchi, K.; Banka, T.; Mochizuki, T.; Nishimura, K.; Iizuka, H.

    2015-01-01

    NTC works to build understanding of plant behavior during core damage accidents as part of our advanced training. In response to the Fukushima Daiichi Nuclear Power Station accident, we introduced the MAAP model into the PWR operator training full scope simulator and also developed the Severe Accident Visual Display unit. From 2014, we will introduce a new training program for core damage accidents using the PWR operator training full scope simulator incorporating the MAAP model and the Severe Accident Visual Display unit. (author)

  3. Integration of multiple, excess, backup, and expected covering models

    OpenAIRE

    M S Daskin; K Hogan; C ReVelle

    1988-01-01

    The concepts of multiple, excess, backup, and expected coverage are defined. Model formulations using these constructs are reviewed and contrasted to illustrate the relationships between them. Several new formulations are presented as is a new derivation of the expected covering model which indicates more clearly the relationship of the model to other multi-state covering models. An expected covering model with multiple time standards is also presented.
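
    The expected-coverage construct reviewed in this record is easy to state concretely: if each server is busy independently with probability q, a demand node within the time standard of k servers is covered with probability 1 - q^k. The sketch below evaluates that objective for fixed server locations; the demand weights, coverage counts, and busy fraction are invented, and the cited models go further by choosing locations to maximize this quantity:

```python
# MEXCLP-style expected coverage: node i, with demand d_i and k_i servers
# within its time standard, is served with probability 1 - q**k_i when
# each server is busy independently with probability q.
def expected_coverage(demand, cover_counts, q=0.3):
    """demand[i]: weight of node i; cover_counts[i]: servers in range."""
    return sum(d * (1.0 - q ** k) for d, k in zip(demand, cover_counts))

demand = [10.0, 20.0, 5.0]
cover_counts = [1, 2, 0]   # node 2 has two servers in range, node 3 none
print(round(expected_coverage(demand, cover_counts), 2))  # → 25.2
```

    Note how multiple and backup coverage fall out of the same expression: the second server covering a node contributes the marginal term d*(q - q^2), which is exactly the diminishing-returns structure the integrated formulations exploit.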

  4. A climatological model for risk computations incorporating site- specific dry deposition influences

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.

    1991-07-01

    A gradient-flux dry deposition module was developed for use in a climatological atmospheric transport model, the Multimedia Environmental Pollutant Assessment System (MEPAS). The atmospheric pathway model computes long-term average contaminant air concentration and surface deposition patterns surrounding a potential release site, incorporating location-specific dry deposition influences. Gradient-flux formulations are used to incorporate site and regional data in the dry deposition module for this atmospheric sector-average climatological model. Application of these formulations provides an effective means of accounting for local surface roughness in deposition computations. Linkage to a risk computation module resulted in a need for separate regional and specific surface deposition computations. 13 refs., 4 figs., 2 tabs
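
    As a rough sketch of how gradient-flux (resistance-analogy) formulations let local surface roughness enter deposition computations — the specific MEPAS formulation may differ — a dry deposition velocity can be composed from aerodynamic, quasi-laminar, and surface resistances, with the roughness length z0 appearing in the aerodynamic term:

```python
import math

def aerodynamic_resistance(z_ref, z0, u_star, karman=0.4):
    """Aerodynamic resistance (s/m) under neutral stability; the local
    surface roughness length z0 enters directly."""
    return math.log(z_ref / z0) / (karman * u_star)

def deposition_velocity(r_a, r_b, r_c):
    """Resistance-analogy dry deposition velocity: v_d = 1/(r_a + r_b + r_c)."""
    return 1.0 / (r_a + r_b + r_c)

# Rougher surface (larger z0) -> smaller r_a -> faster deposition:
r_a_smooth = aerodynamic_resistance(z_ref=10.0, z0=0.001, u_star=0.3)
r_a_rough = aerodynamic_resistance(z_ref=10.0, z0=0.1, u_star=0.3)
print(deposition_velocity(r_a_rough, 20.0, 50.0) >
      deposition_velocity(r_a_smooth, 20.0, 50.0))  # True
```

    The resistance values here are arbitrary illustrations; in practice the quasi-laminar and surface resistances depend on pollutant and land-cover properties.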

  5. In silico investigation of the short QT syndrome, using human ventricle models incorporating electromechanical coupling

    Directory of Open Access Journals (Sweden)

    Ismail eAdeniran

    2013-07-01

    Full Text Available Introduction: Genetic forms of the Short QT Syndrome (SQTS) arise due to cardiac ion channel mutations leading to accelerated ventricular repolarisation, arrhythmias and sudden cardiac death. Results from experimental and simulation studies suggest that changes to refractoriness and tissue vulnerability produce a substrate favourable to re-entry. Potential electromechanical consequences of the SQTS are less well understood. The aim of this study was to utilize electromechanically coupled human ventricle models to explore electromechanical consequences of the SQTS. Methods and results: The Rice et al. mechanical model was coupled to the ten Tusscher et al. ventricular cell model. Previously validated K+ channel formulations for SQT variants 1 and 3 were incorporated. Functional effects of the SQTS mutations on Ca2+ transients, sarcomere length shortening and contractile force at the single cell level were evaluated with and without the consideration of stretch-activated channel current (Isac). Without Isac, the SQTS mutations produced dramatic reductions in the amplitude of Ca2+ transients, sarcomere length shortening and contractile force. When Isac was incorporated, there was a considerable attenuation of the effects of SQTS-associated action potential shortening on Ca2+ transients, sarcomere shortening and contractile force. Single cell models were then incorporated into 3D human ventricular tissue models. The timing of maximum deformation was delayed in the SQTS setting compared to control. Conclusion: The incorporation of Isac appears to be an important consideration in modelling functional effects of SQT 1 and 3 mutations on cardiac electromechanical coupling. Whilst there is little evidence of profoundly impaired cardiac contractile function in SQTS patients, our 3D simulations correlate qualitatively with reported evidence for dissociation between ventricular repolarization and the end of mechanical systole.

  6. Improving Watershed-Scale Hydrodynamic Models by Incorporating Synthetic 3D River Bathymetry Network

    Science.gov (United States)

    Dey, S.; Saksena, S.; Merwade, V.

    2017-12-01

    Digital Elevation Models (DEMs) have an incomplete representation of river bathymetry, which is critical for simulating river hydrodynamics in flood modeling. Generally, DEMs are augmented with field-collected bathymetry data, but such data are available only at individual reaches. Creating a hydrodynamic model covering an entire stream network in a basin requires bathymetry for all streams. This study extends a conceptual bathymetry model, the River Channel Morphology Model (RCMM), to estimate the bathymetry of an entire stream network from a DEM for application in hydrodynamic modeling. It is implemented at two large watersheds with different relief and land use characteristics: the coastal Guadalupe River basin in Texas, with flat terrain, and the relatively urban White River basin in Indiana, with more relief. After bathymetry incorporation, both watersheds are modeled using HEC-RAS (a 1D hydraulic model) and Interconnected Pond and Channel Routing (ICPR), a 2D integrated hydrologic and hydraulic model. A comparison of the streamflow estimated by ICPR at the outlets of the basins indicates that incorporating bathymetry influences streamflow estimates. The inundation maps show that bathymetry has a higher impact on the flat terrain of the Guadalupe River basin than on the White River basin.

  7. A test for the parameters of multiple linear regression models ...

    African Journals Online (AJOL)

    A test is developed for conducting simultaneous tests on all the parameters of multiple linear regression models. The test is robust to violations of the classical F-test's assumptions of homogeneity of variances and absence of serial correlation. Under certain null and ...

  8. Using structured decision making with landowners to address private forest management and parcelization: balancing multiple objectives and incorporating uncertainty

    Science.gov (United States)

    Paige F. B. Ferguson; Michael J. Conroy; John F. Chamblee; Jeffrey Hepinstall-Cymerman

    2015-01-01

    Parcelization and forest fragmentation are of concern for ecological, economic, and social reasons. Efforts to keep large, private forests intact may be supported by a decision-making process that incorporates landowners’ objectives and uncertainty. We used structured decision making (SDM) with owners of large, private forests in Macon County, North Carolina....

  9. Making a difference: incorporating theories of autonomy into models of informed consent.

    Science.gov (United States)

    Delany, C

    2008-09-01

    Obtaining patients' informed consent is an ethical and legal obligation in healthcare practice. Whilst the law provides prescriptive rules and guidelines, ethical theories of autonomy provide moral foundations. Models of practice of consent have been developed in the bioethical literature to assist in understanding and integrating the ethical theory of autonomy and legal obligations into the clinical process of obtaining a patient's informed consent to treatment. To review four models of consent and analyse the way each model incorporates the ethical meaning of autonomy and how, as a consequence, they might change the actual communicative process of obtaining informed consent within clinical contexts. An iceberg framework of consent is used to conceptualise how ethical theories of autonomy are positioned below the surface and underpin the above-surface, visible clinical communication, including associated legal guidelines and ethical rules. Each model of consent is critically reviewed from the perspective of how it might shape the process of informed consent. All four models would alter the process of obtaining consent. Two models provide structure and guidelines for the content and timing of obtaining patients' consent. The other two models rely on an attitudinal shift in clinicians; they provide ideas for consent by focusing on the underlying values, attitudes and meaning associated with the ethical meaning of autonomy. The paper concludes that models of practice that explicitly incorporate the underlying ethical meaning of autonomy as their basis provide less prescriptive, but more theoretically rich, guidance for healthcare communicative practices.

  10. Incorporation of human factors into ship collision risk models focusing on human centred design aspects

    International Nuclear Information System (INIS)

    Sotiralis, P.; Ventikos, N.P.; Hamann, R.; Golyshev, P.; Teixeira, A.P.

    2016-01-01

    This paper presents an approach that more adequately incorporates human factor considerations into quantitative risk analysis of ship operation. The focus is on the collision accident category, which is one of the main risk contributors in ship operation. The approach is based on the development of a Bayesian Network (BN) model that integrates elements from the Technique for Retrospective and Predictive Analysis of Cognitive Errors (TRACEr) and focuses on the calculation of the collision accident probability due to human error. The model takes into account human performance in normal, abnormal and critical operational conditions and implements specific tasks derived from the analysis of the task errors leading to the collision accident category. A sensitivity analysis is performed to identify the most important contributors to human performance and ship collision. Finally, the model developed is applied to assess the collision risk of a feeder operating in the Dover Strait, using the collision probability estimated by the developed BN model and an event tree model for calculation of human, economic and environmental risks. - Highlights: • A collision risk model for the incorporation of human factors into quantitative risk analysis is proposed. • The model takes into account human performance in different operational conditions leading to the collision. • The most important contributors to human performance and ship collision are identified. • The model developed is applied to assess the collision risk of a feeder operating in the Dover Strait.

  11. Modeling fraud detection and the incorporation of forensic specialists in the audit process

    DEFF Research Database (Denmark)

    Sakalauskaite, Dominyka

    Financial statement audits are still comparatively poor in fraud detection. Forensic specialists can play a significant role in increasing audit quality. In this paper, based on prior academic research, I develop a model of fraud detection and the incorporation of forensic specialists in the audit...... process. The intention of the model is to identify the reasons why the audit is weak in fraud detection and to provide the analytical framework to assess whether the incorporation of forensic specialists can help to improve it. The results show that such specialists can potentially improve the fraud...... detection in the audit, but might also cause some negative implications. Overall, even though fraud detection is one of the main topics in research there are very few studies done on the subject of how auditors co-operate with forensic specialists. Thus, the paper concludes with suggestions for further...

  12. Gold Incorporated Mesoporous Silica Thin Film Model Surface as a Robust SERS and Catalytically Active Substrate

    Directory of Open Access Journals (Sweden)

    Anandakumari Chandrasekharan Sunil Sekhar

    2016-05-01

    Full Text Available Ultra-small gold nanoparticles incorporated in mesoporous silica thin films with accessible pore channels perpendicular to the substrate are prepared by a modified sol-gel method. The simple and easy spin-coating technique is applied here to make homogeneous thin films. Surface characterization using FESEM shows crack-free films with a perpendicular pore arrangement. The applicability of these thin films as catalysts as well as robust SERS-active substrates for model catalysis studies is tested. Compared to a bare silica film, our gold-incorporated silica film, GSM-23F, gave an enhancement factor of 10³ for RhB with a 633 nm laser source. In the reduction of p-nitrophenol with sodium borohydride on our thin films, the peak intensity corresponding to the –NO2 group decreases as time proceeds, confirming the catalytic activity. Such model surfaces can potentially bridge the material gap between a real catalytic system and surface science studies.

  13. Using expert knowledge to incorporate uncertainty in cause-of-death assignments for modeling of cause-specific mortality

    Science.gov (United States)

    Walsh, Daniel P.; Norton, Andrew S.; Storm, Daniel J.; Van Deelen, Timothy R.; Heisy, Dennis M.

    2018-01-01

    Implicit and explicit use of expert knowledge to inform ecological analyses is becoming increasingly common because it often represents the sole source of information in many circumstances. Thus, there is a need to develop statistical methods that explicitly incorporate expert knowledge, and can successfully leverage this information while properly accounting for associated uncertainty during analysis. Studies of cause-specific mortality provide an example of implicit use of expert knowledge when causes-of-death are uncertain and assigned based on the observer's knowledge of the most likely cause. To explicitly incorporate this use of expert knowledge and the associated uncertainty, we developed a statistical model for estimating cause-specific mortality using a data augmentation approach within a Bayesian hierarchical framework. Specifically, for each mortality event, we elicited the observer's belief of cause-of-death by having them specify the probability that the death was due to each potential cause. These probabilities were then used as prior predictive values within our framework. This hierarchical framework permitted a simple and rigorous estimation method that was easily modified to include covariate effects and regularizing terms. Although applied to survival analysis, this method can be extended to any event-time analysis with multiple event types, for which there is uncertainty regarding the true outcome. We conducted simulations to determine how our framework compared to traditional approaches that use expert knowledge implicitly and assume that cause-of-death is specified accurately. Simulation results supported the inclusion of observer uncertainty in cause-of-death assignment in modeling of cause-specific mortality to improve model performance and inference. Finally, we applied the statistical model we developed and a traditional method to cause-specific survival data for white-tailed deer, and compared results. We demonstrate that model selection
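
    The core of the elicitation idea — treating the observer's stated cause-of-death probabilities as prior weights and updating them against cause-specific likelihoods — can be sketched as a single Bayes step (a simplified stand-in for the paper's full hierarchical data-augmentation model; the numbers are hypothetical):

```python
def posterior_cause_probs(elicited_prior, cause_likelihoods):
    """Posterior probability of each candidate cause of death, proportional
    to the observer's elicited prior times the model likelihood per cause."""
    unnorm = [p * l for p, l in zip(elicited_prior, cause_likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Observer is 70% sure the death was predation, 30% disease;
# the fitted cause-specific hazards weight the evidence the other way.
post = posterior_cause_probs([0.7, 0.3], [0.2, 0.6])
print(post)  # predation weight shrinks: [0.4375, 0.5625]
```

    The point of the elicitation is visible here: the observer's uncertainty propagates into the posterior instead of being collapsed to a single "most likely" cause.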

  14. Multiple Scenario Generation of Subsurface Models

    DEFF Research Database (Denmark)

    Cordua, Knud Skou

    of information is obeyed such that no unknown assumptions and biases influence the solution to the inverse problem. This involves a definition of the probabilistically formulated inverse problem, a discussion about how prior models can be established based on statistical information from sample models...... of the probabilistic formulation of the inverse problem. This function is based on an uncertainty model that describes the uncertainties related to the observed data. In a similar way, a formulation of the prior probability distribution that takes into account uncertainties related to the sample model statistics...... similar to observation uncertainties. We refer to the effect of these approximations as modeling errors. Examples that show how the modeling error is estimated are provided. Moreover, it is shown how these effects can be taken into account in the formulation of the posterior probability distribution...

  15. Optimized production planning model for a multi-plant cultivation system under uncertainty

    Science.gov (United States)

    Ke, Shunkui; Guo, Doudou; Niu, Qingliang; Huang, Danfeng

    2015-02-01

    An inexact multi-constraint programming model under uncertainty was developed by incorporating a production plan algorithm into the crop production optimization framework under the multi-plant collaborative cultivation system. In the production plan, orders from the customers are assigned to a suitable plant under the constraints of plant capabilities and uncertainty parameters to maximize profit and achieve customer satisfaction. The developed model and solution method were applied to a case study of a multi-plant collaborative cultivation system to verify its applicability. As determined in the case analysis involving different orders from customers, the period of plant production planning and the interval between orders can significantly affect system benefits. Through the analysis of uncertain parameters, reliable and practical decisions can be generated using the suggested model of a multi-plant collaborative cultivation system.

  16. Multiple system modelling of waste management

    International Nuclear Information System (INIS)

    Eriksson, Ola; Bisaillon, Mattias

    2011-01-01

    Highlights: → Linking of models will provide a more complete, correct and credible picture of the systems. → The linking procedure is easy to perform and also leads to activation of project partners. → The simulation procedure is a bit more complicated and calls for the ability to run both models. - Abstract: Due to increased environmental awareness, the planning and performance of waste management has become more and more complex. Waste management has therefore long been subject to different types of modelling. Another field with long experience of modelling and a systems perspective is energy systems. The two modelling traditions have developed side by side, but so far there have been very few attempts to combine them. Waste management systems can be linked to energy systems through incineration plants. Waste management models can be quite detailed, whereas the surrounding systems are modelled in a more simplistic way. This is a problem, as previous studies have shown that assumptions about the surrounding system often tend to be important for the conclusions. In this paper it is shown how two models, one for the district heating system (MARTES) and another for the waste management system (ORWARE), can be linked together. The strengths and weaknesses of model linking are discussed in comparison with simplistic assumptions about effects in the energy and waste management systems. It is concluded that the linking of models will provide a more complete, correct and credible picture of the consequences of different simultaneous changes in the systems. The linking procedure is easy to perform and also leads to activation of project partners. However, the simulation procedure is a bit more complicated and calls for the ability to run both models.

  17. Three-dimensional simulations of Bingham plastic flows with the multiple-relaxation-time lattice Boltzmann model

    OpenAIRE

    Song-Gui Chen; Chuan-Hu Zhang; Yun-Tian Feng; Qi-Cheng Sun; Feng Jin

    2016-01-01

    This paper presents a three-dimensional (3D) parallel multiple-relaxation-time lattice Boltzmann model (MRT-LBM) for Bingham plastics which overcomes numerical instabilities in the simulation of non-Newtonian fluids for the Bhatnagar–Gross–Krook (BGK) model. The MRT-LBM and several related mathematical models are briefly described. Papanastasiou’s modified model is incorporated for better numerical stability. The impact of the relaxation parameters of the model is studied in detail. The MRT-L...

  18. Mean multiplicity in the Regge models with rising cross sections

    International Nuclear Information System (INIS)

    Chikovani, Z.E.; Kobylisky, N.A.; Martynov, E.S.

    1979-01-01

    Behaviour of the mean multiplicity and the total cross section σsub(t) of hadron-hadron interactions at high energies is considered in the framework of Regge models. The generating function is constructed for the dipole and froissaron models, and the mean multiplicity and multiplicity moments are calculated. It is shown that the mean multiplicity grows approximately as ln²s (s being the squared energy) in the dipole model, which is in good agreement with experiment. It is also found that in various Regge models the mean multiplicity grows approximately as σsub(t)·ln s

  19. Discrete choice models with multiplicative error terms

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Bierlaire, Michel

    2009-01-01

    The conditional indirect utility of many random utility maximization (RUM) discrete choice models is specified as a sum of an index V depending on observables and an independent random term ε. In general, the universe of RUM consistent models is much larger, even fixing some specification of V due...

  20. Structural model analysis of multiple quantitative traits.

    Directory of Open Access Journals (Sweden)

    Renhua Li

    2006-07-01

    Full Text Available We introduce a method for the analysis of multilocus, multitrait genetic data that provides an intuitive and precise characterization of genetic architecture. We show that it is possible to infer the magnitude and direction of causal relationships among multiple correlated phenotypes and illustrate the technique using body composition and bone density data from mouse intercross populations. Using these techniques we are able to distinguish genetic loci that affect adiposity from those that affect overall body size and thus reveal a shortcoming of standardized measures such as body mass index that are widely used in obesity research. The identification of causal networks sheds light on the nature of genetic heterogeneity and pleiotropy in complex genetic systems.

  1. Incorporation of stochastic engineering models as prior information in Bayesian medical device trials.

    Science.gov (United States)

    Haddad, Tarek; Himes, Adam; Thompson, Laura; Irony, Telba; Nair, Rajesh

    2017-01-01

    Evaluation of medical devices via clinical trial is often a necessary step in the process of bringing a new product to market. In recent years, device manufacturers are increasingly using stochastic engineering models during the product development process. These models have the capability to simulate virtual patient outcomes. This article presents a novel method based on the power prior for augmenting a clinical trial using virtual patient data. To properly inform clinical evaluation, the virtual patient model must simulate the clinical outcome of interest, incorporating patient variability, as well as the uncertainty in the engineering model and in its input parameters. The number of virtual patients is controlled by a discount function which uses the similarity between modeled and observed data. This method is illustrated by a case study of cardiac lead fracture. Different discount functions are used to cover a wide range of scenarios in which the type I error rates and power vary for the same number of enrolled patients. Incorporation of engineering models as prior knowledge in a Bayesian clinical trial design can provide benefits of decreased sample size and trial length while still controlling type I error rate and power.
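
    The borrowing mechanism can be sketched as follows: the virtual-patient likelihood is raised to a power α ∈ [0, 1] determined by a discount function of the similarity between modeled and observed data, so that α·n_virtual patients are effectively borrowed. Both the similarity measure and the discount curve below are hypothetical illustrations, not the paper's exact choices:

```python
import math

def similarity(observed_mean, virtual_mean, scale):
    """Crude similarity in (0, 1]: 1 when the modeled and observed outcome
    means agree, decaying as they diverge (hypothetical measure)."""
    return math.exp(-abs(observed_mean - virtual_mean) / scale)

def effective_virtual_n(n_virtual, sim, steepness=3.0):
    """Power-prior discount: weight alpha = sim**steepness in [0, 1],
    giving alpha * n_virtual effectively borrowed virtual patients."""
    alpha = sim ** steepness
    return alpha * n_virtual

sim = similarity(observed_mean=0.12, virtual_mean=0.10, scale=0.05)
print(effective_virtual_n(500, sim))  # fewer than 500 patients borrowed
```

    Varying the steepness of the discount function is what lets the design trade off sample-size savings against type I error inflation, as the case study's range of scenarios illustrates.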

  2. Multiple-lesion track-structure model

    International Nuclear Information System (INIS)

    Wilson, J.W.; Cucinotta, F.A.; Shinn, J.L.

    1992-03-01

    A multilesion cell kinetic model is derived, and radiation kinetic coefficients are related to the Katz track structure model. The repair-related coefficients are determined from the delayed plating experiments of Yang et al. for the C3H10T1/2 cell system. The model agrees well with the x-ray and heavy ion experiments of Yang et al. for the immediate plating, delayed plating, and fractionated exposure protocols employed by Yang. A study is made of the effects of target fragments in energetic proton exposures and of the repair-deficient target-fragment-induced lesions

  3. Affine LIBOR Models with Multiple Curves

    DEFF Research Database (Denmark)

    Grbac, Zorana; Papapantoleon, Antonis; Schoenmakers, John

    2015-01-01

    are specified following the methodology of the affine LIBOR models and are driven by the wide and flexible class of affine processes. The affine property is preserved under forward measures, which allows us to derive Fourier pricing formulas for caps, swaptions, and basis swaptions. A model specification...... with dependent LIBOR rates is developed that allows for an efficient and accurate calibration to a system of caplet prices....

  4. Insights on Genomic and Molecular Alterations in Multiple Myeloma and Their Incorporation towards Risk-Adapted Treatment Strategy: Concise Clinical Review

    Directory of Open Access Journals (Sweden)

    Taiga Nishihori

    2017-01-01

    Full Text Available Although recent advances in novel treatment approaches and therapeutics have shifted the treatment landscape of multiple myeloma, it remains an incurable plasma cell malignancy. Knowledge of the genome and of the expressed genomic information characterizing the biologic behavior of multiple myeloma continues to accumulate. However, translation and incorporation of this vast molecular understanding of complex tumor biology to deliver personalized and precision treatment to cure multiple myeloma have not been successful to date. Our review focuses on current evidence and understanding of myeloma biology, characterized in the context of genomic and molecular alterations. We also discuss future clinical applications of this genomic and molecular knowledge; more translational research is needed to benefit myeloma patients.

  5. A code reviewer assignment model incorporating the competence differences and participant preferences

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2016-03-01

    Full Text Available A good assignment of code reviewers can effectively utilize intellectual resources, assure code quality and improve programmers' skills in software development. However, little research has been done on reviewer assignment in code review. In this study, a code reviewer assignment model is created based on participants' preferences regarding review assignments. With a constraint on the smallest size of a review group, the model is optimized to maximize review outcomes and avoid the negative impact of a "mutual admiration society". This study shows that reviewer assignment strategies incorporating either the reviewers' or the authors' preferences yield much greater improvement than random assignment. The strategy incorporating the authors' preferences yields higher improvement than that incorporating the reviewers' preferences. However, when the reviewers' and authors' preference matrices are merged, the improvement becomes moderate. The study indicates that the majority of participants have a strong wish to work with the reviewers and authors of highest competence. If we want to satisfy the preferences of both reviewers and authors at the same time, the overall improvement in learning outcomes may not be the best.
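
    The optimization can be illustrated with a tiny exhaustive search: each submission is assigned to one reviewer so that the total preference score is maximized, subject to the smallest-group-size constraint mentioned in the abstract. This is a hypothetical toy instance; the actual model also handles merged reviewer/author preference matrices:

```python
from itertools import product

def best_assignment(pref, n_reviewers, min_group=1):
    """Assign each submission (row of pref) to a reviewer (column),
    maximizing total preference, with every reviewer getting at least
    min_group submissions. Brute force -- fine for toy sizes only."""
    best, best_score = None, float("-inf")
    for assign in product(range(n_reviewers), repeat=len(pref)):
        if min(assign.count(r) for r in range(n_reviewers)) < min_group:
            continue
        score = sum(pref[i][r] for i, r in enumerate(assign))
        if score > best_score:
            best, best_score = assign, score
    return best, best_score

# 4 submissions x 2 reviewers; each reviewer must take at least 2.
pref = [[5, 1], [4, 2], [1, 5], [2, 2]]
print(best_assignment(pref, n_reviewers=2, min_group=2))  # ((0, 0, 1, 1), 16)
```

    A real system would replace the brute-force loop with an integer-programming or matching solver, but the objective and group-size constraint carry over unchanged.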

  6. Incorporation of Markov reliability models for digital instrumentation and control systems into existing PRAs

    International Nuclear Information System (INIS)

    Bucci, P.; Mangan, L. A.; Kirschenbaum, J.; Mandelli, D.; Aldemir, T.; Arndt, S. A.

    2006-01-01

    Markov models have the ability to capture the statistical dependence between failure events that can arise in the presence of complex dynamic interactions between components of digital instrumentation and control systems. One obstacle to the use of such models in an existing probabilistic risk assessment (PRA) is that most of the currently available PRA software is based on the static event-tree/fault-tree methodology which often cannot represent such interactions. We present an approach to the integration of Markov reliability models into existing PRAs by describing the Markov model of a digital steam generator feedwater level control system, how dynamic event trees (DETs) can be generated from the model, and how the DETs can be incorporated into an existing PRA with the SAPHIRE software. (authors)
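
    The simplest instance of such a Markov reliability model — far smaller than the feedwater-control model described, but showing the time-dependent behavior a static fault tree cannot represent — is a two-state (up/down) repairable component with failure rate λ and repair rate μ, whose availability at time t has a closed form:

```python
import math

def availability(t, lam, mu):
    """Availability A(t) of a two-state Markov model starting in the
    'up' state, with failure rate lam and repair rate mu:
    A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu)*t)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

print(availability(0.0, lam=1e-3, mu=1e-1))   # ≈ 1.0 (starts up)
print(availability(1e6, lam=1e-3, mu=1e-1))   # -> steady state mu/(lam+mu)
```

    Chaining many such interacting states (and letting transition rates depend on plant conditions) is what produces the statistical dependence between failure events that the paper's DET generation captures.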

  7. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods have weak completeness, are carried out at a single scale, and depend on human experience. A validation method based on the SDG (Signed Directed Graph) and qualitative trends, operating at multiple scales, is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by validating a reactor model.
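
    A minimal sketch of the qualitative-trend comparison, assuming trends are reduced to sign sequences of successive changes (the paper's SDG machinery is much richer):

```python
def qualitative_trend(series, eps=1e-9):
    """Reduce a numeric time series to its qualitative trend: a sequence
    of '+', '0', '-' for rising, steady, falling segments."""
    out = []
    for a, b in zip(series, series[1:]):
        d = b - a
        out.append("+" if d > eps else "-" if d < -eps else "0")
    return out

# Testing scenario says: temperature rises, holds, then falls.
expected = ["+", "0", "-"]
simulated = qualitative_trend([300.0, 312.5, 312.5, 301.0])
print(simulated == expected)  # True
```

    Comparing sign sequences rather than raw values is what makes the check usable across scales: a coarse and a fine simulation can disagree numerically yet share the same qualitative trend.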

  8. Global dynamics of a PDE model for Aedes aegypti mosquitoes incorporating female sexual preference

    KAUST Repository

    Parshad, Rana

    2011-01-01

    In this paper we study the long time dynamics of a reaction-diffusion system describing the spread of Aedes aegypti mosquitoes, which are the primary vector of dengue infection. The system incorporates a control attempt via the sterile insect technique. The model incorporates female mosquitoes' sexual preference for wild males over sterile males. We show global existence of a strong solution for the system. We then derive uniform estimates to prove the existence of a global attractor in L-2(Omega) for the system. The attractor is shown to be L-infinity(Omega) regular and to possess a state of extinction if the injection of sterile males is large enough. We also provide upper bounds on the Hausdorff and fractal dimensions of the attractor.

  9. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    Science.gov (United States)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, several areas of the country have spatially and temporally sparse monitoring. One means to fill in these monitoring gaps is to use PM2.5 estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Due to the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is employed to quantify the efficacy of these models through different metrics of model performance, but evaluation is currently specific only to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains; error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear, leading to an error quantification for each CMAQ grid, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data by themselves.
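
    The regionalized, non-linear correction can be sketched as a per-region bias function that depends on the raw concentration itself. The quadratic form and the coefficients below are hypothetical stand-ins for the fitted performance curves:

```python
# Hypothetical per-region bias-correction coefficients (a, b, c)
REGION_COEF = {
    "southeast": (0.8, 0.95, -0.0010),
    "west": (-0.5, 1.10, 0.0020),
}

def correct_cmaq(raw_pm25, region):
    """Non-linear (quadratic-in-concentration) correction of a raw CMAQ
    PM2.5 estimate, with coefficients varying by region; the corrected
    values would then feed the BME geostatistical estimation."""
    a, b, c = REGION_COEF[region]
    return a + b * raw_pm25 + c * raw_pm25 ** 2

print(correct_cmaq(20.0, "southeast"))  # ≈ 0.8 + 19.0 - 0.4 = 19.4 µg/m³
```

    Making the coefficients region- and concentration-dependent is the essential departure from a single linear, homoscedastic correction applied everywhere.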

  10. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    , the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of  a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale including information from  the micro-scale.......At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone...

  11. New experimental model of multiple myeloma.

    Science.gov (United States)

    Telegin, G B; Kalinina, A R; Ponomarenko, N A; Ovsepyan, A A; Smirnov, S V; Tsybenko, V V; Homeriki, S G

    2001-06-01

    NSO/1 (P3x63Ay 8Ut) and SP20 myeloma cells were inoculated into BALB/c OlaHsd mice. NSO/1 cells allowed adequate stage-by-stage monitoring of tumor development. The adequacy of this model was confirmed in experiments with conventional cytostatics: prospidium and cytarabine caused necrosis of tumor cells and reduced animal mortality.

  12. Animal model of human disease. Multiple myeloma

    NARCIS (Netherlands)

    Radl, J.; Croese, J.W.; Zurcher, C.; Enden-Vieveen, M.H.M. van den; Leeuw, A.M. de

    1988-01-01

    Animal models of spontaneous and induced plasmacytomas in some inbred strains of mice have proven to be useful tools for different studies on tumorigenesis and immunoregulation. Their wide applicability and the fact that after their intravenous transplantation, the recipient mice developed bone...

  13. Multiple Social Networks, Data Models and Measures for

    DEFF Research Database (Denmark)

    Magnani, Matteo; Rossi, Luca

    2017-01-01

    Multiple Social Network Analysis is a discipline defining models, measures, methodologies, and algorithms to study multiple social networks together as a single social system. It is particularly valuable when the networks are interconnected, e.g., the same actors are present in more than one...

  14. Modeling Rabbit Responses to Single and Multiple Aerosol ...

    Science.gov (United States)

    Journal Article Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specifying dose-response functions and the time between exposure and the time-to-death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
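
    The baseline model's structure (an exponential dose-response paired with a Weibull time-to-death distribution) can be sketched directly; the parameter values in the example are placeholders, not the fitted rabbit parameters:

```python
import math

def p_response(dose, k):
    """Exponential dose-response: probability of death given a spore dose."""
    return 1.0 - math.exp(-k * dose)

def weibull_ttd_cdf(t, shape, scale):
    """Weibull CDF for time-to-death, conditional on a response occurring."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def p_death_by_time(dose, t, k, shape, scale):
    """Joint probability of dying by time t after a single dose."""
    return p_response(dose, k) * weibull_ttd_cdf(t, shape, scale)
```

    For long times the joint probability approaches the dose-response probability itself, since the Weibull CDF tends to 1.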

  15. Modeling water scarcity over south Asia: Incorporating crop growth and irrigation models into the Variable Infiltration Capacity (VIC) model

    Science.gov (United States)

    Troy, Tara J.; Ines, Amor V. M.; Lall, Upmanu; Robertson, Andrew W.

    2013-04-01

    Large-scale hydrologic models, such as the Variable Infiltration Capacity (VIC) model, are used for a variety of studies, from drought monitoring to projecting the potential impact of climate change on the hydrologic cycle decades in advance. The majority of these models simulates the natural hydrological cycle and neglects the effects of human activities such as irrigation, which can result in streamflow withdrawals and increased evapotranspiration. In some parts of the world, these activities do not significantly affect the hydrologic cycle, but this is not the case in south Asia where irrigated agriculture has a large water footprint. To address this gap, we incorporate a crop growth model and irrigation model into the VIC model in order to simulate the impacts of irrigated and rainfed agriculture on the hydrologic cycle over south Asia (Indus, Ganges, and Brahmaputra basin and peninsular India). The crop growth model responds to climate signals, including temperature and water stress, to simulate the growth of maize, wheat, rice, and millet. For the primarily rainfed maize crop, the crop growth model shows good correlation with observed All-India yields (0.7) with lower correlations for the irrigated wheat and rice crops (0.4). The difference in correlation is because irrigation provides a buffer against climate conditions, so that rainfed crop growth is more tied to climate than irrigated crop growth. The irrigation water demands induce hydrologic water stress in significant parts of the region, particularly in the Indus, with the streamflow unable to meet the irrigation demands. Although rainfall can vary significantly in south Asia, we find that water scarcity is largely chronic due to the irrigation demands rather than being intermittent due to climate variability.

  16. Investigations of incorporating source directivity into room acoustics computer models to improve auralizations

    Science.gov (United States)

    Vigeant, Michelle C.

    Room acoustics computer modeling and auralizations are useful tools when designing or modifying acoustically sensitive spaces. In this dissertation, the input parameter of source directivity has been studied in great detail to determine first its effect in room acoustics computer models and secondly how to better incorporate the directional source characteristics into these models to improve auralizations. To increase the accuracy of room acoustics computer models, the source directivity of real sources, such as musical instruments, must be included in the models. The traditional method for incorporating source directivity into room acoustics computer models involves inputting the measured static directivity data taken every 10° in a sphere-shaped pattern around the source. This data can be entered into the room acoustics software to create a directivity balloon, which is used in the ray tracing algorithm to simulate the room impulse response. The first study in this dissertation shows that using directional sources over an omni-directional source in room acoustics computer models produces significant differences both in terms of calculated room acoustics parameters and auralizations. The room acoustics computer model was also validated in terms of accurately incorporating the input source directivity. A recently proposed technique for creating auralizations using a multi-channel source representation has been investigated with numerous subjective studies, applied to both solo instruments and an orchestra. The method of multi-channel auralizations involves obtaining multi-channel anechoic recordings of short melodies from various instruments and creating individual channel auralizations. These auralizations are then combined to create a total multi-channel auralization. Through many subjective studies, this process was shown to be effective in terms of improving the realism and source width of the auralizations in a number of cases, and also modeling different

  17. A data-driven model for influenza transmission incorporating media effects.

    Science.gov (United States)

    Mitchell, Lewis; Ross, Joshua V

    2016-10-01

    Numerous studies have attempted to model the effect of mass media on the transmission of diseases such as influenza; however, quantitative data on media engagement has until recently been difficult to obtain. With the recent explosion of 'big data' coming from online social media and the like, large volumes of data on a population's engagement with mass media during an epidemic are becoming available to researchers. In this study, we combine an online dataset comprising millions of shared messages relating to influenza with traditional surveillance data on flu activity to suggest a functional form for the relationship between the two. Using this data, we present a simple deterministic model for influenza dynamics incorporating media effects, and show that such a model helps explain the dynamics of historical influenza outbreaks. Furthermore, through model selection we show that the proposed media function fits historical data better than other media functions proposed in earlier studies.
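
    A minimal deterministic sketch of transmission damped by media coverage, assuming coverage proportional to current prevalence (this functional form and all parameter names are illustrative, not the media function selected in the study):

```python
import math

def simulate_sir_media(beta, gamma, alpha, s0, i0, days, dt=0.1):
    """Discrete-time SIR where media coverage, assumed proportional to
    current prevalence, damps transmission by exp(-alpha * I).
    All quantities are fractions of the population."""
    s, i = s0, i0
    history = []
    for _ in range(int(days / dt)):
        media_factor = math.exp(-alpha * i)      # reduced contact under heavy coverage
        new_inf = beta * media_factor * s * i * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        history.append(i)
    return history
```

    With any positive media effect the epidemic peak falls below the no-media baseline, which is the qualitative behavior such models are built to capture.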

  18. Towards a functional model of mental disorders incorporating the laws of thermodynamics.

    Science.gov (United States)

    Murray, George C; McKenzie, Karen

    2013-05-01

    The current paper presents the hypothesis that the understanding of mental disorders can be advanced by incorporating the laws of thermodynamics, specifically relating to energy conservation and energy transfer. These ideas, along with the introduction of the notion that entropic activities are symptomatic of inefficient energy transfer or disorder, were used to propose a model of understanding mental ill health as resulting from the interaction of entropy, capacity and work (environmental demands). The model was applied to Attention Deficit Hyperactivity Disorder, and was shown to be compatible with current thinking about this condition, as well as emerging models of mental disorders as complex networks. A key implication of the proposed model is that it argues that all mental disorders require a systemic functional approach, with the advantage that it offers a number of routes into the assessment, formulation and treatment for mental health problems. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Explaining clinical behaviors using multiple theoretical models

    OpenAIRE

    Eccles, Martin P; Grimshaw, Jeremy M; MacLennan, Graeme; Bonetti, Debbie; Glidewell, Liz; Pitts, Nigel B; Steen, Nick; Thomas, Ruth; Walker, Anne; Johnston, Marie

    2012-01-01

    Background: In the field of implementation research, there is an increased interest in use of theory when designing implementation research studies involving behavior change. In 2003, we initiated a series of five studies to establish a scientific rationale for interventions to translate research findings into clinical practice by exploring the performance of a number of different, commonly used, overlapping behavioral theories and models. We reflect on the strengths and weaknesses of...

  20. Airport choice model in multiple airport regions

    Directory of Open Access Journals (Sweden)

    Claudia Muñoz

    2017-02-01

    Full Text Available Purpose: This study aims to analyze travel choices made by air transportation users in multi-airport regions, a crucial component when planning passenger redistribution policies. The purpose of this study is to find a utility function that identifies the variables influencing users' choice of airport on routes to the main cities in the Colombian territory. Design/methodology/approach: This research generates a Multinomial Logit Model (MNL), based on the theory of utility maximization and on data obtained from revealed and stated preference surveys applied to users who reside in the metropolitan area of the Aburrá Valley (Colombia). This zone is the only one in the Colombian territory that has two neighboring airports for domestic flights. The airports included in the modeling process were Enrique Olaya Herrera (EOH) Airport and José María Córdova (JMC) Airport. Several model structures were tested, and the MNL proved to be the most significant, revealing that the common variables affecting passenger airport choice include the airfare, the cost of travelling to the airport, and the time to get to the airport. Findings and Originality/value: The calibrated airport choice model is a valid and powerful tool for calculating the probability of each analyzed airport being chosen for domestic flights in the Colombian territory, bearing in mind the specific characteristics of each of the attributes contained in the utility function. In addition, these probabilities will be used to calculate future market shares of the two airports considered in this study, generating a support tool for airport and airline marketing policies.
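
    The MNL choice probability is the softmax of the alternatives' systematic utilities. A minimal sketch, with made-up coefficients rather than the calibrated Aburrá Valley estimates:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit: choice probability is the softmax of the utilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def utility(airfare_k, access_time_min, b_fare=-0.05, b_time=-0.03):
    """Illustrative linear utility (fare in thousands, access time in minutes).
    Coefficient values are assumptions for demonstration only."""
    return b_fare * airfare_k + b_time * access_time_min
```

    With these illustrative attributes, the alternative with the higher systematic utility (e.g. a city airport with short access time) receives the higher choice probability, and the probabilities sum to one by construction.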

  1. Multiple simultaneous event model for radiation carcinogenesis

    International Nuclear Information System (INIS)

    Baum, J.W.

    1976-01-01

    A mathematical model is proposed which postulates that cancer induction is a multi-event process, that these events occur naturally, usually one at a time in any cell, and that radiation frequently causes two of these events to occur simultaneously. Microdosimetric considerations dictate that for high LET radiations the simultaneous events are associated with a single particle or track. The model predicts: (a) linear dose-effect relations for early times after irradiation with small doses, (b) approximate power functions of dose (i.e. D^x) having exponent less than one for populations of mixed age examined at short times after irradiation with small doses, (c) saturation of effect at either long times after irradiation with small doses or for all times after irradiation with large doses, and (d) a net increase in incidence which is dependent on age of observation but independent of age at irradiation. Data of Vogel, for neutron-induced mammary tumors in rats, are used to illustrate the validity of the formulation. This model provides a quantitative framework to explain several unexpected results obtained by Vogel. It also provides a logical framework to explain the dose-effect relations observed in the Japanese survivors of the atomic bombs. (author)

  2. Incorporating ligament laxity in a finite element model for the upper cervical spine.

    Science.gov (United States)

    Lasswell, Timothy L; Cronin, Duane S; Medley, John B; Rasoulinejad, Parham

    2017-11-01

    Predicting physiological range of motion (ROM) using a finite element (FE) model of the upper cervical spine requires the incorporation of ligament laxity. The effect of ligament laxity can be observed only at the macro level of joint motion and is lost once ligaments have been dissected and preconditioned for experimental testing. As a result, although ligament laxity values are recognized to exist, specific values are not directly available in the literature for use in FE models. The purpose of the current study is to propose an optimization process that can be used to determine a set of ligament laxity values for upper cervical spine FE models. Furthermore, an FE model that includes ligament laxity is applied, and the resulting ROM values are compared with experimental data for physiological ROM, as well as experimental data for the increase in ROM when a Type II odontoid fracture is introduced. The upper cervical spine FE model was adapted from a 50th percentile male full-body model developed with the Global Human Body Models Consortium (GHBMC). FE modeling was performed in LS-DYNA, and LS-OPT (Livermore Software Technology Group) was used for ligament laxity optimization. Ordinate-based curve matching was used to minimize the mean squared error (MSE) between computed load-rotation curves and experimental load-rotation curves under flexion, extension, and axial rotation with pure moment loads from 0 to 3.5 Nm. Lateral bending was excluded from the optimization because the upper cervical spine was considered to be primarily responsible for flexion, extension, and axial rotation. Based on recommendations from the literature, four varying inputs representing laxity in select ligaments were optimized to minimize the MSE. Funding was provided by the Natural Sciences and Engineering Research Council of Canada, as well as GHBMC, to support the work of one graduate student.
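
    The optimization loop reduces to minimizing the ordinate-based MSE between simulated and experimental load-rotation curves. A grid-search sketch, where `simulate` stands in for an FE solver run (in the study, LS-OPT performs this search over four laxity inputs):

```python
def curve_mse(model_curve, exp_curve):
    """Ordinate-based mean squared error between two load-rotation curves
    sampled at the same applied-moment values."""
    return sum((m - e) ** 2 for m, e in zip(model_curve, exp_curve)) / len(exp_curve)

def optimize_laxity(simulate, exp_curve, candidate_laxities):
    """Pick the laxity value whose simulated curve best matches experiment.
    `simulate(laxity)` is a stand-in for an LS-DYNA run: any callable that
    returns rotations at the experimental moment samples."""
    return min(candidate_laxities, key=lambda lx: curve_mse(simulate(lx), exp_curve))
```

    LS-OPT uses gradient-free response-surface methods over continuous parameters rather than a discrete grid, but the objective is the same MSE.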

  3. Incorporating microbiota data into epidemiologic models: examples from vaginal microbiota research.

    Science.gov (United States)

    van de Wijgert, Janneke H; Jespers, Vicky

    2016-05-01

    Next generation sequencing and quantitative polymerase chain reaction technologies are now widely available, and research incorporating these methods is growing exponentially. In the vaginal microbiota (VMB) field, most research to date has been descriptive. The purpose of this article is to provide an overview of different ways in which next generation sequencing and quantitative polymerase chain reaction data can be used to answer clinical epidemiologic research questions using examples from VMB research. We reviewed relevant methodological literature and VMB articles (published between 2008 and 2015) that incorporated these methodologies. VMB data have been analyzed using ecologic methods, methods that compare the presence or relative abundance of individual taxa or community compositions between different groups of women or sampling time points, and methods that first reduce the complexity of the data into a few variables followed by the incorporation of these variables into traditional biostatistical models. To make future VMB research more clinically relevant (such as studying associations between VMB compositions and clinical outcomes and the effects of interventions on the VMB), it is important that these methods are integrated with rigorous epidemiologic methods (such as appropriate study designs, sampling strategies, and adjustment for confounding). Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  4. Multiple Imputation of Predictor Variables Using Generalized Additive Models

    NARCIS (Netherlands)

    de Jong, Roel; van Buuren, Stef; Spiess, Martin

    2016-01-01

    The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The...

  5. Procurement-distribution model for perishable items with quantity discounts incorporating freight policies under fuzzy environment

    Directory of Open Access Journals (Sweden)

    Makkar Sandhya

    2013-01-01

    Full Text Available A significant issue in supply chain management is how to integrate its different entities. Managing a supply chain is a difficult task because of complex integrations, especially when the products are perishable in nature. Little attention has been paid to ordering specific perishable products jointly in an uncertain environment with multiple sources and multiple destinations. In this article, we propose a supply chain coordination model based on a quantity and freight discount policy for perishable products under uncertain cost and demand information. A case is provided to validate the procedure.

  6. Entrepreneurial intention modeling using hierarchical multiple regression

    Directory of Open Access Journals (Sweden)

    Marina Jeger

    2014-12-01

    Full Text Available The goal of this study is to identify the contribution of effectuation dimensions to the predictive power of the entrepreneurial intention model over and above that which can be accounted for by other predictors selected and confirmed in previous studies. As is often the case in social and behavioral studies, some variables are likely to be highly correlated with each other. Therefore, the relative amount of variance in the criterion variable explained by each of the predictors depends on several factors such as the order of variable entry and sample specifics. The results show the modest predictive power of two dimensions of effectuation prior to the introduction of the theory of planned behavior elements. The article highlights the main advantages of applying hierarchical regression in social sciences as well as in the specific context of entrepreneurial intention formation, and addresses some of the potential pitfalls that this type of analysis entails.
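
    Hierarchical regression quantifies each block's contribution as the increment in R² when that block is entered after the others. A stdlib-only sketch (solver and helper names are ours, not the study's):

```python
def ols_r2(X, y):
    """R-squared of a least-squares fit of y on the columns of X (intercept added).
    Tiny normal-equations solver; fine for a handful of predictors."""
    n = len(y)
    A = [[1.0] + list(row) for row in X]          # design matrix with intercept
    k = len(A[0])
    M = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    v = [sum(A[i][p] * y[i] for i in range(n)) for p in range(k)]
    for p in range(k):                             # Gaussian elimination
        for q in range(p + 1, k):
            f = M[q][p] / M[p][p]
            for r in range(k):
                M[q][r] -= f * M[p][r]
            v[q] -= f * v[p]
    beta = [0.0] * k
    for p in reversed(range(k)):                   # back substitution
        beta[p] = (v[p] - sum(M[p][q] * beta[q] for q in range(p + 1, k))) / M[p][p]
    yhat = [sum(b * a for b, a in zip(beta, row)) for row in A]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def r2_increment(X_block1, X_block2, y):
    """Delta R-squared contributed by block 2 over and above block 1."""
    X_full = [r1 + r2 for r1, r2 in zip(X_block1, X_block2)]
    return ols_r2(X_full, y) - ols_r2(X_block1, y)
```

    As the abstract notes, the size of this increment depends on the order of entry, which is exactly why entry order must be theoretically justified.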

  7. Teaching Genetic Counseling Skills: Incorporating a Genetic Counseling Adaptation Continuum Model to Address Psychosocial Complexity.

    Science.gov (United States)

    Shugar, Andrea

    2017-04-01

    Genetic counselors are trained health care professionals who effectively integrate both psychosocial counseling and information-giving into their practice. Preparing genetic counseling students for clinical practice is a challenging task, particularly when helping them develop effective and active counseling skills. Resistance to incorporating these skills may stem from decreased confidence, fear of causing harm, or a lack of clarity about psychosocial goals. The author reflects on the personal challenges experienced in teaching genetic counseling students to work with psychological and social complexity, and proposes a Genetic Counseling Adaptation Continuum model and methodology to guide students in the use of advanced counseling skills.

  8. Multiple Time Series Ising Model for Financial Market Simulations

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2015-01-01

    In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction which couples the spins to those of other systems. Simulations from our model show that the time series exhibit the volatility clustering that is often observed in real financial markets. Furthermore, we also find non-zero cross correlations between the volatilities from our model. Thus our model can simulate stock markets where the volatilities of stocks are mutually correlated.
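
    The cross-system coupling can be caricatured with two mean-field Ising systems whose spins also feel the other system's magnetization; this is a toy version of the coupling idea, not Takaishi's exact update rule:

```python
import math
import random

def simulate_coupled_ising(n_spins=100, steps=200, j_self=1.0, j_cross=0.5,
                           beta=1.0, seed=1):
    """Two mean-field Ising systems; each spin feels its own system's
    magnetization plus a cross term from the other system. The per-step
    magnetization plays the role of a return-like time series."""
    rng = random.Random(seed)
    a = [rng.choice((-1, 1)) for _ in range(n_spins)]
    b = [rng.choice((-1, 1)) for _ in range(n_spins)]
    series_a, series_b = [], []
    for _ in range(steps):
        for spins, others in ((a, b), (b, a)):
            m_self = sum(spins) / n_spins
            m_other = sum(others) / n_spins
            field = j_self * m_self + j_cross * m_other
            for i in range(n_spins):
                p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
                spins[i] = 1 if rng.random() < p_up else -1
        series_a.append(sum(a) / n_spins)
        series_b.append(sum(b) / n_spins)
    return series_a, series_b
```

    A positive `j_cross` is what links the two systems' fluctuations; in the paper this coupling produces the non-zero cross correlations between volatilities.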

  9. Incorporation of detailed eye model into polygon-mesh versions of ICRP-110 reference phantoms.

    Science.gov (United States)

    Nguyen, Thang Tat; Yeom, Yeon Soo; Kim, Han Sung; Wang, Zhao Jun; Han, Min Cheol; Kim, Chan Hyeong; Lee, Jai Ki; Zankl, Maria; Petoussi-Henss, Nina; Bolch, Wesley E; Lee, Choonsik; Chung, Beom Sun

    2015-11-21

    The dose coefficients for the eye lens reported in ICRP Publication 116 (2010) were calculated using both a stylized model and the ICRP-110 reference phantoms, according to the type of radiation, energy, and irradiation geometry. To maintain consistency of lens dose assessment, in the present study we incorporated the ICRP-116 detailed eye model into the converted polygon-mesh (PM) version of the ICRP-110 reference phantoms. After the incorporation, the dose coefficients for the eye lens were calculated and compared with those of the ICRP-116 data. The results showed generally good agreement between the newly calculated lens dose coefficients and the values of ICRP Publication 116. Significant differences were found for some irradiation cases, due mainly to the use of different types of phantoms. Considering that the PM version of the ICRP-110 reference phantoms preserves the original topology of the ICRP-110 reference phantoms, it is believed that the PM version phantoms, along with the detailed eye model, provide more reliable and consistent dose coefficients for the eye lens.

  10. VNM: An R Package for Finding Multiple-Objective Optimal Designs for the 4-Parameter Logistic Model

    OpenAIRE

    Hyun, Seung Won; Wong, Weng Kee; Yang, Yarong

    2018-01-01

    A multiple-objective optimal design is useful for dose-response studies because it can incorporate several objectives at the design stage. Objectives can be of varying interests and a properly constructed multiple-objective optimal design can provide user-specified efficiencies, delivering higher efficiencies for the more important objectives. In this work, we introduce the VNM package written in R for finding 3-objective locally optimal designs for the 4-parameter logistic (4PL) model widely...

  11. Model for incorporating fuel swelling and clad shrinkage effects in diffusion theory calculations (LWBR Development Program)

    International Nuclear Information System (INIS)

    Schick, W.C. Jr.; Milani, S.; Duncombe, E.

    1980-03-01

    A model has been devised for incorporating into the thermal feedback procedure of the PDQ few-group diffusion theory computer program the explicit calculation of depletion and temperature dependent fuel-rod shrinkage and swelling at each mesh point. The model determines the effect on reactivity of the change in hydrogen concentration caused by the variation in coolant channel area as the rods contract and expand. The calculation of fuel temperature, and hence of Doppler-broadened cross sections, is improved by correcting the heat transfer coefficient of the fuel-clad gap for the effects of clad creep, fuel densification and swelling, and release of fission-product gases into the gap. An approximate calculation of clad stress is also included in the model

  12. A Fibrocontractive Mechanochemical Model of Dermal Wound Closure Incorporating Realistic Growth Factor Kinetics

    KAUST Repository

    Murphy, Kelly E.

    2012-01-13

    Fibroblasts and their activated phenotype, myofibroblasts, are the primary cell types involved in the contraction associated with dermal wound healing. Recent experimental evidence indicates that the transformation from fibroblasts to myofibroblasts involves two distinct processes: The cells are stimulated to change phenotype by the combined actions of transforming growth factor β (TGFβ) and mechanical tension. This observation indicates a need for a detailed exploration of the effect of the strong interactions between the mechanical changes and growth factors in dermal wound healing. We review the experimental findings in detail and develop a model of dermal wound healing that incorporates these phenomena. Our model includes the interactions between TGFβ and collagenase, providing a more biologically realistic form for the growth factor kinetics than those included in previous mechanochemical descriptions. A comparison is made between the model predictions and experimental data on human dermal wound healing and all the essential features are well matched. © 2012 Society for Mathematical Biology.

  13. Developing Baltic cod recruitment models II : Incorporation of environmental variability and species interaction

    DEFF Research Database (Denmark)

    Köster, Fritz; Hinrichsen, H.H.; St. John, Michael

    2001-01-01

    We investigate whether a process-oriented approach based on the results of field, laboratory, and modelling studies can be used to develop a stock-environment-recruitment model for Central Baltic cod (Gadus morhua). Based on exploratory statistical analysis, significant variables influencing recruitment were identified, among them processes affecting survival of eggs, predation by clupeids on eggs, larval transport, and cannibalism. Results showed that recruitment in the most important spawning area, the Bornholm Basin, during 1976-1995 was related to egg production; however, other factors affecting survival of the eggs (oxygen conditions, predation) were also significant and, when incorporated, explained 69% of the variation in 0-group recruitment. In other spawning areas, variable hydrographic conditions did not allow for regular successful egg development. Hence, relatively simple models proved sufficient to predict recruitment of 0-group...

  14. Incorporating excitation-induced dephasing into the Maxwell-Bloch numerical modeling of photon echoes

    International Nuclear Information System (INIS)

    Burr, G.W.; Harris, Todd L.; Babbitt, Wm. Randall; Jefferson, C. Michael

    2004-01-01

    We describe the incorporation of excitation-induced dephasing (EID) into the Maxwell-Bloch numerical simulation of photon echoes. At each time step of the usual numerical integration, stochastic frequency jumps of ions, caused by excitation of neighboring ions, are modeled by convolving each Bloch vector with the Bloch vectors at nearby frequency detunings. The width of this convolution kernel follows the instantaneous change in overall population, integrated over the simulated bandwidth. This approach is validated by extensive comparison against published and original experimental results. The enhanced numerical model is then used to investigate the accuracy of experiments designed to extrapolate the intrinsic dephasing time T2 from data taken in the presence of EID. Such a modeling capability offers improved understanding of experimental results, and should allow quantitative analysis of engineering tradeoffs in realistic optical coherent transient applications.

  15. Affordances perspective and grammaticalization: Incorporation of language, environment and users in the model of semantic paths

    Directory of Open Access Journals (Sweden)

    Alexander Andrason

    2015-12-01

    Full Text Available The present paper demonstrates that insights from the affordances perspective can contribute to developing a more comprehensive model of grammaticalization. The authors argue that the grammaticalization process is afforded differently depending on the values of three contributing parameters: the factor (schematized as a qualitative-quantitative map or a wave of a gram, environment (understood as the structure of the stream along which the gram travels, and actor (narrowed to certain cognitive-epistemological capacities of the users, in particular to the fact of being a native speaker. By relating grammaticalization to these three parameters and by connecting it to the theory of optimization, the proposed model offers a better approximation to realistic cases of grammaticalization: The actor and environment are overtly incorporated into the model and divergences from canonical grammaticalization paths are both tolerated and explicable.

  16. A Fibrocontractive Mechanochemical Model of Dermal Wound Closure Incorporating Realistic Growth Factor Kinetics

    KAUST Repository

    Murphy, Kelly E.; Hall, Cameron L.; Maini, Philip K.; McCue, Scott W.; McElwain, D. L. Sean

    2012-01-01

    Fibroblasts and their activated phenotype, myofibroblasts, are the primary cell types involved in the contraction associated with dermal wound healing. Recent experimental evidence indicates that the transformation from fibroblasts to myofibroblasts involves two distinct processes: The cells are stimulated to change phenotype by the combined actions of transforming growth factor β (TGFβ) and mechanical tension. This observation indicates a need for a detailed exploration of the effect of the strong interactions between the mechanical changes and growth factors in dermal wound healing. We review the experimental findings in detail and develop a model of dermal wound healing that incorporates these phenomena. Our model includes the interactions between TGFβ and collagenase, providing a more biologically realistic form for the growth factor kinetics than those included in previous mechanochemical descriptions. A comparison is made between the model predictions and experimental data on human dermal wound healing and all the essential features are well matched. © 2012 Society for Mathematical Biology.

  17. Incorporating Yearly Derived Winter Wheat Maps Into Winter Wheat Yield Forecasting Model

    Science.gov (United States)

    Skakun, S.; Franch, B.; Roger, J.-C.; Vermote, E.; Becker-Reshef, I.; Justice, C.; Santamaría-Artigas, A.

    2016-01-01

    Wheat is one of the most important cereal crops in the world, and timely, accurate forecasts of wheat yield and production at global scale are vital for implementing food security policy. Becker-Reshef et al. (2010) developed a generalized empirical model for forecasting winter wheat production using remote sensing data and official statistics; this model was implemented using static wheat maps. In this paper, we analyze the impact of incorporating yearly wheat masks into the forecasting model. We propose a new approach for producing in-season winter wheat maps exploiting satellite data and official statistics on crop area only. Validation on independent data showed that the proposed approach achieved omission errors of 6% to 23% and commission errors of 10% to 16% when mapping winter wheat 2-3 months before harvest. In general, we found a limited impact of using yearly winter wheat masks over a static mask for the study regions.
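
    The omission and commission errors reported for the wheat maps come straight from the confusion counts of a single-class map; a minimal sketch:

```python
def map_accuracy(true_pos, false_neg, false_pos):
    """Omission and commission error rates for a single-class crop map.

    omission   = missed wheat pixels / all reference wheat pixels
    commission = wrongly mapped pixels / all pixels mapped as wheat
    """
    omission = false_neg / (true_pos + false_neg)
    commission = false_pos / (true_pos + false_pos)
    return omission, commission
```

    For example, 80 correctly mapped wheat pixels, 20 missed, and 10 false detections give 20% omission and about 11% commission error.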

  18. Some considerations concerning the challenge of incorporating social variables into epidemiological models of infectious disease transmission.

    Science.gov (United States)

    Barnett, Tony; Fournié, Guillaume; Gupta, Sunetra; Seeley, Janet

    2015-01-01

    Incorporation of 'social' variables into epidemiological models remains a challenge. Too much detail and models cease to be useful; too little and the very notion of infection - a highly social process in human populations - may be considered with little reference to the social. The French sociologist Émile Durkheim proposed that the scientific study of society required identification and study of 'social currents'. Such 'currents' are what we might today describe as 'emergent properties', specifiable variables appertaining to individuals and groups, which represent the perspectives of social actors as they experience the environment in which they live their lives. Here we review the ways in which one particular emergent property, hope, relevant to a range of epidemiological situations, might be used in epidemiological modelling of infectious diseases in human populations. We also indicate how such an approach might be extended to include a range of other potential emergent properties to represent complex social and economic processes bearing on infectious disease transmission.

  19. Are adverse effects incorporated in economic models? An initial review of current practice.

    Science.gov (United States)

    Craig, D; McDaid, C; Fonseca, T; Stock, C; Duffy, S; Woolacott, N

    2009-12-01

    To identify methodological research on the incorporation of adverse effects in economic models and to review current practice. Major electronic databases (Cochrane Methodology Register, Health Economic Evaluations Database, NHS Economic Evaluation Database, EconLit, EMBASE, Health Management Information Consortium, IDEAS, MEDLINE and Science Citation Index) were searched from inception to September 2007. Health technology assessment (HTA) reports commissioned by the National Institute for Health Research (NIHR) HTA programme and published between 2004 and 2007 were also reviewed. The reviews of methodological research on the inclusion of adverse effects in decision models and of current practice were carried out according to standard methods. Data were summarised in a narrative synthesis. Of the 719 potentially relevant references in the methodological research review, five met the inclusion criteria; however, they contained little information of direct relevance to the incorporation of adverse effects in models. Of the 194 HTA monographs published from 2004 to 2007, 80 were reviewed, covering a range of research and therapeutic areas. In total, 85% of the reports included adverse effects in the clinical effectiveness review and 54% of the decision models included adverse effects in the model; 49% included adverse effects in the clinical review and model. The link between adverse effects in the clinical review and the model was generally weak; only 3/80 reports made this link explicit. Of the models including adverse effects, 67% used a clinical adverse effects parameter, 79% used a cost of adverse effects parameter, 86% used one of these and 60% used both. Most models (83%) used utilities, but only two (2.5%) used solely utilities to incorporate adverse effects and were explicit that the utility captured relevant adverse effects; 53% of those models that included utilities derived them from patients on treatment and could therefore be interpreted as capturing adverse effects.

  20. A realistic closed-form radiobiological model of clinical tumor-control data incorporating intertumor heterogeneity

    International Nuclear Information System (INIS)

    Roberts, Stephen A.; Hendry, Jolyon H.

    1998-01-01

    Purpose: To investigate the role of intertumor heterogeneity in clinical tumor control datasets and the relationship to in vitro measurements of tumor biopsy samples. Specifically, to develop a modified linear-quadratic (LQ) model incorporating such heterogeneity that is practical to fit to clinical tumor-control datasets. Methods and Materials: We developed a modified version of the linear-quadratic (LQ) model for tumor control, incorporating a (lagged) time factor to allow for tumor cell repopulation. We explicitly took into account the interpatient heterogeneity in clonogen number, radiosensitivity, and repopulation rate. Using this model, we could generate realistic TCP curves using parameter estimates consistent with those reported from in vitro studies, subject to the inclusion of a radiosensitivity (or dose)-modifying factor. We then demonstrated that the model was dominated by the heterogeneity in α (tumor radiosensitivity) and derived an approximate simplified model incorporating this heterogeneity. This simplified model is expressible in a compact closed form, which it is practical to fit to clinical datasets. Using two previously analysed datasets, we fit the model using direct maximum-likelihood techniques and obtained parameter estimates that were, again, consistent with the experimental data on the radiosensitivity of primary human tumor cells. This heterogeneity model includes the same number of adjustable parameters as the standard LQ model. Results: The modified model provides parameter estimates that can easily be reconciled with the in vitro measurements. The simplified (approximate) form of the heterogeneity model is a compact, closed-form probit function that can readily be fitted to clinical series by conventional maximum-likelihood methodology. This heterogeneity model provides a slightly better fit to the datasets than the conventional LQ model, with the same number of fitted parameters.
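
    The effect of intertumor heterogeneity in α can be reproduced numerically by averaging the standard Poisson-LQ tumor control probability over a normal spread of radiosensitivity. The parameter values below are illustrative, not the paper's fitted estimates, and repopulation is omitted for brevity:

```python
import numpy as np

def tcp_poisson(alpha, dose_per_fraction=2.0, n_fractions=30, clonogens=1e7):
    """Poisson-LQ tumor control probability, TCP = exp(-N * SF^n),
    assuming alpha/beta = 10 Gy (illustrative; no repopulation term)."""
    beta = alpha / 10.0
    sf = np.exp(-alpha * dose_per_fraction - beta * dose_per_fraction**2)
    return np.exp(-clonogens * sf**n_fractions)

# Interpatient heterogeneity: normal spread in alpha, truncated at zero
alpha_mean, alpha_sd = 0.3, 0.07   # Gy^-1, illustrative values
alphas = np.random.default_rng(0).normal(alpha_mean, alpha_sd, 20000)
alphas = alphas[alphas > 0]

tcp_single = tcp_poisson(alpha_mean)          # one "average" patient
tcp_population = tcp_poisson(alphas).mean()   # heterogeneous population
```

    Averaging over the spread lowers and shallows the population dose-response curve relative to the single-patient curve, which is the behavior the closed-form heterogeneity model captures analytically.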

  1. Multiple resonant absorber with prism-incorporated graphene and one-dimensional photonic crystals in the visible and near-infrared spectral range

    Science.gov (United States)

    Zou, X. J.; Zheng, G. G.; Chen, Y. Y.; Xu, L. H.; Lai, M.

    2018-04-01

    A multi-band absorber constructed from a prism-incorporated one-dimensional photonic crystal (1D-PhC) containing graphene defects is achieved theoretically in the visible and near-infrared (vis-NIR) spectral range. By means of the transfer matrix method (TMM), the effect of structural parameters on the optical response of the structure has been investigated. It is possible to achieve multi-peak and complete optical absorption. The simulations reveal that the light intensity is enhanced at the graphene plane, and the resonant wavelength and the absorption intensity can also be tuned by tilting the incidence angle of the impinging light. In particular, multiple graphene sheets are embedded in the arrays, without any need for a manufacturing process to cut them into periodic patterns. The proposed concept can be extended to other two-dimensional (2D) materials and engineered for promising applications, including selective or multiplex filters, multiple channel sensors, and photodetectors.
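
    The transfer-matrix calculation behind such a structure can be sketched at normal incidence as follows. This is a generic characteristic-matrix implementation, not the paper's stack: lossy layers use n = n' - iκ under the phase convention chosen here, and the single graphene-like film and its index are illustrative assumptions.

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of one homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def absorptance(layers, lam, n_in=1.0, n_out=1.0):
    """A = 1 - R - T for a stack of (index, thickness) layers.
    Lossy layers take n = n' - 1j*kappa with this convention."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    B = M[0, 0] + M[0, 1] * n_out
    C = M[1, 0] + M[1, 1] * n_out
    r = (n_in * B - C) / (n_in * B + C)
    T = 4 * n_in * n_out / abs(n_in * B + C) ** 2
    return 1.0 - abs(r) ** 2 - T

# Single free-standing graphene-like film (illustrative index) at 550 nm:
A = absorptance([(2.6 - 1.3j, 0.34e-9)], 550e-9)
```

    Sweeping the wavelength and appending the prism coupling layer and the periodic 1D-PhC bilayers to the stack produces the kind of multi-peak absorption spectra the paper analyses.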

  2. Incorporating remote sensing-based ET estimates into the Community Land Model version 4.5

    Directory of Open Access Journals (Sweden)

    D. Wang

    2017-07-01

    Land surface models bear substantial biases in simulating surface water and energy budgets despite the continuous development and improvement of model parameterizations. To reduce model biases, Parr et al. (2015) proposed a method incorporating satellite-based evapotranspiration (ET) products into land surface models. Here we apply this bias correction method to the Community Land Model version 4.5 (CLM4.5) and test its performance over the conterminous US (CONUS). We first calibrate a relationship between the observational ET from the Global Land Evaporation Amsterdam Model (GLEAM) product and the model ET from CLM4.5, and assume that this relationship holds beyond the calibration period. During the validation or application period, a simulation using the default CLM4.5 (CLM) is conducted first, and its output is combined with the calibrated observational-vs.-model ET relationship to derive a corrected ET; an experiment (CLMET) is then conducted in which the model-generated ET is overwritten with the corrected ET. Using the observations of ET, runoff, and soil moisture content as benchmarks, we demonstrate that CLMET greatly improves the hydrological simulations over most of the CONUS, and the improvement is stronger in the eastern CONUS than the western CONUS and is strongest over the Southeast CONUS. For any specific region, the degree of the improvement depends on whether the relationship between observational and model ET remains time-invariant (a fundamental hypothesis of the Parr et al. (2015) method) and whether water is the limiting factor in places where ET is underestimated. While the bias correction method improves hydrological estimates without improving the physical parameterization of land surface models, results from this study do provide guidance for physically based model development effort.
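
    The calibration step can be illustrated with synthetic numbers. The sketch below fits a linear observational-vs-model ET relationship over a calibration period and then applies it in an application period, mirroring the spirit of the Parr et al. (2015) procedure; all data here are synthetic stand-ins, not GLEAM or CLM output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for monthly ET (mm/day) over a 10-year calibration period:
et_model = rng.uniform(1.0, 5.0, 120)                     # default-model ET
et_obs = 0.8 * et_model + 0.3 + rng.normal(0, 0.1, 120)   # "observed" ET

# Calibrate a linear observational-vs-model relationship (least squares)
slope, intercept = np.polyfit(et_model, et_obs, 1)

# Application period: correct new model ET with the calibrated relationship,
# assuming it holds beyond the calibration window (the method's key hypothesis)
et_model_new = rng.uniform(1.0, 5.0, 12)
et_corrected = slope * et_model_new + intercept
```

    In the actual scheme the corrected ET then overwrites the model-generated ET inside the land surface model, so the water budget responds to the correction.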

  3. Correlations in multiple production on nuclei and Glauber model of multiple scattering

    International Nuclear Information System (INIS)

    Zoller, V.R.; Nikolaev, N.N.

    1982-01-01

    A critical analysis is given of the possibility of describing correlation phenomena in multiple production on nuclei within the framework of the Glauber multiple-scattering model, generalized to particle-production processes by Capella, Krziwinski and Shabelsky. The main conclusion is that the suggested generalization of the Glauber model yields dependences on Ng (Np), where Ng is the number of ''grey'' tracks and Np the number of protons ejected from the nucleus, and ultimately on β (the number of intranuclear interactions) that contradict experiment. Whatever relation between β and Ng (Np) is chosen in the model, the rapidity correlator R_η is overestimated in the central region and underestimated in the region of nucleus fragmentation. In mean multiplicities these two contradictions with experiment are masked by an accidental compensation, so agreement with experiment for N_S as a function of Ng cannot be taken as an argument in favour of the model. It is concluded that the eikonal model does not permit a quantitative description of correlation phenomena in multiple production on nuclei.

  4. A constitutive mechanical model for gas hydrate bearing sediments incorporating inelastic mechanisms

    KAUST Repository

    Sánchez, Marcelo

    2016-11-30

    Gas hydrate bearing sediments (HBS) are natural soils formed in permafrost and sub-marine settings where the temperature and pressure conditions are such that gas hydrates are stable. If these conditions shift from the hydrate stability zone, hydrates dissociate and move from the solid to the gas phase. Hydrate dissociation is accompanied by significant changes in sediment structure and strongly affects its mechanical behavior (e.g., sediment stiffness, strength and dilatancy). The mechanical behavior of HBS is very complex and its modeling poses great challenges. This paper presents a new geomechanical model for hydrate bearing sediments. The model incorporates the concept of partition stress, plus a number of inelastic mechanisms proposed to capture the complex behavior of this type of soil. This constitutive model is especially well suited to simulate the behavior of HBS upon dissociation. The model was applied and validated against experimental data from triaxial and oedometric tests conducted on manufactured and natural specimens involving different hydrate saturation, hydrate morphology, and confinement conditions. Particular attention was paid to modeling the HBS behavior during hydrate dissociation under loading. The model performance was highly satisfactory in all the cases studied. It managed to properly capture the main features of HBS mechanical behavior and it also assisted in interpreting the behavior of this type of sediment under different loading and hydrate conditions.

  5. Incorporating vehicle mix in stimulus-response car-following models

    Directory of Open Access Journals (Sweden)

    Saidi Siuhi

    2016-06-01

    The objective of this paper is to incorporate vehicle mix in stimulus-response car-following models. Separate models were estimated for acceleration and deceleration responses to account for vehicle mix via both movement state and vehicle type. For each model, three sub-models were developed for different pairs of following vehicles including "automobile following automobile," "automobile following truck," and "truck following automobile." The estimated model parameters were then validated against other data from a similar region and roadway. The results indicated that drivers' behaviors were significantly different among the different pairs of following vehicles. Also the magnitude of the estimated parameters depends on the type of vehicle being driven and/or followed. These results demonstrated the need to use separate models depending on movement state and vehicle type. The differences in parameter estimates confirmed in this paper highlight traffic safety and operational issues of mixed traffic operation on a single lane. The findings of this paper can assist transportation professionals in improving the traffic simulation models used to evaluate the impact of different strategies on the safety and performance of highways. In addition, driver response time lag estimates can be used in roadway design to calculate important design parameters such as stopping sight distance on horizontal and vertical curves for both automobiles and trucks.
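
    A common stimulus-response (Gazis-Herman-Rothery-type) form with separate parameter sets per following pair can be sketched as below. The functional form is the classic one; the parameter values and pairs are invented for illustration and are not the paper's estimates.

```python
# GHR-type stimulus-response form assumed here:
#   a_follow(t + T) = alpha * v_follow^m * (v_lead - v_follow) / spacing^l
# One parameter set per (follower, leader) pair; values are illustrative only.
PARAMS = {
    ("auto", "auto"):  dict(T=1.0, alpha=0.6, m=0.2, l=1.5),
    ("auto", "truck"): dict(T=1.2, alpha=0.5, m=0.2, l=1.6),  # auto following truck
    ("truck", "auto"): dict(T=1.5, alpha=0.4, m=0.1, l=1.7),  # truck following auto
}

def response_accel(pair, v_follow, v_lead, spacing):
    """Acceleration response (m/s^2) applied after the pair's time lag T."""
    p = PARAMS[pair]
    return p["alpha"] * v_follow ** p["m"] * (v_lead - v_follow) / spacing ** p["l"]

# Same stimulus, different follower type, different response:
a_auto = response_accel(("auto", "auto"), v_follow=20.0, v_lead=18.0, spacing=30.0)
a_truck = response_accel(("truck", "auto"), v_follow=20.0, v_lead=18.0, spacing=30.0)
```

    With these illustrative values the truck decelerates more gently than the automobile for the identical stimulus, which is the kind of pair-dependent behavior the separate sub-models are meant to capture.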

  6. Fuzzy Logic-Based Model That Incorporates Personality Traits for Heterogeneous Pedestrians

    Directory of Open Access Journals (Sweden)

    Zhuxin Xue

    2017-10-01

    Most models designed to simulate pedestrian dynamical behavior are based on the assumption that human decision-making can be described using precise values. This study proposes a new pedestrian model that incorporates fuzzy logic theory into a multi-agent system to address cognitive behavior that introduces uncertainty and imprecision during decision-making. We present a concept of decision preferences to represent the intrinsic control factors of decision-making. To realize the different decision preferences of heterogeneous pedestrians, the Five-Factor (OCEAN) personality model is introduced to model the psychological characteristics of individuals. Then, a fuzzy logic-based approach is adopted for mapping the relationships between the personality traits and the decision preferences. Finally, we have developed an application using our model to simulate pedestrian dynamical behavior in several normal or non-panic scenarios, including a single-exit room, a hallway with obstacles, and a narrowing passage. The effectiveness of the proposed model is validated with a user study. The results show that the proposed model can generate more reasonable and heterogeneous behavior in the simulation and indicate that individual personality has a noticeable effect on pedestrian dynamical behavior.
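
    The trait-to-preference mapping can be sketched with a toy two-rule fuzzy system. The membership functions, the rules, and the "risk-taking" preference itself are all invented for illustration; the paper's rule base and preference set are richer.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def risk_preference(extraversion, neuroticism):
    """Map two OCEAN traits (0..1) to a 'risk-taking' decision preference.
    Illustrative two-rule Mamdani-style system, defuzzified by a weighted mean."""
    # Rule 1: IF extraversion high AND neuroticism low THEN preference high (0.9)
    w1 = min(tri(extraversion, 0.4, 1.0, 1.6), tri(neuroticism, -0.6, 0.0, 0.6))
    # Rule 2: IF extraversion low OR neuroticism high THEN preference low (0.1)
    w2 = max(tri(extraversion, -0.6, 0.0, 0.6), tri(neuroticism, 0.4, 1.0, 1.6))
    if w1 + w2 == 0:
        return 0.5
    return (0.9 * w1 + 0.1 * w2) / (w1 + w2)

bold = risk_preference(extraversion=0.9, neuroticism=0.1)
timid = risk_preference(extraversion=0.2, neuroticism=0.8)
```

    Each simulated pedestrian would evaluate such rules with its own personality vector, yielding heterogeneous preferences from a single shared rule base.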

  7. Discontinuous Galerkin Time-Domain Modeling of Graphene Nano-Ribbon Incorporating the Spatial Dispersion Effects

    KAUST Repository

    Li, Ping; Jiang, Li Jun; Bagci, Hakan

    2018-01-01

    It is well known that graphene demonstrates spatial dispersion properties, i.e., its conductivity is nonlocal and a function of spectral wave number (momentum operator) q. In this paper, to account for effects of spatial dispersion on transmission of high speed signals along graphene nano-ribbon (GNR) interconnects, a discontinuous Galerkin time-domain (DGTD) algorithm is proposed. The atomically-thick GNR is modeled using a nonlocal transparent surface impedance boundary condition (SIBC) incorporated into the DGTD scheme. Since the conductivity is a complicated function of q (and one cannot find an analytical Fourier transform pair between q and spatial differential operators), an exact time domain SIBC model cannot be derived. To overcome this problem, the conductivity is approximated by its Taylor series in the spectral domain under a low-q assumption. This approach permits expressing the time domain SIBC in the form of a second-order partial differential equation (PDE) in current density and electric field intensity. To permit easy incorporation of this PDE with the DGTD algorithm, three auxiliary variables, which degenerate the second-order (temporal and spatial) differential operators to first-order ones, are introduced. Regarding temporal dispersion effects, the auxiliary differential equation (ADE) method is utilized to eliminate the expensive temporal convolutions. To demonstrate the applicability of the proposed scheme, numerical results, which involve characterization of spatial dispersion effects on the transfer impedance matrix of GNR interconnects, are presented.
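
    The low-q approximation described above amounts to the following substitution (a schematic sketch; σ₀ and σ₂ denote the leading Taylor coefficients and are not the paper's exact expressions):

```latex
% Low-q Taylor expansion of the nonlocal conductivity (schematic):
\sigma(q,\omega) \approx \sigma_0(\omega) + \sigma_2(\omega)\,q^2,
\qquad q \leftrightarrow -\mathrm{i}\,\partial_x
\;\Rightarrow\; q^2 \leftrightarrow -\partial_x^2,
% so the spectral Ohm's law J = \sigma(q,\omega)\,E becomes, in the spatial domain,
J(x,\omega) = \sigma_0(\omega)\,E(x,\omega)
            - \sigma_2(\omega)\,\frac{\partial^2 E(x,\omega)}{\partial x^2}.
```

    Transforming ω back to the time domain then yields the second-order PDE in current density and field mentioned in the abstract, which the three auxiliary variables reduce to first order for the DGTD update.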

  9. Incorporating networks in a probabilistic graphical model to find drivers for complex human diseases.

    Science.gov (United States)

    Mezlini, Aziz M; Goldenberg, Anna

    2017-10-01

    Discovering genetic mechanisms driving complex diseases is a hard problem. Existing methods often lack power to identify the set of responsible genes. Protein-protein interaction networks have been shown to boost power when detecting gene-disease associations. We introduce a Bayesian framework, Conflux, to find disease associated genes from exome sequencing data using networks as a prior. There are two main advantages to using networks within a probabilistic graphical model. First, networks are noisy and incomplete, a substantial impediment to gene discovery. Incorporating networks into the structure of a probabilistic model for gene inference has less impact on the solution than relying on the noisy network structure directly. Second, using a Bayesian framework we can keep track of the uncertainty of each gene being associated with the phenotype rather than returning a fixed list of genes. We first show that using networks clearly improves gene detection compared to individual gene testing. We then show consistently improved performance of Conflux compared to the state-of-the-art diffusion network-based method Hotnet2 and a variety of other network and variant aggregation methods, using randomly generated and literature-reported gene sets. We test Hotnet2 and Conflux on several network configurations to reveal biases and patterns of false positives and false negatives in each case. Our experiments show that our novel Bayesian framework Conflux incorporates many of the advantages of the current state-of-the-art methods, while offering more flexibility and improved power in many gene-disease association scenarios.

  10. Advanced Methods for Incorporating Solar Energy Technologies into Electric Sector Capacity-Expansion Models: Literature Review and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, P.; Eurek, K.; Margolis, R.

    2014-07-01

    Because solar power is a rapidly growing component of the electricity system, robust representations of solar technologies should be included in capacity-expansion models. This is a challenge because modeling the electricity system--and, in particular, modeling solar integration within that system--is a complex endeavor. This report highlights the major challenges of incorporating solar technologies into capacity-expansion models and shows examples of how specific models address those challenges. These challenges include modeling non-dispatchable technologies, determining which solar technologies to model, choosing a spatial resolution, incorporating a solar resource assessment, and accounting for solar generation variability and uncertainty.

  11. Improved double-multiple streamtube model for the Darrieus-type vertical axis wind turbine

    Science.gov (United States)

    Berg, D. E.

    Double streamtube codes model the curved blade (Darrieus-type) vertical axis wind turbine (VAWT) as a double actuator-disk arrangement (one disk for each half of the rotor) and use conservation of momentum principles to determine the forces acting on the turbine blades and the turbine performance. Sandia National Laboratories developed a double multiple streamtube model for the VAWT which incorporates the effects of the incident wind boundary layer, nonuniform velocity between the upwind and downwind sections of the rotor, dynamic stall effects and local blade Reynolds number variations. The theory underlying this VAWT model is described, as well as the code capabilities. Code results are compared with experimental data from two VAWTs and with the results from another double multiple streamtube and a vortex filament code. The effects of neglecting dynamic stall and horizontal wind velocity distribution are also illustrated.
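
    The momentum balance at the heart of any streamtube code reduces to a fixed-point problem for the induction factor in each streamtube. A minimal sketch, with a toy blade-loading function and none of the boundary-layer, dynamic-stall, or Reynolds-number corrections the Sandia model adds:

```python
import math

def induction_factor(ct_blade_func, tol=1e-8, max_iter=200):
    """Fixed-point iteration for the streamtube induction factor a.
    Momentum theory gives C_T = 4 a (1 - a), i.e. a = (1 - sqrt(1 - C_T)) / 2;
    the blade-element side supplies C_T as a function of a. This is the inner
    loop of a streamtube code, in illustrative form."""
    a = 0.0
    for _ in range(max_iter):
        ct = ct_blade_func(a)
        a_new = 0.5 * (1.0 - math.sqrt(max(0.0, 1.0 - ct)))
        if abs(a_new - a) < tol:
            return a_new
        a = a_new
    return a

# Toy blade loading: thrust coefficient decreasing with induction (hypothetical form)
a = induction_factor(lambda a: 0.8 * (1.0 - a))
```

    A double-multiple streamtube code runs this balance twice per streamtube, once for the upwind and once for the downwind half of the rotor, feeding the decelerated upwind wake velocity into the downwind calculation.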

  12. A comparison of approaches for simultaneous inference of fixed effects for multiple outcomes using linear mixed models

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2018-01-01

    Longitudinal studies with multiple outcomes often pose challenges for the statistical analysis. A joint model including all outcomes has the advantage of incorporating the simultaneous behavior but is often difficult to fit due to computational challenges. We consider 2 alternative approaches to such a joint model: a marginal models approach and pairwise fitting. In our comparison, pairwise fitting shows a larger loss in efficiency than the marginal models approach. Using an alternative to the joint modelling strategy will lead to some but not necessarily a large loss of efficiency for small sample sizes.

  13. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data as coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.
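
    The "common plus unique structure" idea can be sketched as a penalized joint fit in which each group's coefficient vector is a shared vector plus a ridge-penalized group deviation. This is a simplified stand-in for the paper's penalized methods (which additionally estimate the conditional inverse covariance of the responses); the data and penalty below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two groups sharing a common coefficient vector plus group-specific deviations
n, p = 200, 5
b_common = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
b_dev = {0: np.array([0.0, 0.0, 0.0, 2.0, 0.0]),
         1: np.array([0.0, 0.0, 0.0, 0.0, -2.0])}

X = {g: rng.normal(size=(n, p)) for g in (0, 1)}
y = {g: X[g] @ (b_common + b_dev[g]) + rng.normal(0, 0.1, n) for g in (0, 1)}

# Jointly estimate a shared vector b and deviations d_g, ridge-penalizing the
# deviations so that structure common to both groups is absorbed into b:
#   minimize sum_g ||y_g - X_g (b + d_g)||^2 + lam * sum_g ||d_g||^2
lam, lr = 50.0, 1e-3
b = np.zeros(p)
d = {0: np.zeros(p), 1: np.zeros(p)}
for _ in range(5000):
    grad_b = np.zeros(p)
    for g in (0, 1):
        resid = X[g] @ (b + d[g]) - y[g]
        grad_g = X[g].T @ resid
        d[g] = d[g] - lr * (grad_g + lam * d[g])
        grad_b += grad_g
    b = b - lr * grad_b
```

    At the optimum b recovers the structure shared across groups (here, roughly the average of the group coefficients), while the d_g pick up the group-unique components, shrunk toward zero by the penalty.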

  14. Sustainability Instruction in High Doses: Results From Incorporation of Multiple InTeGrate Modules Into an Environmental Science Class

    Science.gov (United States)

    Rademacher, L. K.

    2017-12-01

    The Interdisciplinary Teaching about Earth for a Sustainable Future (InTeGrate) community has developed extensive courses and modules designed for broad adoption into geoscience classrooms in diverse environments. I participated in a three-semester research project designed to test the efficacy of incorporating "high doses" (minimum 3 modules or 18 class periods) of InTeGrate materials into a course, in my case, an introductory environmental science class. InTeGrate materials were developed by groups of instructors from a range of institutions across the US. These materials include an emphasis on systems thinking, interdisciplinary approaches, and sustainability, and those themes are woven throughout the modules. The three semesters included a control semester in which no InTeGrate materials were used, a pilot semester in which InTeGrate materials were tested, and a treatment semester in which tested materials were modified as needed and fully implemented into the course. Data were collected each semester on student attitudes using the InTeGrate Attitudinal Instrument (pre and post), a subset of Geoscience Literacy Exam questions (pre and post), and a series of assessments and essay exam questions (post only). Although results suggest that learning gains were mixed, changes in attitudes pre- and post-instruction were substantial. Changes in attitudes regarding the importance of sustainable employers, the frequency of self-reported individual sustainable actions, and motivation level for creating a sustainable society were observed in the control and treatment semesters, with the treatment semester showing the greatest gains. Importantly, one of the biggest differences between the control and treatment semesters is the reported impact that the course had on influencing students' sustainable behaviors. The treatment semester course impacted students' sustainable behaviors far more than the control semester.

  15. Energy system investment model incorporating heat pumps with thermal storage in buildings and buffer tanks

    DEFF Research Database (Denmark)

    Hedegaard, Karsten; Balyk, Olexandr

    2013-01-01

    Individual compression heat pumps constitute a potentially valuable resource in supporting wind power integration due to their economic competitiveness and possibilities for flexible operation. When analysing the system benefits of flexible heat pump operation, effects on investments should be taken into account. In this study, we present a model that facilitates analysing individual heat pumps and complementing heat storages in integration with the energy system, while optimising both investments and operation. The model incorporates thermal building dynamics and covers various heat storage options. The results illustrate the system benefits of operating heat pumps flexibly. This includes prioritising heat pump operation for hours with low marginal electricity production costs, and peak load shaving resulting in a reduced need for peak and reserve capacity investments.
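
    The flavor of such a model can be conveyed by a toy dispatch problem: a heat pump with a buffer tank shifts production toward cheap-electricity hours subject to a storage balance. All numbers below are invented, and the sketch optimises operation only (the real model co-optimises investments):

```python
import numpy as np
from scipy.optimize import linprog

# Toy 4-hour dispatch of a heat pump with a heat buffer tank (illustrative data):
price = np.array([0.10, 0.40, 0.40, 0.10])   # electricity price per kWh_el
demand = np.array([2.0, 2.0, 2.0, 2.0])      # heat demand, kWh_th per hour
cop, p_max, s_max = 3.0, 4.0, 6.0            # COP, heat output cap, tank size

T = len(price)
# Decision variables: heat output p_0..p_3, then tank level s_0..s_3
c = np.concatenate([price / cop, np.zeros(T)])   # cost of electricity used

# Storage balance: s_t = s_{t-1} + p_t - demand_t  (tank starts empty)
A_eq = np.zeros((T, 2 * T))
b_eq = -demand.astype(float)
for t in range(T):
    A_eq[t, t] = -1.0              # -p_t
    A_eq[t, T + t] = 1.0           # +s_t
    if t > 0:
        A_eq[t, T + t - 1] = -1.0  # -s_{t-1}
bounds = [(0, p_max)] * T + [(0, s_max)] * T

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
heat_output = res.x[:T]
```

    The optimum runs the heat pump at full output in the first cheap hour, charging the tank to displace production in the expensive hours, which is precisely the flexibility whose system value the investment model quantifies.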

  16. Modelling and Simulation of a Manipulator with Stable Viscoelastic Grasping Incorporating Friction

    Directory of Open Access Journals (Sweden)

    A. Khurshid

    2016-12-01

    Design, dynamics and control of a humanoid robotic hand based on anthropological dimensions, with joint friction, are modelled, simulated and analysed in this paper by using computer aided design and multibody dynamic simulation. A combined joint friction model is incorporated in the joints. Experimental values of the coefficient of friction of grease-lubricated sliding contacts representative of manipulator joints are presented. Human fingers deform to the shape of the grasped object (enveloping grasp) at the area of interaction. A mass-spring-damper model of the grasp is developed. The interaction of the viscoelastic gripper of the arm with objects is analysed by using the Bond Graph modelling method. Simulations were conducted for several material parameters. These results of the simulation are then used to develop a prototype of the proposed gripper. The Bond Graph model is experimentally validated by using the prototype. The gripper is used to successfully transport soft and fragile objects. This paper provides information on optimisation of friction and its inclusion in both dynamic modelling and simulation to enhance mechanical efficiency.
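
    The mass-spring-damper grasp can be sketched as a Kelvin-Voigt contact: the grip force is k·x + c·ẋ, and a semi-implicit Euler loop shows the force settling to support the object's weight. All parameter values here are invented, not the paper's identified ones:

```python
# Kelvin-Voigt fingertip contact: f = k*x + c*v (illustrative parameters)
k, c = 800.0, 30.0    # pad stiffness N/m, damping N*s/m
m = 0.2               # grasped object mass, kg
g = 9.81

# x = pad compression (m); gravity drives compression until the force balances
dt, t_end = 1e-4, 2.0
x, v = 0.0, 0.0
for _ in range(int(t_end / dt)):
    f_contact = k * x + c * v
    a = g - f_contact / m
    v += a * dt           # semi-implicit Euler: update velocity first...
    x += v * dt           # ...then position, for numerical stability

grip_force = k * x + c * v
```

    With these values the contact is overdamped, so the force settles monotonically to m·g; the full model embeds such contacts in a Bond Graph of the whole finger-object system and adds joint friction.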

  17. Developing a stochastic parameterization to incorporate plant trait variability into ecohydrologic modeling

    Science.gov (United States)

    Liu, S.; Ng, G. H. C.

    2017-12-01

    The global plant database has revealed that plant traits can vary more within a plant functional type (PFT) than among different PFTs, indicating that the current paradigm in ecohydrological models of specifying fixed parameters based solely on PFT could potentially bias simulations. Although some recent modeling studies have attempted to incorporate this observed plant trait variability, many failed to consider uncertainties due to sparse global observations, or they omitted spatial and/or temporal variability in the traits. Here we present a stochastic parameterization for prognostic vegetation simulations that is stochastic in time and space in order to represent plant trait plasticity - the process by which trait differences arise. We have developed the new PFT parameterization within the Community Land Model 4.5 (CLM 4.5) and tested the method for a desert shrubland watershed in the Mojave Desert, where fixed parameterizations cannot represent acclimation to desert conditions. Spatiotemporally correlated plant trait parameters were first generated based on TRY statistics and were then used to implement ensemble runs for the study area. The new PFT parameterization was then further conditioned on field measurements of soil moisture and remotely sensed observations of leaf-area-index to constrain uncertainties in the sparse global database. Our preliminary results show that incorporating data-conditioned, variable PFT parameterizations strongly affects simulated soil moisture and water fluxes, compared with default simulations. The results also provide new insights about correlations among plant trait parameters and between traits and environmental conditions in the desert shrubland watershed. Our proposed stochastic PFT parameterization method for ecohydrological models has great potential in advancing our understanding of how terrestrial ecosystems are predicted to adapt to variable environmental conditions.
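
    Spatially correlated parameter fields of this kind can be generated by sampling a multivariate normal with a spatial covariance. A 1-D sketch with an exponential correlation kernel; the trait mean and spread are invented stand-ins for TRY-derived statistics:

```python
import numpy as np

rng = np.random.default_rng(7)

# 1-D transect of grid cells; exponential spatial correlation with length scale L
n_cells, L = 50, 10.0
xs = np.arange(n_cells, dtype=float)
cov = np.exp(-np.abs(xs[:, None] - xs[None, :]) / L)

# Trait statistics (illustrative, e.g. a specific-leaf-area-like trait)
mu, sd = 12.0, 2.0
chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n_cells))  # jitter for stability
field = mu + sd * (chol @ rng.normal(size=n_cells))

# Neighbouring cells receive similar trait values; distant cells decorrelate
lag1_corr = np.corrcoef(field[:-1], field[1:])[0, 1]
```

    Repeating the draw per ensemble member (and per time slice, with a temporal kernel) yields the spatiotemporally correlated parameter ensembles described above, which can then be conditioned on soil moisture and LAI observations.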

  18. Adaptive Active Noise Suppression Using Multiple Model Switching Strategy

    Directory of Open Access Journals (Sweden)

    Quanzhen Huang

    2017-01-01

    Full Text Available Active noise suppression for applications where the system response varies with time is a difficult problem. The computational burden of existing control algorithms with online identification is heavy, and they can easily destabilize the control system. A new active noise control algorithm is proposed in this paper that employs a multiple model switching strategy to handle secondary-path variation, significantly reducing the computation required. Firstly, a noise control system modeling method is proposed for duct-like applications. Then a multiple model adaptive control algorithm is proposed with a new multiple model switching strategy based on the filtered-u least mean square (FULMS) algorithm. Finally, the proposed algorithm was implemented on a Texas Instruments digital signal processor (DSP TMS320F28335), and real-time experiments were conducted to compare the proposed algorithm with the FULMS algorithm with online identification. Experimental verification tests show that the proposed algorithm is effective, with good noise suppression performance.
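
    The flavor of the adaptive cancellation step can be sketched with a simplified filtered-x LMS loop, a close relative of the FULMS controller described above. The secondary-path model, signals, and step size below are toy assumptions, and the secondary path is applied memorylessly for brevity; this is not the paper's DSP implementation.

```python
import numpy as np

# Simplified filtered-x LMS active noise canceller (toy relative of FULMS).
# s_hat is an assumed secondary-path model.
def fxlms(x, d, s_hat, n_taps=8, mu=0.01):
    w = np.zeros(n_taps)                             # adaptive filter weights
    x_filt = np.convolve(x, s_hat)[: len(x)]         # reference filtered by s_hat
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        y = w @ x[n - n_taps:n][::-1]                # anti-noise output
        e[n] = d[n] - s_hat[0] * y                   # residual at the error mic
        w += mu * e[n] * x_filt[n - n_taps:n][::-1]  # filtered-x LMS update
    return e

rng = np.random.default_rng(1)
x = np.sin(0.2 * np.arange(2000)) + 0.05 * rng.standard_normal(2000)
s_hat = np.array([0.8, 0.3])
d = np.convolve(x, s_hat)[: len(x)]                  # primary noise at error mic
e = fxlms(x, d, s_hat)                               # residual shrinks as w adapts
```

A multiple-model version would maintain a bank of such controllers, one per candidate secondary path, and switch to the one with the smallest recent residual.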

  19. Efficient Adoption and Assessment of Multiple Process Improvement Reference Models

    Directory of Open Access Journals (Sweden)

    Simona Jeners

    2013-06-01

    Full Text Available A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT Services or IT Governance, but also share some similarities. As some organizations address multiple domains and need to coordinate their processes across them, we present MoSaIC, an approach that supports organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.

  20. A model to incorporate organ deformation in the evaluation of dose/volume relationship

    International Nuclear Information System (INIS)

    Yan, D.; Jaffray, D.; Wong, J.; Brabbins, D.; Martinez, A. A.

    1997-01-01

    Purpose: Measurements of internal organ motion have demonstrated that daily organ deformation exists during the course of radiation treatment. However, a model to evaluate the resultant dose delivered to a daily deforming organ remains a difficult challenge. Current methods, which model such organ deformation as rigid-body motion in the dose calculation for treatment planning evaluation, are incorrect and misleading. In this study, a new model for treatment planning evaluation is introduced which incorporates patient-specific information on daily organ deformation and setup variation. The model was also used to retrospectively analyze actual treatment data measured using daily CT scans for 5 patients receiving prostate treatment. Methods and Materials: The model assumes that, for each patient, the organ of interest can be measured during the first few treatment days. First, the volume of each organ is delineated from each of the daily measurements and accumulated in a 3D bit-map. A tissue occupancy distribution is then constructed, with the 50% isodensity representing the mean, or effective, organ volume. During the course of treatment, each voxel in the effective organ volume is assumed to move inside a local 3D neighborhood with a specific distribution function. The neighborhood and the distribution function are deduced from the positions and shapes of the organ in the first few measurements using a biomechanical model of a viscoelastic body. For each voxel, the local distribution function is then convolved with the spatial dose distribution. The latter also includes the variation in dose due to daily setup error. As a result, the cumulative dose to the voxel incorporates the effects of daily setup variation and organ deformation. A "variation-adjusted" dose volume histogram (aDVH) for the effective organ volume can then be constructed for the purpose of treatment evaluation and optimization.
Up to 20 daily CT scans and daily portal images for 5 patients with prostate
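
    The voxel-wise convolution idea behind the variation-adjusted DVH can be sketched in one dimension. The Gaussian displacement spread, dose profile, and DVH thresholds below are illustrative assumptions, not the paper's viscoelastic-body model.

```python
import numpy as np

# 1-D sketch: each voxel's dose is the planned profile averaged over
# that voxel's displacement distribution, then an aDVH is tabulated.
def adjusted_doses(dose, sigma_vox):
    n = len(dose)
    out = np.zeros(n)
    shifts = np.arange(-10, 11)
    for i, sigma in enumerate(sigma_vox):
        pdf = np.exp(-0.5 * (shifts / sigma) ** 2)
        pdf /= pdf.sum()                      # discrete Gaussian motion PDF
        idx = np.clip(i + shifts, 0, n - 1)
        out[i] = np.sum(pdf * dose[idx])      # expected dose for voxel i
    return out

dose = np.where(np.abs(np.arange(100) - 50) < 20, 70.0, 10.0)  # flat field + falloff
sigma = np.full(100, 3.0)                     # daily motion spread (in voxels)
adj = adjusted_doses(dose, sigma)
# aDVH points: fraction of the organ receiving at least D
dvh = [(adj >= d).mean() for d in (20.0, 60.0)]
```

Voxels deep inside the field keep their planned dose, while voxels near the field edge receive a blurred (reduced) expected dose, which is exactly what the aDVH is designed to expose.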

  1. Incorporating time-delays in S-System model for reverse engineering genetic networks.

    Science.gov (United States)

    Chowdhury, Ahsan Raja; Chetty, Madhu; Vinh, Nguyen Xuan

    2013-06-18

    In any gene regulatory network (GRN), the complex interactions occurring amongst transcription factors and target genes can be either instantaneous or time-delayed. However, many existing modeling approaches currently applied for inferring GRNs are unable to represent both these interaction types simultaneously, and consequently fail to detect important interactions of the other type. The S-System model, a differential-equation-based approach that has been increasingly applied for modeling GRNs, also suffers from this limitation. In fact, all existing S-System based modeling approaches have been designed to capture only instantaneous interactions, and are unable to infer time-delayed interactions. In this paper, we propose a novel Time-Delayed S-System (TDSS) model which uses a set of delay differential equations to represent the system dynamics. The ability to incorporate time-delay parameters in the proposed S-System model enables simultaneous modeling of both instantaneous and time-delayed interactions. Furthermore, the delay parameters are not limited to positive integer values (corresponding to time stamps in the data), but can also take fractional values. Moreover, we propose a new criterion for model evaluation that exploits the sparse and scale-free nature of GRNs to effectively narrow down the search space, which not only reduces the computation time significantly but also improves model accuracy. The evaluation criterion systematically adapts the max-min in-degrees and balances the effect of network accuracy and complexity during optimization. Four well-known performance measures applied to experimental studies on synthetic networks with various time-delayed regulations clearly demonstrate that the proposed method can capture both instantaneous and delayed interactions correctly with high precision. The experiments carried out on two well-known real-life networks, namely IRMA and the SOS DNA repair network in
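
    A minimal numerical sketch of a time-delayed S-System integrates the delay differential equations by Euler stepping over a stored state history. The two-gene network and all parameter values below are assumed for illustration, not inferred values from the paper.

```python
import numpy as np

# Toy two-gene S-System with one time-delayed (repressive) regulation:
#   dx_i/dt = alpha_i * prod_j x_j(t - tau_ij)^g_ij - beta_i * prod_j x_j(t)^h_ij
def simulate(alpha, beta, g, h, tau_steps, x0, dt=0.01, steps=2000):
    n = len(x0)
    hist = np.tile(x0, (steps + 1, 1)).astype(float)  # state history for delays
    for t in range(steps):
        for i in range(n):
            prod = dec = 1.0
            for j in range(n):
                xd = hist[max(t - tau_steps[i][j], 0), j]  # delayed state x_j(t - tau)
                prod *= xd ** g[i][j]
                dec *= hist[t, j] ** h[i][j]
            hist[t + 1, i] = hist[t, i] + dt * (alpha[i] * prod - beta[i] * dec)
    return hist

hist = simulate(alpha=[2.0, 2.0], beta=[1.0, 1.0],
                g=[[0.0, -0.5], [0.5, 0.0]],   # gene 2 represses gene 1 (delayed)
                h=[[1.0, 0.0], [0.0, 1.0]],
                tau_steps=[[0, 50], [0, 0]],   # 0.5 time-unit delay on that edge
                x0=[1.0, 1.0])
```

Inference in the paper runs in the opposite direction: it searches over the kinetic orders, rate constants, and delays so that trajectories like `hist` match the observed expression data.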

  2. A Select Premium Model for Endowment Life Insurance in the Multiple Decrement Case

    OpenAIRE

    Cita, Devi Ramana; Pane, Rolan; ', Harison

    2015-01-01

    This article discusses a select survival model for the case of multiple decrements in evaluating the endowment life insurance premium for a person currently aged (x + t) years, who was selected at age x with an h-year selection period. The multiple decrements in this case are limited to two causes. The annual premium is calculated by first evaluating the single premium and the present value of the annuity, both of which depend on the constant force assumption.
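
    Under the constant force assumption, the single premium and the annuity have closed forms, so a continuously paid net premium for an n-year endowment can be sketched directly. Two decrement causes simply add to a total force; the decrement forces and interest rate below are illustrative.

```python
import math

# Net premium (continuous payment) for an n-year endowment under the
# constant-force assumption, with total decrement force mu1 + mu2.
def endowment_premium(mu_total, delta, n):
    k = mu_total + delta
    benefit = mu_total / k * (1 - math.exp(-k * n)) + math.exp(-k * n)  # EPV of benefit
    annuity = (1 - math.exp(-k * n)) / k                                # EPV of 1/yr premiums
    return benefit / annuity

# mu1 = 0.01 (e.g., death), mu2 = 0.005 (e.g., withdrawal); force of interest 4%
P = endowment_premium(mu_total=0.01 + 0.005, delta=0.04, n=20)
```

Shorter terms give higher premiums, since the pure-endowment benefit is paid sooner.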

  3. An improved analytical model of 4H-SiC MESFET incorporating bulk and interface trapping effects

    Science.gov (United States)

    Hema Lata Rao, M.; Narasimha Murty, N. V. L.

    2015-01-01

    An improved analytical model for the current–voltage (I–V) characteristics of the 4H-SiC metal semiconductor field effect transistor (MESFET) on a high purity semi-insulating (HPSI) substrate with trapping and thermal effects is presented. The 4H-SiC MESFET structure includes a stack of HPSI substrates and a uniformly doped channel layer. The trapping effects include both the effect of multiple deep-level traps in the substrate and surface traps between the gate to source/drain. The self-heating effects are also incorporated to obtain the accurate and realistic nature of the analytical model. The importance of the proposed model is emphasised through the inclusion of the recent and exact nature of the traps in the 4H-SiC HPSI substrate responsible for substrate compensation. The analytical model is used to exhibit DC I–V characteristics of the device with and without trapping and thermal effects. From the results, the current degradation is observed due to the surface and substrate trapping effects and the negative conductance introduced by the self-heating effect at a high drain voltage. The calculated results are compared with reported experimental and two-dimensional simulations (Silvaco®-TCAD). The proposed model also illustrates the effectiveness of the gate–source distance scaling effect compared to the gate–drain scaling effect in optimizing 4H-SiC MESFET performance. Results demonstrate that the proposed I–V model of 4H-SiC MESFET is suitable for realizing SiC based monolithic circuits (MMICs) on HPSI substrates.

  4. An improved analytical model of 4H-SiC MESFET incorporating bulk and interface trapping effects

    International Nuclear Information System (INIS)

    Rao, M. Hema Lata; Murty, N. V. L. Narasimha

    2015-01-01

    An improved analytical model for the current–voltage (I–V) characteristics of the 4H-SiC metal semiconductor field effect transistor (MESFET) on a high purity semi-insulating (HPSI) substrate with trapping and thermal effects is presented. The 4H-SiC MESFET structure includes a stack of HPSI substrates and a uniformly doped channel layer. The trapping effects include both the effect of multiple deep-level traps in the substrate and surface traps between the gate to source/drain. The self-heating effects are also incorporated to obtain the accurate and realistic nature of the analytical model. The importance of the proposed model is emphasised through the inclusion of the recent and exact nature of the traps in the 4H-SiC HPSI substrate responsible for substrate compensation. The analytical model is used to exhibit DC I–V characteristics of the device with and without trapping and thermal effects. From the results, the current degradation is observed due to the surface and substrate trapping effects and the negative conductance introduced by the self-heating effect at a high drain voltage. The calculated results are compared with reported experimental and two-dimensional simulations (Silvaco®-TCAD). The proposed model also illustrates the effectiveness of the gate–source distance scaling effect compared to the gate–drain scaling effect in optimizing 4H-SiC MESFET performance. Results demonstrate that the proposed I–V model of 4H-SiC MESFET is suitable for realizing SiC based monolithic circuits (MMICs) on HPSI substrates. (semiconductor devices)

  5. Simulation of Forest Carbon Fluxes Using Model Incorporation and Data Assimilation

    Directory of Open Access Journals (Sweden)

    Min Yan

    2016-07-01

    Full Text Available This study improved simulation of forest carbon fluxes in the Changbai Mountains with a process-based model (Biome-BGC) using model incorporation and data assimilation. Firstly, the original remote sensing-based MODIS MOD_17 GPP (MOD_17) model was optimized using refined input data and biome-specific parameters. The key ecophysiological parameters of the Biome-BGC model were determined through the Extended Fourier Amplitude Sensitivity Test (EFAST) sensitivity analysis. Then the optimized MOD_17 model was used to calibrate the Biome-BGC model by adjusting the sensitive ecophysiological parameters. Once the best match was found for the 10 selected forest plots between the 8-day GPP estimates from the optimized MOD_17 and from the Biome-BGC, the values of the sensitive ecophysiological parameters were determined. The calibrated Biome-BGC model agreed better with the eddy covariance (EC) measurements (R2 = 0.87, RMSE = 1.583 gC·m−2·d−1) than the original model did (R2 = 0.72, RMSE = 2.419 gC·m−2·d−1). To provide a best estimate of the true state of the model, the Ensemble Kalman Filter (EnKF) was used to assimilate five years (2003–2007) of eight-day Global LAnd Surface Satellite (GLASS) LAI products into the calibrated Biome-BGC model. The results indicated that LAI simulated through the assimilated Biome-BGC agreed well with GLASS LAI. GPP estimates obtained from the assimilated Biome-BGC were further improved and verified by EC measurements at the Changbai Mountains forest flux site (R2 = 0.92, RMSE = 1.261 gC·m−2·d−1).
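
    A single EnKF analysis step, the operation repeated for each 8-day LAI observation, can be sketched for a scalar LAI state. The ensemble size, prior spread, and observation error below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# One Ensemble Kalman Filter analysis step: nudge an ensemble of
# modelled LAI states toward an observation using perturbed observations.
def enkf_update(ensemble, obs, obs_var, rng):
    P = ensemble.var(ddof=1)                  # forecast error variance
    K = P / (P + obs_var)                     # Kalman gain (scalar state)
    perturbed = obs + rng.normal(0, np.sqrt(obs_var), len(ensemble))
    return ensemble + K * (perturbed - ensemble)

rng = np.random.default_rng(2)
prior = rng.normal(3.0, 0.5, 50)              # modelled LAI ensemble (m2/m2)
posterior = enkf_update(prior, obs=4.0, obs_var=0.1 ** 2, rng=rng)
```

The analysis ensemble moves toward the observation and tightens, and the updated state then drives the model forward to the next assimilation time.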

  6. Energy system investment model incorporating heat pumps with thermal storage in buildings and buffer tanks

    International Nuclear Information System (INIS)

    Hedegaard, Karsten; Balyk, Olexandr

    2013-01-01

    Individual compression heat pumps constitute a potentially valuable resource in supporting wind power integration due to their economic competitiveness and possibilities for flexible operation. When analysing the system benefits of flexible heat pump operation, effects on investments should be taken into account. In this study, we present a model that facilitates analysing individual heat pumps and complementing heat storages in integration with the energy system, while optimising both investments and operation. The model incorporates thermal building dynamics and covers various heat storage options: passive heat storage in the building structure via radiator heating, active heat storage in concrete floors via floor heating, and use of thermal storage tanks for space heating and hot water. It is shown that the model is well qualified for analysing possibilities and system benefits of operating heat pumps flexibly. This includes prioritising heat pump operation for hours with low marginal electricity production costs, and peak load shaving resulting in a reduced need for peak and reserve capacity investments. - Highlights: • Model optimising heat pumps and heat storages in integration with the energy system. • Optimisation of both energy system investments and operation. • Heat storage in building structure and thermal storage tanks included. • Model well qualified for analysing system benefits of flexible heat pump operation. • Covers peak load shaving and operation prioritised for low electricity prices

  7. Incorporating social groups' responses in a descriptive model for second- and higher-order impact identification

    International Nuclear Information System (INIS)

    Sutheerawatthana, Pitch; Minato, Takayuki

    2010-01-01

    The response of a social group is a missing element in the formal impact assessment model. Previous discussion of the involvement of social groups in an intervention has mainly focused on the formation of the intervention. This article discusses the involvement of social groups in a different way. A descriptive model is proposed by incorporating a social group's response into the concept of second- and higher-order effects. The model is developed based on a cause-effect relationship through the observation of phenomena in case studies. The model clarifies the process by which social groups interact with a lower-order effect and then generate a higher-order effect in an iterative manner. This study classifies social groups' responses into three forms (opposing, modifying, and advantage-taking actions) and places them in six pathways. The model is expected to be used as an analytical tool for investigating and identifying impacts in the planning stage and as a framework for monitoring social groups' responses during the implementation stage of a policy, plan, program, or project (PPPP).

  8. Incorporating plant fossil data into species distribution models is not straightforward: Pitfalls and possible solutions

    Science.gov (United States)

    Moreno-Amat, Elena; Rubiales, Juan Manuel; Morales-Molino, César; García-Amorena, Ignacio

    2017-08-01

    The increasing development of species distribution models (SDMs) using palaeodata has created new prospects for addressing questions of evolution, ecology and biogeography from wider perspectives. Palaeobotanical data provide information on the past distribution of taxa at a given time and place, and their incorporation into modelling has contributed to advancing the SDM field. This has allowed, for example, calibrating models under past climate conditions or validating projected models calibrated on current species distributions. However, these data also bear certain shortcomings when used in SDMs that may compromise the resulting ecological outcomes and eventually lead to misleading conclusions. Palaeodata may not be equivalent to present data, but instead frequently exhibit limitations and biases regarding species representation, taxonomy and chronological control, and their inclusion in SDMs should be carefully assessed. The limitations of palaeobotanical data applied to SDM studies are infrequently discussed and often neglected in the modelling literature; thus, we argue for more careful selection and control of these data. We encourage authors to use palaeobotanical data in their SDM studies and, to that end, propose some recommendations to improve the robustness, reliability and significance of palaeo-SDM analyses.

  9. Modeling the Potential Effects of New Tobacco Products and Policies: A Dynamic Population Model for Multiple Product Use and Harm

    Science.gov (United States)

    Vugrin, Eric D.; Rostron, Brian L.; Verzi, Stephen J.; Brodsky, Nancy S.; Brown, Theresa J.; Choiniere, Conrad J.; Coleman, Blair N.; Paredes, Antonio; Apelberg, Benjamin J.

    2015-01-01

    Background Recent declines in US cigarette smoking prevalence have coincided with increases in use of other tobacco products. Multiple product tobacco models can help assess the population health impacts associated with use of a wide range of tobacco products. Methods and Findings We present a multi-state, dynamical systems population structure model that can be used to assess the effects of tobacco product use behaviors on population health. The model incorporates transition behaviors, such as initiation, cessation, switching, and dual use, related to the use of multiple products. The model tracks product use prevalence and mortality attributable to tobacco use for the overall population and by sex and age group. The model can also be used to estimate differences in these outcomes between scenarios by varying input parameter values. We demonstrate model capabilities by projecting future cigarette smoking prevalence and smoking-attributable mortality and then simulating the effects of introduction of a hypothetical new lower-risk tobacco product under a variety of assumptions about product use. Sensitivity analyses were conducted to examine the range of population impacts that could occur due to differences in input values for product use and risk. We demonstrate that potential benefits from cigarette smokers switching to the lower-risk product can be offset over time through increased initiation of this product. Model results show that population health benefits are particularly sensitive to product risks and initiation, switching, and dual use behaviors. Conclusion Our model incorporates the variety of tobacco use behaviors and risks that occur with multiple products. As such, it can evaluate the population health impacts associated with the introduction of new tobacco products or policies that may result in product switching or dual use. Further model development will include refinement of data inputs for non-cigarette tobacco products and inclusion of health
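
    The multi-state transition bookkeeping at the heart of such a model can be sketched as a Markov projection over use states. The states and transition rates below are hypothetical placeholders, not the estimates used in the paper.

```python
import numpy as np

# Annual transitions between tobacco-use states: never user, cigarette
# smoker, new-product user, dual user, former user. Rows sum to 1.
states = ["never", "cig", "new", "dual", "former"]
T = np.array([
    [0.97, 0.02, 0.01, 0.00, 0.00],   # never: initiation of either product
    [0.00, 0.90, 0.03, 0.02, 0.05],   # cig: switching, dual use, cessation
    [0.00, 0.02, 0.92, 0.02, 0.04],   # new product: switching back, cessation
    [0.00, 0.05, 0.05, 0.85, 0.05],   # dual use: settling on one product
    [0.00, 0.02, 0.01, 0.00, 0.97],   # former: relapse
])
pop = np.array([0.60, 0.30, 0.02, 0.01, 0.07])  # initial prevalence shares
for _ in range(20):
    pop = pop @ T                      # project prevalence 20 years ahead
```

Scenario comparison (e.g., introducing a lower-risk product) amounts to changing the initiation and switching entries of `T` and comparing the projected prevalence and attributable-mortality trajectories.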

  10. Incorporating teleconnection information into reservoir operating policies using Stochastic Dynamic Programming and a Hidden Markov Model

    Science.gov (United States)

    Turner, Sean; Galelli, Stefano; Wilcox, Karen

    2015-04-01

    Water reservoir systems are often affected by recurring large-scale ocean-atmospheric anomalies, known as teleconnections, that cause prolonged periods of climatological drought. Accurate forecasts of these events -- at lead times in the order of weeks and months -- may enable reservoir operators to take more effective release decisions to improve the performance of their systems. In practice this might mean a more reliable water supply system, a more profitable hydropower plant or a more sustainable environmental release policy. To this end, climate indices, which represent the oscillation of the ocean-atmospheric system, might be gainfully employed within reservoir operating models that adapt the reservoir operation as a function of the climate condition. This study develops a Stochastic Dynamic Programming (SDP) approach that can incorporate climate indices using a Hidden Markov Model. The model simulates the climatic regime as a hidden state following a Markov chain, with the state transitions driven by variation in climatic indices, such as the Southern Oscillation Index. Time series analysis of recorded streamflow data reveals the parameters of separate autoregressive models that describe the inflow to the reservoir under three representative climate states ("normal", "wet", "dry"). These models then define inflow transition probabilities for use in a classic SDP approach. The key advantage of the Hidden Markov Model is that it allows conditioning the operating policy not only on the reservoir storage and the antecedent inflow, but also on the climate condition, thus potentially allowing adaptability to a broader range of climate conditions. In practice, the reservoir operator would effect a water release tailored to a specific climate state based on available teleconnection data and forecasts. The approach is demonstrated on the operation of a realistic, stylised water reservoir with carry-over capacity in South-East Australia. 
Here teleconnections relating
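
    The generative side of the Hidden Markov Model can be sketched by letting a hidden climate state ("dry", "normal", "wet"), switching according to a Markov chain, select the parameters of an AR(1) inflow model. All parameter values below are illustrative.

```python
import numpy as np

# Hidden climate regime drives the inflow statistics: each state has its
# own mean inflow, and the state evolves as a Markov chain.
P = np.array([[0.80, 0.15, 0.05],     # dry -> dry/normal/wet
              [0.10, 0.80, 0.10],     # normal
              [0.05, 0.15, 0.80]])    # wet
mu = np.array([40.0, 100.0, 180.0])   # state-dependent mean inflow
phi, sigma = 0.5, 10.0                # AR(1) persistence and noise

rng = np.random.default_rng(3)
state, q = 1, 100.0
flows, hidden = [], []
for _ in range(500):
    state = rng.choice(3, p=P[state])                        # regime switch
    q = mu[state] + phi * (q - mu[state]) + rng.normal(0, sigma)  # AR(1) inflow
    flows.append(q)
    hidden.append(state)
flows, hidden = np.array(flows), np.array(hidden)
```

In the SDP, the operator does not observe `hidden` directly; the climate index provides evidence about it, and the release policy is conditioned on storage, antecedent inflow, and the inferred regime.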

  11. A generalized linear-quadratic model incorporating reciprocal time pattern of radiation damage repair

    International Nuclear Information System (INIS)

    Huang, Zhibin; Mayr, Nina A.; Lo, Simon S.; Wang, Jian Z.; Jia Guang; Yuh, William T. C.; Johnke, Roberta

    2012-01-01

    Purpose: It has been conventionally assumed that the repair rate for sublethal damage (SLD) remains constant during the entire radiation course. However, increasing evidence from animal studies suggests that this may not be the case. Rather, it appears that the repair rate for radiation-induced SLD slows down with increasing time. Such a slowdown in repair suggests that an exponential repair pattern would not necessarily predict the repair process accurately. The purpose of this study was therefore to investigate a new generalized linear-quadratic (LQ) model incorporating a repair pattern with reciprocal time. The new formulas were tested with published experimental data. Methods: The LQ model has been widely used in radiation therapy, and the parameter G in the surviving fraction represents the repair process of sublethal damage, with Tr as the repair half-time. When a reciprocal pattern of the repair process was adopted, a closed form of G was derived analytically for arbitrary radiation schemes. Published animal data were adopted to test the reciprocal formulas. Results: A generalized LQ model describing the repair process in a reciprocal pattern was obtained. Subsequently, formulas for special cases were derived from this general form. The reciprocal model showed a better fit to the animal data than the exponential model, particularly for the ED50 data (reduced χ²_min of 2.0 vs 4.3, p = 0.11 vs 0.006), with the following gLQ parameters: α/β = 2.6-4.8 Gy, Tr = 3.2-3.9 h for rat foot skin, and α/β = 0.9 Gy, Tr = 1.1 h for rat spinal cord. Conclusions: These results suggest that the generalized LQ model incorporating a reciprocal time pattern of sublethal damage repair fits the data better than the exponential repair model. These formulas can be used to analyze experimental and clinical data where a slowing-down repair process appears during the course of radiation therapy.
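
    For the special case of two equal fractions, the incomplete-repair form of the LQ survival makes the contrast between repair patterns concrete. Here h is the unrepaired-damage fraction after the interfraction interval; the reciprocal form below is a simple illustrative analogue of the paper's closed-form G, not the derived expression, and all parameter values are assumed.

```python
import math

# Split-dose LQ survival, SF = exp(-(alpha*2d + beta*2d^2*(1 + h))), with
# incomplete-repair factor h after interval dt:
#   exponential repair: h = exp(-ln2 * dt / Tr)
#   reciprocal pattern: h = 1 / (1 + dt / Tr)   (illustrative analogue)
def sf_split(d, dt, alpha, beta, Tr, pattern="exp"):
    if pattern == "exp":
        h = math.exp(-math.log(2) * dt / Tr)
    else:
        h = 1.0 / (1.0 + dt / Tr)
    effect = alpha * 2 * d + beta * 2 * d * d * (1 + h)
    return math.exp(-effect)

sf_exp = sf_split(d=4.0, dt=3.0, alpha=0.3, beta=0.03, Tr=3.5, pattern="exp")
sf_rec = sf_split(d=4.0, dt=3.0, alpha=0.3, beta=0.03, Tr=3.5, pattern="rec")
```

At dt = 0 both patterns reduce to a single dose of 2d, and survival rises with dt as more sublethal damage is repaired; the two patterns differ in how quickly h decays.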

  12. AgMIP Training in Multiple Crop Models and Tools

    Science.gov (United States)

    Boote, Kenneth J.; Porter, Cheryl H.; Hargreaves, John; Hoogenboom, Gerrit; Thornburn, Peter; Mutter, Carolyn

    2015-01-01

    The Agricultural Model Intercomparison and Improvement Project (AgMIP) has the goal of using multiple crop models to evaluate climate impacts on agricultural production and food security in developed and developing countries. Several major limitations must be overcome to achieve this goal, including the need to train AgMIP regional research team (RRT) crop modelers to use models other than the ones they are currently familiar with, plus the need to harmonize and interconvert the disparate input file formats used by the various models. Two activities were undertaken to address these shortcomings and enable AgMIP RRTs to use multiple models to evaluate climate impacts on crop production and food security. First, we designed and conducted courses in which participants trained on two different sets of crop models, with emphasis on the models with which they had the least experience. Second, the AgMIP IT group created templates for entering data on soils, management, weather, and crops into AgMIP harmonized databases, and developed translation tools for converting the harmonized data into files that are ready for multiple crop model simulations. The strategies for creating and conducting the multi-model course and for developing entry and translation tools are reviewed in this chapter.

  13. Petroacoustic Modelling of Heterolithic Sandstone Reservoirs: A Novel Approach to Gassmann Modelling Incorporating Sedimentological Constraints and NMR Porosity data

    Science.gov (United States)

    Matthews, S.; Lovell, M.; Davies, S. J.; Pritchard, T.; Sirju, C.; Abdelkarim, A.

    2012-12-01

    Heterolithic or 'shaly' sandstone reservoirs constitute a significant proportion of hydrocarbon resources. Petroacoustic models (a combination of petrophysics and rock physics) enhance the ability to extract reservoir properties from seismic data, providing a connection between seismic and fine-scale rock properties. By incorporating sedimentological observations these models can be better constrained and improved. Petroacoustic modelling is complicated by the unpredictable effects of clay minerals and clay-sized particles on geophysical properties. Such effects are responsible for erroneous results when models developed for "clean" reservoirs, such as Gassmann's equation (Gassmann, 1951), are applied to heterolithic sandstone reservoirs. Gassmann's equation is arguably the most popular petroacoustic modelling technique in the hydrocarbon industry and is used to model the elastic effects of changing reservoir fluid saturations. Successful implementation of Gassmann's equation requires well-constrained drained rock frame properties, which in heterolithic sandstones are heavily influenced by reservoir sedimentology, particularly clay distribution. The prevalent approach to categorising clay distribution is based on the Thomas–Stieber model (Thomas & Stieber, 1975); this approach is inconsistent with current understanding of 'shaly sand' sedimentology and omits properties such as sorting and grain size. The novel approach presented here demonstrates that characterising reservoir sedimentology constitutes an important modelling phase. As well as incorporating sedimentological constraints, this novel approach also aims to improve drained frame moduli estimates through more careful consideration of Gassmann's model assumptions and limitations. A key assumption of Gassmann's equation is a pore space in total communication with movable fluids.
This assumption is often violated by conventional applications in heterolithic sandstone reservoirs where effective porosity, which
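
    For reference, Gassmann's fluid-substitution equation itself is compact: the saturated bulk modulus follows from the drained frame, mineral, and fluid moduli plus porosity. The moduli and porosity below are illustrative values for a quartz-frame sandstone.

```python
# Gassmann fluid substitution (moduli in GPa):
#   K_sat = K_dry + (1 - K_dry/K_min)^2
#                   / (phi/K_fl + (1 - phi)/K_min - K_dry/K_min^2)
def gassmann_ksat(k_dry, k_mineral, k_fluid, phi):
    num = (1 - k_dry / k_mineral) ** 2
    den = phi / k_fluid + (1 - phi) / k_mineral - k_dry / k_mineral ** 2
    return k_dry + num / den

# Same drained frame, two pore fluids: brine vs gas
k_brine = gassmann_ksat(k_dry=12.0, k_mineral=37.0, k_fluid=2.8, phi=0.25)
k_gas = gassmann_ksat(k_dry=12.0, k_mineral=37.0, k_fluid=0.1, phi=0.25)
```

The stiffer pore fluid raises the saturated modulus, which is the seismic signature the abstract describes exploiting; the pitfall discussed above is that in heterolithic sands the drained-frame inputs, and the total-communication assumption itself, are uncertain.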

  14. Incorporating Single-nucleotide Polymorphisms Into the Lyman Model to Improve Prediction of Radiation Pneumonitis

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, Susan L., E-mail: sltucker@mdanderson.org [Department of Bioinformatics and Computational Biology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Li Minghuan [Department of Radiation Oncology, Shandong Cancer Hospital, Jinan, Shandong (China); Xu Ting; Gomez, Daniel [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Yuan Xianglin [Department of Oncology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan (China); Yu Jinming [Department of Radiation Oncology, Shandong Cancer Hospital, Jinan, Shandong (China); Liu Zhensheng; Yin Ming; Guan Xiaoxiang; Wang Lie; Wei Qingyi [Department of Epidemiology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Mohan, Radhe [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Vinogradskiy, Yevgeniy [University of Colorado School of Medicine, Aurora, Colorado (United States); Martel, Mary [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Liao Zhongxing [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2013-01-01

    Purpose: To determine whether single-nucleotide polymorphisms (SNPs) in genes associated with DNA repair, cell cycle, transforming growth factor-β, tumor necrosis factor and receptor, folic acid metabolism, and angiogenesis can significantly improve the fit of the Lyman-Kutcher-Burman (LKB) normal-tissue complication probability (NTCP) model of radiation pneumonitis (RP) risk among patients with non-small cell lung cancer (NSCLC). Methods and Materials: Sixteen SNPs from 10 different genes (XRCC1, XRCC3, APEX1, MDM2, TGFβ, TNFα, TNFR, MTHFR, MTRR, and VEGF) were genotyped in 141 NSCLC patients treated with definitive radiation therapy, with or without chemotherapy. The LKB model was used to estimate the risk of severe (grade ≥3) RP as a function of mean lung dose (MLD), with SNPs and patient smoking status incorporated into the model as dose-modifying factors. Multivariate analyses were performed by adding significant factors to the MLD model in a forward stepwise procedure, with significance assessed using the likelihood-ratio test. Bootstrap analyses were used to assess the reproducibility of results under variations in the data. Results: Five SNPs were selected for inclusion in the multivariate NTCP model based on MLD alone. SNPs associated with an increased risk of severe RP were in genes for TGFβ, VEGF, TNFα, XRCC1 and APEX1. With smoking status included in the multivariate model, the SNPs significantly associated with increased risk of RP were in genes for TGFβ, VEGF, and XRCC3. Bootstrap analyses selected a median of 4 SNPs per model fit, with the 6 genes listed above selected most often. Conclusions: This study provides evidence that SNPs can significantly improve the predictive ability of the Lyman MLD model. With a small number of SNPs, it was possible to distinguish cohorts with >50% risk vs <10% risk of RP when they were exposed to high MLDs.
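
    The LKB model with a dose-modifying factor can be sketched directly: NTCP is a probit function of mean lung dose, and a genotype-dependent dose-modifying factor (DMF) scales the effective dose. The TD50, m, and DMF values below are illustrative assumptions, not the fitted values from this study.

```python
import math

# LKB NTCP for radiation pneumonitis as a function of mean lung dose (MLD),
# with a dose-modifying factor applied for carriers of a risk allele.
def lkb_ntcp(mld, td50, m, dmf=1.0):
    t = (mld * dmf - td50) / (m * td50)
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))  # probit (Lyman) dose-response

risk_wild = lkb_ntcp(mld=20.0, td50=31.4, m=0.45, dmf=1.0)     # no risk allele
risk_carrier = lkb_ntcp(mld=20.0, td50=31.4, m=0.45, dmf=1.3)  # risk-allele carrier
```

Fitting the model amounts to estimating TD50, m, and one DMF per significant SNP (and smoking status) by maximum likelihood, exactly the forward stepwise procedure the abstract describes.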

  15. A Bayesian Hierarchical Model for Relating Multiple SNPs within Multiple Genes to Disease Risk

    Directory of Open Access Journals (Sweden)

    Lewei Duan

    2013-01-01

    Full Text Available A variety of methods have been proposed for studying the association of multiple genes thought to be involved in a common pathway for a particular disease. Here, we present an extension of a Bayesian hierarchical modeling strategy that allows for multiple SNPs within each gene, with external prior information at either the SNP or gene level. The model involves variable selection at the SNP level through latent indicator variables and Bayesian shrinkage at the gene level towards a prior mean vector and covariance matrix that depend on external information. The entire model is fitted using Markov chain Monte Carlo methods. Simulation studies show that the approach is capable of recovering many of the truly causal SNPs and genes, depending upon their frequency and the size of their effects. The method is applied to data on 504 SNPs in 38 candidate genes involved in DNA damage response in the WECARE study of second breast cancers in relation to radiotherapy exposure.
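
    The latent-indicator idea can be illustrated at toy scale by enumerating SNP inclusion vectors and weighting them with a BIC approximation to the marginal likelihood, a simple stand-in for the paper's MCMC over indicator variables (and without its gene-level shrinkage). Data and dimensions are simulated.

```python
import itertools
import numpy as np

# Toy variable selection over 3 "SNPs": weight each inclusion vector gamma
# by exp(-BIC/2) and read off posterior inclusion probabilities.
rng = np.random.default_rng(4)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] + rng.normal(size=n)         # only SNP 0 is truly causal

post = {}
for gamma in itertools.product([0, 1], repeat=p):
    cols = [j for j in range(p) if gamma[j]]
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    rss = resid @ resid
    bic = n * np.log(rss / n) + (len(cols) + 1) * np.log(n)
    post[gamma] = np.exp(-0.5 * bic)           # unnormalized model weight

total = sum(post.values())
incl0 = sum(w for g, w in post.items() if g[0]) / total  # inclusion prob, SNP 0
```

At realistic scale (504 SNPs) enumeration is infeasible, which is why the paper samples the indicators with MCMC instead.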

  16. Tools and Models for Integrating Multiple Cellular Networks

    Energy Technology Data Exchange (ETDEWEB)

    Gerstein, Mark [Yale Univ., New Haven, CT (United States). Gerstein Lab.

    2015-11-06

    In this grant, we have systematically investigated the integrated networks, which are responsible for the coordination of activity between metabolic pathways in prokaryotes. We have developed several computational tools to analyze the topology of the integrated networks consisting of metabolic, regulatory, and physical interaction networks. The tools are all open-source, available to download from GitHub, and can be incorporated in the Knowledgebase. Here, we summarize our work as follows. Understanding the topology of the integrated networks is the first step toward understanding their dynamics and evolution. For Aim 1 of this grant, we have developed a novel algorithm to determine and measure the hierarchical structure of transcriptional regulatory networks [1]. The hierarchy captures the direction of information flow in the network. The algorithm is generally applicable to regulatory networks in prokaryotes, yeast and higher organisms. Integrated datasets are extremely beneficial in understanding the biology of a system in a compact manner due to the conflation of multiple layers of information. Therefore, for Aim 2 of this grant, we have developed several tools and carried out analysis for integrating system-wide genomic information. To make use of the structural data, we have developed DynaSIN for protein-protein interaction networks with various dynamical interfaces [2]. We then examined the association between network topology and phenotypic effects such as gene essentiality. In particular, we have organized E. coli and S. cerevisiae transcriptional regulatory networks into hierarchies. We then correlated gene phenotypic effects by tinkering with different layers to elucidate which layers were more tolerant to perturbations [3]. In the context of evolution, we also developed a workflow to guide the comparison between different types of biological networks across various species using the concept of rewiring [4]. Furthermore, we have developed

  17. Constraining Distributed Catchment Models by Incorporating Perceptual Understanding of Spatial Hydrologic Behaviour

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models tend to contain a large number of poorly defined and spatially varying model parameters which are therefore computationally expensive to calibrate. Insufficient data can result in model parameter and structural equifinality, particularly when calibration is reliant on catchment outlet discharge behaviour alone. Evaluating spatial patterns of internal hydrological behaviour has the potential to reveal simulations that, whilst consistent with measured outlet discharge, are qualitatively dissimilar to our perceptual understanding of how the system should behave. We argue that such understanding, which may be derived from stakeholder knowledge across different catchments for certain process dynamics, is a valuable source of information to help reject non-behavioural models, and therefore identify feasible model structures and parameters. The challenge, however, is to convert different sources of often qualitative and/or semi-qualitative information into robust quantitative constraints of model states and fluxes, and combine these sources of information together to reject models within an efficient calibration framework. Here we present the development of a framework to incorporate different sources of data to efficiently calibrate distributed catchment models. For each source of information, an interval or inequality is used to define the behaviour of the catchment system. These intervals are then combined to produce a hyper-volume in state space, which is used to identify behavioural models. We apply the methodology to calibrate the Penn State Integrated Hydrological Model (PIHM) at the Wye catchment, Plynlimon, UK. 
Outlet discharge behaviour is successfully simulated when perceptual understanding of relative groundwater levels between lowland peat, upland peat
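The interval-based rejection idea described above (convert qualitative knowledge into inequalities, intersect them into a hyper-volume, and retain only parameter sets whose simulated signatures fall inside it) can be sketched in a few lines. The model, signature functions and interval bounds below are hypothetical stand-ins, not PIHM states or Plynlimon data:

```python
import random

def simulate(params):
    """Hypothetical stand-in for a distributed model run; returns summary
    'signatures' of internal states and fluxes for a two-parameter model."""
    k, s = params
    return {"low_flow": 2.0 * k, "gw_gradient": s - k}

# Intervals encoding perceptual/qualitative knowledge (illustrative values).
constraints = {"low_flow": (0.5, 1.5), "gw_gradient": (0.0, 1.0)}

def behavioural(params):
    sig = simulate(params)
    return all(lo <= sig[name] <= hi for name, (lo, hi) in constraints.items())

random.seed(0)
samples = [(random.uniform(0, 1), random.uniform(0, 2)) for _ in range(1000)]
retained = [p for p in samples if behavioural(p)]  # models inside the hyper-volume
```

Each additional interval only shrinks the retained set, so sources of soft knowledge can be combined without re-running the rejected simulations.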

  18. Contraction Options and Optimal Multiple-Stopping in Spectrally Negative Lévy Models

    Energy Technology Data Exchange (ETDEWEB)

    Yamazaki, Kazutoshi, E-mail: kyamazak@kansai-u.ac.jp [Kansai University, Department of Mathematics, Faculty of Engineering Science (Japan)

    2015-08-15

    This paper studies the optimal multiple-stopping problem arising in the context of the timing option to withdraw from a project in stages. The profits are driven by a general spectrally negative Lévy process. This allows the model to incorporate sudden declines of the project values, generalizing greatly the classical geometric Brownian motion model. We solve the one-stage case as well as the extension to the multiple-stage case. The optimal stopping times are of threshold-type and the value function admits an expression in terms of the scale function. A series of numerical experiments are conducted to verify the optimality and to evaluate the efficiency of the algorithm.

  19. Contraction Options and Optimal Multiple-Stopping in Spectrally Negative Lévy Models

    International Nuclear Information System (INIS)

    Yamazaki, Kazutoshi

    2015-01-01

    This paper studies the optimal multiple-stopping problem arising in the context of the timing option to withdraw from a project in stages. The profits are driven by a general spectrally negative Lévy process. This allows the model to incorporate sudden declines of the project values, generalizing greatly the classical geometric Brownian motion model. We solve the one-stage case as well as the extension to the multiple-stage case. The optimal stopping times are of threshold-type and the value function admits an expression in terms of the scale function. A series of numerical experiments are conducted to verify the optimality and to evaluate the efficiency of the algorithm.

  20. Incorporation of defects into the central atoms model of a metallic glass

    International Nuclear Information System (INIS)

    Lass, Eric A.; Zhu Aiwu; Shiflet, G.J.; Joseph Poon, S.

    2011-01-01

    The central atoms model (CAM) of a metallic glass is extended to incorporate thermodynamically stable defects, similar to vacancies in a crystalline solid, within the amorphous structure. A bond deficiency (BD), which is the proposed defect present in all metallic glasses, is introduced into the CAM equations. Like vacancies in a crystalline solid, BDs are thermodynamically stable entities because of the increase in entropy associated with their creation, and there is an equilibrium concentration present in the glassy phase. When applied to Cu-Zr and Ni-Zr binary metallic glasses, the concentration of thermally induced BDs surrounding Zr atoms reaches a relatively constant value at the glass transition temperature, regardless of composition within a given glass system. Using this 'critical' defect concentration, the predicted temperatures at which the glass transition is expected to occur are in good agreement with the experimentally determined glass transition temperatures for both alloy systems.
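The vacancy analogy above can be made concrete with the standard Arrhenius form for an equilibrium defect population; the formation energy in this sketch is an arbitrary placeholder, not a fitted bond-deficiency energy from the CAM:

```python
from math import exp

K_B = 8.617e-5  # Boltzmann constant in eV/K

def equilibrium_defect_fraction(e_form, temperature):
    """Equilibrium fraction of thermally induced defects, c = exp(-Ef / kT).

    The entropy of mixing stabilizes a finite defect population, exactly as
    for vacancies in a crystal; e_form is a placeholder formation energy (eV),
    not a fitted bond-deficiency value.
    """
    return exp(-e_form / (K_B * temperature))
```

The fraction rises with temperature, which is why a fixed "critical" concentration picks out a characteristic temperature such as the glass transition.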

  1. A Microdosimetric-Kinetic Model of Cell Killing by Irradiation from Permanently Incorporated Radionuclides.

    Science.gov (United States)

    Hawkins, Roland B

    2018-01-01

    An expression for the surviving fraction of a replicating population of cells exposed to permanently incorporated radionuclide is derived from the microdosimetric-kinetic model. It includes dependency on total implant dose, linear energy transfer (LET), decay rate of the radionuclide, the repair rate of potentially lethal lesions in DNA and the volume doubling time of the target population. This is used to obtain an expression for the biologically effective dose ( BED α / β ) based on the minimum survival achieved by the implant that is equivalent to, and can be compared and combined with, the BED α / β calculated for a fractionated course of radiation treatment. Approximate relationships are presented that are useful in the calculation of BED α / β for alpha- or beta-emitting radionuclides with half-life significantly greater than, or nearly equal to, the approximately 1-h repair half-life of radiation-induced potentially lethal lesions.
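For orientation, the classical permanent-implant biologically effective dose (the Dale form, a textbook result rather than the microdosimetric-kinetic derivation of this paper) for total dose $D$, radionuclide decay constant $\lambda$ and repair rate $\mu$ reads:

```latex
\mathrm{BED} = D\left(1 + \frac{\lambda D}{(\mu + \lambda)\,(\alpha/\beta)}\right),
\qquad D = \frac{\dot R_0}{\lambda},
```

where $\dot R_0$ is the initial dose rate; repopulation of the target population contributes a further negative term, consistent with the doubling-time dependence noted in the abstract.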

  2. Parametric modeling for damped sinusoids from multiple channels

    DEFF Research Database (Denmark)

    Zhou, Zhenhua; So, Hing Cheung; Christensen, Mads Græsbøll

    2013-01-01

    The problem of parametric modeling for noisy damped sinusoidal signals from multiple channels is addressed. Utilizing the shift invariance property of the signal subspace, the number of distinct sinusoidal poles in the multiple channels is first determined. With the estimated number, the distinct frequencies and damping factors are then computed with the multi-channel weighted linear prediction method. The estimated sinusoidal poles are then matched to each channel according to the extreme value theory of distribution of random fields. Simulations are performed to show the performance advantages of the proposed multi-channel sinusoidal modeling methodology compared with existing methods.
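The linear-prediction step that recovers frequencies and damping factors can be illustrated for a single noiseless channel; the multi-channel weighted variant shares the pole polynomial across channels and then matches poles per channel. Signal parameters below are arbitrary:

```python
import numpy as np

fs = 1000.0                 # sample rate (Hz); all parameters illustrative
n = np.arange(200)
f0, d0 = 50.0, 10.0         # true frequency (Hz) and damping factor (1/s)
x = np.exp(-d0 * n / fs) * np.cos(2 * np.pi * f0 * n / fs)

# A single damped cosine satisfies an exact 2nd-order linear prediction
# x[k] = a0*x[k-1] + a1*x[k-2]; the prediction polynomial's roots are the
# signal poles z = exp((-d0 + i*2*pi*f0)/fs).
A = np.column_stack([x[1:-1], x[:-2]])
a = np.linalg.lstsq(A, x[2:], rcond=None)[0]
z = np.roots([1.0, -a[0], -a[1]])
z = z[np.argmax(z.imag)]    # keep the pole in the upper half plane
f_hat = np.angle(z) * fs / (2 * np.pi)   # estimated frequency (Hz)
d_hat = -np.log(np.abs(z)) * fs          # estimated damping (1/s)
```

The pole angle gives the frequency and the pole radius the damping, which is the quantity matched across channels in the paper's method.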

  3. A Multiple Model Prediction Algorithm for CNC Machine Wear PHM

    Directory of Open Access Journals (Sweden)

    Huimin Chen

    2011-01-01

    Full Text Available The 2010 PHM data challenge focuses on the remaining useful life (RUL estimation for cutters of a high speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple model approach for wear depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple model fusion components of the proposed algorithm are explained in details and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants and the method is applicable to other data driven PHM problems.

  4. Incorporation of Hydrogen Bond Angle Dependency into the Generalized Solvation Free Energy Density Model.

    Science.gov (United States)

    Ma, Songling; Hwang, Sungbo; Lee, Sehan; Acree, William E; No, Kyoung Tai

    2018-04-23

    To describe the physically realistic solvation free energy surface of a molecule in a solvent, a generalized version of the solvation free energy density (G-SFED) calculation method has been developed. In the G-SFED model, the contribution from the hydrogen bond (HB) between a solute and a solvent to the solvation free energy was calculated as the product of the acidity of the donor and the basicity of the acceptor of an HB pair. The acidity and basicity parameters of a solute were derived using the summation of acidities and basicities of the respective acidic and basic functional groups of the solute, and that of the solvent was experimentally determined. Although the contribution of HBs to the solvation free energy could be evenly distributed to grid points on the surface of a molecule, the G-SFED model was still inadequate to describe the angle dependency of the HB of a solute with a polarizable continuum solvent. To overcome this shortcoming of the G-SFED model, the contribution of HBs was formulated using the geometric parameters of the grid points described in the HB coordinate system of the solute. We propose an HB angle dependency incorporated into the G-SFED model, i.e., the G-SFED-HB model, where the angular-dependent acidity and basicity densities are defined and parametrized with experimental data. The G-SFED-HB model was then applied to calculate the solvation free energies of organic molecules in water, various alcohols and ethers, and the log P values of diverse organic molecules, including peptides and a protein. Both the G-SFED model and the G-SFED-HB model reproduced the experimental solvation free energies with similar accuracy, whereas the distributions of the SFED on the molecular surface calculated by the G-SFED and G-SFED-HB models were quite different, especially for molecules having HB donors or acceptors. 
Since the angle dependency of HBs was included in the G-SFED-HB model, the SFED distribution of the G-SFED-HB model is well described

  5. A Non-Isothermal Chemical Lattice Boltzmann Model Incorporating Thermal Reaction Kinetics and Enthalpy Changes

    Directory of Open Access Journals (Sweden)

    Stuart Bartlett

    2017-08-01

    Full Text Available The lattice Boltzmann method is an efficient computational fluid dynamics technique that can accurately model a broad range of complex systems. As well as single-phase fluids, it can simulate thermohydrodynamic systems and passive scalar advection. In recent years, it also gained attention as a means of simulating chemical phenomena, as interest in self-organization processes increased. This paper will present a widely-used and versatile lattice Boltzmann model that can simultaneously incorporate fluid dynamics, heat transfer, buoyancy-driven convection, passive scalar advection, chemical reactions and enthalpy changes. All of these effects interact in a physically accurate framework that is simple to code and readily parallelizable. As well as a complete description of the model equations, several example systems will be presented in order to demonstrate the accuracy and versatility of the method. New simulations, which analyzed the effect of a reversible reaction on the transport properties of a convecting fluid, will also be described in detail. This extra chemical degree of freedom was utilized by the system to augment its net heat flux. The numerical method outlined in this paper can be readily deployed for a vast range of complex flow problems, spanning a variety of scientific disciplines.
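To give a flavor of how compact a lattice Boltzmann scheme is to code, here is a minimal D1Q3 advection-diffusion sketch for a passive scalar, a far simpler cousin of the thermohydrodynamic model in the record; all parameters are illustrative:

```python
import numpy as np

nx, tau, u = 200, 0.8, 0.05        # grid points, relaxation time, advection speed
w = np.array([2/3, 1/6, 1/6])      # D1Q3 weights
c = np.array([0, 1, -1])           # lattice velocities
cs2 = 1/3                          # lattice speed of sound squared

rho = np.zeros(nx)
rho[nx // 2] = 1.0                 # initial concentration pulse
f = w[:, None] * rho[None, :] * (1 + c[:, None] * u / cs2)

for _ in range(100):
    rho = f.sum(axis=0)
    feq = w[:, None] * rho[None, :] * (1 + c[:, None] * u / cs2)
    f += (feq - f) / tau           # BGK collision
    for i in (1, 2):
        f[i] = np.roll(f[i], c[i])  # streaming with periodic boundaries

rho = f.sum(axis=0)                # scalar field after 100 steps;
                                   # diffusivity is D = cs2 * (tau - 0.5)
```

The collide-and-stream loop conserves the total scalar exactly and advects its center of mass at speed u, which is why extra fields (temperature, reactants) can be layered on with the same structure.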

  6. Incorporation of a Wind Generator Model into a Dynamic Power Flow Analysis

    Directory of Open Access Journals (Sweden)

    Angeles-Camacho C.

    2011-07-01

    Full Text Available Wind energy is nowadays one of the most cost-effective and practical options for electric generation from renewable resources. However, increased penetration of wind generation causes the power networks to be more dependent on, and vulnerable to, the varying wind speed. Modeling is a tool which can provide valuable information about the interaction between wind farms and the power network to which they are connected. This paper develops a realistic characterization of a wind generator. The wind generator model is incorporated into an algorithm to investigate its contribution to the stability of the power network in the time domain. The tool obtained is termed dynamic power flow. The wind generator model takes into account the wind speed and the reactive power consumption by induction generators. Dynamic power flow analysis is carried out using real wind data at 10-minute time intervals collected from one meteorological station. The generation injected at one point into the network provides active power locally and is found to reduce global power losses. However, the power supplied is time-varying and causes fluctuations in voltage magnitude and power flows in transmission lines.

  7. Multiple regression and beyond an introduction to multiple regression and structural equation modeling

    CERN Document Server

    Keith, Timothy Z

    2014-01-01

    Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. Covers both MR and SEM, while explaining their relevance to one another Also includes path analysis, confirmatory factor analysis, and latent growth modeling Figures and tables throughout provide examples and illustrate key concepts and techniques For additional resources, please visit: http://tzkeith.com/.

  8. A collaborative scheduling model for the supply-hub with multiple suppliers and multiple manufacturers.

    Science.gov (United States)

    Li, Guo; Lv, Fei; Guan, Xu

    2014-01-01

    This paper investigates a collaborative scheduling model in the assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of the Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub. One is that suppliers and manufacturers make their decisions separately, and the other is that the Supply-Hub makes joint decisions with collaborative scheduling. The results show that our scheduling model with the Supply-Hub is an NP-complete problem; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to the separate decisions made by each manufacturer and supplier. Furthermore, we also show that the proposed algorithm has good convergence and reliability, making it applicable to more complicated supply chain environments.
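The differential evolution heuristic mentioned here has a compact canonical form (DE/rand/1/bin). The "auto-adapted" variant in the paper additionally tunes the mutation and crossover rates, which are fixed in this sketch, and the sphere objective is only a stand-in for the scheduling cost:

```python
import random

def differential_evolution(obj, bounds, pop_size=30, f=0.8, cr=0.9, gens=200):
    """Classic DE/rand/1/bin minimizer with box constraints."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jr = random.randrange(dim)                  # forced crossover index
            trial = [
                min(max(pop[a][j] + f * (pop[b][j] - pop[c][j]),
                        bounds[j][0]), bounds[j][1])
                if (random.random() < cr or j == jr) else pop[i][j]
                for j in range(dim)
            ]
            tf = obj(trial)
            if tf <= fit[i]:                            # greedy selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

random.seed(0)
# Sphere function as a stand-in for the scheduling objective.
x_best, f_best = differential_evolution(lambda v: sum(t * t for t in v),
                                        [(-5.0, 5.0)] * 3)
```

For a real scheduling problem the continuous vector would be decoded into delivery sequences before evaluating the cost, a common trick for applying DE to combinatorial problems.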

  9. A Collaborative Scheduling Model for the Supply-Hub with Multiple Suppliers and Multiple Manufacturers

    Directory of Open Access Journals (Sweden)

    Guo Li

    2014-01-01

    Full Text Available This paper investigates a collaborative scheduling model in the assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of the Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub. One is that suppliers and manufacturers make their decisions separately, and the other is that the Supply-Hub makes joint decisions with collaborative scheduling. The results show that our scheduling model with the Supply-Hub is an NP-complete problem; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to the separate decisions made by each manufacturer and supplier. Furthermore, we also show that the proposed algorithm has good convergence and reliability, making it applicable to more complicated supply chain environments.

  10. A Collaborative Scheduling Model for the Supply-Hub with Multiple Suppliers and Multiple Manufacturers

    Science.gov (United States)

    Lv, Fei; Guan, Xu

    2014-01-01

    This paper investigates a collaborative scheduling model in the assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of the Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub. One is that suppliers and manufacturers make their decisions separately, and the other is that the Supply-Hub makes joint decisions with collaborative scheduling. The results show that our scheduling model with the Supply-Hub is an NP-complete problem; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to the separate decisions made by each manufacturer and supplier. Furthermore, we also show that the proposed algorithm has good convergence and reliability, making it applicable to more complicated supply chain environments. PMID:24892104

  11. Incorporating an extended dendritic growth model into the CAFE model for rapidly solidified non-dilute alloys

    International Nuclear Information System (INIS)

    Ma, Jie; Wang, Bo; Zhao, Shunli; Wu, Guangxin; Zhang, Jieyu; Yang, Zhiliang

    2016-01-01

    We have extended the dendritic growth model first proposed by Boettinger, Coriell and Trivedi (here termed EBCT) for microstructure simulations of rapidly solidified non-dilute alloys. The temperature-dependent distribution coefficient, obtained from calculations of phase equilibria, and the continuous growth model (CGM) were adopted in the present EBCT model to describe the solute trapping behaviors. The temperature dependence of the physical properties, which was not used in previous dendritic growth models, was also considered in the present EBCT model. These extensions allow the present EBCT model to be used for microstructure simulations of non-dilute alloys. The comparison of the present EBCT model with the BCT model proves that the considerations of the distribution coefficient and physical properties are necessary for microstructure simulations, especially for small particles with high undercoolings. Finally, the EBCT model was incorporated into the cellular automaton-finite element (CAFE) model to simulate microstructures of gas-atomized ASP30 high speed steel particles that were then compared with experimental results. Both the simulated and experimental results reveal that a columnar dendritic microstructure preferentially forms in small particles and an equiaxed microstructure forms otherwise. The applications of the present EBCT model provide a convenient way to predict the microstructure of non-dilute alloys. - Highlights: • A dendritic growth model was developed considering a non-equilibrium distribution coefficient. • The physical properties with temperature dependence were considered in the extended model. • The extended model can be applied to non-dilute alloys and the extensions are necessary in small particles. • Microstructure of ASP30 steel was investigated using the present model and verified by experiment.

  12. Incorporating an extended dendritic growth model into the CAFE model for rapidly solidified non-dilute alloys

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Jie; Wang, Bo [State Key Laboratory of Advanced Special Steel, Shanghai University, Shanghai 200072 (China); Shanghai Engineering Technology Research Center of Special Casting, Shanghai 201605 (China); Zhao, Shunli [Research Institute, Baoshan Iron & Steel Co., Ltd, Shanghai 201900 (China); Wu, Guangxin [State Key Laboratory of Advanced Special Steel, Shanghai University, Shanghai 200072 (China); Shanghai Engineering Technology Research Center of Special Casting, Shanghai 201605 (China); Zhang, Jieyu, E-mail: zjy6162@staff.shu.edu.cn [State Key Laboratory of Advanced Special Steel, Shanghai University, Shanghai 200072 (China); Shanghai Engineering Technology Research Center of Special Casting, Shanghai 201605 (China); Yang, Zhiliang [State Key Laboratory of Advanced Special Steel, Shanghai University, Shanghai 200072 (China); Shanghai Engineering Technology Research Center of Special Casting, Shanghai 201605 (China)

    2016-05-25

    We have extended the dendritic growth model first proposed by Boettinger, Coriell and Trivedi (here termed EBCT) for microstructure simulations of rapidly solidified non-dilute alloys. The temperature-dependent distribution coefficient, obtained from calculations of phase equilibria, and the continuous growth model (CGM) were adopted in the present EBCT model to describe the solute trapping behaviors. The temperature dependence of the physical properties, which was not used in previous dendritic growth models, was also considered in the present EBCT model. These extensions allow the present EBCT model to be used for microstructure simulations of non-dilute alloys. The comparison of the present EBCT model with the BCT model proves that the considerations of the distribution coefficient and physical properties are necessary for microstructure simulations, especially for small particles with high undercoolings. Finally, the EBCT model was incorporated into the cellular automaton-finite element (CAFE) model to simulate microstructures of gas-atomized ASP30 high speed steel particles that were then compared with experimental results. Both the simulated and experimental results reveal that a columnar dendritic microstructure preferentially forms in small particles and an equiaxed microstructure forms otherwise. The applications of the present EBCT model provide a convenient way to predict the microstructure of non-dilute alloys. - Highlights: • A dendritic growth model was developed considering a non-equilibrium distribution coefficient. • The physical properties with temperature dependence were considered in the extended model. • The extended model can be applied to non-dilute alloys and the extensions are necessary in small particles. • Microstructure of ASP30 steel was investigated using the present model and verified by experiment.

  13. Incorporation of Satellite Data and Uncertainty in a Nationwide Groundwater Recharge Model in New Zealand

    Directory of Open Access Journals (Sweden)

    Rogier Westerhoff

    2018-01-01

    Full Text Available A nationwide model of groundwater recharge for New Zealand (NGRM, as described in this paper, demonstrated the benefits of satellite data and global models to improve the spatial definition of recharge and the estimation of recharge uncertainty. NGRM was inspired by the global-scale WaterGAP model but with the key development of rainfall recharge calculation on scales relevant to national- and catchment-scale studies (i.e., a 1 km × 1 km cell size and a monthly timestep in the period 2000–2014 provided by satellite data (i.e., MODIS-derived evapotranspiration, AET and vegetation in combination with national datasets of rainfall, elevation, soil and geology. The resulting nationwide model calculates groundwater recharge estimates, including their uncertainty, consistent across the country, which makes the model unique compared to all other New Zealand estimates targeted towards groundwater recharge. At the national scale, NGRM estimated an average recharge of 2500 m 3 /s, or 298 mm/year, with a model uncertainty of 17%. Those results were similar to the WaterGAP model, but the improved input data resulted in better spatial characteristics of recharge estimates. Multiple uncertainty analyses led to these main conclusions: the NGRM model could give valuable initial estimates in data-sparse areas, since it compared well to most ground-observed lysimeter data and local recharge models; and the nationwide input data of rainfall and geology caused the largest uncertainty in the model equation, which revealed that the satellite data could improve spatial characteristics without significantly increasing the uncertainty. Clearly the increasing volume and availability of large-scale satellite data is creating more opportunities for the application of national-scale models at the catchment, and smaller, scales. This should result in improved utility of these models including provision of initial estimates in data-sparse areas. Topics for future
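The cell-by-cell water-balance idea, and the quadrature propagation implied by a model-wide uncertainty estimate, can be sketched as follows. The runoff coefficient and error terms are schematic; NGRM's actual recharge equation additionally folds in soil and geology factors:

```python
def monthly_recharge(rain_mm, aet_mm, runoff_coeff):
    """Schematic cell-month water balance: R = P - AET - runoff, floored at 0.

    A stand-in for NGRM's recharge equation, which also uses soil/geology
    factors; runoff_coeff is a hypothetical fraction of rainfall."""
    return max(rain_mm - aet_mm - runoff_coeff * rain_mm, 0.0)

def recharge_uncertainty(rain_mm, aet_mm, rel_err_rain, rel_err_aet):
    """First-order propagation of independent input errors in quadrature."""
    return ((rel_err_rain * rain_mm) ** 2 + (rel_err_aet * aet_mm) ** 2) ** 0.5
```

Because rainfall typically dominates the balance, its relative error dominates the propagated uncertainty, matching the paper's finding that rainfall and geology are the largest uncertainty sources.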

  14. Double-multiple streamtube model for Darrieus wind turbines

    Science.gov (United States)

    Paraschivoiu, I.

    1981-01-01

    An analytical model is proposed for calculating the rotor performance and aerodynamic blade forces for Darrieus wind turbines with curved blades. The method of analysis uses a multiple-streamtube model, divided into two parts: one modeling the upstream half-cycle of the rotor and the other, the downstream half-cycle. The upwind and downwind components of the induced velocities at each level of the rotor were obtained using the principle of two actuator disks in tandem. Variation of the induced velocities in the two parts of the rotor produces larger forces in the upstream zone and smaller forces in the downstream zone. Comparisons of the overall rotor performance with previous methods and field test data show the important improvement obtained with the present model. The calculations were made using the computer code CARDAA developed at IREQ. The double-multiple streamtube model presented has two major advantages: it requires a much shorter computer time than the three-dimensional vortex model and is more accurate than the multiple-streamtube model in predicting the aerodynamic blade loads.
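The tandem actuator-disk idea reduces to a simple velocity chain, as commonly written in the double-multiple streamtube literature; the interference factors here are illustrative inputs, whereas the full model solves for them iteratively from blade-element and momentum balances:

```python
def dms_velocities(v_inf, u_up, u_dw):
    """Velocity chain through two actuator disks in tandem (standard
    double-multiple streamtube relations; interference factors <= 1)."""
    v_up = u_up * v_inf               # velocity at the upstream half-cycle
    v_e = (2.0 * u_up - 1.0) * v_inf  # equilibrium velocity between the disks
    v_dw = u_dw * v_e                 # velocity at the downstream half-cycle
    return v_up, v_e, v_dw
```

Because the downstream disk sees the already-decelerated equilibrium flow, blade forces are inherently larger upstream than downstream, exactly the asymmetry the abstract describes.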

  15. A multi-period, multi-regional generation expansion planning model incorporating unit commitment constraints

    International Nuclear Information System (INIS)

    Koltsaklis, Nikolaos E.; Georgiadis, Michael C.

    2015-01-01

    Highlights: • A short-term structured investment planning model has been developed. • Unit commitment problem is incorporated into the long-term planning horizon. • Inherent intermittency of renewables is modelled in a comprehensive way. • The impact of CO_2 emission pricing in long-term investment decisions is quantified. • The evolution of system’s marginal price is evaluated for all the planning horizon. - Abstract: This work presents a generic mixed integer linear programming (MILP) model that integrates the unit commitment problem (UCP), i.e., daily energy planning with the long-term generation expansion planning (GEP) framework. Typical daily constraints at an hourly level such as start-up and shut-down related decisions (start-up type, minimum up and down time, synchronization, soak and desynchronization time constraints), ramping limits, system reserve requirements are combined with representative yearly constraints such as power capacity additions, power generation bounds of each unit, peak reserve requirements, and energy policy issues (renewables penetration limits, CO_2 emissions cap and pricing). For modelling purposes, a representative day (24 h) of each month over a number of years has been employed in order to determine the optimal capacity additions, electricity market clearing prices, and daily operational planning of the studied power system. The model has been tested on an illustrative case study of the Greek power system. Our approach aims to provide useful insight into strategic and challenging decisions to be determined by investors and/or policy makers at a national and/or regional level by providing the optimal energy roadmap under real operating and design constraints.
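The start-up/shut-down logic and minimum up- and down-time limits mentioned in the highlights are conventionally written in the following textbook MILP form, with binary variables $u_{i,t}$, $y_{i,t}$, $z_{i,t}$ for the on/off, start-up and shut-down status of unit $i$ at hour $t$ (the standard formulation, not necessarily the paper's exact constraint set):

```latex
u_{i,t} - u_{i,t-1} = y_{i,t} - z_{i,t}, \qquad
\sum_{k=t-UT_i+1}^{t} y_{i,k} \le u_{i,t}, \qquad
\sum_{k=t-DT_i+1}^{t} z_{i,k} \le 1 - u_{i,t}.
```

The first equation couples status changes to the indicators; the sums force a unit that started (shut down) within the last $UT_i$ ($DT_i$) hours to remain on (off).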

  16. Multiple commodities in statistical microeconomics: Model and market

    Science.gov (United States)

    Baaquie, Belal E.; Yu, Miao; Du, Xin

    2016-11-01

    A statistical generalization of microeconomics has been made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed and it was shown that market data provides strong support for the statistical microeconomic description of commodity prices. The case of multiple commodities is studied and a parsimonious generalization of the single commodity model is made for the multiple commodities case. Market data shows that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.

  17. Risk Prediction Models for Other Cancers or Multiple Sites

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing other multiple cancers over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. An extension of the multiple-trapping model

    International Nuclear Information System (INIS)

    Shkilev, V. P.

    2012-01-01

    The hopping charge transport in disordered semiconductors is considered. Using the concept of the transport energy level, macroscopic equations are derived that extend a multiple-trapping model to the case of semiconductors with both energy and spatial disorders. It is shown that, although both types of disorder can cause dispersive transport, the frequency dependence of conductivity is determined exclusively by the spatial disorder.

  19. Selecting Tools to Model Integer and Binomial Multiplication

    Science.gov (United States)

    Pratt, Sarah Smitherman; Eddy, Colleen M.

    2017-01-01

    Mathematics teachers frequently provide concrete manipulatives to students during instruction; however, the rationale for using certain manipulatives in conjunction with concepts may not be explored. This article focuses on area models that are currently used in classrooms to provide concrete examples of integer and binomial multiplication. The…

  20. Modeling single versus multiple systems in implicit and explicit memory.

    Science.gov (United States)

    Starns, Jeffrey J; Ratcliff, Roger; McKoon, Gail

    2012-04-01

    It is currently controversial whether priming on implicit tasks and discrimination on explicit recognition tests are supported by a single memory system or by multiple, independent systems. In a Psychological Review article, Berry and colleagues used mathematical modeling to address this question and provide compelling evidence against the independent-systems approach. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Green communication: The enabler to multiple business models

    DEFF Research Database (Denmark)

    Lindgren, Peter; Clemmensen, Suberia; Taran, Yariv

    2010-01-01

    Companies stand at the forefront of a new business model reality with new potentials that will change their basic understanding and practice of running their business models radically. One of the drivers of this change is green communication, its strong relation to green business models, and its possibility to enable lower energy consumption. This paper shows how green communication enables innovation of green business models and of multiple business models running simultaneously in different markets to different customers.

  2. Infinite Multiple Membership Relational Modeling for Complex Networks

    DEFF Research Database (Denmark)

    Mørup, Morten; Schmidt, Mikkel Nørgaard; Hansen, Lars Kai

    Learning latent structure in complex networks has become an important problem fueled by many types of networked data originating from practically all fields of science. In this paper, we propose a new non-parametric Bayesian multiple-membership latent feature model for networks. Contrary to existing multiple-membership models that scale quadratically in the number of vertices, the proposed model scales linearly in the number of links, admitting multiple-membership analysis in large-scale networks. We demonstrate a connection between the single-membership relational model and multiple-membership models and show…

  3. Estimating disperser abundance using open population models that incorporate data from continuous detection PIT arrays

    Science.gov (United States)

    Dzul, Maria C.; Yackulic, Charles B.; Korman, Josh

    2017-01-01

    Autonomous passive integrated transponder (PIT) tag antenna systems continuously detect individually marked organisms at one or more fixed points over long time periods. Estimating abundance using data from autonomous antennae can be challenging, because these systems do not detect unmarked individuals. Here we pair PIT antenna data from a tributary with mark-recapture sampling data in a mainstem river to estimate the number of fish moving from the mainstem to the tributary. We then use our model to estimate the abundance of non-native rainbow trout Oncorhynchus mykiss that move from the Colorado River to the Little Colorado River (LCR), the latter of which is important spawning and rearing habitat for the federally endangered humpback chub Gila cypha. We estimate that 226 rainbow trout (95% CI: 127-370) entered the LCR from October 2013 to April 2014. We discuss the challenges of incorporating detections from autonomous PIT antenna systems into mark-recapture population models, particularly with regard to using information about spatial location to estimate movement and detection probabilities.
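
    The study fits an open-population model, but the core mark-recapture idea of inferring abundance from the marked fraction can be conveyed with the much simpler closed-population Chapman estimator (a hedged analogue, not the paper's model; all counts below are invented):

```python
import math

def chapman_estimate(n_marked, n_captured, n_recaptured):
    """Chapman-modified Lincoln-Petersen abundance estimate with ~95% CI."""
    N = (n_marked + 1) * (n_captured + 1) / (n_recaptured + 1) - 1
    var = ((n_marked + 1) * (n_captured + 1) * (n_marked - n_recaptured)
           * (n_captured - n_recaptured)) / ((n_recaptured + 1) ** 2 * (n_recaptured + 2))
    half = 1.96 * math.sqrt(var)  # normal-approximation interval
    return N, (N - half, N + half)

# Invented example: 50 fish marked, 60 captured later, 12 of them recaptured.
N, ci = chapman_estimate(n_marked=50, n_captured=60, n_recaptured=12)
print(round(N), [round(x) for x in ci])
```

    Open-population models generalize this by additionally estimating survival, entry, and (here) movement and detection probabilities across occasions.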

  4. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Science.gov (United States)

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, which are the two most important parameters describing the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparison against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
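
    The hypothesize-vote-refine structure of the RANSAC scheme described above can be sketched generically (plain 2D line fitting on invented data, rather than the paper's vehicle-model hypothesis generator and reprojection error):

```python
import random

# RANSAC sketch: a minimal sample generates a hypothesis, inliers vote,
# and the winning hypothesis is refined on all of its inliers.
random.seed(0)
points = [(x, 2.0 * x + 1.0) for x in range(20)]           # inliers on y = 2x + 1
points += [(x, random.uniform(-50, 50)) for x in range(5)]  # gross outliers

def fit_line(p, q):
    """Slope/intercept through two points (the minimal sample)."""
    (x1, y1), (x2, y2) = p, q
    m = (y2 - y1) / (x2 - x1)
    return m, y1 - m * x1

best_inliers = []
for _ in range(100):                      # hypothesize-and-test loop
    p, q = random.sample(points, 2)
    if p[0] == q[0]:
        continue                          # degenerate sample
    m, b = fit_line(p, q)
    inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < 0.5]
    if len(inliers) > len(best_inliers):
        best_inliers = inliers

# Refine the winning hypothesis on all its inliers (least squares).
n = len(best_inliers)
sx = sum(x for x, _ in best_inliers); sy = sum(y for _, y in best_inliers)
sxx = sum(x * x for x, _ in best_inliers); sxy = sum(x * y for x, y in best_inliers)
m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - m * sx) / n
print(round(m, 3), round(b, 3))
```

    In the paper, the minimal-sample line fit is replaced by the closed-form bicycle-model solution, and the final least-squares step by reprojection-error minimization.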

  5. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Science.gov (United States)

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. The limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements of the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
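
    Among the corrections reviewed, SIMEX has a particularly compact logic: add further simulated measurement error at increasing levels, watch the effect estimate attenuate, and extrapolate the trend back to zero error. A self-contained sketch on invented data (classical additive error, quadratic extrapolation):

```python
import numpy as np

# SIMEX sketch: exposure x is observed only as w = x + u, attenuating the
# naive slope; simulating extra error and extrapolating recovers the effect.
rng = np.random.default_rng(42)
n, beta, sigma_u = 20000, 1.0, 0.7
x = rng.normal(0, 1, n)                 # true exposure
w = x + rng.normal(0, sigma_u, n)       # error-prone measured exposure
y = beta * x + rng.normal(0, 0.3, n)    # health outcome

def slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # added-error levels
slopes = [np.mean([slope(w + rng.normal(0, np.sqrt(l) * sigma_u, n), y)
                   for _ in range(20)]) for l in lams]

# Quadratic extrapolation of slope(lambda) to lambda = -1 (the SIMEX step).
coef = np.polyfit(lams, slopes, 2)
simex_slope = np.polyval(coef, -1.0)
naive_slope = slopes[0]
print(round(naive_slope, 2), round(simex_slope, 2))
```

    The naive slope is attenuated toward zero; the extrapolated SIMEX estimate moves it back toward the true effect (full recovery would need the exact, here nonlinear, extrapolant).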

  6. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Yanhua Jiang

    2014-09-01

    Full Text Available This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, which are the two most important parameters describing the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparison against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments.

  7. Exciton delocalization incorporated drift-diffusion model for bulk-heterojunction organic solar cells

    Science.gov (United States)

    Wang, Zi Shuai; Sha, Wei E. I.; Choy, Wallace C. H.

    2016-12-01

    Modeling the charge-generation process is highly important for understanding device physics and optimizing the power conversion efficiency of bulk-heterojunction organic solar cells (OSCs). Free carriers are generated by both ultrafast exciton delocalization and slow exciton diffusion and dissociation at the heterojunction interface. In this work, we developed a systematic numerical simulation to describe the charge-generation process by a modified drift-diffusion model. The transport, recombination, and collection of free carriers are incorporated to fully capture the device response. The theoretical results match well with state-of-the-art high-performance organic solar cells. It is demonstrated that increasing the exciton delocalization ratio reduces the energy loss in the exciton diffusion-dissociation process and thus significantly improves the device efficiency, especially the short-circuit current. By changing the exciton delocalization ratio, OSC performance is comprehensively investigated under short-circuit and open-circuit conditions. In particular, bulk-recombination-dependent fill-factor saturation is unveiled and understood. As a fundamental electrical analysis of the delocalization mechanism, our work is important for understanding and optimizing high-performance OSCs.

  8. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    Science.gov (United States)

    Khan, Tanvir R.; Perlinger, Judith A.

    2017-10-01

    Friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.

  9. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    Directory of Open Access Journals (Sweden)

    T. R. Khan

    2017-10-01

    Friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.

  10. Spinal motor control system incorporates an internal model of limb dynamics.

    Science.gov (United States)

    Shimansky, Y P

    2000-10-01

    The existence and utilization of an internal representation of the controlled object is one of the most important features of the functioning of neural motor control systems. This study demonstrates that this property already exists at the level of the spinal motor control system (SMCS), which is capable of generating motor patterns for reflex rhythmic movements, such as locomotion and scratching, without the aid of peripheral afferent feedback, but substantially modifies the generated activity in response to peripheral afferent stimuli. The SMCS is presented as an optimal control system whose optimality requires that it incorporate an internal model (IM) of the controlled object's dynamics. A novel functional mechanism for the integration of peripheral sensory signals with the corresponding predictive output from the IM, the summation of information precision (SIP), is proposed. In contrast to other models in which the correction of the internal representation of the controlled object's state is based on the calculation of a mismatch between the internal and external information sources, the SIP mechanism merges the information from these sources in order to optimize the precision of the controlled object's state estimate. It is demonstrated, based on scratching in decerebrate cats as an example of the spinal control of goal-directed movements, that the results of computer modeling agree with the experimental observations related to the SMCS's reactions to phasic and tonic peripheral afferent stimuli. It is also shown that the functional requirements imposed by the mathematical model of the SMCS comply with current knowledge about the related properties of spinal neuronal circuitry. The crucial role of the spinal presynaptic inhibition mechanism in the neuronal implementation of SIP is elucidated. Important differences between the IM and a state predictor employed to compensate for a neural reflex time delay are discussed.
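
    One way to read the SIP mechanism (our gloss, not the paper's equations) is as precision-weighted Gaussian fusion: the precisions (inverse variances) of the internal prediction and the afferent signal add, rather than a mismatch being computed. All numbers below are invented:

```python
# Precision-weighted fusion of two Gaussian state estimates: precisions sum,
# and the fused mean leans toward the sharper (higher-precision) source.
def fuse(mu1, var1, mu2, var2):
    p1, p2 = 1.0 / var1, 1.0 / var2      # precisions
    p = p1 + p2                           # precisions add (the "SIP" reading)
    mu = (p1 * mu1 + p2 * mu2) / p        # precision-weighted mean
    return mu, 1.0 / p

# A vague internal prediction (var 0.9) meets a sharp afferent signal (var 0.1):
mu, var = fuse(10.0, 0.9, 12.0, 0.1)
print(mu, var)
```

    The fused estimate sits close to the sharp afferent value, and its variance is smaller than either source's, capturing the idea of merging rather than correcting by mismatch.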

  11. Vehicle coordinated transportation dispatching model base on multiple crisis locations

    Science.gov (United States)

    Tian, Ran; Li, Shanwei; Yang, Guoying

    2018-05-01

    Unconventional emergencies are often followed by multiple disastrous events, and the requirements of the resulting disaster sites often differ. It is difficult for a single emergency resource center to satisfy such requirements simultaneously. Coordinating the emergency resources stored by multiple emergency resource centers and transporting them to the various disaster sites therefore requires the coordinated dispatch of emergency vehicles. In this paper, addressing the emergency logistics coordination scheduling problem and based on the relevant constraints of emergency logistics transportation, an emergency resource scheduling model for multiple disaster sites is established.
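
    A toy version of the coordination problem (invented cost matrix; real dispatching models add capacities, time windows, and multiple vehicles per center) reduces to assigning centers to disaster sites at minimum total cost:

```python
from itertools import permutations

# Assign one vehicle from each of three resource centers to three disaster
# sites, minimizing total travel cost; brute force over all assignments.
cost = [  # cost[center][site], invented
    [4, 9, 2],
    [7, 3, 8],
    [5, 6, 1],
]

best_assign, best_cost = None, float("inf")
for sites in permutations(range(3)):          # site assigned to each center
    c = sum(cost[center][site] for center, site in enumerate(sites))
    if c < best_cost:
        best_assign, best_cost = sites, c

print(best_assign, best_cost)
```

    At realistic scale this assignment core is solved with the Hungarian algorithm or embedded in a larger vehicle-routing formulation rather than enumerated.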

  12. Multiple Surrogate Modeling for Wire-Wrapped Fuel Assembly Optimization

    International Nuclear Information System (INIS)

    Raza, Wasim; Kim, Kwang-Yong

    2007-01-01

    In this work, shape optimization of a seven-pin wire-wrapped fuel assembly has been carried out in conjunction with RANS analysis in order to evaluate the performance of surrogate models. Previously, Ahmad and Kim performed flow and heat transfer analysis based on three-dimensional RANS analysis, but numerical optimization has not yet been applied to the design of wire-wrapped fuel assemblies. Surrogate models are widely used in multidisciplinary optimization. Queipo et al. reviewed various surrogate-based models used in aerospace applications. Goel et al. developed a weighted-average surrogate model based on response surface approximation (RSA), radial basis neural network (RBNN) and Kriging (KRG) models. In addition to the three basic models, RSA, RBNN and KRG, the multiple surrogate model PBA has also been employed. Two geometric design variables and a multi-objective function with a weighting factor have been considered for this problem.
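
    A hedged sketch of a weighted-average (multiple) surrogate in the spirit of Goel et al.: each surrogate is weighted inversely to its cross-validation error. All numbers below are invented, and this inverse-error rule is only one of several published weighting schemes:

```python
# Weighted-average surrogate: lower cross-validation error => larger weight.
def ensemble(predictions, cv_errors):
    """Combine surrogate predictions by inverse-error weights."""
    weights = [1.0 / e for e in cv_errors]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total

# Three surrogates (say RSA, RBNN, KRG) predicting one objective value:
preds = [104.0, 98.0, 101.0]
errs = [2.0, 4.0, 1.0]       # cross-validation RMSE of each surrogate
print(ensemble(preds, errs))
```

    The ensemble prediction lands nearest the most accurate surrogate (KRG here), which is the intended hedging behavior when no single surrogate dominates everywhere.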

  13. Incorporating Geochemical And Microbial Kinetics In Reactive Transport Models For Generation Of Acid Rock Drainage

    Science.gov (United States)

    Andre, B. J.; Rajaram, H.; Silverstein, J.

    2010-12-01

    A diffusion model at the scale of a single rock is developed incorporating the proposed kinetic rate expressions. Simulations of initiation, washout and AMD flows are discussed to gain a better understanding of the role of porosity, effective diffusivity and reactive surface area in generating AMD. Simulations indicate that flow boundary conditions control the generation of acid rock drainage as porosity increases.

  14. Incorporating the user perspective into a proposed model for assessing success of SHS implementations

    Directory of Open Access Journals (Sweden)

    Hans Holtorf

    2015-10-01

    Full Text Available Modern energy can contribute to development in multiple ways, while approximately 20% of the world's population does not yet have access to electricity. Solar Home Systems (SHSs) consist of a PV module, a charge controller and a battery, supplying energy in the range of 100 Wh/d in Sunbelt countries. The question addressed in this paper is how SHS users approach the success of their systems and how these users' views can be integrated into an existing model of success. Information was obtained on the users' approach to their SHSs by participatory observation, interviews with users, and self-observation undertaken by the lead author while residing under SHS electricity supply conditions. It was found that success of SHSs from the users' point of view is related to the ability of these systems to reduce the burdens of supplying energy services to homesteads. SHSs can alleviate some energy supply burdens, and they can improve living conditions by enabling communication on multiple levels and by addressing convenience and safety concerns. However, SHSs contribute neither to the energy services which are indispensable for survival, nor to the thermal energy services required and desired in dwellings of Sunbelt countries. The elements of three of the four components of our previously proposed model of success have been verified and found to be appropriate, namely the users' self-set goals, their importance and SHSs' success factors. The locally appropriate, and scientifically satisfactory, measurement of the level of achievement of self-set goals, the fourth component of our model of success, remains an interesting area for future research.

  15. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
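
    Model-based methods of this kind typically rest on a propagation model such as log-distance path loss; the sketch below inverts it to range a single transmitter. The reference powers and path-loss exponents are assumptions for illustration, not values from the paper:

```python
# Log-distance path loss: RSSI = P0 - 10 * n * log10(d), inverted to range
# a transmitter from a received signal strength reading.
def distance_from_rssi(rssi_dbm, p0_dbm=-40.0, n=2.5):
    """Estimated distance (m) given RSSI, 1 m reference power, exponent n."""
    return 10 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

# Invented readings: a 2.4 GHz Wi-Fi AP at -65 dBm, an 868 MHz node at -55 dBm.
d_wifi = distance_from_rssi(-65.0)
d_868 = distance_from_rssi(-55.0, p0_dbm=-35.0, n=2.2)
print(round(d_wifi, 1), round(d_868, 1))
```

    Intersecting such ranges from transmitters on different frequencies, each with its own calibrated exponent, is the basic multi-frequency ingredient that an adaptive model then refines against the indoor architecture.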

  16. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    Directory of Open Access Journals (Sweden)

    Jure Tuta

    2018-03-01

    Full Text Available This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  17. Incorporation of Damage and Failure into an Orthotropic Elasto-Plastic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    Science.gov (United States)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

    A material model which incorporates several key capabilities that have been identified by the aerospace community as lacking in the composite impact models currently available in LS-DYNA® is under development. In particular, the material model, which is being implemented as MAT 213 into a tailored version of LS-DYNA being jointly developed by the FAA and NASA, incorporates both plasticity and damage within the material model, utilizes experimentally based tabulated input to define the evolution of plasticity and damage as opposed to specifying discrete input parameters (such as modulus and strength), and is able to analyze the response of composites composed of a variety of fiber architectures. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. The capability to account for the rate- and temperature-dependent deformation response of composites has also been incorporated into the material model. For the damage model, a strain-equivalent formulation is utilized to allow for the uncoupling of the deformation and damage analyses. In the damage model, a diagonal damage tensor is defined to account for the directionally dependent variation of damage. However, in composites it has been found that loading in one direction can lead to damage in multiple coordinate directions. To account for this phenomenon, the terms in the damage matrix are semi-coupled such that the damage in a particular coordinate direction is a function of the stresses and plastic strains in all of the coordinate directions. The onset of material failure, and thus element deletion, is being developed to be a function of the stresses and plastic strains in the various coordinate directions. Systematic procedures are being developed to generate the required input parameters based on the results of

  18. Incorporating Prognostic Marine Nitrogen Fixers and Related Bio-Physical Feedbacks in an Earth System Model

    Science.gov (United States)

    Paulsen, H.; Ilyina, T.; Six, K. D.

    2016-02-01

    Marine nitrogen fixers play a fundamental role in the oceanic nitrogen and carbon cycles by providing a major source of 'new' nitrogen to the euphotic zone that supports biological carbon export and sequestration. Furthermore, nitrogen fixers may regionally have a direct impact on ocean physics and hence the climate system, as they form extensive surface mats which can increase light absorption and surface albedo and reduce the momentum input by wind. Resulting alterations in temperature and stratification may feed back on nitrogen fixers' growth itself. We incorporate nitrogen fixers as a prognostic 3D tracer in the ocean biogeochemical component (HAMOCC) of the Max Planck Institute Earth system model and assess for the first time the impact of related bio-physical feedbacks on biogeochemistry and the climate system. The model successfully reproduces recent estimates of global nitrogen fixation rates, as well as the observed distribution of nitrogen fixers, covering large parts of the tropical and subtropical oceans. First results indicate that including bio-physical feedbacks has considerable effects on upper-ocean physics in this region. Light absorption by nitrogen fixers leads locally to surface heating, subsurface cooling, and mixed layer depth shoaling in the subtropical gyres. As a result, equatorial upwelling is increased, leading to surface cooling at the equator. This signal is damped by the effect of the reduced wind stress due to the presence of cyanobacteria mats, which causes a reduction in the wind-driven circulation, and hence a reduction in equatorial upwelling. The increase in surface albedo due to nitrogen fixers has only negligible effects. The response of nitrogen fixers' growth to the alterations in temperature and stratification varies regionally. Simulations with the fully coupled Earth system model are in progress to assess the implications of the biologically induced changes in upper-ocean physics for the global climate system.

  19. Using a cognitive architecture in educational and recreational games : How to incorporate a model in your App

    NARCIS (Netherlands)

    Taatgen, Niels A.; de Weerd, Harmen; Reitter, David; Ritter, Frank

    2016-01-01

    We present a Swift re-implementation of the ACT-R cognitive architecture, which can be used to quickly build iOS Apps that incorporate an ACT-R model as a core feature. We discuss how this implementation can be used in an example model, and explore the breadth of possibilities by presenting six Apps.

  20. Adolescent Decision-Making Processes regarding University Entry: A Model Incorporating Cultural Orientation, Motivation and Occupational Variables

    Science.gov (United States)

    Jung, Jae Yup

    2013-01-01

    This study tested a newly developed model of the cognitive decision-making processes of senior high school students related to university entry. The model incorporated variables derived from motivation theory (i.e. expectancy-value theory and the theory of reasoned action), literature on cultural orientation and occupational considerations. A…

  1. Incorporating genetic variation into a model of budburst phenology of coast Douglas-fir (Pseudotsuga menziesii var

    Science.gov (United States)

    Peter J. Gould; Constance A. Harrington; Bradley J. St Clair

    2011-01-01

    Models to predict budburst and other phenological events in plants are needed to forecast how climate change may impact ecosystems and for the development of mitigation strategies. Differences among genotypes are important to predicting phenological events in species that show strong clinal variation in adaptive traits. We present a model that incorporates the effects...

  2. A diagnostic model incorporating P50 sensory gating and neuropsychological tests for schizophrenia.

    Directory of Open Access Journals (Sweden)

    Jia-Chi Shan

    Full Text Available OBJECTIVES: Endophenotypes in schizophrenia research are a contemporary approach to studying this heterogeneous mental illness, and several candidate neurophysiological markers (e.g., P50 sensory gating) and neuropsychological tests (e.g., the Continuous Performance Test (CPT) and the Wisconsin Card Sorting Test (WCST)) have been proposed. However, the clinical utility of a single marker appears to be limited. In the present study, we aimed to construct a diagnostic model incorporating P50 sensory gating with other neuropsychological tests in order to improve the clinical utility. METHODS: We recruited clinically stable outpatients meeting DSM-IV criteria for schizophrenia and age- and gender-matched healthy controls. Participants underwent P50 sensory gating experimental sessions and batteries of neuropsychological tests, including the CPT, WCST and Wechsler Adult Intelligence Scale Third Edition (WAIS-III). RESULTS: A total of 106 schizophrenia patients and 74 healthy controls were enrolled. Compared with healthy controls, the patient group had a significantly larger S2 amplitude, and thus a poorer P50 gating ratio (gating ratio = S2/S1). In addition, schizophrenia patients had poorer performance on neuropsychological tests. We then developed a diagnostic model using multivariable logistic regression analysis to differentiate patients from healthy controls. The final model included the following covariates: abnormal P50 gating (defined as P50 gating ratio > 0.4), three subscales derived from the WAIS-III (Arithmetic, Block Design, and Performance IQ), the sensitivity index from the CPT, and smoking status. This model had adequate accuracy (concordant percentage = 90.4%; c-statistic = 0.904; Hosmer-Lemeshow goodness-of-fit test, p = 0.64 > 0.05). CONCLUSION: To the best of our knowledge, this is the largest study to date using P50 sensory gating in subjects of Chinese ethnicity and the first to use P50 sensory gating along with other neuropsychological tests
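
    The multivariable logistic regression step can be sketched on synthetic data (the predictor distributions and effect directions below are invented stand-ins, not the study's cohort or coefficients):

```python
import numpy as np

# Toy multivariable logistic diagnostic model: a P50 gating ratio (S2/S1)
# and a neuropsychological score predict patient (1) vs control (0).
rng = np.random.default_rng(1)
n = 400
gating = np.concatenate([rng.normal(0.3, 0.15, n // 2),   # controls
                         rng.normal(0.6, 0.20, n // 2)])  # patients
cognition = np.concatenate([rng.normal(100, 10, n // 2),
                            rng.normal(85, 10, n // 2)])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Design matrix: intercept, gating ratio, standardized cognition score.
X = np.column_stack([np.ones(n), gating, (cognition - 90) / 10])
w = np.zeros(3)
for _ in range(5000):                      # plain gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

pred = 1.0 / (1.0 + np.exp(-X @ w)) > 0.5
accuracy = np.mean(pred == y)
print(w.round(2), round(accuracy, 2))
```

    The fitted signs behave as in the study's direction of effects: a higher (worse) gating ratio pushes toward the patient class, a higher cognitive score toward controls; a real analysis would report c-statistics and calibration rather than raw accuracy.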

  3. Incorporation of Fine-Grained Sediment Erodibility Measurements into Sediment Transport Modeling, Capitol Lake, Washington

    Science.gov (United States)

    Stevens, Andrew W.; Gelfenbaum, Guy; Elias, Edwin; Jones, Craig

    2008-01-01

    lab with Sedflume, an apparatus for measuring sediment erosion parameters. In this report, we present results of the characterization of fine-grained sediment erodibility within Capitol Lake. The erodibility data were incorporated into the previously developed hydrodynamic and sediment transport model. Model simulations using the measured erodibility parameters were conducted to provide more robust estimates of the overall magnitudes and spatial patterns of sediment transport resulting from restoration of the Deschutes Estuary.

  4. Incorporating human-water dynamics in a hyper-resolution land surface model

    Science.gov (United States)

    Vergopolan, N.; Chaney, N.; Wanders, N.; Sheffield, J.; Wood, E. F.

    2017-12-01

    The increasing demand for water, energy, and food is leading to unsustainable groundwater and surface water exploitation. As a result, the human interactions with the environment, through alteration of land and water resources dynamics, need to be reflected in hydrologic and land surface models (LSMs). Advancements in representing human-water dynamics still leave challenges related to the lack of water use data, water allocation algorithms, and modeling scales. This leads to an over-simplistic representation of human water use in large-scale models; this in turn leads to an inability to capture extreme events signatures and to provide reliable information at stakeholder-level spatial scales. The emergence of hyper-resolution models allows one to address these challenges by simulating the hydrological processes and interactions with the human impacts at field scales. We integrated human-water dynamics into HydroBlocks - a hyper-resolution, field-scale resolving LSM. HydroBlocks explicitly solves the field-scale spatial heterogeneity of land surface processes through interacting hydrologic response units (HRUs); and its HRU-based model parallelization allows computationally efficient long-term simulations as well as ensemble predictions. The implemented human-water dynamics include groundwater and surface water abstraction to meet agricultural, domestic and industrial water demands. Furthermore, a supply-demand water allocation scheme based on relative costs helps to determine sectoral water use requirements and tradeoffs. A set of HydroBlocks simulations over the Midwest United States (daily, at 30-m spatial resolution for 30 years) is used to quantify the irrigation impacts on water availability. The model captures large reductions in total soil moisture and water table levels, as well as spatiotemporal changes in evapotranspiration and runoff peaks, with their intensity related to the adopted water management strategy.
By incorporating human-water dynamics in

  5. Incorporation of GRACE Data into a Bayesian Model for Groundwater Drought Monitoring

    Science.gov (United States)

    Slinski, K.; Hogue, T. S.; McCray, J. E.; Porter, A.

    2015-12-01

    Groundwater drought, defined as the sustained occurrence of below average availability of groundwater, is marked by below average water levels in aquifers and reduced flows to groundwater-fed rivers and wetlands. The impact of groundwater drought on ecosystems, agriculture, municipal water supply, and the energy sector is an increasingly important global issue. However, current drought monitors heavily rely on precipitation and vegetative stress indices to characterize the timing, duration, and severity of drought events. The paucity of in situ observations of aquifer levels is a substantial obstacle to the development of systems to monitor groundwater drought in drought-prone areas, particularly in developing countries. Observations from the NASA/German Space Agency's Gravity Recovery and Climate Experiment (GRACE) have been used to estimate changes in groundwater storage over areas with sparse point measurements. This study incorporates GRACE total water storage observations into a Bayesian framework to assess the performance of a probabilistic model for monitoring groundwater drought based on remote sensing data. Overall, it is hoped that these methods will improve global drought preparedness and risk reduction by providing information on groundwater drought necessary to manage its impacts on ecosystems, as well as on the agricultural, municipal, and energy sectors.

  6. Incorporating organizational factors into probabilistic safety assessment of nuclear power plants through canonical probabilistic models

    Energy Technology Data Exchange (ETDEWEB)

    Galan, S.F. [Dpto. de Inteligencia Artificial, E.T.S.I. Informatica (UNED), Juan del Rosal, 16, 28040 Madrid (Spain)]. E-mail: seve@dia.uned.es; Mosleh, A. [2100A Marie Mount Hall, Materials and Nuclear Engineering Department, University of Maryland, College Park, MD 20742 (United States)]. E-mail: mosleh@umd.edu; Izquierdo, J.M. [Area de Modelado y Simulacion, Consejo de Seguridad Nuclear, Justo Dorado, 11, 28040 Madrid (Spain)]. E-mail: jmir@csn.es

    2007-08-15

    The ω-factor approach is a method that explicitly incorporates organizational factors into probabilistic safety assessment of nuclear power plants. Bayesian networks (BNs) are the underlying formalism used in this approach. They have a structural part formed by a graph whose nodes represent organizational variables, and a parametric part that consists of conditional probabilities, each of them quantifying organizational influences between one variable and its parents in the graph. The aim of this paper is twofold. First, we discuss some important limitations of current procedures in the ω-factor approach for either assessing conditional probabilities from experts or estimating them from data. We illustrate the discussion with an example that uses data from Licensee Events Reports of nuclear power plants for the estimation task. Second, we introduce significant improvements in the way BNs for the ω-factor approach can be constructed, so that parameter acquisition becomes easier and more intuitive. The improvements are based on the use of noisy-OR gates as a model of multicausal interaction between each BN node and its parents.
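A noisy-OR gate parameterizes a child node's conditional probability table with one activation probability per parent (plus an optional leak), so experts assess n numbers instead of 2^n table rows. A minimal generic sketch (not the paper's specific network):

```python
def noisy_or(parent_states, link_probs, leak=0.0):
    """P(child = true | parents) under a noisy-OR gate.

    parent_states: booleans, one per parent (True = cause present)
    link_probs[i]: probability that parent i alone activates the child
    leak: probability the child is activated by unmodelled causes
    """
    p_child_false = 1.0 - leak
    for active, p in zip(parent_states, link_probs):
        if active:
            p_child_false *= 1.0 - p
    return 1.0 - p_child_false

# Two organizational factors with activation probabilities 0.8 and 0.5:
p_both = noisy_or([True, True], [0.8, 0.5])   # 1 - (1-0.8)(1-0.5) = 0.9
p_none = noisy_or([False, False], [0.8, 0.5], leak=0.05)  # leak only: 0.05
```

The independence assumption behind the product is what makes elicitation tractable; a full CPT would instead require a probability for every combination of parent states.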

  7. Incorporating organizational factors into probabilistic safety assessment of nuclear power plants through canonical probabilistic models

    International Nuclear Information System (INIS)

    Galan, S.F.; Mosleh, A.; Izquierdo, J.M.

    2007-01-01

    The ω-factor approach is a method that explicitly incorporates organizational factors into probabilistic safety assessment of nuclear power plants. Bayesian networks (BNs) are the underlying formalism used in this approach. They have a structural part formed by a graph whose nodes represent organizational variables, and a parametric part that consists of conditional probabilities, each of them quantifying organizational influences between one variable and its parents in the graph. The aim of this paper is twofold. First, we discuss some important limitations of current procedures in the ω-factor approach for either assessing conditional probabilities from experts or estimating them from data. We illustrate the discussion with an example that uses data from Licensee Events Reports of nuclear power plants for the estimation task. Second, we introduce significant improvements in the way BNs for the ω-factor approach can be constructed, so that parameter acquisition becomes easier and more intuitive. The improvements are based on the use of noisy-OR gates as a model of multicausal interaction between each BN node and its parents.

  8. A PDP model of the simultaneous perception of multiple objects

    Science.gov (United States)

    Henderson, Cynthia M.; McClelland, James L.

    2011-06-01

    Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.

  9. Supersymmetric U(1)' model with multiple dark matters

    International Nuclear Information System (INIS)

    Hur, Taeil; Lee, Hye-Sung; Nasri, Salah

    2008-01-01

    We consider a scenario where a supersymmetric model has multiple dark matter particles. Adding a U(1)' gauge symmetry is a well-motivated extension of the minimal supersymmetric standard model (MSSM). It can cure problems of the MSSM such as the μ problem and the proton decay problem caused by the high-dimensional lepton-number- and baryon-number-violating operators that R parity allows. An extra parity (U parity) may arise as a residual discrete symmetry after the U(1)' gauge symmetry is spontaneously broken. The lightest U-parity particle (LUP) is stable under the new parity, becoming a new dark matter candidate. Up to three massive particles can be stable in the presence of the R parity and the U parity. We numerically illustrate that multiple stable particles in our model can satisfy both the relic density and direct detection constraints, thus providing a specific scenario where a supersymmetric model has well-motivated multiple dark matter candidates consistent with experimental constraints. The scenario provides new possibilities in present and upcoming dark matter searches in direct detection and collider experiments

  10. Generalized linear longitudinal mixed models with linear covariance structure and multiplicative random effects

    DEFF Research Database (Denmark)

    Holst, René; Jørgensen, Bent

    2015-01-01

    The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data used for age determination of fish.

  11. Incorporation of a health economic modelling tool into public health commissioning: Evidence use in a politicised context.

    Science.gov (United States)

    Sanders, Tom; Grove, Amy; Salway, Sarah; Hampshaw, Susan; Goyder, Elizabeth

    2017-08-01

    This paper explores how commissioners working in an English local government authority (LA) viewed a health economic decision tool for planning services in relation to diabetes. We conducted 15 interviews and 2 focus groups between July 2015 and February 2016, with commissioners (including public health managers, data analysts and council members). Two overlapping themes were identified explaining the obstacles and enablers of using such a tool in commissioning: a) evidence cultures, and b) system interdependency. The former highlighted the diverse evidence cultures present in the LA, with politicians influenced by the 'soft' social care agendas affecting their local population and treating local opinion as evidence, whilst public health managers prioritised the scientific view of evidence informed by research. System interdependency further complicated the decision-making process through the recognized interlinking with other departments and disease groups. To achieve legitimacy within the commissioning arena, health economic modelling needs to function effectively in a highly politicised environment where decisions are made not only on the basis of research evidence, but on grounds of 'soft' data, personal opinion and intelligence. In this context decisions become politicised, with multiple opinions seeking a voice. The way that such decisions are negotiated, and which ones establish authority, is of importance. We analyse the data using Larson's (1990) discursive field concept to show how the tool becomes an object of research push and pull, likely to be used instrumentally by stakeholders to advance specific agendas, not a means of informing complex decisions. In conclusion, LA decision-making is underpinned by a transactional business ethic, which is a further potential 'pull' mechanism for the incorporation of health economic modelling in local commissioning. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  12. An evaluation of a paediatric radiation oncology teaching programme incorporating a SCORPIO teaching model.

    Science.gov (United States)

    Ahern, Verity; Klein, Linda; Bentvelzen, Adam; Garlan, Karen; Jeffery, Heather

    2011-04-01

    Many radiation oncology registrars have no exposure to paediatrics during their training. To address this, the Paediatric Special Interest Group of the Royal Australian and New Zealand College of Radiologists has convened a biennial teaching course since 1997. The 2009 course incorporated the use of a Structured, Clinical, Objective-Referenced, Problem-orientated, Integrated and Organized (SCORPIO) teaching model for small group tutorials. This study evaluates whether the paediatric radiation oncology curriculum can be adapted to the SCORPIO teaching model and to evaluate the revised course from the registrars' perspective. Teaching and learning resources included a pre-course reading list, a lecture series programme and a SCORPIO workshop. Three evaluation instruments were developed: an overall Course Evaluation Survey for all participants, a SCORPIO Workshop Survey for registrars and a Teacher's SCORPIO Workshop Survey. Forty-five radiation oncology registrars, 14 radiation therapists and five paediatric oncology registrars attended. Seventy-three per cent (47/64) of all participants completed the Course Evaluation Survey and 95% (38/40) of registrars completed the SCORPIO Workshop Survey. All teachers completed the Teacher's SCORPIO Survey (10/10). The overall educational experience was rated as good or excellent by 93% (43/47) of respondents. Ratings of satisfaction with lecture sessions were predominantly good or excellent. Registrars gave the SCORPIO workshop high ratings on each of 10 aspects of quality, with 82% allocating an excellent rating overall for the SCORPIO activity. Both registrars and teachers recommended more time for the SCORPIO stations. The 2009 course met the educational needs of the radiation oncology registrars and the SCORPIO workshop was a highly valued educational component. © 2011 The Authors. Journal of Medical Imaging and Radiation Oncology © 2011 The Royal Australian and New Zealand College of Radiologists.

  13. Incorporating rainfall uncertainty in a SWAT model: the river Zenne basin (Belgium) case study

    Science.gov (United States)

    Tolessa Leta, Olkeba; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2013-04-01

    The European Union Water Framework Directive (EU-WFD) called on its member countries to achieve a good ecological status for all inland and coastal water bodies by 2015. According to recent studies, the river Zenne (Belgium) is far from this objective. Therefore, an interuniversity and multidisciplinary project "Towards a Good Ecological Status in the river Zenne (GESZ)" was launched to evaluate the effects of wastewater management plans on the river. In this project, different models have been developed and integrated using the Open Modelling Interface (OpenMI). The hydrologic, semi-distributed Soil and Water Assessment Tool (SWAT) is hereby used as one of the model components in the integrated modelling chain in order to model the upland catchment processes. The assessment of the uncertainty of SWAT is an essential aspect of the decision making process, in order to design robust management strategies that take the predicted uncertainties into account. Model uncertainty stems from the uncertainties in the model parameters, the input data (e.g., rainfall), the calibration data (e.g., stream flows) and the model structure itself. The objective of this paper is to assess the first three sources of uncertainty in a SWAT model of the river Zenne basin. For the assessment of rainfall measurement uncertainty, first, we identified independent rainfall periods, based on the daily precipitation and stream flow observations and using the Water Engineering Time Series PROcessing tool (WETSPRO). Secondly, we assigned a rainfall multiplier parameter to each of the independent rainfall periods, which serves as a multiplicative input error corruption. Finally, we treated these multipliers as latent parameters in the model optimization and uncertainty analysis (UA). For parameter uncertainty assessment, due to the high number of parameters of the SWAT model, first, we screened its most sensitive parameters using the Latin Hypercube One-factor-At-a-Time (LH-OAT) technique
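The multiplicative input-error treatment described above amounts to scaling each independent rainfall period by a latent factor that is then calibrated alongside the model parameters. A minimal sketch of that corruption step (hypothetical data and period boundaries, not the Zenne set-up):

```python
def apply_rainfall_multipliers(series, period_bounds, multipliers):
    """Scale each independent rainfall period by its latent multiplier.

    series: daily rainfall depths
    period_bounds: (start, end) index pairs, end exclusive
    multipliers: one multiplicative error factor per period
    """
    corrected = list(series)
    for (start, end), m in zip(period_bounds, multipliers):
        for i in range(start, end):
            corrected[i] = series[i] * m
    return corrected

# Two periods: the first assumed over-measured, the second under-measured.
observed = [4.0, 2.0, 0.0, 6.0, 1.0]
adjusted = apply_rainfall_multipliers(observed, [(0, 2), (2, 5)], [0.8, 1.2])
```

In the calibration described in the record, each multiplier would be treated as one more parameter to sample, so rainfall uncertainty propagates into the streamflow predictions alongside parameter uncertainty.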

  14. Challenges in LCA modelling of multiple loops for aluminium cans

    DEFF Research Database (Denmark)

    Niero, Monia; Olsen, Stig Irving

    considered the case of closed-loop recycling for aluminium cans, where body and lid are different alloys, and discussed the abovementioned challenge. The Life Cycle Inventory (LCI) modelling of aluminium processes is traditionally based on a pure aluminium flow, therefore neglecting the presence of alloying elements. We included the effect of alloying elements in the LCA modelling of aluminium can recycling. First, we performed a mass balance of the main alloying elements (Mn, Fe, Si, Cu) in aluminium can recycling at increasing levels of recycling rate. The analysis distinguished between different aluminium packaging scrap sources (i.e. used beverage cans and mixed aluminium packaging) to understand the limiting factors for multiple-loop aluminium can recycling. Secondly, we performed a comparative LCA of aluminium can production and recycling in multiple loops considering the two aluminium packaging scrap

  15. Dealing with Multiple Solutions in Structural Vector Autoregressive Models.

    Science.gov (United States)

    Beltz, Adriene M; Molenaar, Peter C M

    2016-01-01

    Structural vector autoregressive models (VARs) hold great potential for psychological science, particularly for time series data analysis. They capture the magnitude, direction of influence, and temporal (lagged and contemporaneous) nature of relations among variables. Unified structural equation modeling (uSEM) is an optimal structural VAR instantiation, according to large-scale simulation studies, and it is implemented within an SEM framework. However, little is known about the uniqueness of uSEM results. Thus, the goal of this study was to investigate whether multiple solutions result from uSEM analysis and, if so, to demonstrate ways to select an optimal solution. This was accomplished with two simulated data sets, an empirical data set concerning children's dyadic play, and modifications to the group iterative multiple model estimation (GIMME) program, which implements uSEMs with group- and individual-level relations in a data-driven manner. Results revealed multiple solutions when there were large contemporaneous relations among variables. Results also verified several ways to select the correct solution when the complete solution set was generated, such as the use of cross-validation, maximum standardized residuals, and information criteria. This work has immediate and direct implications for the analysis of time series data and for the inferences drawn from those data concerning human behavior.
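When a uSEM fit yields several admissible solutions, information criteria give one principled tie-breaker: score each solution's log-likelihood against its parameter count and keep the minimizer. A generic sketch with made-up numbers (not output of GIMME):

```python
import math

def aic(log_lik, k):
    """Akaike information criterion; smaller is better."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion; penalizes parameters more heavily."""
    return k * math.log(n) - 2 * log_lik

# Hypothetical solution set: name -> (log-likelihood, number of free parameters)
solutions = {"A": (-512.3, 14), "B": (-509.8, 18), "C": (-511.0, 15)}
n_obs = 200
best_by_bic = min(solutions, key=lambda s: bic(*solutions[s], n_obs))
best_by_aic = min(solutions, key=lambda s: aic(*solutions[s]))
```

The study also verifies cross-validation and maximum standardized residuals as selectors; those compare held-out fit and worst-case misfit rather than penalized likelihood, and the criteria can disagree, as the two selections above illustrate.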

  16. Global dynamics of a PDE model for Aedes aegypti mosquitoes incorporating female sexual preference

    KAUST Repository

    Parshad, Rana; Agusto, Folashade B.

    2011-01-01

    In this paper we study the long time dynamics of a reaction diffusion system, describing the spread of Aedes aegypti mosquitoes, which are the primary cause of dengue infection. The system incorporates a control attempt via the sterile insect

  17. Automatic Generation of 3D Building Models with Multiple Roofs

    Institute of Scientific and Technical Information of China (English)

    Kenichi Sugihara; Yoshitugu Hayashi

    2008-01-01

    Based on building footprints (building polygons) on digital maps, we propose a GIS and CG integrated system that automatically generates 3D building models with multiple roofs. Most building polygons' edges meet at right angles (orthogonal polygons). The integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies on these rectangles. To partition an orthogonal polygon, we proposed a useful polygon expression for deciding from which vertex a dividing line is drawn. In this paper, we propose a new scheme for partitioning building polygons and show the process of creating 3D roof models.

  18. Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation

    Czech Academy of Sciences Publication Activity Database

    Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.

    2009-01-01

    Roč. 18, č. 8 (2009), s. 1830-1843 ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf

  19. Incorporating classic adsorption isotherms into modern surface complexation models: implications for sorption of radionuclides

    International Nuclear Information System (INIS)

    Kulik, D.A.

    2005-01-01

    Full text of publication follows: Computer-aided surface complexation models (SCM) tend to replace the classic adsorption isotherm (AI) analysis in describing mineral-water interface reactions such as radionuclide sorption onto (hydr)oxides and clays. Any site-binding SCM based on the mole balance of surface sites, in fact, reproduces the (competitive) Langmuir isotherm, optionally amended with an electrostatic Coulombic non-ideal term. In most SCM implementations, it is difficult to incorporate real-surface phenomena (site heterogeneity, lateral interactions, surface condensation) described in classic AI approaches other than Langmuir's. Thermodynamic relations between SCMs and AIs that remained obscure in the past have recently been clarified using new definitions of standard and reference states of surface species [1,2]. On this basis, a method for separating the Langmuir AI into ideal (linear) and non-ideal parts [2] was applied to the multi-dentate Langmuir, Frumkin, and BET isotherms. The aim of this work was to obtain the surface activity coefficient terms that make the SCM site mole balance constraints obsolete and, in this way, extend thermodynamic SCMs to cover sorption phenomena described by the respective AIs. The multi-dentate Langmuir term accounts for site saturation with n-dentate surface species, as illustrated by modeling bi-dentate U(VI) complexes on goethite or SiO2 surfaces. The Frumkin term corrects for the lateral interactions of mono-dentate surface species; in particular, it has the same form as the Coulombic term of the constant-capacitance EDL combined with the Langmuir term. The BET term (three parameters) accounts for more-than-monolayer adsorption up to surface condensation; it can potentially describe the surface precipitation of nickel and other cations on hydroxides and clay minerals. All three non-ideal terms (in the GEM SCM implementation [1,2]) are currently used for non-competing surface species only. Upon 'surface dilution
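For reference, the three classic isotherms named in the record have simple closed (or implicit) forms. The sketch below writes them directly, with illustrative parameter values rather than fitted radionuclide data:

```python
import math

def langmuir(c, K, q_max=1.0):
    """Langmuir isotherm: coverage saturates as concentration c grows."""
    return q_max * K * c / (1.0 + K * c)

def frumkin_residual(theta, c, K, g):
    """Implicit Frumkin isotherm: zero when coverage theta is consistent
    with concentration c; g captures lateral interactions (g = 0 recovers
    the Langmuir consistency condition)."""
    return K * c - theta / (1.0 - theta) * math.exp(g * theta)

def bet(x, C):
    """BET isotherm: coverage vs. relative saturation x = c / c_sat;
    allows more than a monolayer as x -> 1 (surface condensation)."""
    return C * x / ((1.0 - x) * (1.0 + (C - 1.0) * x))
```

The SCM reformulation discussed above repackages the non-ideal parts of these expressions as surface activity coefficient terms, so the site mole-balance machinery of the model is no longer needed to reproduce them.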

  20. Feedback structure based entropy approach for multiple-model estimation

    Institute of Scientific and Technical Information of China (English)

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, full Markov chains are used to achieve optimal solutions. Secondly, the myopic method together with a particle filter (PF) and the challenge match algorithm are used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.
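The minimum-entropy idea can be illustrated generically: given posterior probabilities over the models in each candidate sequence set, the Shannon entropy measures how concentrated the mode estimate is, and the adaptation step keeps the set with the smallest entropy. A toy sketch with invented probabilities (not the paper's algorithm):

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a discrete model-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# Hypothetical posterior model probabilities for three candidate model sets:
candidate_sets = {
    "set1": [0.90, 0.05, 0.05],   # one model clearly dominates
    "set2": [0.50, 0.30, 0.20],
    "set3": [1 / 3, 1 / 3, 1 / 3],  # maximally ambiguous
}
best = min(candidate_sets, key=lambda k: entropy(candidate_sets[k]))
```

A low-entropy set is one whose filtered data pin down the active mode; the feedback structure in the record supplies those filtered probabilities in real time.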

  1. Modelling of diffuse solar fraction with multiple predictors

    Energy Technology Data Exchange (ETDEWEB)

    Ridley, Barbara; Boland, John [Centre for Industrial and Applied Mathematics, University of South Australia, Mawson Lakes Boulevard, Mawson Lakes, SA 5095 (Australia); Lauret, Philippe [Laboratoire de Physique du Batiment et des Systemes, University of La Reunion, Reunion (France)

    2010-02-15

    For some locations both global and diffuse solar radiation are measured. However, for many locations, only global radiation is measured, or inferred from satellite data. For modelling solar energy applications, the amount of radiation on a tilted surface is needed. Since only the direct component on a tilted surface can be calculated from direct on some other plane using trigonometry, we need to have diffuse radiation on the horizontal plane available. There are regression relationships for estimating the diffuse on a tilted surface from diffuse on the horizontal. Models for estimating the diffuse on the horizontal from horizontal global that have been developed in Europe or North America have proved to be inadequate for Australia. Boland et al. developed a validated model for Australian conditions. Boland et al. detailed our recent advances in developing the theoretical framework for the use of the logistic function instead of piecewise linear or simple nonlinear functions, which was the first step in identifying the means for developing a generic model for estimating diffuse from global and other predictors. We have developed a multiple predictor model, which is much simpler than previous models, and uses hourly clearness index, daily clearness index, solar altitude, apparent solar time and a measure of persistence of global radiation level as predictors. This model performs marginally better than currently used models for locations in the Northern Hemisphere and substantially better for Southern Hemisphere locations. We suggest it can be used as a universal model. (author)
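The logistic form referred to above maps a linear combination of the predictors (hourly and daily clearness index, solar altitude, apparent solar time, persistence) onto a diffuse fraction between 0 and 1. The sketch below uses placeholder coefficients, not the fitted values from the paper:

```python
import math

def diffuse_fraction(kt, Kt, altitude, solar_time, persistence, coef):
    """Logistic multi-predictor model: d = 1 / (1 + exp(z)), where z is a
    linear combination of the predictors. coef = (b0..b5) are placeholder
    values here, not the published fit."""
    b0, b1, b2, b3, b4, b5 = coef
    z = b0 + b1 * kt + b2 * solar_time + b3 * altitude + b4 * Kt + b5 * persistence
    return 1.0 / (1.0 + math.exp(z))

coef = (-5.0, 8.6, 0.05, -0.007, 1.75, 1.31)  # illustrative only
# Low hourly clearness -> overcast sky, radiation mostly diffuse:
d_cloudy = diffuse_fraction(0.2, 0.5, 40.0, 12.0, 0.3, coef)
# High hourly clearness -> clear sky, radiation mostly direct:
d_clear = diffuse_fraction(0.8, 0.5, 40.0, 12.0, 0.3, coef)
```

The logistic guarantees the estimated fraction stays in (0, 1), which piecewise linear fits do not; with a positive clearness coefficient the fraction falls as the sky clears, as the two evaluations above show.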

  2. Modeling Carbon Turnover in Five Terrestrial Ecosystems in the Boreal Zone Using Multiple Criteria of Acceptance

    International Nuclear Information System (INIS)

    Karlberg, Louise; Gustafsson, David; Jansson, Per-Erik

    2006-01-01

    Estimates of carbon fluxes and turnover in ecosystems are key elements in the understanding of climate change and in predicting the accumulation of trace elements in the biosphere. In this paper we present estimates of carbon fluxes and turnover times for five terrestrial ecosystems using a modeling approach. Multiple criteria of acceptance were used to parameterize the model, thus incorporating large amounts of multi-faceted empirical data in the simulations in a standardized manner. Mean turnover times of carbon were found to be rather similar between systems with a few exceptions, even though the size of both the pools and the fluxes varied substantially. Depending on the route of the carbon through the ecosystem, turnover times varied from less than one year to more than one hundred, which may be of importance when considering trace element transport and retention. The parameterization method was useful both in the estimation of unknown parameters, and to identify variability in carbon turnover in the selected ecosystems

  3. Incorporating Water Boiling in the Numerical Modelling of Thermal Remediation by Electrical Resistance Heating

    Science.gov (United States)

    Molnar, I. L.; Krol, M.; Mumford, K. G.

    2017-12-01

    Developing numerical models for subsurface thermal remediation techniques - such as Electrical Resistive Heating (ERH) - that include multiphase processes such as in-situ water boiling, gas production and recovery has remained a significant challenge. These subsurface gas generation and recovery processes are driven by physical phenomena such as discrete and unstable gas (bubble) flow, as well as water-gas phase mass transfer during bubble flow. Traditional approaches to multiphase flow modeling in soil remain unable to accurately describe these phenomena. However, it has been demonstrated that Macroscopic Invasion Percolation (MIP) can successfully simulate discrete and unstable gas transport [1]. This has led to the development of a coupled Electro-Thermal MIP model [2] (ET-MIP) capable of simulating multiple key processes in thermal remediation and gas recovery, including: electrical heating of soil and groundwater, water flow, geological heterogeneity, heating-induced buoyant flow, water boiling, gas bubble generation and mobilization, contaminant mass transport and removal, and additional mechanisms such as bubble collapse in cooler regions. This study presents the first rigorous validation of a coupled ET-MIP model against two-dimensional water boiling and water/NAPL co-boiling experiments [3]. Once validated, the model was used to explore the impact of water boiling and co-boiling events and subsequent gas generation and mobilization on ERH's ability to 1) generate, expand and mobilize gas at boiling and NAPL co-boiling temperatures, and 2) efficiently strip contaminants from soil during both boiling and co-boiling. In addition, the energy losses arising from steam generation during subsurface water boiling were quantified with respect to their impact on the efficacy of thermal remediation. While this study specifically targets ERH, its focus on the fundamental mechanisms driving thermal remediation (e.g., water boiling) renders

  4. Rapidity correlations at fixed multiplicity in cluster emission models

    CERN Document Server

    Berger, M C

    1975-01-01

    Rapidity correlations in the central region among hadrons produced in proton-proton collisions of fixed final-state multiplicity n at NAL and ISR energies are investigated in a two-step framework in which clusters of hadrons are emitted essentially independently, via a multiperipheral-like model, and decay isotropically. For n ≳ ½⟨n⟩, these semi-inclusive distributions are controlled by the reaction mechanism which dominates production in the central region. Thus, the data offer cleaner insight into the properties of this mechanism than can be obtained from fully inclusive spectra. A method of experimental analysis is suggested to facilitate the extraction of new dynamical information. It is shown that the n independence of the magnitude of semi-inclusive correlation functions reflects directly the structure of the internal cluster multiplicity distribution. This conclusion is independent of certain assumptions concerning the form of the single-cluster density in rapidity space. (23 r...

  5. Multiplicative Attribute Graph Model of Real-World Networks

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myunghwan [Stanford Univ., CA (United States); Leskovec, Jure [Stanford Univ., CA (United States)

    2010-10-20

    Large-scale real-world network data, such as social networks, Internet and Web graphs, is ubiquitous in a variety of scientific domains. The study of such social and information networks commonly seeks to find patterns and explain their emergence through tractable models. In most networks, and especially in social networks, nodes also have a rich set of attributes (e.g., age, gender) associated with them. However, most existing network models focus only on modeling the network structure while ignoring the features of the nodes. Here we present a class of network models that we refer to as Multiplicative Attribute Graphs (MAG), which naturally captures the interactions between the network structure and node attributes. We consider a model where each node has a vector of categorical features associated with it. The probability of an edge between a pair of nodes then depends on the product of individual attribute-attribute similarities. The model lends itself to mathematical analysis as well as fitting to real data. We derive thresholds for connectivity and the emergence of the giant connected component, and show that the model gives rise to graphs with a constant diameter. Moreover, we analyze the degree distribution to show that the model can produce networks with either log-normal or power-law degree distributions depending on certain conditions.
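
    The edge rule described above, an edge probability equal to the product of per-attribute affinities, can be sketched as follows. Binary attributes and the affinity matrix `theta` are illustrative choices, not fitted values from the paper.

```python
import itertools
import random

def mag_graph(n, k, theta, seed=0):
    """Sample a toy Multiplicative Attribute Graph.

    Each node gets k binary attributes; an undirected edge (u, v) appears
    independently with probability prod_i theta[a_u[i]][a_v[i]].
    """
    rng = random.Random(seed)
    attrs = [[rng.randint(0, 1) for _ in range(k)] for _ in range(n)]
    edges = set()
    for u, v in itertools.combinations(range(n), 2):
        p = 1.0
        for i in range(k):
            p *= theta[attrs[u][i]][attrs[v][i]]   # attribute-attribute affinity
        if rng.random() < p:
            edges.add((u, v))
    return attrs, edges
```

With a homophilous affinity matrix such as `[[0.9, 0.4], [0.4, 0.2]]`, nodes sharing attribute values link more often, coupling structure to attributes exactly as the model intends.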

  6. Dynamic coordinated control laws in multiple agent models

    International Nuclear Information System (INIS)

    Morgan, David S.; Schwartz, Ira B.

    2005-01-01

    We present an active control scheme for a kinetic model of swarming. It has been shown previously that the global control scheme for the model, presented in [Systems Control Lett. 52 (2004) 25], gives rise to spontaneous collective organization of agents into a unified coherent swarm, via steering controls and utilizing long-range attractive and short-range repulsive interactions. We extend these results by presenting control laws whereby a single swarm is broken into independently functioning subswarm clusters. The transition between one coordinated swarm and multiple clustered subswarms is managed simply with a homotopy parameter. Additionally, we present, as an alternate formulation, a local control law for the same model, which implements dynamic barrier avoidance behavior, and in which swarm coherence emerges spontaneously.

  7. Laplace transform analysis of a multiplicative asset transfer model

    Science.gov (United States)

    Sokolov, Andrey; Melatos, Andrew; Kieu, Tien

    2010-07-01

    We analyze a simple asset transfer model in which the transfer amount is a fixed fraction f of the giver’s wealth. The model is analyzed in a new way by Laplace transforming the master equation, solving it analytically and numerically for the steady-state distribution, and exploring the solutions for various values of f∈(0,1). The Laplace transform analysis is superior to agent-based simulations as it does not depend on the number of agents, enabling us to study entropy and inequality in regimes that are costly to address with simulations. We demonstrate that Boltzmann entropy is not a suitable (e.g. non-monotonic) measure of disorder in a multiplicative asset transfer system and suggest an asymmetric stochastic process that is equivalent to the asset transfer model.
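
    The underlying dynamics, a randomly chosen giver passing a fraction f of its wealth to a random receiver, can be illustrated with a small agent-based simulation. (The paper's point is that the Laplace-transform analysis avoids exactly this kind of finite-agent simulation; the parameters here are arbitrary.)

```python
import random

def simulate_transfers(n_agents, f, steps, seed=0):
    """Agent-based sketch of the multiplicative transfer model: at each step a
    random giver hands a fraction f of its wealth to a random receiver.
    Total wealth is conserved."""
    rng = random.Random(seed)
    w = [1.0] * n_agents
    for _ in range(steps):
        giver, receiver = rng.sample(range(n_agents), 2)
        amount = f * w[giver]
        w[giver] -= amount
        w[receiver] += amount
    return w

def gini(wealth):
    """Gini coefficient of a wealth list (0 = perfect equality)."""
    s = sorted(wealth)
    n = len(s)
    cum = sum((i + 1) * x for i, x in enumerate(s))
    return (2.0 * cum) / (n * sum(s)) - (n + 1.0) / n
```

Running this for many steps drives the initially equal distribution toward a skewed steady state, whose inequality depends on f.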

  8. Bayesian inference based modelling for gene transcriptional dynamics by integrating multiple source of knowledge

    Directory of Open Access Journals (Sweden)

    Wang Shu-Qiang

    2012-07-01

    Background: A key challenge in the post-genome era is to identify genome-wide transcriptional regulatory networks, which specify the interactions between transcription factors and their target genes. Numerous methods have been developed for reconstructing gene regulatory networks from expression data. However, most of them are based on coarse-grained qualitative models and cannot provide a quantitative view of regulatory systems. Results: A binding-affinity-based regulatory model is proposed to quantify the transcriptional regulatory network. Multiple quantities, including binding affinity and the activity level of the transcription factor (TF), are incorporated into a general learning model. The sequence features of the promoter and the possible occupancy of nucleosomes are exploited to estimate the binding probability of regulators. Compared with previous models that employ only microarray data, the proposed model can bridge the gap between the relative background frequency of the observed nucleotide and the gene's transcription rate. Conclusions: We test the proposed approach on two real-world microarray datasets. Experimental results show that the proposed model can effectively identify the parameters and the activity level of the TF. Moreover, the kinetic parameters introduced in the proposed model reveal more biological insight than previous models do.

  9. Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations

    Science.gov (United States)

    Le, Phuong Dong; Leonard, Michael; Westra, Seth

    2018-03-01

    Determining the probability of a flood event in a catchment given that another flood has occurred in a nearby catchment is useful in the design of infrastructure such as road networks that have multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Each catchment's hydrological response times are unlikely to be the same, so in order to estimate these conditional probabilities one must consider the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner. The first step defines a set of common parameters for the marginal distributions across multiple durations. The parameters are then spatially interpolated to develop a spatial field. Storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.

  10. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions

    KAUST Repository

    Najibi, Seyed Morteza

    2017-02-08

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.
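
    As a much simpler stand-in for the trigonometric-spline estimator described above, a circular kernel density estimate illustrates the key requirement of respecting the angular (periodic) topology of backbone-angle data. The von Mises kernel, bandwidth, and grid size are arbitrary assumptions, not the paper's method.

```python
import math

def von_mises_kde(angles, kappa=20.0, grid_size=360):
    """Circular kernel density estimate on [-pi, pi).

    Each observation contributes a von Mises bump exp(kappa*cos(g - a));
    normalization is done numerically on the grid, so no Bessel function
    is needed. Returns (grid, density) with the density integrating to 1.
    """
    grid = [-math.pi + 2 * math.pi * i / grid_size for i in range(grid_size)]
    dens = [sum(math.exp(kappa * math.cos(g - a)) for a in angles) for g in grid]
    step = 2 * math.pi / grid_size
    total = sum(dens) * step
    return grid, [d / total for d in dens]
```

Unlike a Euclidean KDE, mass near +pi correctly wraps around to -pi, which is essential for Ramachandran-style angular data.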

  11. A multiple relevance feedback strategy with positive and negative models.

    Directory of Open Access Journals (Sweden)

    Yunlong Ma

    A commonly used strategy to improve search accuracy is through feedback techniques. Most existing work on feedback relies on positive information and has been extensively studied in information retrieval. However, when a query topic is difficult and the results from the first-pass retrieval are very poor, it is impossible to extract enough useful terms from a few positive documents, so the positive feedback strategy is incapable of improving retrieval in this situation. In contrast, there is a relatively large number of negative documents at the top of the result list, and several recent studies have confirmed that negative feedback is an important and useful strategy for this scenario. In this paper, we consider a scenario in which the search results are so poor that there are at most three relevant documents in the top twenty. We then conduct a novel study of multiple strategies for relevance feedback, using both positive and negative examples from the first-pass retrieval to improve retrieval accuracy for such difficult queries. Experimental results on TREC collections show that the proposed language-model-based multiple-model feedback method is generally more effective than both the baseline method and methods using only a positive or a negative model.
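
    The idea of combining positive and negative evidence can be illustrated with the classic Rocchio update, a vector-space stand-in for the paper's language-model method (the weights alpha, beta, gamma below are conventional defaults, not the paper's settings):

```python
def rocchio(query, pos_docs, neg_docs, alpha=1.0, beta=0.75, gamma=0.25):
    """Classic Rocchio feedback (not the paper's language-model method):

        q' = alpha*q + beta*mean(pos) - gamma*mean(neg)

    operating on sparse term-weight dicts; negative weights are clipped to 0.
    """
    terms = set(query)
    for d in pos_docs + neg_docs:
        terms |= set(d)

    def centroid(docs, term):
        return sum(d.get(term, 0.0) for d in docs) / len(docs) if docs else 0.0

    new_q = {}
    for t in terms:
        w = (alpha * query.get(t, 0.0)
             + beta * centroid(pos_docs, t)
             - gamma * centroid(neg_docs, t))
        if w > 0:
            new_q[t] = w
    return new_q
```

Terms frequent in relevant documents are boosted, while terms characteristic of the non-relevant top results are suppressed, which is the same intuition the multiple-model feedback method formalizes probabilistically.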

  13. Incorporating a Time Horizon in Rate-of-Return Estimations: Discounted Cash Flow Model in Electric Transmission Rate Cases

    International Nuclear Information System (INIS)

    Chatterjee, Bishu; Sharp, Peter A.

    2006-01-01

    Electric transmission and other rate cases use a form of the discounted cash flow model with a single long-term growth rate to estimate rates of return on equity. That form cannot incorporate information about the appropriate time horizon over which analysts' estimates of earnings growth have predictive power. Only a non-constant growth model can explicitly recognize the importance of the time horizon in an ROE calculation. (author)
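
    A non-constant growth calculation of the kind this record advocates can be sketched as a two-stage dividend discount model: dividends grow at the analysts' short-horizon rate for `horizon` years, then at a long-term rate forever, and the implied return is recovered by bisection. All function names and the numerical inputs are illustrative, not taken from the rate cases.

```python
def two_stage_dcf_price(d0, g_short, g_long, horizon, r):
    """Present value of dividends growing at g_short for `horizon` years,
    then at g_long forever (Gordon terminal value). Requires r > g_long."""
    pv, d = 0.0, d0
    for t in range(1, horizon + 1):
        d *= 1 + g_short
        pv += d / (1 + r) ** t
    terminal = d * (1 + g_long) / (r - g_long)
    return pv + terminal / (1 + r) ** horizon

def implied_return(price, d0, g_short, g_long, horizon, lo=1e-4, hi=1.0):
    """Solve price = PV(r) for the implied cost of equity by bisection.
    PV is decreasing in r for r > g_long, so start just above g_long."""
    lo = max(lo, g_long + 1e-6)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if two_stage_dcf_price(d0, g_short, g_long, horizon, mid) > price:
            lo = mid          # PV too high -> discount rate must rise
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Varying `horizon` shows directly how the assumed predictive window of analyst growth estimates moves the implied ROE, which a single-growth-rate model cannot express.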

  14. Many-electron model for multiple ionization in atomic collisions

    International Nuclear Information System (INIS)

    Archubi, C D; Montanari, C C; Miraglia, J E

    2007-01-01

    We have developed a many-electron model for multiple ionization of heavy atoms bombarded by bare ions. It is based on the transport equation for an ion in an inhomogeneous electronic density. Ionization probabilities are obtained by employing the shell-to-shell local plasma approximation with the Levine and Louie dielectric function to take into account the binding energy of each shell. Post-collisional contributions due to Auger-like processes are taken into account by employing recent photoemission data. Results for single-to-quadruple ionization of Ne, Ar, Kr and Xe by protons are presented showing a very good agreement with experimental data

  16. Strategies for Incorporating Women-Specific Sexuality Education into Addiction Treatment Models

    Science.gov (United States)

    James, Raven

    2007-01-01

    This paper advocates for the incorporation of a women-specific sexuality curriculum in the addiction treatment process to aid in sexual healing and provide for aftercare issues. Sexuality in addiction treatment modalities is often approached from a sex-negative stance, or that of sexual victimization. Sexual issues are viewed as addictive in and…

  17. Model selection in Bayesian segmentation of multiple DNA alignments.

    Science.gov (United States)

    Oldmeadow, Christopher; Keith, Jonathan M

    2011-03-01

    The analysis of multiple sequence alignments is allowing researchers to glean valuable insights into evolution, as well as identify genomic regions that may be functional, or discover novel classes of functional elements. Understanding the distribution of conservation levels that constitutes the evolutionary landscape is crucial to distinguishing functional regions from non-functional. Recent evidence suggests that a binary classification of evolutionary rates is inappropriate for this purpose and finds only highly conserved functional elements. Given that the distribution of evolutionary rates is multi-modal, determining the number of modes is of paramount concern. Through simulation, we evaluate the performance of a number of information criterion approaches derived from MCMC simulations in determining the dimension of a model. We utilize a deviance information criterion (DIC) approximation that is more robust than the approximations from other information criteria, and show our information criteria approximations do not produce superfluous modes when estimating conservation distributions under a variety of circumstances. We analyse the distribution of conservation for a multiple alignment comprising four primate species and mouse, and repeat this on two additional multiple alignments of similar species. We find evidence of six distinct classes of evolutionary rates that appear to be robust to the species used. Source code and data are available at http://dl.dropbox.com/u/477240/changept.zip.
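
    The DIC approximation mentioned above is, in its standard scalar-parameter form, straightforward to compute from MCMC draws. This generic sketch is not the authors' changept implementation; the test model (a normal mean with known variance) is an illustrative assumption.

```python
def dic(log_lik, samples):
    """Deviance information criterion from posterior samples:

        D(theta) = -2 * log_lik(theta)
        p_D      = mean(D) - D(posterior mean)   # effective parameter count
        DIC      = mean(D) + p_D

    `samples` are scalar posterior draws; returns (DIC, p_D)."""
    devs = [-2.0 * log_lik(th) for th in samples]
    d_bar = sum(devs) / len(devs)
    theta_bar = sum(samples) / len(samples)
    p_d = d_bar - (-2.0 * log_lik(theta_bar))
    return d_bar + p_d, p_d
```

For a one-parameter model, p_D should come out close to 1, which gives a quick sanity check on the posterior samples.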

  18. Multiplicative point process as a model of trading activity

    Science.gov (United States)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that the inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism behind the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.
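
    A hedged sketch of a multiplicative interevent-time process of this general family follows; the exact update rule and all parameter values are illustrative assumptions rather than the paper's calibrated model.

```python
import random

def multiplicative_interevent(n, mu=0.5, gamma=1e-4, sigma=0.02,
                              tau_min=1e-3, tau_max=1.0, seed=0):
    """Generate n interevent times from a multiplicative stochastic update
    (illustrative form, not the paper's calibration):

        tau_{k+1} = tau_k + gamma*tau_k**(2*mu - 1) + sigma*tau_k**mu * eps_k

    with Gaussian noise eps_k and the walk confined to [tau_min, tau_max]
    by clipping at the boundaries."""
    rng = random.Random(seed)
    tau, out = 0.1, []
    for _ in range(n):
        tau = tau + gamma * tau ** (2 * mu - 1) + sigma * tau ** mu * rng.gauss(0, 1)
        tau = min(max(tau, tau_min), tau_max)   # confine to the allowed range
        out.append(tau)
    return out
```

Cumulatively summing the interevent times yields the event (trade) sequence whose counting statistics and spectrum the paper analyzes.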

  19. Guideline validation in multiple trauma care through business process modeling.

    Science.gov (United States)

    Stausberg, Jürgen; Bilir, Hüseyin; Waydhas, Christian; Ruchholtz, Steffen

    2003-07-01

    Clinical guidelines can improve the quality of care in multiple trauma. In our Department of Trauma Surgery a specific guideline is available in paper form as a set of flowcharts. This format is appropriate for use by experienced physicians but insufficient for electronic support of learning, workflow and process optimization. A formal and logically consistent version represented with a standardized meta-model is necessary for automatic processing. In our project we transferred the paper-based guideline into an electronic format and analyzed its structure with respect to formal errors. Several errors were detected across seven error categories. The errors were corrected to reach a formally and logically consistent process model. In a second step the clinical content of the guideline was revised interactively using a process-modeling tool. Our study reveals that guideline development should be assisted by process-modeling tools, which check the content against a meta-model. The meta-model itself could support the domain experts in formulating their knowledge systematically. To assure sustainability of guideline development, a representation independent of specific applications or specific providers is necessary. Clinical guidelines could then additionally be used for eLearning, process optimization and workflow management.

  20. Rank-based model selection for multiple ions quantum tomography

    International Nuclear Information System (INIS)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-01-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well-established model selection methods, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), to models consisting of states of fixed rank and to datasets such as are currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low-rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ2 test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements.
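
    The rank-selection step reduces to evaluating the two criteria over candidate ranks. A minimal sketch, assuming each candidate model is summarized by its maximized log-likelihood and free-parameter count (the numbers in the usage are invented, not the paper's data):

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: -2*logL + 2k (smaller is better)."""
    return -2.0 * log_lik + 2.0 * k

def bic(log_lik, k, n):
    """Bayesian information criterion: -2*logL + k*ln(n), where n is the
    number of observations; penalizes parameters more heavily than AIC."""
    return -2.0 * log_lik + k * math.log(n)

def select_rank(candidates, n):
    """Given (label, max_log_lik, n_params) tuples, return the label
    minimizing AIC and the label minimizing BIC."""
    best_aic = min(candidates, key=lambda c: aic(c[1], c[2]))[0]
    best_bic = min(candidates, key=lambda c: bic(c[1], c[2], n))[0]
    return best_aic, best_bic
```

Because the BIC penalty grows with ln(n), it tends to select a lower rank than AIC for the same data, mirroring the trade-off discussed in the abstract.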

  1. Modeling of Particle Acceleration at Multiple Shocks Via Diffusive Shock Acceleration: Preliminary Results

    Science.gov (United States)

    Parker, L. N.; Zank, G. P.

    2013-12-01

    Successful forecasting of energetic particle events in space weather models requires algorithms for correctly predicting the spectrum of ions accelerated from a background population of charged particles. We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models (Protheroe and Stanev, 1998; Moraal and Axford, 1983; Ball and Kirk, 1992; Drury et al., 1999) in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box (Melrose and Pope, 1993; Zank et al., 2000). We adiabatically decompress the accelerated particle distribution between each shock by either the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000), where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (Emax) appropriate for quasi-parallel and quasi-perpendicular shocks (Zank et al., 2000, 2006; Dosch and Shalchi, 2010) and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).

  2. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the extensive calculations and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, proposing an MCDM model for supplier selection, and applying these to a real case are the unique features of this study.
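
    The TOPSIS step of the hybrid model follows a standard recipe: vector-normalize the decision matrix, apply the criteria weights (here they would come from ANP), and score each alternative by closeness to the ideal solution. A minimal sketch; the criteria, weights, and data in the usage are invented.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix[i][j]: score of alternative i on criterion j.
    weights[j]:   criterion weight (e.g., from ANP).
    benefit[j]:   True if larger is better, False for cost criteria.
    Returns closeness coefficients in [0, 1]; higher is better.
    """
    m, n = len(matrix), len(matrix[0])
    # vector normalization, then weighting
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # ideal and anti-ideal solutions per criterion
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

An alternative that is best on every criterion scores exactly 1, and one that is worst on every criterion scores 0, so the coefficients order the suppliers directly.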

  3. Multiple Scattering Model for Optical Coherence Tomography with Rytov Approximation

    KAUST Repository

    Li, Muxingzi

    2017-04-24

    Optical Coherence Tomography (OCT) is a coherence-gated, micrometer-resolution imaging technique that focuses a broadband near-infrared laser beam to penetrate into optical scattering media, e.g. biological tissues. The OCT resolution is split into two parts, with the axial resolution defined by half the coherence length, and the depth-dependent lateral resolution determined by the beam geometry, which is well described by a Gaussian beam model. The depth dependence of the lateral resolution directly results in a defocusing effect outside the confocal region and restricts current OCT probes to small numerical aperture (NA) at the expense of lateral resolution near the focus. Another limitation on OCT development is the presence of a mixture of speckle, due to multiple scatterers within the coherence length, and other random noise. Motivated by these two challenges, a multiple scattering model based on the Rytov approximation and Gaussian beam optics is proposed for the OCT setup. Previous work has adopted the first Born approximation, which assumes a small perturbation of the incident field in inhomogeneous media. The Rytov method of the same order, which instead assumes a smooth phase perturbation, benefits from a wider spatial range of validity. A deconvolution method for solving the inverse problem associated with the first Rytov approximation is developed, significantly reducing the defocusing effect through depth and therefore extending the feasible range of NA.

  4. Resveratrol Neuroprotection in a Chronic Mouse Model of Multiple Sclerosis

    Directory of Open Access Journals (Sweden)

    Zoe eFonseca-Kelly

    2012-05-01

    Resveratrol is a naturally occurring polyphenol that activates SIRT1, an NAD-dependent deacetylase. SRT501, a pharmaceutical formulation of resveratrol with enhanced systemic absorption, prevents neuronal loss without suppressing inflammation in mice with relapsing experimental autoimmune encephalomyelitis (EAE), a model of multiple sclerosis. In contrast, resveratrol has been reported to suppress inflammation in chronic EAE, although neuroprotective effects were not evaluated. The current studies examine potential neuroprotective and immunomodulatory effects of resveratrol in chronic EAE induced by immunization with myelin oligodendroglial glycoprotein peptide in C57/Bl6 mice. The effects of two distinct formulations of resveratrol administered orally each day were compared. Resveratrol delayed the onset of EAE compared to vehicle-treated EAE mice, but did not prevent or alter the phenotype of inflammation in spinal cords or optic nerves. Significant neuroprotective effects were observed, with higher numbers of retinal ganglion cells found in eyes of resveratrol-treated EAE mice with optic nerve inflammation. Results demonstrate that resveratrol prevents neuronal loss in this chronic demyelinating disease model, similar to its effects in relapsing EAE. Differences in immunosuppression compared with prior studies suggest that immunomodulatory effects may be limited and may depend on specific immunization parameters or timing of treatment. Importantly, neuroprotective effects can occur without immunosuppression, suggesting a potential additive benefit of resveratrol in combination with anti-inflammatory therapies for multiple sclerosis.

  5. Model for CO2 leakage including multiple geological layers and multiple leaky wells.

    Science.gov (United States)

    Nordbotten, Jan M; Kavetski, Dmitri; Celia, Michael A; Bachu, Stefan

    2009-02-01

    Geological storage of carbon dioxide (CO2) is likely to be an integral component of any realistic plan to reduce anthropogenic greenhouse gas emissions. In conjunction with large-scale deployment of carbon storage as a technology, there is an urgent need for tools which provide reliable and quick assessments of aquifer storage performance. Previously, abandoned wells from over a century of oil and gas exploration and production have been identified as critical potential leakage paths. The practical importance of abandoned wells is emphasized by the correlation of heavy CO2 emitters (typically associated with industrialized areas) to oil and gas producing regions in North America. Herein, we describe a novel framework for predicting the leakage from large numbers of abandoned wells, forming leakage paths connecting multiple subsurface permeable formations. The framework is designed to exploit analytical solutions to various components of the problem and, ultimately, leads to a grid-free approximation to CO2 and brine leakage rates, as well as fluid distributions. We apply our model in a comparison to an established numerical solver for the underlying governing equations. Thereafter, we demonstrate the capabilities of the model on typical field data taken from the vicinity of Edmonton, Alberta. This data set consists of over 500 wells and 7 permeable formations. Results show the flexibility and utility of the solution methods, and highlight the role that analytical and semianalytical solutions can play in this important problem.

  6. A Multiple Indicators Multiple Causes (MIMIC) model of internal barriers to drug treatment in China.

    Science.gov (United States)

    Qi, Chang; Kelly, Brian C; Liao, Yanhui; He, Haoyu; Luo, Tao; Deng, Huiqiong; Liu, Tieqiao; Hao, Wei; Wang, Jichuan

    2015-03-01

    Although evidence exists for distinct barriers to drug abuse treatment (BDATs), investigations of their inter-relationships and of the effect of individual characteristics on the barrier factors have been sparse, especially in China. A Multiple Indicators Multiple Causes (MIMIC) model is applied to this end. A sample of 262 drug users were recruited from three drug rehabilitation centers in Hunan Province, China. We applied a MIMIC approach to investigate the effect of gender, age, marital status, education, primary substance use, duration of primary drug use, and drug treatment experience on the internal barrier factors: absence of problem (AP), negative social support (NSS), fear of treatment (FT), and privacy concerns (PC). Drug users with different characteristics were found to report different internal barrier factors. Younger participants were more likely to report NSS (-0.19, p=0.038) and PC (-0.31, p<0.001). Compared to other drug users, ice users were more likely to report AP (0.44, p<0.001) and NSS (0.25, p=0.010). Drug treatment experience related to AP (0.20, p=0.012). In addition, differential item functioning (DIF) occurred in three items when participants differed in duration of drug use, ice use, or marital status. Individual characteristics had significant effects on internal barriers to drug treatment. On this basis, the BDATs perceived by different individuals could be assessed before tactics are utilized to successfully remove perceived barriers to drug treatment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  7. Multiple-relaxation-time lattice Boltzmann model for compressible fluids

    International Nuclear Information System (INIS)

    Chen Feng; Xu Aiguo; Zhang Guangcai; Li Yingjun

    2011-01-01

    We present an energy-conserving multiple-relaxation-time finite difference lattice Boltzmann model for compressible flows. The collision step is first calculated in the moment space and then mapped back to the velocity space. The moment space and corresponding transformation matrix are constructed according to the group representation theory. Equilibria of the nonconserved moments are chosen according to the need of recovering compressible Navier-Stokes equations through the Chapman-Enskog expansion. Numerical experiments showed that compressible flows with strong shocks can be well simulated by the present model. The new model works for both low- and high-speed compressible flows. It contains more physical information and has better numerical stability and accuracy than its single-relaxation-time version. - Highlights: → We present an energy-conserving MRT finite-difference LB model. → The moment space is constructed according to the group representation theory. → The new model works for both low- and high-speed compressible flows. → It has better numerical stability and a wider applicable range than its SRT version.
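
    The collide-in-moment-space-then-map-back structure can be shown on the simplest D1Q3 lattice. This isothermal toy is not the paper's energy-conserving compressible model; the moment basis and relaxation rates below are illustrative choices.

```python
def mrt_collide_d1q3(f, s):
    """One MRT collision at a single D1Q3 site (pedagogical sketch).

    Velocity set c = (-1, 0, +1). Moments: density rho, momentum j,
    'energy' e; each relaxes toward its equilibrium at its own rate s[k].
    rho and j are conserved automatically since their equilibria equal
    the moments themselves.
    """
    # forward transform: raw moments of the distributions
    rho = f[0] + f[1] + f[2]
    j = -f[0] + f[2]
    e = f[0] + f[2]
    u = j / rho
    # equilibrium moments of the standard isothermal D1Q3 lattice (cs^2 = 1/3)
    m_eq = (rho, rho * u, rho / 3.0 + rho * u * u)
    m = [rho, j, e]
    for k in range(3):
        m[k] += s[k] * (m_eq[k] - m[k])   # per-moment relaxation
    rho_p, j_p, e_p = m
    # inverse transform: f = M^{-1} m for M = [[1,1,1],[-1,0,1],[1,0,1]]
    return [0.5 * (e_p - j_p), rho_p - e_p, 0.5 * (e_p + j_p)]
```

Assigning a separate rate to each nonconserved moment is exactly the extra freedom that gives MRT schemes their improved stability over single-relaxation-time (BGK) collisions.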

  8. Optimal Retail Price Model for Partial Consignment to Multiple Retailers

    Directory of Open Access Journals (Sweden)

    Po-Yu Chen

    2017-01-01

    This paper investigates the product pricing decision-making problem under a consignment stock policy in a two-level supply chain composed of one supplier and multiple retailers. The effects of the supplier's wholesale prices and its partial absorption of inventory costs on the retail prices of retailers with different market shares are investigated. In the partial product consignment model this paper proposes, the supplier and the retailers each absorb part of the inventory costs. The model also yields general solutions for complete product consignment and for the traditional policy with no product consignment; in other words, both the complete consignment and nonconsignment models are special cases of the proposed model. Research results indicated that the optimal retail price must lie between 1/2 (50%) and 2/3 (66.67%) of the upper limit of the gross profit. This study also explored the influence of parameter variations on the optimal retail price in the model.

  9. Incorporating creditors' seniority into contingent claim models: Application to peripheral euro area countries

    OpenAIRE

    Gómez-Puig, Marta; Singh, Manish Kumar; Sosvilla Rivero, Simón, 1961-

    2018-01-01

    This paper highlights the role of multilateral creditors (i.e., the ECB, IMF, ESM etc.) and their preferred creditor status in explaining the sovereign default risk of peripheral euro area (EA) countries. Incorporating lessons from sovereign debt crises in general, and from the Greek debt restructuring in particular, we define the priority structure of sovereigns' creditors that is most relevant for peripheral EA countries in severe crisis episodes. This new priority structure of creditors, t...

  10. Dynamic information architecture system (DIAS) : multiple model simulation management

    International Nuclear Information System (INIS)

    Simunich, K. L.; Sydelko, P.; Dolph, J.; Christiansen, J.

    2002-01-01

    simulation; execute an Entity's behavior; and, of course, change the state of an Entity. In summary, the flexibility of the DIAS software infrastructure offers the ability to address a complex problem by allowing many disparate multidisciplinary simulation models and other applications to work together within a common framework. This inherent flexibility allows application developers to more easily incorporate new data, concepts, and technologies into the simulation framework, bringing the best available knowledge, science, and technology to bear on decision-making processes

  11. Dynamic information architecture system (DIAS) : multiple model simulation management.

    Energy Technology Data Exchange (ETDEWEB)

    Simunich, K. L.; Sydelko, P.; Dolph, J.; Christiansen, J.

    2002-05-13

    can schedule other events; create or remove Entities from the simulation; execute an Entity's behavior; and, of course, change the state of an Entity. In summary, the flexibility of the DIAS software infrastructure offers the ability to address a complex problem by allowing many disparate multidisciplinary simulation models and other applications to work together within a common framework. This inherent flexibility allows application developers to more easily incorporate new data, concepts, and technologies into the simulation framework, bringing the best available knowledge, science, and technology to bear on decision-making processes.

  12. Incorporation of oxygen contribution by plant roots into classical dissolved oxygen deficit model for a subsurface flow treatment wetland.

    Science.gov (United States)

    Bezbaruah, Achintya N; Zhang, Tian C

    2009-01-01

    It has long been established that plants play major roles in a treatment wetland. However, the role of plants has not been incorporated into wetland models. This study tries to incorporate wetland plants into a biochemical oxygen demand (BOD) model so that the relative contributions of the aerobic and anaerobic processes to meeting BOD can be quantitatively determined. The classical dissolved oxygen (DO) deficit model has been modified to simulate the DO curve for a field subsurface flow constructed wetland (SFCW) treating municipal wastewater. Sensitivities of model parameters have been analyzed. Based on the model it is predicted that in the SFCW under study about 64% of the BOD is degraded through aerobic routes and 36% anaerobically. While not exhaustive, this preliminary work should serve as a pointer for further research in wetland model development and in determining the values of some of the parameters used in the modified DO deficit and associated BOD models. It should be noted that, for simplicity of model formulation, the nitrogen cycle and the effects of temperature have not been addressed in these models. This paper should be read with this caveat in mind.
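    The "classical DO deficit model" this abstract modifies is, in its standard textbook form, the Streeter-Phelps sag equation. A minimal sketch follows; the parameter values are illustrative, not taken from the study:

    ```python
    import math

    def do_deficit(t, L0, D0, kd, ka):
        """Streeter-Phelps DO deficit (mg/L) at time t (days).
        L0: initial ultimate BOD, D0: initial deficit,
        kd: deoxygenation rate, ka: reaeration rate (1/day), ka != kd."""
        return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
            + D0 * math.exp(-ka * t)

    def t_critical(L0, D0, kd, ka):
        """Time of maximum deficit (the bottom of the oxygen 'sag')."""
        return math.log((ka / kd) * (1.0 - D0 * (ka - kd) / (kd * L0))) / (ka - kd)

    # Illustrative run: the deficit starts at D0, rises to a sag, then recovers.
    tc = t_critical(L0=20.0, D0=2.0, kd=0.3, ka=0.6)
    sag = do_deficit(tc, 20.0, 2.0, 0.3, 0.6)
    ```

    A plant-oxygenation term of the kind the paper proposes would enter as an additional source in this balance.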

  13. Hyperspectral material identification on radiance data using single-atmosphere or multiple-atmosphere modeling

    Science.gov (United States)

    Mariano, Adrian V.; Grossmann, John M.

    2010-11-01

    Reflectance-domain methods convert hyperspectral data from radiance to reflectance using an atmospheric compensation model. Material detection and identification are performed by comparing the compensated data to target reflectance spectra. We introduce two radiance-domain approaches, Single atmosphere Adaptive Cosine Estimator (SACE) and Multiple atmosphere ACE (MACE) in which the target reflectance spectra are instead converted into sensor-reaching radiance using physics-based models. For SACE, known illumination and atmospheric conditions are incorporated in a single atmospheric model. For MACE the conditions are unknown so the algorithm uses many atmospheric models to cover the range of environmental variability, and it approximates the result using a subspace model. This approach is sometimes called the invariant method, and requires the choice of a subspace dimension for the model. We compare these two radiance-domain approaches to a Reflectance-domain ACE (RACE) approach on a HYDICE image featuring concealed materials. All three algorithms use the ACE detector, and all three techniques are able to detect most of the hidden materials in the imagery. For MACE we observe a strong dependence on the choice of the material subspace dimension. Increasing this value can lead to a decline in performance.
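    The ACE statistic shared by all three detectors has a standard closed form, ACE(x) = (sᵀΣ⁻¹x)² / ((sᵀΣ⁻¹s)(xᵀΣ⁻¹x)), for target signature s, pixel x, and background covariance Σ. A minimal two-band sketch with hypothetical, mean-removed spectra:

    ```python
    def ace(x, s, cov):
        """Adaptive Cosine Estimator score in [0, 1] for a 2-band pixel x,
        target signature s, and background covariance cov (mean-removed data)."""
        (a, b), (c, d) = cov
        det = a * d - b * c
        inv = [[d / det, -b / det], [-c / det, a / det]]  # 2x2 matrix inverse

        def quad(u, v):  # u^T inv v
            return sum(u[i] * inv[i][j] * v[j] for i in range(2) for j in range(2))

        return quad(s, x) ** 2 / (quad(s, s) * quad(x, x))

    identity = [[1.0, 0.0], [0.0, 1.0]]
    perfect = ace([2.0, 0.0], [1.0, 0.0], identity)  # scaled copy of the target
    miss = ace([0.0, 1.0], [1.0, 0.0], identity)     # orthogonal to the target
    ```

    SACE, MACE, and RACE differ only in how the target spectrum s is produced (one atmosphere, a subspace spanning many atmospheres, or compensated reflectance), not in this detector.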

  14. A latent class multiple constraint multiple discrete-continuous extreme value model of time use and goods consumption.

    Science.gov (United States)

    2016-06-01

    This paper develops a microeconomic theory-based multiple discrete continuous choice model that considers: (a) that both goods consumption and time allocations (to work and non-work activities) enter separately as decision variables in the utility fu...

  15. Using hidden Markov models to align multiple sequences.

    Science.gov (United States)

    Mount, David W

    2009-07-01

    A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
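    The multiply-probabilities-along-a-path scoring described here can be sketched for a toy two-column profile HMM. The numbers below are invented, and a real profile HMM also needs insert/delete states and a Viterbi or forward sum over paths:

    ```python
    import math

    # Toy profile HMM: two match states with per-column symbol frequencies,
    # and the transition probability taken on entering each state.
    emissions = [
        {"A": 0.8, "C": 0.1, "G": 0.05, "T": 0.05},  # column 1 state
        {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},    # column 2 state
    ]
    transitions = [0.9, 0.9]  # begin -> M1, M1 -> M2

    def path_log_prob(seq):
        """Log probability of seq along the all-match path: the product of
        each state's transition and emission probabilities, in log space."""
        lp = 0.0
        for sym, emit, trans in zip(seq, emissions, transitions):
            lp += math.log(trans) + math.log(emit[sym])
        return lp
    ```

    A sequence matching the column consensus ("AC" here) scores higher than one that does not, which is exactly how an HMM ranks candidate alignments.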

  16. Analysis and application of opinion model with multiple topic interactions.

    Science.gov (United States)

    Xiong, Fei; Liu, Yun; Wang, Liang; Wang, Ximeng

    2017-08-01

    To reveal heterogeneous behaviors of opinion evolution in different scenarios, we propose an opinion model with topic interactions. Individual opinions and topic features are represented by a multidimensional vector. We measure an agent's action towards a specific topic by the product of opinion and topic feature. When pairs of agents interact for a topic, their actions are introduced to opinion updates with bounded confidence. Simulation results show that a transition from a disordered state to a consensus state occurs at a critical point of the tolerance threshold, which depends on the opinion dimension. The critical point increases as the dimension of opinions increases. Multiple topics promote opinion interactions and lead to the formation of macroscopic opinion clusters. In addition, more topics accelerate the evolutionary process and weaken the effect of network topology. We use two sets of large-scale real data to evaluate the model, and the results prove its effectiveness in characterizing a real evolutionary process. Our model achieves high performance in individual action prediction and even outperforms state-of-the-art methods. Meanwhile, our model has much smaller computational complexity. This paper provides a demonstration for possible practical applications of theoretical opinion dynamics.
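    The update rule summarized above (vector opinions, interaction only within a tolerance threshold) can be sketched with a generic Deffuant-style bounded-confidence rule. This is a simplification of the authors' action-based model, and all parameter values are illustrative:

    ```python
    import random

    def interact(oi, oj, eps, mu=0.5):
        """Move two opinion vectors toward each other only if their
        Euclidean distance is below the tolerance threshold eps."""
        dist = sum((a - b) ** 2 for a, b in zip(oi, oj)) ** 0.5
        if dist >= eps:
            return oi, oj
        return ([a + mu * (b - a) for a, b in zip(oi, oj)],
                [b + mu * (a - b) for a, b in zip(oi, oj)])

    def spread(agents):
        """Largest pairwise opinion distance in the population."""
        return max(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                   for p in agents for q in agents)

    random.seed(0)
    agents = [[random.random(), random.random()] for _ in range(20)]
    mean0 = [sum(a[k] for a in agents) / len(agents) for k in range(2)]
    before = spread(agents)
    for _ in range(2000):
        i, j = random.sample(range(len(agents)), 2)
        agents[i], agents[j] = interact(agents[i], agents[j], eps=0.8)
    after = spread(agents)
    ```

    With a symmetric convergence rate the population mean opinion is conserved, while the opinion spread can only shrink, mirroring the transition toward consensus above the critical tolerance.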

  17. A mathematical model to determine incorporated quantities of radioactivity from the measured photometric values of tritium-autoradiographs in neuroanatomy

    International Nuclear Information System (INIS)

    Jennissen, J.J.

    1981-01-01

    The mathematical/empirical model developed in this paper helps to determine the incorporated radioactivity from the measured photometric values and the exposure time T. Possible errors of autoradiography due to the exposure time or the preparation are taken into consideration by the empirical model. It is shown that the error of approximately 400% that appears when the measured photometric values alone are compared can be corrected. The model is valid for neuroanatomy, as optical nerves, i.e. neuroanatomical material, were used to develop it. Its application to other sections of the central nervous system also seems justified owing to the reduction of errors thus achieved. (orig.) [de]

  18. Direction of Effects in Multiple Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
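    The core diagnostic, comparing third central moments of residuals from the two competing regressions, can be illustrated in simulation: with a skewed true predictor and a symmetric error, the correctly specified direction yields the less skewed residuals. This sketch uses the simple bivariate case and omits the paper's formal inference tests:

    ```python
    import random
    import statistics

    def ols_residuals(x, y):
        """Residuals of the simple regression of y on x (with intercept)."""
        mx, my = statistics.fmean(x), statistics.fmean(y)
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
        a = my - b * mx
        return [yi - (a + b * xi) for xi, yi in zip(x, y)]

    def third_central_moment(r):
        m = statistics.fmean(r)
        return statistics.fmean([(ri - m) ** 3 for ri in r])

    random.seed(1)
    x = [random.expovariate(1.0) for _ in range(5000)]   # skewed "cause"
    y = [0.8 * xi + random.gauss(0.0, 1.0) for xi in x]  # symmetric error

    skew_causal = abs(third_central_moment(ols_residuals(x, y)))   # y on x
    skew_reverse = abs(third_central_moment(ols_residuals(y, x)))  # x on y
    ```

    In the mis-specified direction the residuals inherit skewness from x, so their third central moment is pushed away from zero; the direction-of-dependence tests in the paper formalize this comparison.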

  19. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.

  20. Characterising and modelling regolith stratigraphy using multiple geophysical techniques

    Science.gov (United States)

    Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.

    2013-12-01

    Regolith is the weathered, typically mineral-rich layer from fresh bedrock to the land surface. It encompasses soil (A, E and B horizons) that has undergone pedogenesis. Below is the weathered C horizon that retains at least some of the original rocky fabric and structure. At the base of this is the lower regolith boundary of continuous hard bedrock (the R horizon). Regolith may be absent, e.g. at rocky outcrops, or may be many tens of metres deep. Comparatively little is known about regolith, and critical questions remain regarding its composition and characteristics - especially deeper down, where the challenge of collecting reliable data increases with depth. In Australia research is underway to characterise and map regolith using consistent methods at scales ranging from local (e.g. hillslope) to continental. These efforts are driven by many research needs, including Critical Zone modelling and simulation. Pilot research in South Australia using digitally-based environmental correlation techniques modelled the depth to bedrock to 9 m for an upland area of 128 000 ha. One finding was the inability to reliably model local-scale depth variations over horizontal distances of 2 - 3 m and vertical distances of 1 - 2 m. The need to better characterise variations in regolith to strengthen models at these fine scales was discussed. Addressing this need, we describe high-intensity, ground-based multi-sensor geophysical profiling of three hillslope transects in different regolith-landscape settings to characterise fine resolution (i.e. a number of frequencies; multiple frequency, multiple coil electromagnetic induction; and high resolution resistivity. These were accompanied by georeferenced, closely spaced deep cores to 9 m - or to core refusal. The intact cores were sub-sampled to standard depths and analysed for regolith properties to compile core datasets consisting of: water content; texture; electrical conductivity; and weathered state. After preprocessing (filtering, geo

  1. A mechano-regulatory bone-healing model incorporating cell-phenotype specific activity

    NARCIS (Netherlands)

    Isaksson, H.E.; Donkelaar, van C.C.; Huiskes, R.; Ito, K.

    2008-01-01

    Phenomenological computational models of tissue regeneration and bone healing have been only partially successful in predicting experimental observations. This may be a result of simplistic modeling of cellular activity. Furthermore, phenomenological models are limited when considering the effects

  2. Integrating multiple distribution models to guide conservation efforts of an endangered toad

    Science.gov (United States)

    Treglia, Michael L.; Fisher, Robert N.; Fitzgerald, Lee A.

    2015-01-01

    Species distribution models are used for numerous purposes such as predicting changes in species' ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in the identification of sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species' current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focused on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced modeled suitability of sites, thus many areas modeled as potential habitat were not modeled as current habitat. We propose such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with sufficient data necessary to develop appropriate distribution models.

  3. Interaction of multiple biomimetic antimicrobial polymers with model bacterial membranes

    Energy Technology Data Exchange (ETDEWEB)

    Baul, Upayan, E-mail: upayanb@imsc.res.in; Vemparala, Satyavani, E-mail: vani@imsc.res.in [The Institute of Mathematical Sciences, C.I.T. Campus, Taramani, Chennai 600113 (India); Kuroda, Kenichi, E-mail: kkuroda@umich.edu [Department of Biologic and Materials Sciences, University of Michigan School of Dentistry, Ann Arbor, Michigan 48109 (United States)

    2014-08-28

    Using atomistic molecular dynamics simulations, the interaction of multiple synthetic random methacrylate copolymers with prototypical bacterial membranes is investigated. The simulations show that the cationic polymers form a micellar aggregate in the water phase and that the aggregate, when interacting with the bacterial membrane, induces the oppositely charged anionic lipid molecules to cluster and enhances the ordering of lipid chains. The model bacterial membrane consequently develops lateral inhomogeneity in its thickness profile compared to the polymer-free system. The individual polymers in the aggregate are released into the bacterial membrane in a phased manner, and the simulations suggest that the most probable location of the partitioned polymers is near the 1-palmitoyl-2-oleoyl-phosphatidylglycerol (POPG) clusters. The partitioned polymers preferentially adopt facially amphiphilic conformations at the lipid-water interface, despite lacking intrinsic secondary structures such as the α-helix or β-sheet found in naturally occurring antimicrobial peptides.

  4. [Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources]

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, T.J.

    1992-09-01

    A project was initiated in March 1992 to (1) incorporate a rigorous organic acid representation, based on empirical data and geochemical considerations, into the MAGIC model of acidification response, and (2) test the revised model using three sets of independent data. After six months of performance, the project is on schedule and the majority of the tasks outlined for Year 1 have been successfully completed. Major accomplishments to date include development of the organic acid modeling approach, using data from the Adirondack Lakes Survey Corporation (ALSC), and coupling the organic acid model with MAGIC for chemical hindcast comparisons. The incorporation of an organic acid representation into MAGIC can account for much of the discrepancy earlier observed between MAGIC hindcasts and paleolimnological reconstructions of preindustrial pH and alkalinity for 33 statistically-selected Adirondack lakes. Additional work is ongoing for model calibration and testing with data from two whole-catchment artificial acidification projects. Results obtained thus far are being prepared as manuscripts for submission to the peer-reviewed scientific literature.

  5. The intergenerational multiple deficit model and the case of dyslexia

    Directory of Open Access Journals (Sweden)

    Elsje van Bergen

    2014-06-01

    Which children go on to develop dyslexia? Since dyslexia has a multifactorial aetiology, this question can be restated as: What are the factors that put children at high risk of developing dyslexia? It is argued that a useful theoretical framework to address this question is Pennington's (2006) multiple deficit model (MDM). This model replaces models that attribute dyslexia to a single underlying cause. Subsequently, the generalist genes hypothesis for learning (dis)abilities (Plomin & Kovas, 2005) is described and integrated with the MDM. Finally, findings are presented from a longitudinal study of children at family risk of dyslexia. Such studies can contribute to testing and specifying the MDM. In this study, risk factors at both the child and family level were investigated. This led to the proposed intergenerational MDM, in which both parents confer liability via intertwined genetic and environmental pathways. Future scientific directions are discussed to investigate parent-offspring resemblance and transmission patterns, which will shed new light on disorder aetiology.

  6. An Advanced N-body Model for Interacting Multiple Stellar Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brož, Miroslav [Astronomical Institute of the Charles University, Faculty of Mathematics and Physics, V Holešovičkách 2, CZ-18000 Praha 8 (Czech Republic)

    2017-06-01

    We construct an advanced model for interacting multiple stellar systems in which we compute all trajectories with a numerical N-body integrator, namely the Bulirsch–Stoer integrator from the SWIFT package. We can then derive various observables: astrometric positions, radial velocities, minima timings (TTVs), eclipse durations, interferometric visibilities, closure phases, synthetic spectra, spectral energy distribution, and even complete light curves. We use a modified version of the Wilson–Devinney code for the latter, in which the instantaneous true phase and inclination of the eclipsing binary are governed by the N-body integration. If all of these types of observations are at one's disposal, a joint χ² metric and an optimization algorithm (a simplex or simulated annealing) allow one to search for a global minimum and construct very robust models of stellar systems. At the same time, our N-body model is free from artifacts that may arise if mutual gravitational interactions among all components are not self-consistently accounted for. Finally, we present a number of examples showing dynamical effects that can be studied with our code and we discuss how systematic errors may affect the results (and how to prevent this from happening).

  7. Negative binomial models for abundance estimation of multiple closed populations

    Science.gov (United States)

    Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.

    2001-01-01

    Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating multiple populations by simultaneously using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem, for each year, 1986-1998. Akaike's Information Criteria (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.
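    The NBD's usefulness for heterogeneous counts rests on the overdispersion identity Var = m + m²/k, with dispersion parameter k. A hedged method-of-moments sketch follows; the counts are invented, and the paper itself fits a suite of NB models by likelihood and AIC rather than this estimator:

    ```python
    import statistics

    def nb_moments(counts):
        """Method-of-moments NB fit: mean m and dispersion k from
        Var = m + m^2 / k. As k -> infinity the NB approaches a Poisson,
        so a sample with Var <= mean shows no evidence of heterogeneity."""
        m = statistics.fmean(counts)
        v = statistics.pvariance(counts)
        if v <= m:
            return m, float("inf")  # no overdispersion: Poisson-like
        return m, m * m / (v - m)

    # Hypothetical sighting frequencies of uniquely identified females
    counts = [1, 1, 2, 2, 3, 3, 4, 5, 7, 12]
    m, k = nb_moments(counts)
    ```

    A small finite k, as here, signals the detection heterogeneity that motivates using the NBD instead of a Poisson model.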

  8. A diagnostic tree model for polytomous responses with multiple strategies.

    Science.gov (United States)

    Ma, Wenchao

    2018-04-23

    Constructed-response items have been shown to be appropriate for cognitively diagnostic assessments because students' problem-solving procedures can be observed, providing direct evidence for making inferences about their proficiency. However, multiple strategies used by students make item scoring and psychometric analyses challenging. This study introduces the so-called two-digit scoring scheme into diagnostic assessments to record both students' partial credits and their strategies. This study also proposes a diagnostic tree model (DTM) by integrating the cognitive diagnosis models with the tree model to analyse the items scored using the two-digit rubrics. Both convergent and divergent tree structures are considered to accommodate various scoring rules. The MMLE/EM algorithm is used for item parameter estimation of the DTM, and has been shown to provide good parameter recovery under varied conditions in a simulation study. A set of data from TIMSS 2007 mathematics assessment is analysed to illustrate the use of the two-digit scoring scheme and the DTM. © 2018 The British Psychological Society.

  9. Modeling Pan Evaporation for Kuwait by Multiple Linear Regression

    Science.gov (United States)

    Almedeij, Jaber

    2012-01-01

    Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements with substantially continuous coverage over a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. The multiple linear regression technique is used with a variable-selection procedure to fit the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed, using power and exponential functions respectively, to linearize the curvilinear patterns in the data. The evaporation models suggested with the best variable combinations were shown to produce results in reasonable agreement with observed values. PMID:23226984
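    Fitting a multiple linear regression of this kind via the normal equations can be sketched in pure Python. The data and coefficients below are synthetic stand-ins, not the Kuwait measurements, and the sketch omits the paper's transformations and variable selection:

    ```python
    import random

    def fit_mlr(X, y):
        """Least-squares coefficients for y ~ X beta via the normal equations
        A = X'X, b = X'y, solved by Gaussian elimination with partial
        pivoting (X must already include an intercept column of ones)."""
        k = len(X[0])
        A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
        b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
        for col in range(k):
            piv = max(range(col, k), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, k):
                f = A[r][col] / A[col][col]
                for c in range(col, k):
                    A[r][c] -= f * A[col][c]
                b[r] -= f * b[col]
        beta = [0.0] * k
        for r in reversed(range(k)):
            beta[r] = (b[r] - sum(A[r][c] * beta[c]
                                  for c in range(r + 1, k))) / A[r][r]
        return beta

    # Synthetic "evaporation" data: E = 0.5 + 0.1*T - 0.02*RH + noise
    random.seed(2)
    rows, target = [], []
    for _ in range(300):
        T, RH = random.uniform(15, 45), random.uniform(10, 90)
        rows.append([1.0, T, RH])
        target.append(0.5 + 0.1 * T - 0.02 * RH + random.gauss(0.0, 0.05))
    beta = fit_mlr(rows, target)  # [intercept, T coefficient, RH coefficient]
    ```

    With enough data the recovered coefficients approach the generating values, which is the basic check behind any empirical evaporation relation.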

  10. Glucose oxidase incorporated collagen matrices for dermal wound repair in diabetic rat models: a biochemical study.

    Science.gov (United States)

    Arul, V; Masilamoni, J G; Jesudason, E P; Jaji, P J; Inayathullah, M; Dicky John, D G; Vignesh, S; Jayakumar, R

    2012-05-01

    Impaired wound healing in diabetes is a well-documented phenomenon. Emerging data favor the involvement of free radicals in the pathogenesis of diabetic wound healing. We investigated the beneficial role of the sustained release of reactive oxygen species (ROS) in diabetic dermal wound healing. In order to achieve the sustained delivery of ROS in the wound bed, we incorporated glucose oxidase in the collagen matrix (GOIC), which is applied to the healing diabetic wound. Our in vitro proteolysis studies on incorporated GOIC show increased stability against proteases in the collagen matrix. In this study, GOIC film and collagen film (CF) are used as dressing material on the wounds of streptozotocin-induced diabetic rats. A significant increase in ROS (p < 0.05) was observed in the fibroblasts of the GOIC group during the inflammation period compared to the CF and control groups. This elevated level upregulated the antioxidant status in the granulation tissue and improved cellular proliferation in the GOIC group. Interestingly, our biochemical parameters nitric oxide, hydroxyproline, uronic acid, protein, and DNA content in the healing wound showed that there is an increase in proliferation of cells in GOIC when compared to the control and CF groups. In addition, evidence from wound contraction and histology reveals faster healing in the GOIC group. Our observations document that GOIC matrices could be effectively used for diabetic wound healing therapy.

  11. Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.

    Science.gov (United States)

    Smith, Kent W.; Sasaki, M. S.

    1979-01-01

    A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
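    The best-known remedy in this setting, which may or may not coincide with the authors' exact proposal, is mean-centering the variables before forming the product term. A sketch on synthetic data showing how centering collapses the collinearity between a predictor and its multiplicative term:

    ```python
    import random
    import statistics

    def corr(u, v):
        """Pearson correlation of two equal-length sequences."""
        mu, mv = statistics.fmean(u), statistics.fmean(v)
        su = sum((a - mu) ** 2 for a in u) ** 0.5
        sv = sum((b - mv) ** 2 for b in v) ** 0.5
        return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

    random.seed(3)
    x = [random.uniform(5, 15) for _ in range(1000)]
    z = [random.uniform(5, 15) for _ in range(1000)]

    # Raw product term: strongly collinear with x because x, z are far from 0.
    raw_product = [a * b for a, b in zip(x, z)]
    # Centered product term: formed after subtracting the means.
    mx, mz = statistics.fmean(x), statistics.fmean(z)
    centered_product = [(a - mx) * (b - mz) for a, b in zip(x, z)]

    r_raw = abs(corr(x, raw_product))
    r_centered = abs(corr(x, centered_product))
    ```

    Centering changes only the parameterization of the regression, not its fit, which is why it reduces multicollinearity without resorting to ridge-style shrinkage.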

  12. Incorporating population viability models into species status assessment and listing decisions under the U.S. Endangered Species Act

    Directory of Open Access Journals (Sweden)

    Conor P. McGowan

    2017-10-01

    Assessment of a species' status is a key part of management decision making for endangered and threatened species under the U.S. Endangered Species Act. Predicting the future state of the species is an essential part of species status assessment, and projection models can play an important role in developing predictions. We built a stochastic simulation model that incorporated parametric and environmental uncertainty to predict the probable future status of the Sonoran desert tortoise in the southwestern United States and North Central Mexico. Sonoran desert tortoise was a Candidate species for listing under the Endangered Species Act, and decision makers wanted to use model predictions in their decision making process. The model accounted for future habitat loss and possible effects of climate change induced droughts to predict future population growth rates, abundances, and quasi-extinction probabilities. Our model predicts that the population will likely decline over the next few decades, but there is very low probability of quasi-extinction less than 75 years into the future. Increases in drought frequency and intensity may increase extinction risk for the species. Our model helped decision makers predict and characterize uncertainty about the future status of the species in their listing decision. We incorporated complex ecological processes (e.g., climate change effects on tortoises) in transparent and explicit ways tailored to support decision making processes related to endangered species.

  13. Incorporating population viability models into species status assessment and listing decisions under the U.S. Endangered Species Act

    Science.gov (United States)

    McGowan, Conor P.; Allan, Nathan; Servoss, Jeff; Hedwall, Shaula J.; Wooldridge, Brian

    2017-01-01

    Assessment of a species' status is a key part of management decision making for endangered and threatened species under the U.S. Endangered Species Act. Predicting the future state of the species is an essential part of species status assessment, and projection models can play an important role in developing predictions. We built a stochastic simulation model that incorporated parametric and environmental uncertainty to predict the probable future status of the Sonoran desert tortoise in the southwestern United States and North Central Mexico. Sonoran desert tortoise was a Candidate species for listing under the Endangered Species Act, and decision makers wanted to use model predictions in their decision making process. The model accounted for future habitat loss and possible effects of climate change induced droughts to predict future population growth rates, abundances, and quasi-extinction probabilities. Our model predicts that the population will likely decline over the next few decades, but there is very low probability of quasi-extinction less than 75 years into the future. Increases in drought frequency and intensity may increase extinction risk for the species. Our model helped decision makers predict and characterize uncertainty about the future status of the species in their listing decision. We incorporated complex ecological processes (e.g., climate change effects on tortoises) in transparent and explicit ways tailored to support decision making processes related to endangered species.

  14. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space, based on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful as the parameter dimensions grow, since adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples; furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track parameters outside the current parameter range boundary.
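
    As an illustration of the Latin Hypercube Sampling step described above, the sketch below draws exactly one sample per stratum in every parameter dimension; the parameter ranges and sample count are hypothetical, not values from the study.

```python
import random

def latin_hypercube(bounds, n, rng=random.Random(0)):
    """Draw n samples from the box defined by `bounds` (list of (lo, hi)
    pairs), placing exactly one sample per stratum in every dimension."""
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n))
        rng.shuffle(strata)          # independent permutation per dimension
        width = (hi - lo) / n
        for i, s in enumerate(strata):
            # uniform draw inside the assigned stratum
            samples[i][d] = lo + (s + rng.random()) * width
    return samples

# 5 samples over two hypothetical parameter ranges; adding a dimension
# adds a column, not more samples -- the property noted in the abstract.
pts = latin_hypercube([(0.0, 1.0), (10.0, 20.0)], 5)
```

    This is why the abstract can claim the model count need not grow with the number of parameters: each new parameter only adds another stratified coordinate to the existing samples.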

  15. Modeling a historical mountain pine beetle outbreak using Landsat MSS and multiple lines of evidence

    Science.gov (United States)

    Assal, Timothy J.; Sibold, Jason; Reich, Robin M.

    2014-01-01

    Mountain pine beetles are significant forest disturbance agents, capable of inducing widespread mortality in coniferous forests in western North America. Various remote sensing approaches have assessed the impacts of beetle outbreaks over the last two decades. However, few studies have addressed the impacts of historical mountain pine beetle outbreaks, including the 1970s event that impacted Glacier National Park. The lack of spatially explicit data on this disturbance represents both a major data gap and a critical research challenge in that wildfire has removed some of the evidence from the landscape. We utilized multiple lines of evidence to model forest canopy mortality as a proxy for outbreak severity. We incorporate historical aerial and landscape photos, aerial detection survey data, a nine-year collection of satellite imagery and abiotic data. This study presents a remote sensing based framework to (1) relate measurements of canopy mortality from fine-scale aerial photography to coarse-scale multispectral imagery and (2) classify the severity of mountain pine beetle affected areas using a temporal sequence of Landsat data and other landscape variables. We sampled canopy mortality in 261 plots from aerial photos and found that insect effects on mortality were evident in changes to the Normalized Difference Vegetation Index (NDVI) over time. We tested multiple spectral indices and found that a combination of NDVI and the green band resulted in the strongest model. We report a two-step process where we utilize a generalized least squares model to account for the large-scale variability in the data and a binary regression tree to describe the small-scale variability. The final model had a root mean square error estimate of 9.8% canopy mortality, a mean absolute error of 7.6% and an R2 of 0.82. The results demonstrate that a model of percent canopy mortality as a continuous variable can be developed to identify a gradient of mountain pine beetle severity on the

  16. Shared mental models of integrated care: aligning multiple stakeholder perspectives.

    Science.gov (United States)

    Evans, Jenna M; Baker, G Ross

    2012-01-01

    Health service organizations and professionals are under increasing pressure to work together to deliver integrated patient care. A common understanding of integration strategies may facilitate the delivery of integrated care across inter-organizational and inter-professional boundaries. This paper aims to build a framework for exploring and potentially aligning multiple stakeholder perspectives of systems integration. The authors draw from the literature on shared mental models, strategic management and change, framing, stakeholder management, and systems theory to develop a new construct, Mental Models of Integrated Care (MMIC), which consists of three types of mental models, i.e. integration-task, system-role, and integration-belief. The MMIC construct encompasses many of the known barriers and enablers to integrating care while also providing a comprehensive, theory-based framework of psychological factors that may influence inter-organizational and inter-professional relations. While the existing literature on integration focuses on optimizing structures and processes, the MMIC construct emphasizes the convergence and divergence of stakeholders' knowledge and beliefs, and how these underlying cognitions influence interactions (or lack thereof) across the continuum of care. MMIC may help to: explain what differentiates effective from ineffective integration initiatives; determine system readiness to integrate; diagnose integration problems; and develop interventions for enhancing integrative processes and ultimately the delivery of integrated care. Global interest and ongoing challenges in integrating care underline the need for research on the mental models that characterize the behaviors of actors within health systems; the proposed framework offers a starting point for applying a cognitive perspective to health systems integration.

  17. Simplex network modeling for press-molded ceramic bodies incorporated with granite waste

    International Nuclear Information System (INIS)

    Pedroti, L.G.; Vieira, C.M.F.; Alexandre, J.; Monteiro, S.N.; Xavier, G.C.

    2012-01-01

    Extrusion of a clay body is the most commonly applied process in the ceramic industries for manufacturing structural blocks. Nowadays, the assembly of such blocks through a fitting system that facilitates the final mounting is gaining attention owing to the savings in material and the reduction in building construction cost. In this work, the ideal composition of clay bodies incorporated with granite powder waste was investigated for the production of press-molded ceramic blocks. An experimental design was applied to determine the optimum properties and microstructures, involving not only the precursor compositions but also the pressing and temperature conditions. Press loads of 15 ton and firing temperatures from 850 to 1050°C were considered. The results indicated mechanical strength varying from 2 MPa to 20 MPa and water absorption varying from 19% to 30%. (author)

  18. A new general methodology for incorporating physico-chemical transformations into multi-phase wastewater treatment process models.

    Science.gov (United States)

    Lizarralde, I; Fernández-Arévalo, T; Brouckaert, C; Vanrolleghem, P; Ikumi, D S; Ekama, G A; Ayesa, E; Grau, P

    2015-05-01

    This paper introduces a new general methodology for incorporating physico-chemical and chemical transformations into multi-phase wastewater treatment process models in a systematic and rigorous way under a Plant-Wide modelling (PWM) framework. The methodology presented in this paper requires the selection of the relevant biochemical, chemical and physico-chemical transformations taking place and the definition of the mass transport for the co-existing phases. As an example a mathematical model has been constructed to describe a system for biological COD, nitrogen and phosphorus removal, liquid-gas transfer, precipitation processes, and chemical reactions. The capability of the model has been tested by comparing simulated and experimental results for a nutrient removal system with sludge digestion. Finally, a scenario analysis has been undertaken to show the potential of the obtained mathematical model to study phosphorus recovery. Copyright © 2015 Elsevier Ltd. All rights reserved.
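
    The liquid-gas transfer component mentioned above is commonly written as a two-film expression dC/dt = kLa (Csat − C). The minimal sketch below integrates that standard sub-model with explicit Euler steps; it illustrates the general modelling idea only, not the authors' PWM code, and all parameter values are assumptions.

```python
def simulate_gas_transfer(c0, c_sat, kla, dt, steps):
    """Explicit Euler integration of dC/dt = kLa * (C_sat - C),
    the standard two-film model for liquid-gas mass transfer."""
    c = c0
    history = [c]
    for _ in range(steps):
        c += kla * (c_sat - c) * dt
        history.append(c)
    return history

# dissolved oxygen rising toward saturation (illustrative values:
# C* = 9 g/m3, kLa = 4 1/h, time step 0.01 h over 2 h)
profile = simulate_gas_transfer(c0=1.0, c_sat=9.0, kla=4.0, dt=0.01, steps=200)
```

    In a plant-wide model, one such transport term per transferable component couples the liquid and gas phase mass balances.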

  19. Eye Movement Abnormalities in Multiple Sclerosis: Pathogenesis, Modeling, and Treatment

    Directory of Open Access Journals (Sweden)

    Alessandro Serra

    2018-02-01

    Full Text Available Multiple sclerosis (MS) commonly causes eye movement abnormalities that may have a significant impact on patients’ disability. Inflammatory demyelinating lesions, especially occurring in the posterior fossa, result in a wide range of disorders, spanning from acquired pendular nystagmus (APN) to internuclear ophthalmoplegia (INO), among the most common. As the control of eye movements is well understood in terms of anatomical substrate and underlying physiological network, studying ocular motor abnormalities in MS provides a unique opportunity to gain insights into mechanisms of disease. Quantitative measurement and modeling of eye movement disorders, such as INO, may lead to a better understanding of common symptoms encountered in MS, such as Uhthoff’s phenomenon and fatigue. In turn, the pathophysiology of a range of eye movement abnormalities, such as APN, has been clarified based on correlation of experimental models with lesion localization by neuroimaging in MS. Eye movement disorders have the potential to be utilized as structural and functional biomarkers of early cognitive deficit, may help in assessing disease status and progression, and may serve as a platform and functional outcome to test novel therapeutic agents for MS. Knowledge of neuropharmacology applied to eye movement dysfunction has guided testing and use of a number of pharmacological agents to treat some eye movement disorders found in MS, such as APN and other forms of central nystagmus.

  20. Electricity supply industry modelling for multiple objectives under demand growth uncertainty

    International Nuclear Information System (INIS)

    Heinrich, G.; Basson, L.; Howells, M.; Petrie, J.

    2007-01-01

    Appropriate energy-environment-economic (E3) modelling provides key information for policy makers in the electricity supply industry (ESI) faced with navigating a sustainable development path. Key challenges include engaging with stakeholder values and preferences, and exploring trade-offs between competing objectives in the face of underlying uncertainty. As a case study we represent the South African ESI using a partial equilibrium E3 modelling approach, and extend the approach to include multiple objectives under selected future uncertainties. This extension is achieved by assigning cost penalties to non-cost attributes to force the model's least-cost objective function to better satisfy non-cost criteria. This paper incorporates aspects of flexibility to demand growth uncertainty into each future expansion alternative by introducing stochastic programming with recourse into the model. Technology lead times are taken into account by the inclusion of a decision node along the time horizon where aspects of real options theory are considered within the planning process. Hedging in the recourse programming is automatically translated from being purely financial to include the other attributes that the cost penalties represent. From a retrospective analysis of the cost penalties, the correct market signals can be derived to meet policy goals, with due regard to demand uncertainty. (author)
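
    The stochastic-programming-with-recourse idea can be illustrated with a toy capacity-expansion example, not drawn from the study itself: commit to a build capacity now (first stage), then pay a recourse cost for any shortfall once demand growth is revealed (second stage). All costs, probabilities and demands below are hypothetical.

```python
def expected_cost(capacity, build_cost, recourse_cost, scenarios):
    """First-stage build cost plus probability-weighted recourse cost
    for demand exceeding the built capacity in each scenario."""
    cost = build_cost * capacity
    for prob, demand in scenarios:
        cost += prob * recourse_cost * max(0.0, demand - capacity)
    return cost

# demand growth scenarios: (probability, demand in MW) -- hypothetical
scenarios = [(0.3, 80.0), (0.5, 100.0), (0.2, 130.0)]
candidates = range(0, 141, 10)
best = min(candidates, key=lambda c: expected_cost(c, 1.0, 3.0, scenarios))
```

    The optimal first-stage decision hedges between over-building (wasted capital) and relying on expensive recourse, which is exactly the trade-off the recourse formulation makes explicit.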

  1. A new adaptive control scheme based on the interacting multiple model (IMM) estimation

    International Nuclear Information System (INIS)

    Afshari, Hamed H.; Al-Ani, Dhafar; Habibi, Saeid

    2016-01-01

    In this paper, an interacting multiple model (IMM) adaptive estimation approach is incorporated to design an optimal adaptive control law for stabilizing an unmanned vehicle. Because the forward velocity of the unmanned vehicle varies, its aerodynamic derivatives are constantly changing. In order to stabilize the unmanned vehicle and achieve the control objectives for in-flight conditions, one seeks an adaptive control strategy that can adjust itself to varying flight conditions. In this context, a bank of linear models is used to describe the vehicle dynamics in different operating modes. Each operating mode represents a particular dynamic with a different forward velocity. These models are then used within an IMM filter containing a bank of Kalman filters (KF) in a parallel operating mechanism. To regulate and stabilize the vehicle, a linear quadratic regulator (LQR) law is designed and implemented for each mode. The IMM structure determines the current mode based on the stored models and in-flight input-output measurements. The LQR design also provides a set of controllers, each corresponding to a particular flight mode and minimizing the tracking error. Finally, the ultimate control law is obtained as a weighted summation of all individual controllers, where the weights are the mode probabilities of each operating mode.
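
    The probability-weighted blending of per-mode controllers can be sketched in scalar form as follows. This is a deliberate simplification of the IMM machinery (scalar states, Gaussian innovation likelihoods, no mixing step), and every number is an assumption for illustration.

```python
import math

def mode_probabilities(residuals, sigmas, priors):
    """Update mode probabilities from each filter's innovation (residual),
    assuming scalar Gaussian innovations -- an illustrative simplification."""
    likes = [p * math.exp(-0.5 * (r / s) ** 2) / s
             for r, s, p in zip(residuals, sigmas, priors)]
    total = sum(likes)
    return [l / total for l in likes]

def blended_control(gains, probs, x):
    """Weighted sum of per-mode LQR laws u_i = -K_i * x (scalar sketch)."""
    return sum(-k * p * x for k, p in zip(gains, probs))

# two flight modes: the small residual of mode 0 makes it dominant,
# so the final control leans toward that mode's LQR gain
probs = mode_probabilities(residuals=[0.1, 2.0], sigmas=[1.0, 1.0],
                           priors=[0.5, 0.5])
u = blended_control(gains=[1.5, 4.0], probs=probs, x=2.0)
```

    The same structure carries over to the vector case, with matrix gains K_i and innovation covariances from each Kalman filter in the bank.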

  2. Using stochastic models to incorporate spatial and temporal variability [Exercise 14

    Science.gov (United States)

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...
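
    The deterministic/stochastic distinction drawn above can be made concrete with a minimal projection model: multiply abundance each year by a randomly drawn growth multiplier and record how often trajectories fall below a quasi-extinction threshold. The species parameters below are hypothetical, not values from the exercise.

```python
import random

def quasi_extinction_prob(n0, years, threshold, lambdas, runs=2000,
                          rng=random.Random(42)):
    """Fraction of stochastic trajectories N_{t+1} = lambda_t * N_t that
    fall below `threshold` at any point within `years`."""
    hits = 0
    for _ in range(runs):
        n = n0
        for _ in range(years):
            n *= rng.choice(lambdas)   # environmental stochasticity
            if n < threshold:
                hits += 1
                break
    return hits / runs

# hypothetical growth multipliers for good/average/poor years
p_ext = quasi_extinction_prob(n0=500, years=100, threshold=50,
                              lambdas=[1.10, 1.00, 0.85])
```

    A deterministic model using the mean multiplier would return a single yes/no answer; the stochastic version yields a risk estimate, which is why the distinction matters for population viability analysis.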

  3. A Mass Balance Model for Designing Green Roof Systems that Incorporate a Cistern for Re-Use

    Directory of Open Access Journals (Sweden)

    Manoj Chopra

    2012-11-01

    Full Text Available Green roofs, which have been used for several decades in many parts of the world, offer a unique and sustainable approach to stormwater management. Within this paper, evidence is presented on water retention for an irrigated green roof system; the presented design retains a volume of stormwater on site. A first-principles mass balance computer model is introduced to assist with the design of green roof systems which incorporate a cistern to capture and reuse runoff waters for irrigation of the green roof. The model is used to estimate yearly stormwater retention volume for different cistern storage volumes. Additionally, the Blaney and Criddle equation is evaluated for estimation of monthly evapotranspiration rates for irrigated systems and incorporated into the model. This allows evapotranspiration rates to be calculated for regions where measured evapotranspiration data do not exist, so the model can be used anywhere historical weather data are available. The model is developed and discussed within this paper and compared to experimental results.
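
    The two building blocks described above can be sketched briefly: the simplified Blaney-Criddle form ET0 = p (0.46 T + 8.13) for monthly evapotranspiration, and a one-step cistern mass balance. The inputs (daylight fraction, temperature, volumes) are illustrative assumptions, not the paper's data.

```python
def blaney_criddle_et(p_daylight, t_mean_c):
    """Reference evapotranspiration in mm/day from the simplified
    Blaney-Criddle form ET0 = p * (0.46*T + 8.13), where p is the mean
    daily percentage of annual daytime hours for the month."""
    return p_daylight * (0.46 * t_mean_c + 8.13)

def step_cistern(storage, capacity, roof_runoff, irrigation_demand):
    """One time-step of the cistern mass balance: add captured runoff
    (spilling any excess), then draw irrigation from what is stored.
    Returns (new storage, volume actually supplied)."""
    storage = min(capacity, storage + roof_runoff)
    supplied = min(storage, irrigation_demand)
    return storage - supplied, supplied

et = blaney_criddle_et(p_daylight=0.27, t_mean_c=25.0)   # about 5.3 mm/day
storage, supplied = step_cistern(storage=120.0, capacity=150.0,
                                 roof_runoff=60.0, irrigation_demand=40.0)
```

    Because the Blaney-Criddle form needs only temperature and daylight fraction, the mass balance can be driven by widely available historical weather records, which is the portability argument made in the abstract.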

  4. Incorporating Cold Cap Behavior in a Joule-heated Waste Glass Melter Model

    Energy Technology Data Exchange (ETDEWEB)

    Varija Agarwal; Donna Post Guillen

    2013-08-01

    In this paper, an overview of Joule-heated waste glass melters used in the vitrification of high level waste (HLW) is presented, with a focus on the cold cap region. This region, in which feed-to-glass conversion reactions occur, is critical in determining the melting properties of any given glass melter. An existing 1D computer model of the cold cap, implemented in MATLAB, is described in detail. This model is a standalone model that calculates cold cap properties based on boundary conditions at the top and bottom of the cold cap. Efforts to couple this cold cap model with a 3D STAR-CCM+ model of a Joule-heated melter are then described. The coupling is being implemented in ModelCenter, a software integration tool. The ultimate goal of this model is to guide the specification of melter parameters that optimize glass quality and production rate.

  5. Incorporating shape constraints in generalized additive modelling of the height-diameter relationship for Norway spruce

    Directory of Open Access Journals (Sweden)

    Natalya Pya

    2016-02-01

    Full Text Available Background: Measurements of tree heights and diameters are essential in forest assessment and modelling. Tree heights are used for estimating timber volume, site index and other important variables related to forest growth and yield, succession and carbon budget models. However, the diameter at breast height (dbh) can be obtained more accurately, and at lower cost, than total tree height. Hence, generalized height-diameter (h-d) models that predict tree height from dbh, age and other covariates are needed. For a more flexible but biologically plausible estimation of covariate effects we use shape constrained generalized additive models as an extension of existing h-d model approaches. We use causal site parameters such as index of aridity to enhance the generality and causality of the models and to enable predictions under projected changeable climatic conditions. Methods: We develop unconstrained generalized additive models (GAM) and shape constrained generalized additive models (SCAM) for investigating the possible effects of tree-specific parameters such as tree age, relative diameter at breast height, and site-specific parameters such as index of aridity and sum of daily mean temperature during the vegetation period, on the h-d relationship of forests in Lower Saxony, Germany. Results: Some of the derived effects, e.g. the effects of age, index of aridity and sum of daily mean temperature, have significantly non-linear patterns. The need for using SCAM results from the fact that some of the model effects show partially implausible patterns, especially at the boundaries of data ranges. The derived model predicts monotonically increasing levels of tree height with increasing age and temperature sum and decreasing aridity and social rank of a tree within a stand. The definition of constraints leads to only a marginal or minor decline in model statistics like AIC. An observed structured spatial trend in tree height is modelled via 2-dimensional surface

  6. A selenium-deficient Caco-2 cell model for assessing differential incorporation of chemical or food selenium into glutathione peroxidase.

    Science.gov (United States)

    Zeng, Huawei; Botnen, James H; Johnson, Luann K

    2008-01-01

    Assessing the ability of a selenium (Se) sample to induce cellular glutathione peroxidase (GPx) activity in Se-deficient animals is the most commonly used method to determine Se bioavailability. Our goal is to establish a Se-deficient cell culture model with differential incorporation of Se chemical forms into GPx, which may complement the in vivo studies. In the present study, we developed a Se-deficient Caco-2 cell model with a serum gradual reduction method. It is well recognized that selenomethionine (SeMet) is the major nutritional source of Se; therefore, SeMet, selenite, or methylselenocysteine (SeMSC) was added to cell culture media with different concentrations and treatment time points. We found that selenite and SeMSC induced GPx more rapidly than SeMet. However, SeMet was better retained as it is incorporated into proteins in place of methionine; compared with 8-, 24-, or 48-h treatment, 72-h Se treatment was a more sensitive time point to measure the potential of GPx induction in all tested concentrations. Based on induction of GPx activity, the cellular bioavailability of Se from an extract of selenobroccoli after a simulated gastrointestinal digestion was comparable with that of SeMSC and SeMet. These in vitro data are, for the first time, consistent with previous published data regarding selenite and SeMet bioavailability in animal models and Se chemical speciation studies with broccoli. Thus, Se-deficient Caco-2 cell model with differential incorporation of chemical or food forms of Se into GPx provides a new tool to study the cellular mechanisms of Se bioavailability.

  7. Integrative modelling of animal movement: incorporating in situ habitat and behavioural information for a migratory marine predator.

    Science.gov (United States)

    Bestley, Sophie; Jonsen, Ian D; Hindell, Mark A; Guinet, Christophe; Charrassin, Jean-Benoît

    2013-01-07

    A fundamental goal in animal ecology is to quantify how environmental (and other) factors influence individual movement, as this is key to understanding responsiveness of populations to future change. However, quantitative interpretation of individual-based telemetry data is hampered by the complexity of, and error within, these multi-dimensional data. Here, we present an integrative hierarchical Bayesian state-space modelling approach where, for the first time, the mechanistic process model for the movement state of animals directly incorporates both environmental and other behavioural information, and observation and process model parameters are estimated within a single model. When applied to a migratory marine predator, the southern elephant seal (Mirounga leonina), we find the switch from directed to resident movement state was associated with colder water temperatures, relatively short dive bottom time and rapid descent rates. The approach presented here can have widespread utility for quantifying movement-behaviour (diving or other)-environment relationships across species and systems.

  8. Mathematical Modeling of Loop Heat Pipes with Multiple Capillary Pumps and Multiple Condensers. Part 1: Steady State Simulations

    Science.gov (United States)

    Hoang, Triem T.; O'Connell, Tamara; Ku, Jentung

    2004-01-01

    Loop Heat Pipes (LHPs) have proven themselves as reliable and robust heat transport devices for spacecraft thermal control systems. So far, the LHPs in earth-orbit satellites have performed as well as expected. Conventional LHPs usually consist of a single capillary pump for heat acquisition and a single condenser for heat rejection. Multiple pump/multiple condenser LHPs have been shown to function very well in ground testing. Nevertheless, the test results of a dual pump/condenser LHP also revealed that it behaved in a complicated manner due to the interaction between the pumps and condensers. Needless to say, more research is needed before such systems are ready for 0-g deployment. One research area that compels immediate attention is the analytical modeling of LHPs, particularly of transient phenomena. Modeling a single pump/single condenser LHP is difficult enough; only a handful of computer codes are available for both steady state and transient simulations of conventional LHPs, and no previous effort was made to develop an analytical model (or even a complete theory) to predict the operational behavior of multiple pump/multiple condenser LHP systems. The current research project offers a basic theory of multiple pump/multiple condenser LHP operation. From it, a computer code was developed to predict the LHP saturation temperature in accordance with the system operating and environmental conditions.

  9. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

    Directory of Open Access Journals (Sweden)

    Dirk Temme

    2008-12-01

    Full Text Available Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models to enhance the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism as well as attitudes such as a desire for flexibility impact on travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.
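
    The choice component at the core of an ICLV model is a multinomial logit over utilities that include latent-variable terms. The sketch below shows only that logit kernel; the coefficients, alternatives and the latent "flexibility" scores are hypothetical stand-ins, not the study's estimates.

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j),
    computed with the max subtracted for numerical stability."""
    m = max(utilities)
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# systematic utilities for car / bus / train; beta_flex * flex_i stands in
# for an ICLV-style latent-attitude contribution (hypothetical values)
beta_time, beta_flex = -0.1, 0.8
times = [20.0, 45.0, 35.0]
flex = [1.0, 0.2, 0.4]
V = [beta_time * t + beta_flex * f for t, f in zip(times, flex)]
probs = choice_probabilities(V)
```

    In a full ICLV estimation the latent scores are not observed; they are identified jointly from indicator variables via the SEM part, which is what Mplus handles in the paper.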

  10. Incorporating Protein Biosynthesis into the Saccharomyces cerevisiae Genome-scale Metabolic Model

    DEFF Research Database (Denmark)

    Olivares Hernandez, Roberto

    Based on stoichiometric biochemical equations that occur in the cell, genome-scale metabolic models can quantify the metabolic fluxes, which are regarded as the final representation of the physiological state of the cell. For Saccharomyces cerevisiae the genome-scale model has been construc...

  11. Quantitative analysis of CT brain images: a statistical model incorporating partial volume and beam hardening effects

    International Nuclear Information System (INIS)

    McLoughlin, R.F.; Ryan, M.V.; Heuston, P.M.; McCoy, C.T.; Masterson, J.B.

    1992-01-01

    The purpose of this study was to construct and evaluate a statistical model for the quantitative analysis of computed tomographic brain images. Data were derived from standard sections in 34 normal studies. A model representing the intracranial pure tissue and partial volume areas, with allowance for beam hardening, was developed. The average percentage error in estimation of areas, derived from phantom tests using the model, was 28.47%. We conclude that our model is not sufficiently accurate to be of clinical use, even though allowance was made for partial volume and beam hardening effects. (author)

  12. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    KAUST Repository

    Sang, Huiyan; Jun, Mikyoung; Huang, Jianhua Z.

    2011-01-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models

  13. A mathematical model for maximizing the value of phase 3 drug development portfolios incorporating budget constraints and risk.

    Science.gov (United States)

    Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa

    2013-05-10

    We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
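
    The integer program described above can be illustrated with an exhaustive toy version: choose the subset of phase 3 trials that maximizes total expected NPV within a budget. Real formulations also optimize sample sizes and schedules; the drugs, costs and NPVs below are hypothetical.

```python
from itertools import combinations

def best_portfolio(trials, budget):
    """Brute-force version of the integer program: pick the subset of
    phase 3 trials with maximal total expected NPV within the budget."""
    best_set, best_value = (), 0.0
    for r in range(len(trials) + 1):
        for subset in combinations(trials, r):
            cost = sum(c for _, c, _ in subset)
            value = sum(v for _, _, v in subset)
            if cost <= budget and value > best_value:
                best_set, best_value = subset, value
    return [name for name, _, _ in best_set], best_value

# (drug, trial cost, expected NPV) in $M -- hypothetical numbers
trials = [("A", 60, 150.0), ("B", 40, 90.0), ("C", 30, 80.0), ("D", 50, 100.0)]
chosen, value = best_portfolio(trials, budget=100)
```

    Re-running the same optimization with updated costs, probabilities or budgets is the 'what-if' analysis the abstract highlights; an IP solver replaces the brute-force loop at realistic portfolio sizes.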

  14. Group spike-and-slab lasso generalized linear models for disease prediction and associated genes detection by incorporating pathway information.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun

    2018-03-15

    Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces a self-adaptive shrinkage amount on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method on several simulated scenarios, by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.
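
    The group-wise selection idea can be shown in miniature with the plain group-lasso proximal (shrinkage) step, which zeroes weak groups as a whole. This is a simpler stand-in for the spike-and-slab algorithm in the paper, shown only to illustrate selection between groups; the coefficient values are made up.

```python
import math

def group_soft_threshold(beta_groups, lam):
    """Group-wise shrinkage: each coefficient group beta_g is scaled by
    max(0, 1 - lam*sqrt(p_g)/||beta_g||_2), so an entire weak group
    (e.g. a pathway) is set to zero together."""
    out = []
    for beta in beta_groups:
        norm = math.sqrt(sum(b * b for b in beta))
        scale = max(0.0, 1.0 - lam * math.sqrt(len(beta)) / norm) if norm else 0.0
        out.append([scale * b for b in beta])
    return out

# two hypothetical pathways: the strong group survives (shrunk),
# the weak group is removed entirely
shrunk = group_soft_threshold([[3.0, 4.0], [0.3, 0.1]], lam=1.0)
```

    The spike-and-slab mixture prior in the paper goes further by adapting the shrinkage amount per coefficient, so strong signals inside a retained group are shrunk less than this uniform rule would.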

  15. Aerofoil broadband and tonal noise modelling using stochastic sound sources and incorporated large scale fluctuations

    Science.gov (United States)

    Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.

    2017-12-01

    The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on the combination of incorporated vortex-shedding resolved flow available from Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via the stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment for broadband and tonal acoustic noise sources at the source level, thus accounting for linear source interference as well as possible non-linear source interaction effects. Once the sound sources are determined, Acoustic Perturbation Equations (APE-4) are solved in the time domain for the sound propagation. Results of the method's application for two aerofoil benchmark cases, with both sharp and blunt trailing edges, are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and brought into the equation. Encouraging results have been obtained for benchmark test cases using the new technique, which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.

  16. Incorporating technology buying behaviour into UK-based long term domestic stock energy models to provide improved policy analysis

    International Nuclear Information System (INIS)

    Lee, Timothy; Yao, Runming

    2013-01-01

    The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion due to their higher resolution. The weakness of existing bottom-up models at capturing individual green technology buying behaviour has been identified. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods to incorporate buying behaviour within a domestic energy forecast model. Among the three methods, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, which demonstrates the feasibility of an agent approach. This model shows that an agent-based approach is promising as a means to predict the effectiveness of various policy measures. - Highlights: ► Long term energy models are reviewed with a focus on UK domestic stock models. ► Existing models are found weak in modelling green technology buying behaviour. ► Agent models, Markov chains and neural networks are considered as solutions. ► Agent-based modelling (ABM) is found to be the most promising approach. ► A prototype ABM is developed and testing indicates a lot of potential.
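    The agent-based approach the review favours can be illustrated with a minimal diffusion sketch, in which each household agent's probability of buying a green technology rises with the fraction of prior adopters; all names and parameter values below are hypothetical and not taken from the prototype model.

```python
import random

def simulate_adoption(n_agents=1000, years=20, base_p=0.02, peer_w=0.3, seed=1):
    """Toy agent-based diffusion of a green technology: each year a
    non-adopter adopts with probability base_p plus a peer-influence term
    proportional to the current adopter fraction (values are illustrative,
    not calibrated to UK stock data)."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    trajectory = []
    for _ in range(years):
        frac = sum(adopted) / n_agents  # social influence signal
        for i in range(n_agents):
            if not adopted[i] and rng.random() < base_p + peer_w * frac:
                adopted[i] = True
        trajectory.append(sum(adopted) / n_agents)
    return trajectory
```

    A policy lever such as a subsidy would enter as a change to `base_p`, which is what makes this class of model attractive for testing policy measures.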

  17. Incorporating NDVI in a gravity model setting to describe spatio-temporal patterns of Lyme borreliosis incidence

    Science.gov (United States)

    Barrios, J. M.; Verstraeten, W. W.; Farifteh, J.; Maes, P.; Aerts, J. M.; Coppin, P.

    2012-04-01

    Lyme borreliosis (LB) is the most common tick-borne disease in Europe and incidence growth has been reported in several European countries during the last decade. LB is caused by the bacterium Borrelia burgdorferi and the main vector of this pathogen in Europe is the tick Ixodes ricinus. LB incidence and spatial spread is greatly dependent on environmental conditions impacting habitat, demography and trophic interactions of ticks and the wide range of organisms ticks parasitise. The landscape configuration is also a major determinant of tick habitat conditions and -very important- of the fashion and intensity of human interaction with vegetated areas, i.e. human exposure to the pathogen. Hence, spatial notions such as distance and adjacency between urban and vegetated environments are related to human exposure to tick bites and, thus, to risk. This work tested the adequacy of a gravity model setting to model the observed spatio-temporal pattern of LB as a function of location and size of urban and vegetated areas and the seasonal and annual change in the vegetation dynamics as expressed by MODIS NDVI. Opting for this approach implies an analogy with Newton's law of universal gravitation in which the attraction forces between two bodies are directly proportional to the bodies' masses and inversely proportional to distance. Similar implementations have proven useful in fields like trade modeling, health care service planning, and disease mapping, among others. In our implementation, the size of human settlements and vegetated systems and the distance separating these landscape elements are considered the 'bodies'; and the 'attraction' between them is an indicator of exposure to the pathogen. A novel element of this implementation is the incorporation of NDVI to account for the seasonal and annual variation in risk. 
The importance of incorporating this indicator of vegetation activity resides in the fact that alterations of LB incidence pattern observed the last decade have been ascribed
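    A minimal sketch of the gravity-model analogy above, assuming exposure for one settlement is summed over vegetated patches with NDVI entering as a seasonal weight; the data layout, the exponent beta = 2 (the Newtonian analogy), and the multiplicative NDVI factor are illustrative readings of the abstract, not the authors' exact formulation.

```python
def gravity_exposure(settlement_pop, patches, beta=2.0):
    """Gravity-style exposure index for a settlement: attraction to each
    vegetated patch grows with population * patch area * NDVI and falls
    with distance**beta, by analogy with Newtonian gravitation."""
    return sum(settlement_pop * p["area"] * p["ndvi"] / p["dist"] ** beta
               for p in patches)
```

    Refreshing the `ndvi` values per season is what would let the index track the seasonal and annual variation in risk that the MODIS time series captures.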

  18. On Rationality of Decision Models Incorporating Emotion-Related Valuing and Hebbian Learning

    NARCIS (Netherlands)

    Treur, J.; Umair, M.

    2011-01-01

    In this paper an adaptive decision model based on predictive loops through feeling states is analysed from the perspective of rationality. Four different variations of Hebbian learning are considered for different types of connections in the decision model. To assess the extent of rationality, a

  19. Approaches to incorporating climate change effects in state and transition simulation models of vegetation

    Science.gov (United States)

    Becky K. Kerns; Miles A. Hemstrom; David Conklin; Gabriel I. Yospin; Bart Johnson; Dominique Bachelet; Scott Bridgham

    2012-01-01

    Understanding landscape vegetation dynamics often involves the use of scientifically-based modeling tools that are capable of testing alternative management scenarios given complex ecological, management, and social conditions. State-and-transition simulation model (STSM) frameworks and software such as PATH and VDDT are commonly used tools that simulate how landscapes...

  20. Incorporating additional tree and environmental variables in a lodgepole pine stem profile model

    Science.gov (United States)

    John C. Byrne

    1993-01-01

    A new variable-form segmented stem profile model is developed for lodgepole pine (Pinus contorta) trees from the northern Rocky Mountains of the United States. I improved estimates of stem diameter by predicting two of the model coefficients with linear equations using a measure of tree form, defined as a ratio of dbh and total height. Additional improvements were...

  1. Mathematical Modelling in the Junior Secondary Years: An Approach Incorporating Mathematical Technology

    Science.gov (United States)

    Lowe, James; Carter, Merilyn; Cooper, Tom

    2018-01-01

    Mathematical models are conceptual processes that use mathematics to describe, explain, and/or predict the behaviour of complex systems. This article is written for teachers of mathematics in the junior secondary years (including out-of-field teachers of mathematics) who may be unfamiliar with mathematical modelling, to explain the steps involved…

  2. Incorporating Response Times in Item Response Theory Models of Reading Comprehension Fluency

    Science.gov (United States)

    Su, Shiyang

    2017-01-01

    With the online assessment becoming mainstream and the recording of response times becoming straightforward, the importance of response times as a measure of psychological constructs has been recognized and the literature of modeling times has been growing during the last few decades. Previous studies have tried to formulate models and theories to…

  3. Incorporating Video Modeling into a School-Based Intervention for Students with Autism Spectrum Disorders

    Science.gov (United States)

    Wilson, Kaitlyn P.

    2013-01-01

    Purpose: Video modeling is an intervention strategy that has been shown to be effective in improving the social and communication skills of students with autism spectrum disorders, or ASDs. The purpose of this tutorial is to outline empirically supported, step-by-step instructions for the use of video modeling by school-based speech-language…

  4. LINKING MICROBES TO CLIMATE: INCORPORATING MICROBIAL ACTIVITY INTO CLIMATE MODELS COLLOQUIUM

    Energy Technology Data Exchange (ETDEWEB)

    DeLong, Edward; Harwood, Caroline; Reid, Ann

    2011-01-01

    This report explains the connection between microbes and climate, discusses in general terms what modeling is and how it is applied to climate, and discusses the need for knowledge in microbial physiology, evolution, and ecology to contribute to the determination of fluxes and rates in climate models. It recommends a multi-pronged approach to address the gaps.

  5. Stabilization of multiple rib fractures in a canine model.

    Science.gov (United States)

    Huang, Ke-Nan; Xu, Zhi-Fei; Sun, Ju-Xian; Ding, Xin-Yu; Wu, Bin; Li, Wei; Qin, Xiong; Tang, Hua

    2014-12-01

    Operative stabilization is frequently used in the clinical treatment of multiple rib fractures (MRF); however, no ideal material exists for use in this fixation. This study investigates a newly developed biodegradable plate system for the stabilization of MRF. Silk fiber-reinforced polycaprolactone (SF/PCL) plates were developed for rib fracture stabilization and studied using a canine flail chest model. Adult mongrel dogs were divided into three groups: one group received the SF/PCL plates, one group received standard clinical steel plates, and the final group did not undergo operative fracture stabilization (n = 6 for each group). Radiographic, mechanical, and histologic examination was performed to evaluate the effectiveness of the biodegradable material for the stabilization of the rib fractures. No nonunions and no infections were found when using SF/PCL plates. The fracture sites collapsed in the untreated control group, leading to obvious chest wall deformity not encountered in the two groups that underwent operative stabilization. Our experimental study shows that the SF/PCL plate has the biocompatibility and mechanical strength suitable for fixation of MRF and is potentially ideal for the treatment of these injuries. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Multiplicative multifractal modeling and discrimination of human neuronal activity

    International Nuclear Information System (INIS)

    Zheng Yi; Gao Jianbo; Sanchez, Justin C.; Principe, Jose C.; Okun, Michael S.

    2005-01-01

    Understanding neuronal firing patterns is one of the most important problems in theoretical neuroscience. It is also very important for clinical neurosurgery. In this Letter, we introduce a computational procedure to examine whether neuronal firing recordings could be characterized by cascade multiplicative multifractals. By analyzing raw recording data as well as generated spike train data from 3 patients collected in two brain areas, the globus pallidus externa (GPe) and the globus pallidus interna (GPi), we show that the neural firings are consistent with a multifractal process over a certain time scale range (t1, t2), where t1 is argued to be not smaller than the mean inter-spike-interval of neuronal firings, while t2 may be related to the time that neuronal signals propagate in the major neural branching structures pertinent to GPi and GPe. The generalized dimension spectrum Dq effectively differentiates the two brain areas, both intra- and inter-patients. For distinguishing between GPe and GPi, it is further shown that the cascade model is more effective than the methods recently examined by Schiff et al. as well as the Fano factor analysis. Therefore, the methodology may be useful in developing computer aided tools to help clinicians perform precision neurosurgery in the operating room.
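    The cascade multiplicative multifractal idea can be sketched as a binomial cascade, in which the measure on an interval is repeatedly split between its two halves with a fixed multiplier; the multiplier value and the random side assignment below are illustrative, not the model fitted in the Letter.

```python
import random

def multiplicative_cascade(levels=10, p=0.7, seed=0):
    """Binomial multiplicative cascade sketch: at each level every interval
    splits in two, one half receiving fraction p of its mass and the other
    1 - p (side chosen at random). p = 0.7 is an arbitrary illustrative
    multiplier; total mass is conserved at every level."""
    rng = random.Random(seed)
    weights = [1.0]
    for _ in range(levels):
        nxt = []
        for w in weights:
            if rng.random() < 0.5:
                nxt += [w * p, w * (1 - p)]
            else:
                nxt += [w * (1 - p), w * p]
        weights = nxt
    return weights
```

    The strongly uneven weights such a cascade produces are what the generalized dimension spectrum Dq summarizes when the model is fitted to inter-spike-interval data.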

  7. A Fuzzy Logic Framework for Integrating Multiple Learned Models

    Energy Technology Data Exchange (ETDEWEB)

    Hartog, Bobi Kai Den [Univ. of Nebraska, Lincoln, NE (United States)

    1999-03-01

    The Artificial Intelligence field of Integrating Multiple Learned Models (IMLM) explores ways to combine results from sets of trained programs. Aroclor Interpretation is an ill-conditioned problem in which trained programs must operate in scenarios outside their training ranges because it is intractable to train them completely. Consequently, they fail in ways related to the scenarios. We developed a general-purpose IMLM solution, the Combiner, and applied it to Aroclor Interpretation. The Combiner's first step, Scenario Identification (SI), learns rules from very sparse, synthetic training data consisting of results from a suite of trained programs called Methods. SI produces fuzzy belief weights for each scenario by approximately matching the rules. The Combiner's second step, Aroclor Presence Detection (AP), classifies each of three Aroclors as present or absent in a sample. The third step, Aroclor Quantification (AQ), produces quantitative values for the concentration of each Aroclor in a sample. AP and AQ use automatically learned empirical biases for each of the Methods in each scenario. Through fuzzy logic, AP and AQ combine scenario weights, automatically learned biases for each of the Methods in each scenario, and Methods' results to determine results for a sample.
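    A toy version of the Combiner's weighting logic, assuming scenario belief weights and per-Method, per-scenario biases are given; the data layout and the simple bias-corrected weighted average are assumptions, not the original fuzzy system.

```python
def combine(methods, scenario_weights, biases):
    """Combiner-style sketch: each Method's result is corrected by a learned
    per-scenario bias, Methods are averaged, and scenarios contribute in
    proportion to their fuzzy belief weights (all structure illustrative)."""
    total = sum(scenario_weights.values())
    out = 0.0
    for scen, w in scenario_weights.items():
        corrected = [r - biases[scen][m] for m, r in methods.items()]
        out += (w / total) * sum(corrected) / len(corrected)
    return out
```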

  8. Multiple models guide strategies for agricultural nutrient reductions

    Science.gov (United States)

    Scavia, Donald; Kalcic, Margaret; Muenich, Rebecca Logsdon; Read, Jennifer; Aloysius, Noel; Bertani, Isabella; Boles, Chelsie; Confesor, Remegio; DePinto, Joseph; Gildow, Marie; Martin, Jay; Redder, Todd; Robertson, Dale M.; Sowa, Scott P.; Wang, Yu-Chen; Yen, Haw

    2017-01-01

    In response to degraded water quality, federal policy makers in the US and Canada called for a 40% reduction in phosphorus (P) loads to Lake Erie, and state and provincial policy makers in the Great Lakes region set a load-reduction target for the year 2025. Here, we configured five separate SWAT (US Department of Agriculture's Soil and Water Assessment Tool) models to assess load reduction strategies for the agriculturally dominated Maumee River watershed, the largest P source contributing to toxic algal blooms in Lake Erie. Although several potential pathways may achieve the target loads, our results show that any successful pathway will require large-scale implementation of multiple practices. For example, one successful pathway involved targeting 50% of row cropland that has the highest P loss in the watershed with a combination of three practices: subsurface application of P fertilizers, planting cereal rye as a winter cover crop, and installing buffer strips. Achieving these levels of implementation will require local, state/provincial, and federal agencies to collaborate with the private sector to set shared implementation goals and to demand innovation and honest assessments of water quality-related programs, policies, and partnerships.

  9. Calcium Intervention Ameliorates Experimental Model of Multiple Sclerosis

    Directory of Open Access Journals (Sweden)

    Dariush Haghmorad

    2014-05-01

    Objective: Multiple sclerosis (MS) is the most common inflammatory disease of the CNS. Experimental autoimmune encephalomyelitis (EAE) is a widely used model for MS. In the present research, our aim was to test the therapeutic efficacy of Calcium (Ca) in an experimental model of MS. Methods: In this study the experiment was done on C57BL/6 mice. EAE was induced using 200 μg of the MOG35-55 peptide emulsified in CFA and injected subcutaneously on day 0 over two flank areas. In addition, 250 ng of pertussis toxin was injected on days 0 and 2. In the treatment group, 30 mg/kg Ca was administered intraperitoneally four times at regular 48 hour intervals. The mice were sacrificed 21 days after EAE induction and blood samples were taken from their hearts. The brains of mice were removed for histological analysis and their isolated splenocytes were cultured. Results: Our results showed that treatment with Ca caused a significant reduction in the severity of the EAE. Histological analysis indicated that there was no plaque in brain sections of the Ca treated group of mice whereas 4 ± 1 plaques were detected in brain sections of controls. The density of mononuclear infiltration in the CNS of Ca treated mice was lower than in controls. The serum level of Nitric Oxide in the treatment group was lower than in the control group but was not significant. Moreover, the levels of IFN-γ in cell culture supernatant of splenocytes in treated mice were significantly lower than in the control group. Conclusion: The data indicates that Ca intervention can effectively attenuate EAE progression.

  10. A model for arsenic anti-site incorporation in GaAs grown by hydride vapor phase epitaxy

    Energy Technology Data Exchange (ETDEWEB)

    Schulte, K. L.; Kuech, T. F. [Department of Chemical and Biological Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States)

    2014-12-28

    GaAs growth by hydride vapor phase epitaxy (HVPE) has regained interest as a potential route to low cost, high efficiency thin film photovoltaics. In order to attain the highest efficiencies, deep level defect incorporation in these materials must be understood and controlled. The arsenic anti-site defect, As_Ga or EL2, is the predominant deep level defect in HVPE-grown GaAs. In the present study, the relationships between HVPE growth conditions and incorporation of EL2 in GaAs epilayers were determined. Epitaxial n-GaAs layers were grown under a wide range of deposition temperatures (T_D) and gallium chloride partial pressures (P_GaCl), and the EL2 concentration, [EL2], was determined by deep level transient spectroscopy. [EL2] agreed with equilibrium thermodynamic predictions in layers grown under conditions in which the growth rate, R_G, was controlled by conditions near thermodynamic equilibrium. [EL2] fell below equilibrium levels when R_G was controlled by surface kinetic processes, with the disparity increasing as R_G decreased. The surface chemical composition during growth was determined to have a strong influence on EL2 incorporation. Under thermodynamically limited growth conditions, e.g., high T_D and/or low P_GaCl, the surface vacancy concentration was high and the bulk crystal was close to equilibrium with the vapor phase. Under kinetically limited growth conditions, e.g., low T_D and/or high P_GaCl, the surface attained a high GaCl coverage, blocking As adsorption. This competitive adsorption process reduced the growth rate and also limited the amount of arsenic that incorporated as As_Ga. A defect incorporation model, which accounted for the surface concentration of arsenic as a function of the growth conditions, was developed. This model was used to identify optimal growth parameters for the growth of thin films for photovoltaics, conditions in which a high growth rate and low [EL2] could be

  11. Incorporation of Plasticity and Damage Into an Orthotropic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    Science.gov (United States)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Rajan,Subramaniam; Blackenhorn, Gunther

    2015-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased usage in the aerospace and automotive industries. While there are several composite material models currently available within commercial transient dynamic finite element codes, several features have been identified as being lacking in the currently available material models that could substantially enhance the predictive capability of the impact simulations. A specific desired feature pertains to the incorporation of both plasticity and damage within the material model. Another desired feature relates to using experimentally based tabulated stress-strain input to define the evolution of plasticity and damage as opposed to specifying discrete input properties (such as modulus and strength) and employing analytical functions to track the response of the material. To begin to address these needs, a combined plasticity and damage model suitable for use with both solid and shell elements is being developed for implementation within the commercial code LS-DYNA. The plasticity model is based on extending the Tsai-Wu composite failure model into a strain-hardening based orthotropic plasticity model with a non-associative flow rule. The evolution of the yield surface is determined based on tabulated stress-strain curves in the various normal and shear directions and is tracked using the effective plastic strain. The effective plastic strain is computed by using the non-associative flow rule in combination with appropriate numerical methods. To compute the evolution of damage, a strain equivalent semi-coupled formulation is used, in which a load in one direction results in a stiffness reduction in multiple coordinate directions. 
A specific laminated composite is examined to demonstrate the process of characterizing and analyzing the response of a composite using the developed

  12. A LabVIEW model incorporating an open-loop arterial impedance and a closed-loop circulatory system.

    Science.gov (United States)

    Cole, R T; Lucas, C L; Cascio, W E; Johnson, T A

    2005-11-01

    While numerous computer models exist for the circulatory system, many are limited in scope, contain unwanted features or incorporate complex components specific to unique experimental situations. Our purpose was to develop a basic, yet multifaceted, computer model of the left heart and systemic circulation in LabVIEW having universal appeal without sacrificing crucial physiologic features. The program we developed employs Windkessel-type impedance models in several open-loop configurations and a closed-loop model coupling a lumped impedance and ventricular pressure source. The open-loop impedance models demonstrate afterload effects on arbitrary aortic pressure/flow inputs. The closed-loop model catalogs the major circulatory waveforms with changes in afterload, preload, and left heart properties. Our model provides an avenue for expanding the use of the ventricular equations through closed-loop coupling that includes a basic coronary circuit. Tested values used for the afterload components and the effects of afterload parameter changes on various waveforms are consistent with published data. We conclude that this model offers the ability to alter several circulatory factors and digitally catalog the most salient features of the pressure/flow waveforms employing a user-friendly platform. These features make the model a useful instructional tool for students as well as a simple experimental tool for cardiovascular research.
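    The Windkessel-type afterload at the heart of such models can be sketched as a three-element Windkessel integrated with forward Euler; the parameter values and interface below are illustrative and do not reproduce the LabVIEW program.

```python
def windkessel3(flow, dt=0.001, Rc=0.05, Rp=1.0, C=1.5, p0=80.0):
    """Three-element Windkessel afterload sketch: proximal resistance Rc,
    peripheral resistance Rp, compliance C (illustrative values, nominally
    mmHg*s/mL and mL/mmHg). Integrates dP/dt = (Q - P/Rp)/C with forward
    Euler and adds the Rc*Q drop to get inlet (aortic) pressure."""
    p = p0
    inlet = []
    for q in flow:
        p += dt * (q - p / Rp) / C  # compliance chamber pressure
        inlet.append(p + Rc * q)    # pressure seen at the inlet
    return inlet
```

    Feeding in a pulsatile `flow` waveform instead of a constant one reproduces the familiar afterload shaping of the aortic pressure trace that open-loop impedance models are used to demonstrate.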

  13. A flexible additive multiplicative hazard model

    DEFF Research Database (Denmark)

    Martinussen, T.; Scheike, T. H.

    2002-01-01

    Aalen's additive model; Counting process; Cox regression; Hazard model; Proportional excess hazard model; Time-varying effect

  14. Incorporating Daily Flood Control Objectives Into a Monthly Stochastic Dynamic Programming Model for a Hydroelectric Complex

    Science.gov (United States)

    Druce, Donald J.

    1990-01-01

    A monthly stochastic dynamic programming model was recently developed and implemented at British Columbia (B.C.) Hydro to provide decision support for short-term energy exports and, if necessary, for flood control on the Peace River in northern British Columbia. The model establishes the marginal cost of supplying energy from the B.C. Hydro system, as well as a monthly operating policy for the G.M. Shrum and Peace Canyon hydroelectric plants and the Williston Lake storage reservoir. A simulation model capable of following the operating policy then determines the probability of refilling Williston Lake and possible spill rates and volumes. Reservoir inflows are input to both models in daily and monthly formats. The results indicate that flood control can be accommodated without sacrificing significant export revenue.

  15. Incorporating daily flood control objectives into a monthly stochastic dynamic programming model for a hydroelectric complex

    Energy Technology Data Exchange (ETDEWEB)

    Druce, D.J. (British Columbia Hydro and Power Authority, Vancouver, British Columbia (Canada))

    1990-01-01

    A monthly stochastic dynamic programming model was recently developed and implemented at British Columbia (B.C.) Hydro to provide decision support for short-term energy exports and, if necessary, for flood control on the Peace River in northern British Columbia. The model establishes the marginal cost of supplying energy from the B.C. Hydro system, as well as a monthly operating policy for the G.M. Shrum and Peace Canyon hydroelectric plants and the Williston Lake storage reservoir. A simulation model capable of following the operating policy then determines the probability of refilling Williston Lake and possible spill rates and volumes. Reservoir inflows are input to both models in daily and monthly formats. The results indicate that flood control can be accommodated without sacrificing significant export revenue.

  16. Incorporating Pass-Phrase Dependent Background Models for Text-Dependent Speaker verification

    DEFF Research Database (Denmark)

    Sarkar, Achintya Kumar; Tan, Zheng-Hua

    2018-01-01

    In this paper, we propose pass-phrase dependent background models (PBMs) for text-dependent (TD) speaker verification (SV) to integrate the pass-phrase identification process into the conventional TD-SV system, where a PBM is derived from a text-independent background model through adaptation using the utterances of a particular pass-phrase. During training, pass-phrase specific target speaker models are derived from the particular PBM using the training data for the respective target model. While testing, the best PBM is first selected for the test utterance in the maximum likelihood (ML) sense … We show that the proposed method significantly reduces the error rates of text-dependent speaker verification for the non-target types target-wrong and impostor-wrong, while it maintains comparable TD-SV performance with respect to the conventional system when impostors speak a correct utterance.

  17. Incorporation of sedimentological data into a calibrated groundwater flow and transport model

    International Nuclear Information System (INIS)

    Williams, N.J.; Young, S.C.; Barton, D.H.; Hurst, B.T.

    1997-01-01

    Analysis suggests that a high hydraulic conductivity (K) zone is associated with a former river channel at the Portsmouth Gaseous Diffusion Plant (PORTS). A two-dimensional (2-D) and three-dimensional (3-D) groundwater flow model was developed based on a sedimentological model to demonstrate the performance of a horizontal well for plume capture. The model produced a flow field with magnitudes and directions consistent with flow paths inferred from historical trichloroethylene (TCE) plume data. The most dominant feature affecting the well's performance was preferential high- and low-K zones. Based on results from the calibrated flow and transport model, a passive groundwater collection system was designed and built. Initial flow rates and concentrations measured from a gravity-drained horizontal well agree closely with predicted values

  18. Improving the phenotype predictions of a yeast genome-scale metabolic model by incorporating enzymatic constraints

    DEFF Research Database (Denmark)

    Sanchez, Benjamin J.; Zhang, Xi-Cheng; Nilsson, Avlant

    2017-01-01

    … which act as limitations on metabolic fluxes, are not taken into account. Here, we present GECKO, a method that enhances a GEM to account for enzymes as part of reactions, thereby ensuring that each metabolic flux does not exceed its maximum capacity, equal to the product of the enzyme's abundance and turnover number. We applied GECKO to a Saccharomyces cerevisiae GEM and demonstrated that the new model could correctly describe phenotypes that the previous model could not, particularly under high enzymatic pressure conditions, such as yeast growing on different carbon sources in excess, coping with stress, or overexpressing a specific pathway. GECKO also allows direct integration of quantitative proteomics data; by doing so, we significantly reduced flux variability of the model in over 60% of metabolic reactions. Additionally, the model gives insight into the distribution of enzyme usage between
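    The core GECKO constraint, that each flux is capped by enzyme abundance times turnover number, can be sketched as a simple feasibility check; the data layout and numbers are toy assumptions, and the real method embeds these caps in the GEM's stoichiometric structure rather than checking them after the fact.

```python
def capacity_violations(fluxes, abundance, kcat):
    """GECKO-style check sketch: a flux v through an enzyme-catalysed
    reaction must satisfy v <= [E] * kcat. Returns the reactions whose
    flux exceeds that enzymatic capacity bound."""
    return [r for r, v in fluxes.items()
            if v > abundance[r] * kcat[r]]
```

    Measured proteomics abundances tighten these bounds reaction by reaction, which is how integrating such data shrinks the model's flux variability.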

  19. RADMAP: ''as-built'' CAD models incorporating geometrical, radiological and material information

    International Nuclear Information System (INIS)

    Piotrowski, L.; Lubawy, J.L.

    2001-01-01

    EDF intends to achieve successful and cost-effective dismantling of its obsolete nuclear plants. To reach this goal, EDF is currently extending its ''as-built'' 3-D modelling system to also include the location and characteristics of gamma sources in the geometrical models of its nuclear installations. The resulting system (called RADMAP) is a complete CAD chain covering 3-D and gamma data acquisitions, CAD modelling and exploitation of the final model. Its aim is to describe completely the geometrical and radiological state of a particular nuclear environment. This paper presents an overall view of RADMAP. The technical and functional characteristics of each element of the chain are indicated and illustrated using real (EDF) environments/applications. (author)

  20. Incorporation of the time aspect into the liability-threshold model for case-control-family data

    DEFF Research Database (Denmark)

    Cederkvist, Luise; Holst, Klaus K.; Andersen, Klaus K.

    2017-01-01

    … to estimates that are difficult to interpret and are potentially biased. We incorporate the time aspect into the liability-threshold model for case-control-family data following the same approach that has been applied in the twin setting. Thus, the data are considered as arising from a competing risks setting … We evaluate the proposed approach using simulation studies and apply it in the analysis of two Danish register-based case-control-family studies: one on cancer diagnosed in childhood and adolescence, and one on early-onset breast cancer.

  1. On Optimizing H. 264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics

    Directory of Open Access Journals (Sweden)

    Jiang Gangyi

    2010-01-01

    The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved from two aspects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, based on the existing physiological and psychological research findings of human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS) such as contrast sensitivity, multichannel theory, and masking effect. Experiments are conducted, and experimental results show that the improved algorithm can simultaneously enhance the overall subjective visual quality and improve the rate control precision effectively.
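    The quadratic R-D model underlying JVT-G012-style rate control can be sketched by solving R = x1*MAD/Q + x2*MAD/Q^2 for the quantisation step Q; the parameter values below are illustrative, and the paper's modified model is not reproduced here.

```python
import math

def qp_from_quadratic_rd(target_bits, mad, x1, x2):
    """Quadratic R-D model sketch (JVT-G012 style): given a bit budget and
    the frame's MAD (mean absolute difference), solve
    target = x1*MAD/Q + x2*MAD/Q**2 for the quantisation step Q.
    x1, x2 stand in for model parameters normally updated from past frames."""
    # Rearranged: target*Q^2 - x1*MAD*Q - x2*MAD = 0, take the positive root
    a, b, c = target_bits, -x1 * mad, -x2 * mad
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
```

    In a real encoder the resulting step size would then be mapped to a QP value and clipped against neighbouring-frame QPs to avoid quality oscillation.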

  2. A qualitative comparison of fire spread models incorporating wind and slope effects

    Science.gov (United States)

    David R. Weise; Gregory S. Biging

    1997-01-01

    Wind velocity and slope are two critical variables that affect wildland fire rate of spread. The effects of these variables on rate of spread are often combined in rate-of-spread models using vector addition. The various methods used to combine wind and slope effects have seldom been validated or compared due to differences in the models or to lack of data. In this...
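The vector-addition approach described above can be sketched as follows: the dimensionless wind and slope rate-of-spread factors are treated as vectors along the wind and upslope directions, summed, and the resulting magnitude scales the no-wind, no-slope spread rate. This is a minimal Rothermel-style illustration; names and the exact combination rule vary between the models compared in the paper:

```python
import math

def combined_spread(r0, phi_w, wind_dir, phi_s, upslope_dir):
    """Combine dimensionless wind (phi_w) and slope (phi_s) rate-of-spread
    factors by vector addition; directions are in radians.
    Returns the combined spread rate and its direction."""
    x = phi_w * math.cos(wind_dir) + phi_s * math.cos(upslope_dir)
    y = phi_w * math.sin(wind_dir) + phi_s * math.sin(upslope_dir)
    mag = math.hypot(x, y)
    # no-wind, no-slope rate r0 scaled by (1 + combined factor)
    return r0 * (1.0 + mag), math.atan2(y, x)
```

When wind and slope are aligned the factors add directly; when opposed they partially cancel, which is precisely the behaviour the vector formulation is meant to capture.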

  3. Simulation of a severe convective storm using a numerical model with explicitly incorporated aerosols

    Science.gov (United States)

    Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje

    2017-09-01

    Despite the important role aerosols play in all stages of the cloud lifecycle, their representation in numerical weather prediction models is often rather crude. This paper investigates the effects that explicit versus implicit inclusion of aerosols in a microphysics parameterization scheme in the Weather Research and Forecasting (WRF) - Advanced Research WRF (WRF-ARW) model has on cloud dynamics and microphysics. The testbed selected for this study is a severe mesoscale convective system with supercells that struck west and central parts of Serbia in the afternoon of July 21, 2014. Numerical products of two model runs, i.e. one with aerosols explicitly included (WRF-AE) and another with aerosols implicitly assumed (WRF-AI), are compared against precipitation measurements from a surface network of rain gauges, as well as against radar and satellite observations. The WRF-AE model accurately captured the transport of dust from North Africa over the Mediterranean to the Balkan region. On smaller scales, both models displaced the locations of clouds situated above west and central Serbia towards the southeast and under-predicted the maximum values of composite radar reflectivity. Similar to satellite images, WRF-AE shows the mesoscale convective system as a merged cluster of cumulonimbus clouds. Both models over-predicted the precipitation amounts; WRF-AE over-predictions are particularly pronounced in the zones of light rain, while WRF-AI gave larger outliers. Unlike WRF-AI, the WRF-AE approach enables the modelling of the time evolution and influx of aerosols into the cloud, which could be of practical importance in weather forecasting and weather modification. Several likely causes for discrepancies between models and observations are discussed and prospects for further research in this field are outlined.

  4. Debris flow analysis with a one dimensional dynamic run-out model that incorporates entrained material

    Science.gov (United States)

    Luna, Byron Quan; Remaître, Alexandre; van Asch, Theo; Malet, Jean-Philippe; van Westen, Cees

    2010-05-01

    Estimating the magnitude and the intensity of rapid landslides like debris flows is fundamental to evaluate quantitatively the hazard in a specific location. Intensity varies through the travelled course of the flow and can be described by physical features such as deposited volume, velocities, height of the flow, impact forces and pressures. Dynamic run-out models are able to characterize the distribution of the material, its intensity and define the zone where the elements will experience an impact. These models can provide valuable inputs for vulnerability and risk calculations. However, most dynamic run-out models assume a constant volume during the motion of the flow, ignoring the important role of material entrained along its path. Consequently, they neglect that the increase of volume enhances the mobility of the flow and can significantly influence the size of the potential impact area. An appropriate erosion mechanism needs to be established in the analysis of debris flows to improve the results of dynamic modeling and consequently the quantitative evaluation of risk. The objective is to present and test a simple 1D debris flow model with a material entrainment concept based on limit equilibrium considerations and the generation of excess pore water pressure through undrained loading of the in situ bed material. The debris flow propagation model is based on a one dimensional finite difference solution of a depth-averaged form of the Navier-Stokes equations of fluid motion. The flow is treated as a laminar one-phase material, whose behavior is controlled by a visco-plastic Coulomb-Bingham rheology. The model parameters are evaluated and the model performance is tested on a debris flow event that occurred in 2003 in the Faucon torrent (Southern French Alps).
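As a rough illustration of a depth-averaged visco-plastic Coulomb-Bingham closure of the kind described above, the basal resisting stress can be written as a Coulomb friction term on the bed-normal stress, plus a yield stress, plus a viscous term. The 3μv/h viscous form and all parameter names are illustrative assumptions, not the paper's exact formulation:

```python
import math

def coulomb_bingham_resistance(v, h, rho, g, slope, mu_f, tau_y, mu_b):
    """Depth-averaged basal resisting stress (Pa) for a visco-plastic
    Coulomb-Bingham flow: Coulomb friction on the bed-normal stress,
    plus a yield stress, plus a depth-averaged Bingham viscous term."""
    normal_stress = rho * g * h * math.cos(slope)  # bed-normal stress
    frictional = mu_f * normal_stress              # Coulomb friction
    viscous = 3.0 * mu_b * v / h                   # Bingham viscous part
    return frictional + tau_y + viscous
```

In a finite-difference run-out solver this resistance enters the depth-averaged momentum balance as the term opposing the gravitational driving stress.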

  5. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    Science.gov (United States)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
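The mode-splitting idea behind EMTSS can be sketched as follows: each vertical mode is advanced independently with its own stable time step, and the modes are recombined at a common synchronization time. This toy version uses scalar forward-Euler decay modes purely for illustration; the actual scheme integrates linearized gravity-wave terms per mode:

```python
def multirate_advance(modes, t_end):
    """Advance each 'vertical mode' (initial state y0, linear rate lam,
    mode-specific step dt) independently with forward Euler, then
    recombine the modes at the common synchronization time t_end."""
    total = 0.0
    for y0, lam, dt in modes:
        t, y = 0.0, y0
        while t < t_end - 1e-12:
            step = min(dt, t_end - t)   # don't overshoot the sync time
            y += step * lam * y
            t += step
        total += y                      # recombine modes
    return total
```

The efficiency gain comes from letting slow modes take the large steps dictated by the low-frequency CFL condition while only the fast modes are sub-stepped.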

  6. Creating a process for incorporating epidemiological modelling into outbreak management decisions.

    Science.gov (United States)

    Akselrod, Hana; Mercon, Monica; Kirkeby Risoe, Petter; Schlegelmilch, Jeffrey; McGovern, Joanne; Bogucki, Sandy

    2012-01-01

    Modern computational models of infectious diseases greatly enhance our ability to understand new infectious threats and assess the effects of different interventions. The recently-released CDC Framework for Preventing Infectious Diseases calls for increased use of predictive modelling of epidemic emergence for public health preparedness. Currently, the utility of these technologies in preparedness and response to outbreaks is limited by gaps between modelling output and information requirements for incident management. The authors propose an operational structure that will facilitate integration of modelling capabilities into action planning for outbreak management, using the Incident Command System (ICS) and Synchronization Matrix framework. It is designed to be adaptable and scalable for use by state and local planners under the National Response Framework (NRF) and Emergency Support Function #8 (ESF-8). Specific epidemiological modelling requirements are described, and integrated with the core processes for public health emergency decision support. These methods can be used in checklist format to align prospective or real-time modelling output with anticipated decision points, and guide strategic situational assessments at the community level. It is anticipated that formalising these processes will facilitate translation of the CDC's policy guidance from theory to practice during public health emergencies involving infectious outbreaks.

  7. A new mathematical model of gastrointestinal transit incorporating age- and gender-dependent physiological parameters

    International Nuclear Information System (INIS)

    Stubbs, J.B.

    1992-01-01

    As part of the revision by the International Commission on Radiological Protection (ICRP) of its report on Reference Man, an extensive review of the literature regarding anatomy and morphology of the gastrointestinal (GI) tract has been completed. Data on age- and gender-dependent GI physiology and motility may be included in the proposed ICRP report. A new mathematical model describing the transit of substances through the GI tract as well as the absorption and secretion of material in the GI tract has been developed. This mathematical description of GI tract kinetics utilizes more physiologically accurate transit processes than the mathematically simple, but nonphysiological, GI tract model that was used in ICRP Report 30. The proposed model uses a combination of zero- and first-order kinetics to describe motility. Some of the physiological parameters that the new model accounts for include sex, age, pathophysiological condition and meal phase (solid versus liquid). A computer algorithm, written in BASIC, based on this new model has been derived and results are compared to those of the ICRP-30 model
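A minimal sketch of the kind of mixed kinetics described above (zero-order gastric emptying feeding a small-intestine compartment cleared by first-order kinetics) might look like this; the compartment structure and rate constants are illustrative, not those of the proposed ICRP model:

```python
def gi_transit(stomach0, k0, k1, dt, n_steps):
    """Euler integration of a toy two-compartment GI model: zero-order
    gastric emptying at rate k0 (amount/time) feeding a small-intestine
    compartment cleared by first-order kinetics (rate constant k1)."""
    stomach, intestine = stomach0, 0.0
    for _ in range(n_steps):
        emptied = min(k0 * dt, stomach)   # zero-order, bounded by contents
        stomach -= emptied
        intestine += emptied - k1 * intestine * dt
    return stomach, intestine
```

The zero-order stage is what distinguishes this from the purely first-order ICRP-30 formulation: the stomach empties at a constant rate until it is exhausted, rather than exponentially.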

  8. Enhanced stability of car-following model upon incorporation of short-term driving memory

    Science.gov (United States)

    Liu, Da-Wei; Shi, Zhong-Ke; Ai, Wen-Huan

    2017-06-01

    Based on the full velocity difference model, a new car-following model is developed to investigate the effect of short-term driving memory on traffic flow in this paper. Short-term driving memory is introduced as the influence factor of the driver's anticipation behavior. The stability condition of the newly developed model is derived and the modified Korteweg-de Vries (mKdV) equation is constructed to describe the traffic behavior near the critical point. Using numerical methods, the evolution of a small perturbation is investigated first. The results show that the new car-following model improves traffic stability relative to previous models. Starting and braking processes of vehicles at a signalized intersection are also investigated. The numerical simulations illustrate that the new model can successfully describe the driver's anticipation behavior, and that the efficiency and safety of the vehicles passing through the signalized intersection are improved by considering short-term driving memory.
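A hedged sketch of a full-velocity-difference acceleration law with a memory-weighted headway is shown below. The memory weight gamma and the tanh optimal-velocity function are common modelling choices, not necessarily the paper's exact formulation:

```python
import math

def fvd_memory_accel(headway, prev_headway, dv, v, kappa=0.41, lam=0.5,
                     v_max=2.0, h_c=4.0, gamma=0.3):
    """Full-velocity-difference acceleration with short-term memory:
    the effective headway blends the current headway with a remembered
    one, modelling the driver's anticipation from recent experience."""
    h_eff = (1.0 - gamma) * headway + gamma * prev_headway
    v_opt = 0.5 * v_max * (math.tanh(h_eff - h_c) + math.tanh(h_c))
    return kappa * (v_opt - v) + lam * dv
```

At equilibrium (steady headway, zero speed difference, speed equal to the optimal velocity) the acceleration vanishes, which is the baseline around which the linear stability condition is derived.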

  9. The Answering Process for Multiple-Choice Questions in Collaborative Learning: A Mathematical Learning Model Analysis

    Science.gov (United States)

    Nakamura, Yasuyuki; Nishi, Shinnosuke; Muramatsu, Yuta; Yasutake, Koichi; Yamakawa, Osamu; Tagawa, Takahiro

    2014-01-01

    In this paper, we introduce a mathematical model for collaborative learning and the answering process for multiple-choice questions. The collaborative learning model is inspired by the Ising spin model and the model for answering multiple-choice questions is based on their difficulty level. An intensive simulation study predicts the possibility of…

  10. A durability model incorporating safe life methodology and damage tolerance approach to assess first inspection and maintenance period for structures

    International Nuclear Information System (INIS)

    Xiong, J.J.; Shenoi, R.A.

    2009-01-01

    This paper outlines a new durability model to assess the first inspection and maintenance period for structures. Practical scatter factor formulae are presented to determine the safe fatigue crack initiation and propagation lives from the results of a single full-scale test of a complete structure. New theoretical solutions are proposed to determine the s_a-s_m-N surfaces of fatigue crack initiation and propagation. Prediction techniques are then developed to establish the relationship equation between safe fatigue crack initiation and propagation lives with a specific reliability level using a two-stage fatigue damage cumulative rule. A new durability model incorporating safe life and damage tolerance design approaches is derived to assess the first inspection and maintenance period. Finally, the proposed models are applied to assess the first inspection and maintenance period of a fastening structure at the root of a helicopter blade.

  11. A durability model incorporating safe life methodology and damage tolerance approach to assess first inspection and maintenance period for structures

    Energy Technology Data Exchange (ETDEWEB)

    Xiong, J.J. [Aircraft Department, Beihang University, Beijing 100083 (China); Shenoi, R.A. [School of Engineering Sciences, University of Southampton, Southampton SO17 1BJ (United Kingdom)], E-mail: r.a.shenoi@ship.soton.ac.uk

    2009-08-15

    This paper outlines a new durability model to assess the first inspection and maintenance period for structures. Practical scatter factor formulae are presented to determine the safe fatigue crack initiation and propagation lives from the results of a single full-scale test of a complete structure. New theoretical solutions are proposed to determine the s_a-s_m-N surfaces of fatigue crack initiation and propagation. Prediction techniques are then developed to establish the relationship equation between safe fatigue crack initiation and propagation lives with a specific reliability level using a two-stage fatigue damage cumulative rule. A new durability model incorporating safe life and damage tolerance design approaches is derived to assess the first inspection and maintenance period. Finally, the proposed models are applied to assess the first inspection and maintenance period of a fastening structure at the root of a helicopter blade.

  12. Computer calculation of neutron cross sections with Hauser-Feshbach code STAPRE incorporating the hybrid pre-compound emission model

    International Nuclear Information System (INIS)

    Ivascu, M.

    1983-10-01

    Computer codes incorporating advanced nuclear models (optical, statistical and pre-equilibrium decay nuclear reaction models) were used to calculate neutron cross sections needed for fusion reactor technology. The elastic and inelastic scattering, (n,2n), (n,p), (n,n'p), (n,d) and (n,γ) cross sections for the stable molybdenum isotopes Mo-92, 94, 95, 96, 97, 98 and 100 and incident neutron energies from about 100 keV (or the reaction threshold) to 20 MeV were calculated using a consistent set of input parameters. The hydrogen production cross section, which determines the radiation damage in structural materials of fusion reactors, can be simply deduced from the presented results. More elaborate microscopic models of nuclear level density are required for high-accuracy calculations.

  13. A model for determination of human foetus irradiation during intrauterine development when the mother incorporates iodine 131

    International Nuclear Information System (INIS)

    Vasilev, V.; Doncheva, B.

    1989-01-01

    A model is presented for the irradiation calculation of the human foetus during weeks 8-15 of intrauterine development, when the mother chronically incorporates iodine-131. This period is critical for the nervous system of the foetus. Compared to some other authors' models, the method proposed eliminates some uncertainties and takes into account the changes in the activity of the mother's thyroid over time. The model is built on the basis of data from the iodine-131 kinetics of pregnant women and experimental mice. A formula is proposed for the total foetus irradiation calculation including: the internal γ and β irradiation; the external γ and β irradiation from the mother as a whole; and the external γ irradiation from the mother's thyroid.

  14. A novel usage of hydrogen treatment to improve the indium incorporation and internal quantum efficiency of green InGaN/GaN multiple quantum wells simultaneously

    International Nuclear Information System (INIS)

    Ren, Peng; Zhang, Ning; Xue, Bin; Liu, Zhe; Wang, Junxi; Li, Jinmin

    2016-01-01

    The challenge of improving the internal quantum efficiency (IQE) of InGaN-based light emitting diodes (LEDs) in the green light range is referred to as the 'green gap'. However, the IQE of InGaN-based LEDs often drops when the emission peak wavelength is extended by reducing the growth temperature. Although hydrogen (H2) can improve surface morphology, it reduces the indium incorporation significantly. Here, a novel usage of H2 treatment on the GaN barrier before the InGaN quantum well is demonstrated to enhance indium incorporation efficiency and improve the IQE simultaneously for the first time. The mechanism behind it is systematically investigated and explained in detail. The possible reason for this phenomenon is strain relief by the undulant GaN barrier surface after H2 treatment. Test measurements show that applying a 0.2 min H2 treatment on the barrier reduces defects and enhances indium incorporation, which improves the localization effect and finally leads to a higher IQE. Although further increasing the treatment time to 0.4 min incorporates more indium atoms, the IQE decreases at the expense of more defects and a larger polarization field than in the 0.2 min sample. (paper)

  15. The Isinglass Auroral Sounding Rocket Campaign: data synthesis incorporating remote sensing, in situ observations, and modelling

    Science.gov (United States)

    Lynch, K. A.; Clayton, R.; Roberts, T. M.; Hampton, D. L.; Conde, M.; Zettergren, M. D.; Burleigh, M.; Samara, M.; Michell, R.; Grubbs, G. A., II; Lessard, M.; Hysell, D. L.; Varney, R. H.; Reimer, A.

    2017-12-01

    The NASA auroral sounding rocket mission Isinglass was launched from Poker Flat, Alaska, in winter 2017. This mission consists of two separate multi-payload sounding rockets, flown over an array of ground-based observations, including radars and filtered cameras. The science goal is to collect two case studies, in two different auroral events, of the gradient scale sizes of auroral disturbances in the ionosphere. Data from the in situ payloads and the ground-based observations will be synthesized and fed into an ionospheric model, and the results will be studied to learn which scale sizes of ionospheric structuring have significance for magnetosphere-ionosphere auroral coupling. The in situ instrumentation includes thermal ion sensors (at 5 points on the second flight), thermal electron sensors (at 2 points), DC magnetic fields (2 points), DC electric fields (one point, plus the 4 low-resource thermal ion RPA observations of drift on the second flight), and an auroral precipitation sensor (one point). The ground-based array includes filtered auroral imagers, the PFISR and SuperDARN radars, a coherent scatter radar, and a Fabry-Perot interferometer array. The ionospheric model to be used is a 3-D electrostatic model including the effects of ionospheric chemistry. One observational and modelling goal for the mission is to move both observations and models of auroral arc systems into the third (along-arc) dimension. Modern assimilative tools combined with multipoint but low-resource observations allow a new view of the auroral ionosphere, which should allow us to learn more about the auroral zone as a coupled system. Conjugate case studies such as the Isinglass rocket flights allow for a test of the models' interpretation by comparing to in situ data. We aim to develop and improve ionospheric models to the point where they can be used to interpret remote sensing data with confidence without the checkpoint of in situ comparison.

  16. Exclusive description of multiple production on nuclei in the additive quark model. Multiplicity distributions in interactions with heavy nuclei

    International Nuclear Information System (INIS)

    Levchenko, B.B.; Nikolaev, N.N.

    1985-01-01

    In the framework of the additive quark model of multiple production on nuclei we calculate the multiplicity distributions of secondary particles and the correlations between secondary particles in πA and pA interactions with heavy nuclei. We show that intranuclear cascades are responsible for up to 50% of the nuclear increase of the multiplicity of fast particles. We analyze the sensitivity of the multiplicities and their correlations to the choice of the quark-hadronization function. We show that with good accuracy the yield of relativistic secondary particles from heavy and intermediate nuclei depends only on the number N_p of protons knocked out of the nucleus, and not on the mass number of the nucleus (N_p scaling)

  17. Comparative study between a QCD inspired model and a multiple diffraction model

    International Nuclear Information System (INIS)

    Luna, E.G.S.; Martini, A.F.; Menon, M.J.

    2003-01-01

    A comparative study between a QCD Inspired Model (QCDIM) and a Multiple Diffraction Model (MDM) is presented, with focus on the results for pp differential cross section at √s = 52.8 GeV. It is shown that the MDM predictions are in agreement with experimental data, except for the dip region and that the QCDIM describes only the diffraction peak region. Interpretations in terms of the corresponding eikonals are also discussed. (author)

  18. Incorporating harvest rates into the sex-age-kill model for white-tailed deer

    Science.gov (United States)

    Norton, Andrew S.; Diefenbach, Duane R.; Rosenberry, Christopher S.; Wallingford, Bret D.

    2013-01-01

    Although monitoring population trends is an essential component of game species management, wildlife managers rarely have complete counts of abundance. Often, they rely on population models to monitor population trends. As imperfect representations of real-world populations, models must be rigorously evaluated to be applied appropriately. Previous research has evaluated population models for white-tailed deer (Odocoileus virginianus); however, the precision and reliability of these models when tested against empirical measures of variability and bias largely is untested. We were able to statistically evaluate the Pennsylvania sex-age-kill (PASAK) population model using realistic error measured using data from 1,131 radiocollared white-tailed deer in Pennsylvania from 2002 to 2008. We used these data and harvest data (number killed, age-sex structure, etc.) to estimate precision of abundance estimates, identify the most efficient harvest data collection with respect to precision of parameter estimates, and evaluate PASAK model robustness to violation of assumptions. Median coefficient of variation (CV) estimates by Wildlife Management Unit, 13.2% in the most recent year, were slightly above benchmarks recommended for managing game species populations. Doubling reporting rates by hunters or doubling the number of deer checked by personnel in the field reduced median CVs to recommended levels. The PASAK model was robust to errors in estimates for adult male harvest rates but was sensitive to errors in subadult male harvest rates, especially in populations with lower harvest rates. In particular, an error in subadult (1.5-yr-old) male harvest rates resulted in the opposite error in subadult male, adult female, and juvenile population estimates. Also, evidence of a greater harvest probability for subadult female deer when compared with adult (≥2.5-yr-old) female deer resulted in a 9.5% underestimate of the population using the PASAK model. Because obtaining

  19. Incorporation of lysosomal sequestration in the mechanistic model for prediction of tissue distribution of basic drugs.

    Science.gov (United States)

    Assmus, Frauke; Houston, J Brian; Galetin, Aleksandra

    2017-11-15

    The prediction of tissue-to-plasma water partition coefficients (Kpu) from in vitro and in silico data using the tissue-composition based model (Rodgers & Rowland, J Pharm Sci 2005, 94(6):1237-48) is well established. However, distribution of basic drugs, in particular into lysosome-rich lung tissue, tends to be under-predicted by this approach. The aim of this study was to develop an extended mechanistic model for the prediction of Kpu which accounts for lysosomal sequestration and the contribution of different cell types in the tissue of interest. The extended model is based on compound-specific physicochemical properties and tissue composition data to describe drug ionization, distribution into tissue water and drug binding to neutral lipids, neutral phospholipids and acidic phospholipids in tissues, including lysosomes. Physiological data on the types of cells contributing to lung, kidney and liver, their lysosomal content and lysosomal pH were collated from the literature. The predictive power of the extended mechanistic model was evaluated using a dataset of 28 basic drugs (pKa ≥ 7.8; 17 β-blockers, 11 structurally diverse drugs) for which experimentally determined Kpu data in rat tissue have been reported. Accounting for the lysosomal sequestration in the extended mechanistic model improved the accuracy of Kpu predictions in lung compared to the original Rodgers model (56% drugs within 2-fold or 88% within 3-fold of observed values). Reduction in the extent of Kpu under-prediction was also evident in liver and kidney. However, consideration of lysosomal sequestration increased the occurrence of over-predictions, yielding overall comparable model performances for kidney and liver, with 68% and 54% of Kpu values within 2-fold error, respectively. High lysosomal concentration ratios relative to cytosol (>1000-fold) were predicted for the drugs investigated; the extent differed depending on the lysosomal pH and concentration of acidic phospholipids among
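The lysosomal-trapping mechanism rests on pH partitioning: if only the neutral form of a basic drug crosses the lysosomal membrane, the total-concentration ratio between compartments follows from the Henderson-Hasselbalch relation. A minimal sketch for a monoprotic base (the pH values are typical literature figures, not the paper's collated data):

```python
def lysosome_to_cytosol_ratio(pka, ph_lys=5.0, ph_cyt=7.2):
    """pH-partition (ion-trapping) ratio of total drug concentration for
    a monoprotic base, assuming only the neutral species equilibrates
    across the lysosomal membrane."""
    ionization = lambda ph: 1.0 + 10.0 ** (pka - ph)  # total/neutral ratio
    return ionization(ph_lys) / ionization(ph_cyt)
```

Because the ratio grows roughly ten-fold per unit of pH difference for strong bases, drugs with pKa ≥ 7.8 accumulate heavily in acidic lysosomes, consistent with the >1000-fold ratios predicted in the study for sufficiently acidic lysosomal pH.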

  20. A kinematic wave model in Lagrangian coordinates incorporating capacity drop: Application to homogeneous road stretches and discontinuities

    Science.gov (United States)

    Yuan, Kai; Knoop, Victor L.; Hoogendoorn, Serge P.

    2017-01-01

    On freeways, congestion always leads to a capacity drop. This means the queue discharge rate is lower than the pre-queue capacity. Our recent research findings indicate that the queue discharge rate increases with the speed in congestion, that is, the capacity drop is strongly correlated with the congestion state. Incorporating this varying capacity drop into a kinematic wave model is essential for assessing the consequences of control strategies. However, to the best of the authors' knowledge, no such model exists. This paper fills the research gap by presenting a Lagrangian kinematic wave model. "Lagrangian" denotes that the new model is solved in Lagrangian coordinates. The new model can give the capacity drops accompanying both stop-and-go waves (on homogeneous freeway sections) and standing queues (at nodes) in a network. The new model can be applied in network operations. In this Lagrangian kinematic wave model, the queue discharge rate (or the capacity drop) is a function of vehicular speed in traffic jams. Four case studies on links as well as at lane-drop and on-ramp nodes show that the Lagrangian kinematic wave model reproduces capacity drops well, consistent with empirical observations.
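The speed-dependent capacity drop can be illustrated with a simple closure in which the drop shrinks linearly as the speed inside the jam rises toward the critical speed. All parameter values below are illustrative, not calibrated to the paper's data:

```python
def queue_discharge_rate(v_jam, q_cap=2400.0, v_crit=80.0, max_drop=0.3):
    """Queue discharge rate (veh/h/lane) as a function of the speed
    inside the jam (km/h): the capacity drop shrinks linearly from
    max_drop at standstill to zero at the critical speed."""
    v = min(max(v_jam, 0.0), v_crit)
    drop = max_drop * (1.0 - v / v_crit)
    return q_cap * (1.0 - drop)
```

A standing queue (near-zero speed) thus discharges at a substantially lower rate than a mild stop-and-go wave, which is the empirical regularity the Lagrangian model is built to reproduce.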

  1. Ultrasonically assisted drilling: A finite-element model incorporating acoustic softening effects

    International Nuclear Information System (INIS)

    Phadnis, V A; Roy, A; Silberschmidt, V V

    2013-01-01

    Ultrasonically assisted drilling (UAD) is a novel machining technique suitable for drilling in hard-to-machine quasi-brittle materials such as carbon fibre reinforced polymer composites (CFRP). UAD has been shown to possess several advantages compared to conventional drilling (CD), including reduced thrust forces, diminished burr formation at drill exit and an overall improvement in roundness and surface finish of the drilled hole. Recently, our in-house experiments of UAD in CFRP composites demonstrated remarkable reductions in thrust-force and torque measurements (average force reductions in excess of 80%) when compared to CD with the same machining parameters. In this study, a 3D finite-element model of drilling in CFRP is developed. In order to model acoustic (ultrasonic) softening effects, a phenomenological model, which accounts for ultrasonically induced plastic strain, was implemented in ABAQUS/Explicit. The model also accounts for dynamic frictional effects, which also contribute to the overall improved machining characteristics in UAD. The model is validated with experimental findings, where an excellent correlation between the reduced thrust force and torque magnitude was achieved

  2. High-resolution Continental Scale Land Surface Model incorporating Land-water Management in United States

    Science.gov (United States)

    Shin, S.; Pokhrel, Y. N.

    2016-12-01

    Land surface models have been used to assess water resources sustainability under a changing Earth environment and increasing human water needs. Overwhelming observational records indicate that human activities have ubiquitous and pertinent effects on the hydrologic cycle; however, they have been crudely represented in large-scale land surface models. In this study, we enhance an integrated continental-scale land hydrology model named Leaf-Hydro-Flood to better represent land-water management. The model is implemented at high resolution (5 km grids) over the continental US. Surface water and groundwater are withdrawn based on actual practices. Newly added irrigation, water diversion, and dam operation schemes allow better simulations of stream flows, evapotranspiration, and infiltration. Results for various hydrologic fluxes and stores from two sets of simulations (one with and the other without human activities) are compared over a range of river basin and aquifer scales. The improved simulation of land hydrology has the potential to provide a consistent modeling framework for human-water-climate interactions.

  3. New systematic methodology for incorporating dynamic heat transfer modelling in multi-phase biochemical reactors.

    Science.gov (United States)

    Fernández-Arévalo, T; Lizarralde, I; Grau, P; Ayesa, E

    2014-09-01

    This paper presents a new modelling methodology for dynamically predicting the heat produced or consumed in the transformations of any biological reactor using Hess's law. Starting from a complete description of model components stoichiometry and formation enthalpies, the proposed modelling methodology has integrated successfully the simultaneous calculation of both the conventional mass balances and the enthalpy change of reaction in an expandable multi-phase matrix structure, which facilitates a detailed prediction of the main heat fluxes in the biochemical reactors. The methodology has been implemented in a plant-wide modelling methodology in order to facilitate the dynamic description of mass and heat throughout the plant. After validation with literature data, as illustrative examples of the capability of the methodology, two case studies have been described. In the first one, a predenitrification-nitrification dynamic process has been analysed, with the aim of demonstrating the easy integration of the methodology in any system. In the second case study, the simulation of a thermal model for an ATAD has shown the potential of the proposed methodology for analysing the effect of ventilation and influent characterization. Copyright © 2014 Elsevier Ltd. All rights reserved.
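The Hess's-law computation underlying the methodology reduces to summing stoichiometry-weighted formation enthalpies over products and reactants. A minimal sketch (species names and the kJ/mol convention are illustrative):

```python
def reaction_enthalpy(products, reactants, hf):
    """Hess's law: dH_rxn = sum(nu * dHf over products)
                          - sum(nu * dHf over reactants).
    `products`/`reactants` map species to stoichiometric coefficients,
    `hf` maps species to standard formation enthalpy (kJ/mol)."""
    side = lambda d: sum(nu * hf[sp] for sp, nu in d.items())
    return side(products) - side(reactants)
```

In a plant-wide model, evaluating this alongside the mass balance for every transformation in the stoichiometric matrix yields the dynamic heat-flux term for each biochemical reactor.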

  4. Development of a prototype mesoscale computer model incorporating treatment of topography

    International Nuclear Information System (INIS)

    Apsimon, H.; Kitson, K.; Fawcett, M.; Goddard, A.J.H.

    1984-01-01

    Models are available for simulating dispersal of accidental releases, using mass-consistent wind-fields and accounting for site-specific topography. These techniques were examined critically to see if they might be improved, and to assess their limitations. An improved model, windfield adjusted for topography (WAFT), was developed (with advantages over MATHEW used in the Atmospheric Release Advisory Capability - ARAC system). To simulate dispersion in the windfields produced by WAFT and calculate time integrated air concentrations and dry and wet deposition the TOMCATS model was developed. It treats the release as an assembly of pseudo-particles using Monte Carlo techniques to simulate turbulent displacements. It allows for larger eddy effects in the horizontal turbulence spectrum. Wet deposition is calculated using inhomogeneous rainfields evolving in time and space. The models were assessed, applying them to hypothetical releases in complex terrain, using typical data applicable in accident conditions, and undertaking sensitivity studies. One finds considerable uncertainty in results produced by these models. Although useful for post-facto analysis, such limitations cast doubt on their advantages, relative to simpler techniques, during an actual emergency
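The pseudo-particle treatment in TOMCATS-style models amounts to a random walk: each particle is advected by the mean wind and perturbed by Gaussian turbulent displacements. A one-dimensional toy version is sketched below; all parameters and the function name are illustrative:

```python
import random

def puff_mean_distance(n, steps, dt, u, sigma, seed=1):
    """Monte Carlo pseudo-particle dispersion in 1-D: each particle is
    advected by the mean wind u and perturbed by a Gaussian turbulent
    displacement of intensity sigma; returns the mean downwind distance."""
    rng = random.Random(seed)
    xs = [0.0] * n
    for _ in range(steps):
        for i in range(n):
            xs[i] += u * dt + rng.gauss(0.0, sigma) * dt ** 0.5
    return sum(xs) / n
```

Time-integrated air concentrations then follow by binning particle positions onto a grid at each step; wet deposition can be modelled by removing particles with a probability tied to the local rain rate.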

  5. Incorporating microbial dormancy dynamics into soil decomposition models to improve quantification of soil carbon dynamics of northern temperate forests

    Energy Technology Data Exchange (ETDEWEB)

    He, Yujie [Purdue Univ., West Lafayette, IN (United States). Dept. of Earth, Atmospheric, and Planetary Sciences; Yang, Jinyan [Univ. of Georgia, Athens, GA (United States). Warnell School of Forestry and Natural Resources; Northeast Forestry Univ., Harbin (China). Center for Ecological Research; Zhuang, Qianlai [Purdue Univ., West Lafayette, IN (United States). Dept. of Earth, Atmospheric, and Planetary Sciences; Purdue Univ., West Lafayette, IN (United States). Dept. of Agronomy; Harden, Jennifer W. [U.S. Geological Survey, Menlo Park, CA (United States); McGuire, Anthony D. [Alaska Cooperative Fish and Wildlife Research Unit, U.S. Geological Survey, Univ. of Alaska, Fairbanks, AK (United States). U.S. Geological Survey, Alaska Cooperative Fish and Wildlife Research Unit; Liu, Yaling [Purdue Univ., West Lafayette, IN (United States). Dept. of Earth, Atmospheric, and Planetary Sciences; Wang, Gangsheng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Climate Change Science Inst. and Environmental Sciences Division; Gu, Lianhong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Environmental Sciences Division

    2015-11-20

    Soil carbon dynamics of terrestrial ecosystems play a significant role in the global carbon cycle. Microbial-based decomposition models have seen much growth recently for quantifying this role, yet dormancy as a common strategy used by microorganisms has not usually been represented and tested in these models against field observations. Here we developed an explicit microbial-enzyme decomposition model and examined model performance with and without representation of microbial dormancy at six temperate forest sites of different forest types. We then extrapolated the model to global temperate forest ecosystems to investigate biogeochemical controls on soil heterotrophic respiration and microbial dormancy dynamics at different temporal-spatial scales. The dormancy model consistently produced a better match with field-observed heterotrophic soil CO2 efflux (RH) than the no-dormancy model. Our regional modeling results further indicated that models with dormancy were able to produce a more realistic magnitude of microbial biomass (<2% of soil organic carbon) and soil RH (7.5 ± 2.4 Pg C yr-1). Spatial correlation analysis showed that soil organic carbon content was the dominant factor (correlation coefficient = 0.4-0.6) in the simulated spatial pattern of soil RH with both models. In contrast to strong temporal and local controls of soil temperature and moisture on microbial dormancy, our modeling results showed that the soil carbon-to-nitrogen ratio (C:N) was a major regulating factor at regional scales (correlation coefficient = -0.43 to -0.58), indicating scale-dependent biogeochemical controls on microbial dynamics. Our findings suggest that incorporating microbial dormancy could improve the realism of microbial-based decomposition models and enhance the integration of soil experiments and mechanistically based modeling.
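The dormancy mechanism described above can be illustrated with a toy two-pool module. This is a hypothetical sketch, not the authors' model: biomass toggles between active (Ba) and dormant (Bd) states under an invented favorability index f, respiration is charged only against active uptake, and all rate constants are made up for illustration.

```python
# Toy active/dormant microbial decomposition step (explicit Euler).
def step(Ba, Bd, S, f, dt=1.0,
         Vmax=0.2, Km=50.0, CUE=0.4, m_a=0.02, m_d=0.001, k_sw=0.1):
    uptake = Vmax * Ba * S / (Km + S) * f    # substrate uptake by active pool
    to_dormant = k_sw * (1.0 - f) * Ba       # deactivation when unfavorable
    to_active = k_sw * f * Bd                # reactivation when favorable
    dBa = CUE * uptake - m_a * Ba - to_dormant + to_active
    dBd = to_dormant - to_active - m_d * Bd
    dS = -uptake + m_a * Ba + m_d * Bd       # dead biomass recycled to substrate
    Rh = (1.0 - CUE) * uptake                # heterotrophic respiration rate
    return Ba + dt * dBa, Bd + dt * dBd, S + dt * dS, Rh

Ba, Bd, S = 1.0, 1.0, 100.0
for t in range(100):
    f = 1.0 if (t // 25) % 2 == 0 else 0.1   # alternating favorable/dry spells
    Ba, Bd, S, Rh = step(Ba, Bd, S, f)
```

By construction dBa + dBd + dS = -Rh, so carbon leaves the system only as respiration; the dormant pool damps the biomass crash during unfavorable spells, which is the qualitative behavior the abstract credits for the more realistic biomass fractions.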

  6. Incorporating microbial dormancy dynamics into soil decomposition models to improve quantification of soil carbon dynamics of northern temperate forests

    Science.gov (United States)

    He, Yujie; Yang, Jinyan; Zhuang, Qianlai; Harden, Jennifer W.; McGuire, A. David; Liu, Yaling; Wang, Gangsheng; Gu, Lianhong

    2015-01-01

    Soil carbon dynamics of terrestrial ecosystems play a significant role in the global carbon cycle. Microbial-based decomposition models have seen much growth recently for quantifying this role, yet dormancy as a common strategy used by microorganisms has not usually been represented and tested in these models against field observations. Here we developed an explicit microbial-enzyme decomposition model and examined model performance with and without representation of microbial dormancy at six temperate forest sites of different forest types. We then extrapolated the model to global temperate forest ecosystems to investigate biogeochemical controls on soil heterotrophic respiration and microbial dormancy dynamics at different temporal-spatial scales. The dormancy model consistently produced a better match with field-observed heterotrophic soil CO2 efflux (RH) than the no-dormancy model. Our regional modeling results further indicated that models with dormancy were able to produce a more realistic magnitude of microbial biomass (<2% of soil organic carbon) and soil RH (7.5 ± 2.4 Pg C yr-1). Spatial correlation analysis showed that soil organic carbon content was the dominant factor (correlation coefficient = 0.4–0.6) in the simulated spatial pattern of soil RH with both models. In contrast to strong temporal and local controls of soil temperature and moisture on microbial dormancy, our modeling results showed that the soil carbon-to-nitrogen ratio (C:N) was a major regulating factor at regional scales (correlation coefficient = −0.43 to −0.58), indicating scale-dependent biogeochemical controls on microbial dynamics. Our findings suggest that incorporating microbial dormancy could improve the realism of microbial-based decomposition models and enhance the integration of soil experiments and mechanistically based modeling.

  7. Periglacial processes incorporated into a long-term landscape evolution model

    DEFF Research Database (Denmark)

    Andersen, Jane Lund; Egholm, D.L.; Knudsen, Mads Faurschou

    Little is known about the long-term influence of periglacial processes on landscape evolution in cold areas, even though the efficiency of frost cracking on the breakdown of rocks has been documented by observations and experiments. Cold-room laboratory experiments show that a continuous water...... supply and sustained sub-zero temperatures are essential to develop fractures in porous rocks (e.g. Murton, 2006), but the cracking efficiency for harder rock types under natural conditions is less clear. However, based on experimental results for porous rocks, Hales and Roering (2007) proposed a model...... by their model and the elevation of scree deposits in the Southern Alps, New Zealand. This result suggests a link between frost-cracking efficiency and long-term landscape evolution and thus merits further investigations. Anderson et al. (2012) expanded this early model by including the effects of latent heat...

  8. Modeling & Informatics at Vertex Pharmaceuticals Incorporated: our philosophy for sustained impact.

    Science.gov (United States)

    McGaughey, Georgia; Patrick Walters, W

    2017-03-01

    Molecular modelers and informaticians have the unique opportunity to integrate cross-functional data using a myriad of tools, methods and visuals to generate information. Using their drug discovery expertise, they transform information into knowledge that impacts drug discovery. These insights are often formulated locally and then applied more broadly, influencing the discovery of new medicines. This is particularly true where members are exposed to projects throughout an organization, as in the case of the global Modeling & Informatics group at Vertex Pharmaceuticals. From its inception, Vertex has been a leader in the development and use of computational methods for drug discovery. In this paper, we describe the Modeling & Informatics group at Vertex and the underlying philosophy that has driven this team to sustain impact on the discovery of first-in-class transformative medicines.

  9. ISG hybrid powertrain: a rule-based driver model incorporating look-ahead information

    Science.gov (United States)

    Shen, Shuiwen; Zhang, Junzhi; Chen, Xiaojiang; Zhong, Qing-Chang; Thornton, Roger

    2010-03-01

    According to European regulations, if the amount of regenerative braking is determined by the travel of the brake pedal, more stringent standards must be applied, because it may otherwise adversely affect the existing vehicle safety system. The use of engine or vehicle speed to derive regenerative braking is one way to avoid these strict design standards, but it introduces a discontinuity in powertrain torque when the driver releases the acceleration pedal or applies the brake pedal. This is shown to cause oscillations in the pedal input and powertrain torque when a conventional driver model is adopted. Look-ahead information, together with other predicted vehicle states, is used to control the vehicle speed, in particular during deceleration, and to improve the driver model so that oscillations can be avoided. The improved driver model makes analysis and validation of the control strategy for an integrated starter generator (ISG) hybrid powertrain possible.

  10. The design of a wind tunnel VSTOL fighter model incorporating turbine powered engine simulators

    Science.gov (United States)

    Bailey, R. O.; Maraz, M. R.; Hiley, P. E.

    1981-01-01

    A wind-tunnel model of a supersonic VSTOL fighter aircraft configuration has been developed for use in the evaluation of airframe-propulsion system aerodynamic interactions. The model may be employed with conventional test techniques, where configuration aerodynamics are measured in a flow-through mode and incremental nozzle-airframe interactions are measured in a jet-effects mode, and with the Compact Multimission Aircraft Propulsion Simulator which is capable of the simultaneous simulation of inlet and exhaust nozzle flow fields so as to allow the evaluation of the extent of inlet and nozzle flow field coupling. The basic configuration of the twin-engine model has a geometrically close-coupled canard and wing, and a moderately short nacelle with nonaxisymmetric vectorable exhaust nozzles near the wing trailing edge, and may be converted to a canardless configuration with an extremely short nacelle. Testing is planned to begin in the summer of 1982.

  11. Incorporating photon recycling into the analytical drift-diffusion model of high efficiency solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Lumb, Matthew P. [The George Washington University, 2121 I Street NW, Washington, DC 20037 (United States); Naval Research Laboratory, Washington, DC 20375 (United States); Steiner, Myles A.; Geisz, John F. [National Renewable Energy Laboratory, Golden, Colorado 80401 (United States); Walters, Robert J. [Naval Research Laboratory, Washington, DC 20375 (United States)

    2014-11-21

    The analytical drift-diffusion formalism is able to accurately simulate a wide range of solar cell architectures and was recently extended to include those with back surface reflectors. However, as solar cells approach the limits of material quality, photon recycling effects become increasingly important in predicting the behavior of these cells. In particular, the minority carrier diffusion length is significantly affected by the photon recycling, with consequences for the solar cell performance. In this paper, we outline an approach to account for photon recycling in the analytical Hovel model and compare analytical model predictions to GaAs-based experimental devices operating close to the fundamental efficiency limit.

  12. Linking landscape characteristics to local grizzly bear abundance using multiple detection methods in a hierarchical model

    Science.gov (United States)

    Graves, T.A.; Kendall, Katherine C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.

    2011-01-01

    Few studies link habitat to grizzly bear (Ursus arctos) abundance, and those that do have not accounted for variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; and (5) weights to identify the most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance, and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance and (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system.
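The structure the abstract describes, one latent abundance sub-model feeding two method-specific detection sub-models, can be illustrated generatively. This is a hypothetical sketch: all coefficient values are invented (only their signs follow the reported findings), and the conditional autoregressive term is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites = 200

# Standardized site covariates: road density and per cent mesic habitat
road = rng.normal(size=n_sites)
mesic = rng.normal(size=n_sites)

# Abundance sub-model: fewer bears where road density is high,
# more where habitat is mesic (illustrative coefficients).
lam = np.exp(0.5 - 0.8 * road + 0.6 * mesic)
N = rng.poisson(lam)                      # latent abundance per site

# Method-specific detection sub-models (hair traps vs rub trees),
# each with its own effort covariate on a logit scale.
effort_hair = rng.uniform(0.5, 2.0, n_sites)
effort_rub = rng.uniform(0.5, 2.0, n_sites)
p_hair = 1.0 / (1.0 + np.exp(-(-1.0 + 0.7 * effort_hair)))
p_rub = 1.0 / (1.0 + np.exp(-(-1.5 + 0.9 * effort_rub)))

# Both methods observe the same latent N with their own detection rates
y_hair = rng.binomial(N, p_hair)
y_rub = rng.binomial(N, p_rub)
```

Fitting this structure by MCMC (as the study does) would recover the abundance coefficients from the two count vectors jointly; the point of the sketch is that both detection streams condition on a single latent N per site.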

  13. Quantifying Uncertainty in Flood Inundation Mapping Using Streamflow Ensembles and Multiple Hydraulic Modeling Techniques

    Science.gov (United States)

    Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.

    2016-12-01

    The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision-making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess the uncertainties associated with the predicted flood inundation maps. The setting for this study is a highly urbanized watershed along Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short-range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid provided the uncertainty metrics for predicted flood inundation extent, depth, and flow velocity. For visualization, a series of flood maps was produced displaying flood extent, water depth, and flow velocity along with the underlying uncertainty associated with each forecasted variable. The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify high flood risk zones.
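One simple way to quantify per-cell agreement between ensemble members, in the spirit of the study's extent metric, is voting over binary inundation grids. A hypothetical sketch with toy grids (the real workflow would stack iRIC/HEC-RAS output rasters):

```python
import numpy as np

# One binary flood-extent grid per ensemble member (1 = inundated).
members = np.stack([
    np.array([[0, 1, 1], [0, 0, 1]]),
    np.array([[0, 1, 1], [0, 1, 1]]),
    np.array([[0, 0, 1], [0, 0, 1]]),
    np.array([[0, 1, 1], [0, 0, 1]]),
])  # shape: (n_members, rows, cols)

p_inundated = members.mean(axis=0)   # fraction of members flooding each cell
high_risk = p_inundated >= 0.75      # cells flooded in >= 75% of members
print(p_inundated)
```

The same stacking works for depth and velocity grids, with the per-cell standard deviation across members serving as the uncertainty metric instead of a vote fraction.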

  14. Experimental validation of a Monte Carlo proton therapy nozzle model incorporating magnetically steered protons

    International Nuclear Information System (INIS)

    Peterson, S W; Polf, J; Archambault, L; Beddar, S; Bues, M; Ciangaru, G; Smith, A

    2009-01-01

    The purpose of this study is to validate the accuracy of a Monte Carlo calculation model of a proton magnetic beam scanning delivery nozzle developed using the Geant4 toolkit. The Monte Carlo model was used to produce depth dose and lateral profiles, which were compared to data measured in the clinical scanning treatment nozzle at several energies. Comparisons were also made between measured and simulated off-axis profiles to test the accuracy of the model's magnetic steering. Comparison of the 80% distal dose fall-off values for the measured and simulated depth dose profiles agreed to within 1 mm for the beam energies evaluated. Agreement of the full width at half maximum values for the measured and simulated lateral fluence profiles was within 1.3 mm for all energies. The position of measured and simulated spot positions for the magnetically steered beams agreed to within 0.7 mm of each other. Based on these results, we found that the Geant4 Monte Carlo model of the beam scanning nozzle has the ability to accurately predict depth dose profiles, lateral profiles perpendicular to the beam axis and magnetic steering of a proton beam during beam scanning proton therapy.
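The 80% distal fall-off comparison mentioned above requires locating R80 on each depth-dose curve. A minimal sketch, assuming linear interpolation on the distal edge; the profile numbers are invented, not measured data:

```python
import numpy as np

def distal_r80(depth, dose):
    """Depth at which the dose falls to 80% of its peak on the distal side."""
    dose = np.asarray(dose, dtype=float)
    dose = dose / dose.max()                 # normalize to the Bragg peak
    i_peak = int(dose.argmax())
    # walk the distal edge (beyond the peak) looking for the 80% crossing
    for i in range(i_peak, len(dose) - 1):
        if dose[i] >= 0.8 > dose[i + 1]:
            frac = (dose[i] - 0.8) / (dose[i] - dose[i + 1])
            return depth[i] + frac * (depth[i + 1] - depth[i])
    return None

# Toy Bragg-like curve (depths in cm, relative dose)
z = np.array([0.0, 5.0, 10.0, 14.0, 15.0, 15.5, 16.0])
d = np.array([0.3, 0.35, 0.5, 0.9, 1.0, 0.6, 0.1])
print(round(float(distal_r80(z, d)), 3))  # → 15.25
```

Running this on both a measured and a simulated profile and differencing the two R80 values gives the sub-millimetre agreement figure the study reports.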

  15. Incorporating implementation overheads in the analysis for the flexible spin-lock model

    NARCIS (Netherlands)

    Balasubramanian, S.M.N.; Afshar, S.; Gai, P.; Behnam, M.; Bril, R.J.

    2017-01-01

    The flexible spin-lock model (FSLM) unifies suspension-based and spin-based resource sharing protocols for partitioned fixed-priority preemptive scheduling based real-time multiprocessor platforms. Recent work has been done in defining the protocol for FSLM and providing a schedulability analysis

  16. Incorporating Parameter Uncertainty in Bayesian Segmentation Models: Application to Hippocampal Subfield Volumetry

    DEFF Research Database (Denmark)

    Iglesias, J. E.; Sabuncu, M. R.; Van Leemput, Koen

    2012-01-01

    Many successful segmentation algorithms are based on Bayesian models in which prior anatomical knowledge is combined with the available image information. However, these methods typically have many free parameters that are estimated to obtain point estimates only, whereas a faithful Bayesian anal...

  17. Incorporation of leaf nitrogen observations for biochemical and environmental modeling of photosynthesis and evapotranspiration

    DEFF Research Database (Denmark)

    Bøgh, E.; Gjettermann, Birgitte; Abrahamsen, Per

    2007-01-01

    . While most canopy photosynthesis models assume an exponential vertical profile of leaf N contents in the canopy, the field measurements showed that well-fertilized fields may have a uniform or exponential profile, and senescent canopies have reduced levels of N contents in upper leaves. The sensitivity...

  18. Improved Path Loss Simulation Incorporating Three-Dimensional Terrain Model Using Parallel Coprocessors

    Directory of Open Access Journals (Sweden)

    Zhang Bin Loo

    2017-01-01

    Current network simulators abstract out wireless propagation models due to the high computation requirements of realistic modeling. As such, there is still a large gap between the results obtained from simulators and real-world scenarios. In this paper, we present a framework for improved path loss simulation built on top of an existing network simulation software, NS-3. Different from the conventional disk model, the proposed simulation also considers the diffraction loss computed using Epstein and Peterson's model through the use of actual terrain elevation data to give an accurate estimate of path loss between a transmitter and a receiver. The drawback of high computation requirements is relaxed by offloading the computationally intensive components onto an inexpensive off-the-shelf parallel coprocessor, an NVIDIA GPU. Experiments are performed using actual terrain elevation data provided by the United States Geological Survey. Compared to the conventional CPU architecture, the experimental results show that a speedup of 20x to 42x is achieved by exploiting the parallel processing of the GPU to compute the path loss between two nodes using terrain elevation data. The results show that the path loss between two nodes is greatly affected by the terrain profile between them. The results also suggest that the common strategy of placing the transmitter at the highest position may not always work.
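The diffraction component described above can be sketched with the Epstein-Peterson construction: each obstacle's single knife-edge loss is computed between its two neighbouring profile points and the losses are summed. The knife-edge approximation below is the standard ITU-R P.526 formula; the terrain profile is invented, and this CPU sketch omits the paper's GPU offloading.

```python
import math

def knife_edge_loss_db(h, d1, d2, wavelength):
    """Single knife-edge diffraction loss (ITU-R P.526 approximation).
    h: obstacle height above the straight line between the endpoints (m)."""
    v = h * math.sqrt(2 * (d1 + d2) / (wavelength * d1 * d2))
    if v <= -0.78:
        return 0.0  # obstacle well below the path: negligible loss
    return 6.9 + 20 * math.log10(math.sqrt((v - 0.1) ** 2 + 1) + v - 0.1)

def epstein_peterson_loss_db(profile, wavelength):
    """profile: [(distance_m, height_m), ...] from Tx to Rx inclusive.
    Sums single-edge losses, each edge evaluated between its neighbours."""
    total = 0.0
    for i in range(1, len(profile) - 1):
        (da, ha), (dm, hm), (dc, hc) = profile[i - 1], profile[i], profile[i + 1]
        # height of this obstacle above the line joining its two neighbours
        line_h = ha + (hc - ha) * (dm - da) / (dc - da)
        total += knife_edge_loss_db(hm - line_h, dm - da, dc - dm, wavelength)
    return total

# Illustrative terrain: two hills between a 2.4 GHz Tx and Rx
wavelength = 3e8 / 2.4e9
profile = [(0, 30), (1000, 60), (2000, 40), (3000, 55), (4000, 25)]
loss = epstein_peterson_loss_db(profile, wavelength)
```

Adding `loss` to a free-space or disk-model path loss is what closes the gap the paper measures; the per-obstacle independence of the loop is also what makes the computation embarrassingly parallel on a GPU.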

  19. A quantitative model of the cardiac ventricular cell incorporating the transverse-axial tubular system

    Czech Academy of Sciences Publication Activity Database

    Pásek, Michal; Christé, G.; Šimurda, J.

    2003-01-01

    Roč. 22, č. 3 (2003), s. 355-368 ISSN 0231-5882 R&D Projects: GA ČR GP204/02/D129 Institutional research plan: CEZ:AV0Z2076919 Keywords : cardiac cell * tubular system * quantitative modelling Subject RIV: BO - Biophysics Impact factor: 0.794, year: 2003

  20. Development of a prototype mesoscale computer model incorporating treatment of topography

    International Nuclear Information System (INIS)

    Apsimon, H.M.; Goddard, A.J.H.; Kitson, K.; Fawcett, M.

    1985-01-01

    More sophisticated models are now available to simulate dispersal of accidental radioactive releases to the atmosphere; these use mass-consistent windfields and attempt allowance for site-specific topographical features. Our aim has been to examine these techniques critically, develop where possible, and assess limitations and accuracy. The resulting windfield model WAFT uses efficient numerical techniques with improved orographic resolution and treatment of meteorological conditions. Time integrated air concentrations, dry and wet deposition are derived from TOMCATS, which applies Monte-Carlo techniques to an assembly of pseudo-particles representing the release, with specific attention to the role of large eddies and evolving inhomogeneous rainfields. These models have been assessed by application to hypothetical releases in complex terrain using data which would have been available in the event of an accident, and undertaking sensitivity studies. It is concluded that there is considerable uncertainty in results produced by such models; although they may be useful in post-facto analysis, such limitations cast doubt on their advantages relative to simpler techniques, with more modest requirements, during an actual emergency. (author)

  1. Teaching for Art Criticism: Incorporating Feldman's Critical Analysis Learning Model in Students' Studio Practice

    Science.gov (United States)

    Subramaniam, Maithreyi; Hanafi, Jaffri; Putih, Abu Talib

    2016-01-01

    This study examined 30 first-year graphic design students' artworks, applying critical analysis using Feldman's model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean scores and frequencies to determine students' performance in their critical ability.…

  2. Incorporating stakeholder perspectives into model-based scenarios : Exploring the futures of the Dutch gas sector

    NARCIS (Netherlands)

    Eker, S.; van Daalen, C.; Thissen, W.A.H.

    2017-01-01

    Several model-based, analytical approaches have been developed recently to deal with the deep uncertainty present in situations for which futures studies are conducted. These approaches focus on covering a wide variety of scenarios and searching for robust strategies. However, they generally do

  3. A Probabilistic Model of Visual Working Memory: Incorporating Higher Order Regularities into Working Memory Capacity Estimates

    Science.gov (United States)

    Brady, Timothy F.; Tenenbaum, Joshua B.

    2013-01-01

    When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…

  4. Process model for ammonia volatilization from anaerobic swine lagoons incorporating varying wind speeds and biogas bubbling

    Science.gov (United States)

    Ammonia volatilization from treatment lagoons varies widely with the total ammonia concentration, pH, temperature, suspended solids, atmospheric ammonia concentration above the water surface, and wind speed. Ammonia emissions were estimated with a process-based mechanistic model integrating ammonia ...

  5. Development of a mission-based funding model for undergraduate medical education: incorporation of quality.

    Science.gov (United States)

    Stagnaro-Green, Alex; Roe, David; Soto-Greene, Maria; Joffe, Russell

    2008-01-01

    Increasing financial pressures, along with a desire to realign resources with institutional priorities, have resulted in the adoption of mission-based funding (MBF) at many medical schools. The lack of inclusion of quality, and the time and expense of developing and implementing mission-based funding, are major deficiencies in the models reported to date. In academic year 2002-2003, New Jersey Medical School developed a model that included both quantity and quality in the education metric and that was departmentally based. Eighty percent of the undergraduate medical education allocation ($7.35 million) was based on the quantity of undergraduate medical education taught by the department, and 20% ($1.89 million) was allocated based on the quality of the education delivered. Quality determinations were made by the educational leadership based on student evaluations and departmental compliance with educational administrative requirements. Evolution of the model has included the development of a faculty oversight committee and the integration of peer evaluation in the determination of educational quality. Six departments had a documented increase in quality over time, and one department had a transient decrease in quality. The MBF model has been well accepted by chairs, educational leaders, and faculty and has been instrumental in enhancing the stature of education at our institution.

  6. D1.4 -- Short Report on Models That Incorporate Non-stationary Time Variant Effects

    DEFF Research Database (Denmark)

    Lostanlen, Yves; Pedersen, Troels; Steinboeck, Gerhard

    -to-indoor environments are presented. Furthermore, the impact of human activity on the time variation of the radio channel is investigated and first simulation results are presented. Movement models, which include realistic interaction between nodes, are part of current research activities....

  7. Incorporating Solid Modeling and Team-Based Design into Freshman Engineering Graphics.

    Science.gov (United States)

    Buchal, Ralph O.

    2001-01-01

    Describes the integration of these topics through a major team-based design and computer aided design (CAD) modeling project in freshman engineering graphics at the University of Western Ontario. Involves n=250 students working in teams of four to design and document an original Lego toy. Includes 12 references. (Author/YDS)

  8. Incorporating unreliability of transit in transport demand models: theoretical and practical approach

    NARCIS (Netherlands)

    van Oort, N.; Brands, Ties; de Romph, E.; Aceves Flores, J.

    2014-01-01

    Nowadays, transport demand models do not explicitly evaluate the impacts of service reliability of transit. Service reliability of transit systems is adversely experienced by users, as it causes additional travel time and unsecure arrival times. Because of this, travelers are likely to perceive a

  9. Incorporating Lightning Flash Data into the WRF-CMAQ Modeling System: Algorithms and Evaluations

    Science.gov (United States)

    We describe the use of lightning flash data from the National Lightning Detection Network (NLDN) to constrain and improve the performance of coupled meteorology-chemistry models. We recently implemented a scheme in which lightning data is used to control the triggering of conve...

  10. Incorporating Logistics in Freight Transport Demand Models: State-of-the-Art and Research Opportunities

    NARCIS (Netherlands)

    Tavasszy, L.A.; Ruijgrok, K.; Davydenko, I.

    2012-01-01

    Freight transport demand is a demand derived from all the activities needed to move goods between locations of production to locations of consumption, including trade, logistics and transportation. A good representation of logistics in freight transport demand models allows us to predict the effects

  11. Incorporating prior information into differential network analysis using non-paranormal graphical models.

    Science.gov (United States)

    Zhang, Xiao-Fei; Ou-Yang, Le; Yan, Hong

    2017-08-15

    Understanding how gene regulatory networks change under different cellular states is important for revealing insights into network dynamics. Gaussian graphical models, which assume that the data follow a joint normal distribution, have been used recently to infer differential networks. However, the distributions of the omics data are non-normal in general. Furthermore, although much biological knowledge (or prior information) has been accumulated, most existing methods ignore the valuable prior information. Therefore, new statistical methods are needed to relax the normality assumption and make full use of prior information. We propose a new differential network analysis method to address the above challenges. Instead of using Gaussian graphical models, we employ a non-paranormal graphical model that can relax the normality assumption. We develop a principled model to take into account the following prior information: (i) a differential edge less likely exists between two genes that do not participate together in the same pathway; (ii) changes in the networks are driven by certain regulator genes that are perturbed across different cellular states and (iii) the differential networks estimated from multi-view gene expression data likely share common structures. Simulation studies demonstrate that our method outperforms other graphical model-based algorithms. We apply our method to identify the differential networks between platinum-sensitive and platinum-resistant ovarian tumors, and the differential networks between the proneural and mesenchymal subtypes of glioblastoma. Hub nodes in the estimated differential networks rediscover known cancer-related regulator genes and contain interesting predictions. The source code is at https://github.com/Zhangxf-ccnu/pDNA. szuouyl@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. 

  12. Decision model incorporating utility theory and measurement of social values applied to nuclear waste management

    International Nuclear Information System (INIS)

    Litchfield, J.W.; Hansen, J.V.; Beck, L.C.

    1975-07-01

    A generalized computer-based decision analysis model was developed and tested. Several alternative concepts for ultimate disposal have already been developed; however, significant research is still required before any of these can be implemented. To make a choice based on technical estimates of the costs, short-term safety, long-term safety, and accident detection and recovery requires estimating the relative importance of each of these factors or attributes. These relative importance estimates primarily involve social values and therefore vary from one individual to the next. The approach used was to sample various public groups to determine the relative importance of each of the factors to the public. These estimates of importance weights were combined in a decision analysis model with estimates, furnished by technical experts, of the degree to which each alternative concept achieves each of the criteria. This model then integrates the two separate and unique sources of information and provides the decision maker with information as to the preferences and concerns of the public as well as the technical areas within each concept which need further research. The model can rank the alternatives using sampled public opinion and techno-economic data. This model provides a decision maker with a structured approach to subdividing complex alternatives into a set of more easily considered attributes, measuring the technical performance of each alternative relative to each attribute, estimating relevant social values, and assimilating quantitative information in a rational manner to estimate total value for each alternative. Because of the explicit nature of this decision analysis, the decision maker can select a specific alternative supported by clear documentation and justification for his assumptions and estimates. (U.S.)
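The aggregation step the abstract describes, publicly elicited importance weights combined with expert performance scores, is a weighted-sum multi-attribute value calculation. A hypothetical sketch: the weights, scores, and alternative names below are all invented for illustration, not taken from the study.

```python
# Attribute importance weights, as would be elicited from public groups
weights = {"cost": 0.15, "short_term_safety": 0.30,
           "long_term_safety": 0.40, "recovery": 0.15}

# Expert performance scores on a 0-1 scale (illustrative numbers)
alternatives = {
    "geologic_disposal": {"cost": 0.6, "short_term_safety": 0.8,
                          "long_term_safety": 0.9, "recovery": 0.4},
    "seabed_disposal":   {"cost": 0.7, "short_term_safety": 0.7,
                          "long_term_safety": 0.6, "recovery": 0.2},
}

def total_value(scores, weights):
    """Weighted-sum value of one alternative across all attributes."""
    return sum(weights[a] * scores[a] for a in weights)

ranked = sorted(alternatives,
                key=lambda k: total_value(alternatives[k], weights),
                reverse=True)
print(ranked[0])  # → geologic_disposal
```

Because the weights and scores come from separate sources (public survey vs. technical experts), the two inputs can be varied independently, which is what lets a decision maker see whether a ranking is driven by social values or by technical performance.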

  13. Problem solving based learning model with multiple representations to improve student's mental modelling ability on physics

    Science.gov (United States)

    Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran

    2017-08-01

    Physics is a subject closely related to students' daily experience. Before studying it formally in class, students already hold visualizations and prior knowledge about natural phenomena, and can extend these on their own. The learning process in class should therefore aim to detect, process, construct, and use students' mental models, so that students' mental models align with and are built on the correct concepts. A previous study held in MAN 1 Muna reports that in the learning process the teacher did not pay attention to students' mental models. As a consequence, the learning process did not try to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving-based learning model with a multiple-representations approach. This study is a pre-experimental design with one group and a pre-test/post-test. It was conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection used a problem-solving test on the kinetic theory of gases and interviews to assess students' MMA. The result of this study is a classification of students' MMA into three categories: High Mental Modelling Ability (H-MMA) for scores x > 7, Medium Mental Modelling Ability (M-MMA) for 3 < x ≤ 7, and Low Mental Modelling Ability (L-MMA) for 0 ≤ x ≤ 3. The results show that a problem-solving-based learning model with a multiple-representations approach can be an alternative to be applied in improving students' MMA.

  14. Incorporating temporal variation in seabird telemetry data: time variant kernel density models

    Science.gov (United States)

    Gilbert, Andrew; Adams, Evan M.; Anderson, Carl; Berlin, Alicia; Bowman, Timothy D.; Connelly, Emily; Gilliland, Scott; Gray, Carrie E.; Lepage, Christine; Meattey, Dustin; Montevecchi, William; Osenkowski, Jason; Savoy, Lucas; Stenhouse, Iain; Williams, Kathryn

    2015-01-01

    A key component of the Mid-Atlantic Baseline Studies project was tracking the individual movements of focal marine bird species (Red-throated Loon [Gavia stellata], Northern Gannet [Morus bassanus], and Surf Scoter [Melanitta perspicillata]) through the use of satellite telemetry. This element of the project was a collaborative effort with the Department of Energy (DOE), Bureau of Ocean Energy Management (BOEM), the U.S. Fish and Wildlife Service (USFWS), and Sea Duck Joint Venture (SDJV), among other organizations. Satellite telemetry is an effective and informative tool for understanding individual animal movement patterns, allowing researchers to mark an individual once, and thereafter follow the movements of the animal in space and time. Aggregating telemetry data from multiple individuals can provide information about the spatial use and temporal movements of populations. Tracking data are three-dimensional, with the first two dimensions, X and Y, ordered along the third dimension, time. GIS software has many capabilities to store, analyze and visualize the location information, but little or no support for visualizing the temporal data, and tools for processing temporal data are lacking. We explored several ways of analyzing the movement patterns using the spatiotemporal data provided by satellite tags. Here, we present the results of one promising method: time-variant kernel density analysis (Keating and Cherry, 2009). The goal of this chapter is to demonstrate new methods in spatial analysis to visualize and interpret tracking data for a large number of individual birds across time in the mid-Atlantic study area and beyond. In this chapter, we placed greater emphasis on analytical methods than on the behavior and ecology of the animals tracked. For more detailed examinations of the ecology and wintering habitat use of the focal species in the mid-Atlantic, see Chapters 20-22.
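
The time-variant kernel density idea can be illustrated with a toy estimator in which each telemetry fix contributes a spatial Gaussian kernel down-weighted by its distance in time from the evaluation instant. This is a generic sketch of the approach, not the exact Keating and Cherry (2009) estimator; all bandwidths and fixes are invented:

```python
import math

def tv_kde(fixes, x, y, t, h_space=1.0, h_time=1.0):
    """Density at (x, y, t): each fix (xi, yi, ti) contributes a 2D
    Gaussian kernel, down-weighted by its temporal distance from t,
    with the temporal weights normalized."""
    norm = 2.0 * math.pi * h_space ** 2
    dens, wsum = 0.0, 0.0
    for xi, yi, ti in fixes:
        w = math.exp(-0.5 * ((t - ti) / h_time) ** 2)            # temporal weight
        d2 = ((x - xi) ** 2 + (y - yi) ** 2) / h_space ** 2      # spatial distance
        dens += w * math.exp(-0.5 * d2) / norm
        wsum += w
    return dens / wsum if wsum > 0 else 0.0

# Two fixes at different places and times: evaluating at the first fix's
# location yields high density at its time, near-zero much later.
fixes = [(0.0, 0.0, 0.0), (5.0, 5.0, 10.0)]
d_then = tv_kde(fixes, 0.0, 0.0, t=0.0)
d_later = tv_kde(fixes, 0.0, 0.0, t=10.0)
```

Sliding the evaluation time t across the tracking period produces the sequence of density surfaces that makes the temporal dimension of the telemetry data visible.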

  15. Incorporating Ecosystem Processes Controlling Carbon Balance Into Models of Coupled Human-Natural Systems

    Science.gov (United States)

    Currie, W.; Brown, D. G.; Brunner, A.; Fouladbash, L.; Hadzick, Z.; Hutchins, M.; Kiger, S. E.; Makino, Y.; Nassauer, J. I.; Robinson, D. T.; Riolo, R. L.; Sun, S.

    2012-12-01

    A key element in the study of coupled human-natural systems is the interactions of human populations with vegetation and soils. In human-dominated landscapes, vegetation production and change results from a combination of ecological processes and human decision-making and behavior. Vegetation is often dramatically altered, whether to produce food for humans and livestock, to harvest fiber for construction and other materials, to harvest fuel wood or feedstock for biofuels, or simply for cultural preferences as in the case of residential lawns with sparse trees in the exurban landscape. This alteration of vegetation and its management has a substantial impact on the landscape carbon balance. Models can be used to simulate scenarios in human-natural systems and to examine the integration of processes that determine future trajectories of carbon balance. However, most models of human-natural systems include little integration of the human alteration of vegetation with the ecosystem processes that regulate carbon balance. Here we illustrate a few case studies of pilot-study models that strive for this integration from our research across various types of landscapes. We focus greater detail on a fully developed research model linked to a field study of vegetation and soils in the exurban residential landscape of Southeastern Michigan, USA. The field study characterized vegetation and soil carbon storage in 5 types of ecological zones. Field-observed carbon storage in the vegetation in these zones ranged widely, from 150 g C/m2 in turfgrass zones, to 6,000 g C/m2 in zones defined as turfgrass with sparse woody vegetation, to 16,000 g C/m2 in a zone defined as dense trees and shrubs. Use of these zones facilitated the scaling of carbon pools to the landscape, where the areal mixtures of zone types had a significant impact on landscape C storage. Use of these zones also facilitated the use of the ecosystem process model Biome-BGC to simulate C trajectories and also

  16. Incorporation of aqueous reaction kinetics and biodegradation into TOUGHREACT: Application of a multi-region model to hydrobiogeochemical transport of denitrification and sulfate reduction

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tianfu

    2006-07-13

    The need to consider aqueous and sorption kinetics and microbiological processes arises in many subsurface problems. A general rate expression has been implemented into the TOUGHREACT simulator, which considers multiple mechanisms (pathways) and includes multiple product, Monod, and inhibition terms. This paper presents a formulation for incorporating kinetic rates among primary species into mass-balance equations. The space discretization used is based on a flexible integral finite difference approach that uses irregular gridding to model bio-geologic structures. A general multi-region model for hydrological transport interacting with microbiological and geochemical processes is proposed. A 1-D reactive transport problem with kinetic biodegradation and sorption was used to test the enhanced simulator, which involves the processes that occur when a pulse of water containing NTA (nitrilotriacetate) and cobalt is injected into a column. The current simulation results agree very well with those obtained with other simulators. The applicability of this general multi-region model was validated by results from a published column experiment of denitrification and sulfate reduction. The matches with measured nitrate and sulfate concentrations were adjusted via the interfacial area between mobile hydrological and immobile biological regions. Results suggest that TOUGHREACT can not only be a useful interpretative tool for biogeochemical experiments, but can also produce insight into processes and parameters of microscopic diffusion and their interplay with biogeochemical reactions. The geometric- and process-based multi-region model may provide a framework for understanding field-scale hydrobiogeochemical heterogeneities and upscaling parameters.
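
A minimal sketch of a general rate expression with Monod and inhibition terms of the kind described above; the species names, half-saturation constants, and rate constant are assumed for illustration only:

```python
def reaction_rate(k, conc, monod_K, inhibition_K=None):
    """General rate law sketch:
    r = k * prod_i C_i/(K_i + C_i) * prod_j KI_j/(KI_j + C_j),
    with Monod terms for rate-limiting substrates and inhibition terms
    for suppressing species. Names and constants are illustrative."""
    r = k
    for species, K in monod_K.items():
        C = conc[species]
        r *= C / (K + C)                     # Monod (half-saturation) term
    for species, KI in (inhibition_K or {}).items():
        C = conc[species]
        r *= KI / (KI + C)                   # inhibition term
    return r

# Denitrification-style example: rate limited by acetate and nitrate,
# inhibited by dissolved oxygen (mol/L values invented).
conc = {"acetate": 1e-3, "nitrate": 5e-4, "O2": 1e-6}
r = reaction_rate(k=1e-6, conc=conc,
                  monod_K={"acetate": 1e-4, "nitrate": 1e-4},
                  inhibition_K={"O2": 1e-6})
```

Raising the dissolved-oxygen concentration shrinks the inhibition term and throttles the denitrification rate, which is the qualitative behavior such rate laws are built to capture.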

  17. Representation and Incorporation of Close Others' Responses: The RICOR Model of Social Influence.

    Science.gov (United States)

    Smith, Eliot R; Mackie, Diane M

    2015-08-03

    We propose a new model of social influence, which can occur spontaneously and in the absence of typically assumed motives. We assume that perceivers routinely construct representations of other people's experiences and responses (beliefs, attitudes, emotions, and behaviors), when observing others' responses or simulating the responses of unobserved others. Like representations made accessible by priming, these representations may then influence the process that generates perceivers' own responses, without intention or awareness, especially when there is a strong social connection to the other. We describe evidence for the basic properties and important moderators of this process, which distinguish it from other mechanisms such as informational, normative, or social identity influence. The model offers new perspectives on the role of others' values in producing cultural differences, the persistence and power of stereotypes, the adaptive reasons for being influenced by others' responses, and the impact of others' views about the self. © 2015 by the Society for Personality and Social Psychology, Inc.

  18. Incorporating imperfect detection into joint models of communities: A response to Warton et al.

    Science.gov (United States)

    Beissinger, Steven R.; Iknayan, Kelly J.; Guillera-Arroita, Gurutzeta; Zipkin, Elise; Dorazio, Robert; Royle, Andy; Kery, Marc

    2016-01-01

    Warton et al. [1] advance community ecology by describing a statistical framework that can jointly model abundances (or distributions) across many taxa to quantify how community properties respond to environmental variables. This framework specifies the effects of both measured and unmeasured (latent) variables on the abundance (or occurrence) of each species. Latent variables are random effects that capture the effects of both missing environmental predictors and correlations in parameter values among different species. As presented in Warton et al., however, the joint modeling framework fails to account for the common problem of detection or measurement errors that always accompany field sampling of abundance or occupancy, and are well known to obscure species- and community-level inferences.

  19. Incorporating Floating Surface Objects into a Fully Dispersive Surface Wave Model

    Science.gov (United States)

    2016-04-19

    Bateman, S.; Calantoni, Joseph; Kirby, James T. (NRL Code 7320, Stennis Space Center, MS; Center for Applied Coastal ...)

  20. Teaching For Art Criticism: Incorporating Feldman’s Critical Analysis Learning Model In Students’ Studio Practice

    OpenAIRE

    Maithreyi Subramaniam; Jaffri Hanafi; Abu Talib Putih

    2016-01-01

    This study adopted 30 first year graphic design students’ artwork, with critical analysis using Feldman’s model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean score and frequencies to determine students’ performances in their critical ability. Pearson Correlation Coefficient was used to find out the correlation between students’ studio practice and art critical ability scores. The...

  1. Incorporating social anxiety into a model of college student problematic drinking

    OpenAIRE

    Ham, Lindsay S.; Hope, Debra A.

    2005-01-01

    College problem drinking and social anxiety are significant public health concerns with highly negative consequences. College students are faced with a variety of novel social situations and situations encouraging alcohol consumption. The current study involved developing a path model of college problem drinking, including social anxiety, in 316 college students referred to an alcohol intervention due to a campus alcohol violation. Contrary to hypotheses, social anxiety generally had an inver...

  2. Incorporating driver distraction in car-following models: Applying the TCI to the IDM

    OpenAIRE

    Hoogendoorn, R.G.; van Arem, B.; Hoogendoorn, S.P.

    2013-01-01

    ITS can play a significant role in the improvement of traffic flow, traffic safety and greenhouse gas emissions. However, the implementation of Advanced Driver Assistance Systems may lead to adaptation effects in longitudinal driving behavior following driver distraction. It was however not yet clear how to model these adaptation effects in driving behavior mathematically and on which theoretical framework this should be grounded. To this end in this contribution we introduce a theoretical fr...

  3. Reliability constrained decision model for energy service provider incorporating demand response programs

    International Nuclear Information System (INIS)

    Mahboubi-Moghaddam, Esmaeil; Nayeripour, Majid; Aghaei, Jamshid

    2016-01-01

    Highlights: • The operation of Energy Service Providers (ESPs) in electricity markets is modeled. • Demand response as the cost-effective solution is used for energy service provider. • The market price uncertainty is modeled using the robust optimization technique. • The reliability of the distribution network is embedded into the framework. • The simulation results demonstrate the benefits of robust framework for ESPs. - Abstract: Demand response (DR) programs are becoming a critical concept for the efficiency of current electric power industries. Therefore, its various capabilities and barriers have to be investigated. In this paper, an effective decision model is presented for the strategic behavior of energy service providers (ESPs) to demonstrate how to participate in the day-ahead electricity market and how to allocate demand in the smart distribution network. Since market price affects DR and vice versa, a new two-step sequential framework is proposed, in which unit commitment problem (UC) is solved to forecast the expected locational marginal prices (LMPs), and successively DR program is applied to optimize the total cost of providing energy for the distribution network customers. This total cost includes the cost of purchased power from the market and distributed generation (DG) units, incentive cost paid to the customers, and compensation cost of power interruptions. To obtain compensation cost, the reliability evaluation of the distribution network is embedded into the framework using some innovative constraints. Furthermore, to consider the unexpected behaviors of the other market participants, the LMP prices are modeled as the uncertainty parameters using the robust optimization technique, which is more practical compared to the conventional stochastic approach. The simulation results demonstrate the significant benefits of the presented framework for the strategic performance of ESPs.
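
The two-step logic (forecast prices first, then allocate demand) can be caricatured in a few lines. This is a heavily simplified sketch, not the paper's robust optimization model: the LMP forecasts are taken as given (standing in for the unit commitment step), and demand response is triggered by a simple incentive threshold:

```python
def dispatch(demand, lmp, dg_cost, dg_cap, incentive):
    """Allocate hourly demand given forecast LMPs, local DG, and an
    incentive-based DR program (illustrative simplification of the
    ESP decision model; all parameters are invented)."""
    total_cost = 0.0
    schedule = []
    for d, p in zip(demand, lmp):
        supply_price = min(p, dg_cost)       # cheaper of market vs. DG
        if supply_price > incentive:         # curtailing via DR is cheapest
            total_cost += incentive * d
            schedule.append(("DR", d))
        else:
            dg = min(d, dg_cap) if dg_cost < p else 0.0
            total_cost += dg * dg_cost + (d - dg) * p
            schedule.append(("supply", d))
    return total_cost, schedule

# Two hours: a cheap hour served from the market, an expensive hour
# where paying the DR incentive beats both the market and DG.
cost, schedule = dispatch(demand=[10, 10], lmp=[50.0, 200.0],
                          dg_cost=80.0, dg_cap=5.0, incentive=60.0)
```

The full model replaces this threshold rule with an optimization over purchase, DG, incentive, and interruption-compensation costs, with the LMPs treated as uncertain.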

  4. A Compliant Bistable Mechanism Design Incorporating Elastica Buckling Beam Theory and Pseudo-Rigid-Body Model

    DEFF Research Database (Denmark)

    Sönmez, Ümit; Tutum, Cem Celal

    2008-01-01

    In this work, a new compliant bistable mechanism design is introduced. The combined use of pseudo-rigid-body model (PRBM) and the Elastica buckling theory is presented for the first time to analyze the new design. This mechanism consists of the large deflecting straight beams, buckling beams...... and the buckling Elastica solution for an original compliant mechanism kinematic analysis. New compliant mechanism designs are presented to highlight where such combined kinematic analysis is required....

  5. Terrestrial Feedbacks Incorporated in Global Vegetation Models through Observed Trait-Environment Responses

    Science.gov (United States)

    Bodegom, P. V.

    2015-12-01

    Most global vegetation models used to evaluate climate change impacts rely on plant functional types to describe vegetation responses to environmental stresses. In a traditional set-up, in which vegetation characteristics are considered constant within a vegetation type, the possibility to implement and infer feedback mechanisms is limited, as feedback mechanisms will likely involve a changing expression of community trait values. Based on community assembly concepts, we implemented functional trait-environment relationships into a global dynamic vegetation model to quantitatively assess this feature. For the current climate, a different global vegetation distribution was calculated with and without the inclusion of trait variation, emphasizing the importance of feedbacks, in interaction with competitive processes, for the prevailing global patterns. These trait-environment responses do not, however, necessarily imply adaptive responses of vegetation to changing conditions, and may locally lead to a faster turnover in vegetation upon climate change. Indeed, when running climate projections, simulations with trait variation did not yield a more stable or resilient vegetation than those without. Through the different feedback expressions, however, global and regional carbon and water fluxes were strongly altered. At a global scale, model projections suggest an increased productivity, and hence an increased carbon sink, in the coming decades when trait variation is included. However, by the end of the century, a reduced carbon sink is projected. This effect is due to a downregulation of photosynthesis rates, particularly in the tropical regions, even when accounting for CO2-fertilization effects. Altogether, the various global model simulations suggest the critical importance of including vegetation functional responses to changing environmental conditions to grasp terrestrial feedback mechanisms at global scales in the light of climate change.

  6. Numerical modelling of multiple scattering between two elastical particles

    DEFF Research Database (Denmark)

    Bjørnø, Irina; Jensen, Leif Bjørnø

    1998-01-01

    Multiple acoustical signal interactions with sediment particles in the vicinity of the seabed may significantly change the course of sediment concentration profiles determined by inversion from acoustical backscattering measurements. The scattering properties of high concentrations of sediments in suspension have been studied extensively since Foldy's formulation of his theory for isotropic scattering by randomly distributed scatterers. However, a number of important problems related to multiple scattering are still far from finding their solutions. A particular, but still unsolved, problem is the question of proximity thresholds for influence of multiple scattering in terms of particle properties like volume fraction, average distance between particles or other related parameters. A few available experimental data indicate a significance of multiple scattering in suspensions where the concentration...

  7. 231 Using Multiple Regression Analysis in Modelling the Role of ...

    African Journals Online (AJOL)


    of Internal Revenue, Tourism Bureau and hotel records. The multiple regression .... additional guest facilities such as restaurant, a swimming pool or child care and social function ... and provide good quality service to the public. Conclusion.

  8. Power Supply Interruption Costs: Models and Methods Incorporating Time Dependent Patterns

    International Nuclear Information System (INIS)

    Kjoelle, G.H.

    1996-12-01

    This doctoral thesis develops models and methods for estimation of annual interruption costs for delivery points, emphasizing the handling of time dependent patterns and uncertainties in the variables determining the annual costs. It presents an analytical method for calculation of annual expected interruption costs for delivery points in radial systems, based on a radial reliability model, with time dependent variables. And a similar method for meshed systems, based on a list of outage events, assuming that these events are found in advance from load flow and contingency analyses. A Monte Carlo simulation model is given which handles both time variations and stochastic variations in the input variables and is based on the same list of outage events. This general procedure for radial and meshed systems provides expectation values and probability distributions for interruption costs from delivery points. There is also a procedure for handling uncertainties in input variables by a fuzzy description, giving annual interruption costs as a fuzzy membership function. The methods are developed for practical applications in radial and meshed systems, based on available data from failure statistics, load registrations and customer surveys. Traditional reliability indices such as annual interruption time, power- and energy not supplied, are calculated as by-products. The methods are presented as algorithms and/or procedures which are available as prototypes. 97 refs., 114 figs., 62 tabs
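
The Monte Carlo procedure for annual interruption costs can be sketched as a compound Poisson simulation with a crude time-dependent specific cost. Event rates, loads, durations, and the day/night cost split below are invented, not the thesis's data:

```python
import math
import random

def sample_poisson(rng, lam):
    """Poisson sample via Knuth's multiplication method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def annual_interruption_cost(events, n_years=2000, seed=1):
    """Monte Carlo estimate of expected annual interruption cost for a
    delivery point. Each event: (occurrences/year, interrupted power kW,
    mean duration h, day cost, night cost in NOK/kWh); the day/night
    split is a stand-in for time-dependent cost patterns."""
    rng = random.Random(seed)
    mean = 0.0
    for _ in range(n_years):
        year_cost = 0.0
        for rate, p_kw, mean_h, c_day, c_night in events:
            for _ in range(sample_poisson(rng, rate)):
                dur_h = rng.expovariate(1.0 / mean_h)           # random duration
                c = c_day if rng.random() < 0.5 else c_night    # time pattern
                year_cost += p_kw * dur_h * c                   # kW * h * NOK/kWh
        mean += year_cost / n_years
    return mean

# One outage type, invented numbers; analytic expectation for comparison.
events = [(2.0, 500.0, 1.5, 8.0, 3.0)]
expected = 2.0 * 500.0 * 1.5 * (8.0 + 3.0) / 2   # = 8250 per year
estimate = annual_interruption_cost(events)
```

Collecting the per-year totals instead of only their mean yields the probability distribution of annual costs that the thesis emphasizes alongside the expectation.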

  9. Incorporating Neighborhood Choice in a Model of Neighborhood Effects on Income.

    Science.gov (United States)

    van Ham, Maarten; Boschman, Sanne; Vogel, Matt

    2018-05-09

    Studies of neighborhood effects often attempt to identify causal effects of neighborhood characteristics on individual outcomes, such as income, education, employment, and health. However, selection looms large in this line of research, and it has been argued that estimates of neighborhood effects are biased because people nonrandomly select into neighborhoods based on their preferences, income, and the availability of alternative housing. We propose a two-step framework to disentangle selection processes in the relationship between neighborhood deprivation and earnings. We model neighborhood selection using a conditional logit model, from which we derive correction terms. Driven by the recognition that most households prefer certain types of neighborhoods rather than specific areas, we employ a principle components analysis to reduce these terms into eight correction components. We use these to adjust parameter estimates from a model of subsequent neighborhood effects on individual income for the unequal probability that a household chooses to live in a particular type of neighborhood. We apply this technique to administrative data from the Netherlands. After we adjust for the differential sorting of households into certain types of neighborhoods, the effect of neighborhood income on individual income diminishes but remains significant. These results further emphasize that researchers need to be attuned to the role of selection bias when assessing the role of neighborhood effects on individual outcomes. Perhaps more importantly, the persistent effect of neighborhood deprivation on subsequent earnings suggests that neighborhood effects reflect more than the shared characteristics of neighborhood residents: place of residence partially determines economic well-being.

  10. Incorporating institutions and collective action into a sociohydrological model of flood resilience

    Science.gov (United States)

    Yu, David J.; Sangwan, Nikhil; Sung, Kyungmin; Chen, Xi; Merwade, Venkatesh

    2017-02-01

    Stylized sociohydrological models have mainly used social memory aspects such as community awareness or sensitivity to connect hydrologic change and social response. However, social memory alone does not satisfactorily capture the details of how human behavior is translated into collective action for water resources governance. Nor is it the only social mechanism by which the two-way feedbacks of sociohydrology can be operationalized. This study contributes toward bridging this gap by developing a sociohydrological model of flood resilience that includes two additional components: (1) institutions for collective action, and (2) connections to an external economic system. Motivated by the case of community-managed flood protection systems (polders) in coastal Bangladesh, we use the model to understand critical general features that affect long-term resilience of human-flood systems. Our findings suggest that occasional adversity can enhance long-term resilience. Allowing some hydrological variability to enter into the polder can increase its adaptive capacity for resilience through the preservation of the social norm for collective action. Further, there are potential trade-offs associated with optimization of flood resistance through structural measures. By reducing sensitivity to floods, the system may become more fragile under the double impact of floods and economic change.

  11. Incorporation of β-glucans in meat emulsions through an optimal mixture modeling systems.

    Science.gov (United States)

    Vasquez Mejia, Sandra M; de Francisco, Alicia; Manique Barreto, Pedro L; Damian, César; Zibetti, Andre Wüst; Mahecha, Hector Suárez; Bohrer, Benjamin M

    2018-05-22

    The effects of β-glucans (βG) in beef emulsions with carrageenan and starch were evaluated using an optimal mixture modeling system. The best mathematical models to describe the cooking loss, color, and textural profile analysis (TPA) were selected and optimized. The cubic models were better at describing the cooking loss, color, and TPA parameters, with the exception of springiness. Emulsions with greater levels of βG and starch had lower cooking loss (54 and <62), and greater hardness, cohesiveness, and springiness values. Subsequently, during the optimization phase, the use of carrageenan was eliminated. The optimized emulsion contained 3.13 ± 0.11% βG, which could cover the recommended daily intake of βG. However, the hardness of the optimized emulsion was greater (60,224 ± 1025 N) than expected. The optimized emulsion had a homogeneous structure and normal thermal behavior by DSC, and allowed for the manufacture of products with high amounts of βG and desired functional attributes. Copyright © 2018 Elsevier Ltd. All rights reserved.
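
Mixture models of this kind are typically Scheffé polynomials over component proportions that sum to one. The sketch below evaluates a special cubic mixture model on a coarse blend grid and picks the blend minimizing the predicted response; all coefficients are invented for illustration, not the paper's fitted values:

```python
def special_cubic(x, b_lin, b_quad, b_123):
    """Special cubic Scheffé mixture model for three components
    (here: βG, carrageenan, starch proportions summing to 1):
    y = Σ b_i x_i + Σ b_ij x_i x_j + b_123 x1 x2 x3."""
    x1, x2, x3 = x
    assert abs(x1 + x2 + x3 - 1.0) < 1e-9
    return (b_lin[0]*x1 + b_lin[1]*x2 + b_lin[2]*x3
            + b_quad[0]*x1*x2 + b_quad[1]*x1*x3 + b_quad[2]*x2*x3
            + b_123*x1*x2*x3)

# Hypothetical predicted cooking loss (%) over a coarse simplex grid of
# blends; pick the blend with the lowest prediction.
b_lin, b_quad, b_123 = (30.0, 25.0, 20.0), (-10.0, -40.0, -5.0), 8.0
grid = [(i / 10, j / 10, (10 - i - j) / 10)
        for i in range(11) for j in range(11 - i)]
best = min(grid, key=lambda x: special_cubic(x, b_lin, b_quad, b_123))
```

With these invented coefficients the predicted-loss minimum falls at a blend with no carrageenan, incidentally mirroring the abstract's elimination of carrageenan during the optimization phase.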

  12. Absorbed dose evaluation based on a computational voxel model incorporating distinct cerebral structures

    Energy Technology Data Exchange (ETDEWEB)

    Brandao, Samia de Freitas; Trindade, Bruno; Campos, Tarcisio P.R. [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil)]. E-mail: samiabrandao@gmail.com; bmtrindade@yahoo.com; campos@nuclear.ufmg.br

    2007-07-01

    Brain tumors are quite difficult to treat due to the collateral radiation damage produced in patients. Despite improvements in the therapeutic protocols for this kind of tumor, involving surgery and radiotherapy, the failure rate is still extremely high. This occurs because tumors often cannot be totally removed by surgery, since doing so may produce deficits in cerebral functions. Radiotherapy is applied after surgery, and both are palliative treatments. During radiotherapy the brain does not absorb the radiation dose in a homogeneous way, because of the varying densities and chemical compositions of the tissues involved. With the intention of better evaluating the harmful effects caused by radiotherapy, an elaborated cerebral voxel model was developed for use in computational simulation of the irradiation protocols of brain tumors. This paper presents some structures and functions of the central nervous system and a detailed cerebral voxel model, created in the SISCODES program, considering meninges, cortex, gray matter, white matter, corpus callosum, limbic system, ventricles, hypophysis, cerebellum, brain stem and spinal cord. The irradiation protocol simulation was run in the MCNP5 code. The model was irradiated with a photon beam whose spectrum simulates a linear accelerator of 6 MV. The dosimetric results were exported to SISCODES, which generated the isodose curves for the protocol. The percentage isodose curves in the brain are presented in this paper. (author)

  13. Power Supply Interruption Costs: Models and Methods Incorporating Time Dependent Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Kjoelle, G.H.

    1996-12-01

    This doctoral thesis develops models and methods for estimation of annual interruption costs for delivery points, emphasizing the handling of time dependent patterns and uncertainties in the variables determining the annual costs. It presents an analytical method for calculation of annual expected interruption costs for delivery points in radial systems, based on a radial reliability model, with time dependent variables. And a similar method for meshed systems, based on a list of outage events, assuming that these events are found in advance from load flow and contingency analyses. A Monte Carlo simulation model is given which handles both time variations and stochastic variations in the input variables and is based on the same list of outage events. This general procedure for radial and meshed systems provides expectation values and probability distributions for interruption costs from delivery points. There is also a procedure for handling uncertainties in input variables by a fuzzy description, giving annual interruption costs as a fuzzy membership function. The methods are developed for practical applications in radial and meshed systems, based on available data from failure statistics, load registrations and customer surveys. Traditional reliability indices such as annual interruption time, power- and energy not supplied, are calculated as by-products. The methods are presented as algorithms and/or procedures which are available as prototypes. 97 refs., 114 figs., 62 tabs.

  14. Modeling a Single SEP Event from Multiple Vantage Points Using the iPATH Model

    Science.gov (United States)

    Hu, Junxiang; Li, Gang; Fu, Shuai; Zank, Gary; Ao, Xianzhi

    2018-02-01

    Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitudes, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.
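
The stochastic differential equation method behind such Monte Carlo particle transport can be illustrated with a one-dimensional Euler-Maruyama random walk combining advection and diffusion. This is a toy stand-in for the iPATH model's 2D transport with cross-field diffusion; all parameters are invented:

```python
import math
import random

def transport_1d(n_particles, n_steps, dt, v, kappa, seed=0):
    """Euler-Maruyama integration of dx = v dt + sqrt(2*kappa*dt) dW:
    advection (e.g. with the solar wind) plus spatial diffusion."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    step = math.sqrt(2.0 * kappa * dt)
    for _ in range(n_steps):
        xs = [x + v * dt + step * rng.gauss(0.0, 1.0) for x in xs]
    return xs

# 2000 particles released at x = 0; after t = 1.0 the cloud should drift
# to ~v*t with variance ~2*kappa*t.
xs = transport_1d(n_particles=2000, n_steps=100, dt=0.01, v=1.0, kappa=0.5)
mean_x = sum(xs) / len(xs)
```

Binning particle positions over time at fixed observer locations is what turns such trajectories into the time intensity profiles reported for each vantage point.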

  15. The water balance of the urban Salt Lake Valley: a multiple-box model validated by observations

    Science.gov (United States)

    Stwertka, C.; Strong, C.

    2012-12-01

    A main focus of the recently awarded National Science Foundation (NSF) EPSCoR Track-1 research project "innovative Urban Transitions and Arid-region Hydro-sustainability (iUTAH)" is to quantify the primary components of the water balance for the Wasatch region, and to evaluate their sensitivity to climate change and projected urban development. Building on the multiple-box model that we developed and validated for carbon dioxide (Strong et al 2011), mass balance equations for water in the atmosphere and surface are incorporated into the modeling framework. The model is used to determine how surface fluxes, ground-water transport, biological fluxes, and meteorological processes regulate water cycling within and around the urban Salt Lake Valley. The model is used to evaluate the hypotheses that increased water demand associated with urban growth in Salt Lake Valley will (1) elevate sensitivity to projected climate variability and (2) motivate more attentive management of urban water use and evaporative fluxes.
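
A multiple-box model reduces to coupled mass-balance equations stepped in time. A minimal two-box sketch (an atmospheric moisture box exchanging with a surface store, with invented fluxes) shows the bookkeeping:

```python
def box_model_step(atm, surf, precip, evap, inflow, outflow, demand, dt):
    """One Euler step of a two-box water mass balance. All fluxes share
    the same volume-per-time units; names and numbers are illustrative,
    not the iUTAH model's."""
    atm_next = atm + (evap + inflow - precip - outflow) * dt   # advected vapor
    surf_next = surf + (precip - evap - demand) * dt           # surface store
    return atm_next, surf_next

# With advective inflow balancing outflow and no consumptive demand,
# total water in the two boxes is conserved.
atm, surf = box_model_step(10.0, 100.0, precip=2.0, evap=1.0,
                           inflow=3.0, outflow=3.0, demand=0.0, dt=1.0)
```

Sensitivity experiments then amount to perturbing the demand and evaporation terms and tracking how the stores respond over many steps.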

  16. Biotransformation model of neutral and weakly polar organic compounds in fish incorporating internal partitioning.

    Science.gov (United States)

    Kuo, Dave T F; Di Toro, Dominic M

    2013-08-01

    A model for whole-body in vivo biotransformation of neutral and weakly polar organic chemicals in fish is presented. It considers internal chemical partitioning and uses Abraham solvation parameters as reactivity descriptors. It assumes that only chemicals freely dissolved in the body fluid may bind with enzymes and subsequently undergo biotransformation reactions. Consequently, the whole-body biotransformation rate of a chemical is retarded by the extent of its distribution in different biological compartments. Using a randomly generated training set (n = 64), the biotransformation model is found to be: log(HL·φfish) = 2.2(±0.3)B - 2.1(±0.2)V - 0.6(±0.3) (root mean square error of prediction [RMSE] = 0.71), where HL is the whole-body biotransformation half-life in days, φfish is the freely dissolved fraction in body fluid, and B and V are the chemical's H-bond acceptance capacity and molecular volume. Abraham-type linear free energy equations were also developed for the lipid-water (Klipidw) and protein-water (Kprotw) partition coefficients needed to compute φfish from independent determinations. These were found to be (1) log Klipidw = 0.77E - 1.10S - 0.47A - 3.52B + 3.37V + 0.84 (in Lwat/kglipid; n = 248, RMSE = 0.57) and (2) log Kprotw = 0.74E - 0.37S - 0.13A - 1.37B + 1.06V - 0.88 (in Lwat/kgprot; n = 69, RMSE = 0.38), where E, S, and A quantify dispersive/polarization, dipolar, and H-bond-donating interactions, respectively. The biotransformation model performs well in the validation of HL (n = 424, RMSE = 0.71). The predicted rate constants do not exceed the transport limit imposed by circulatory flow. Furthermore, the model adequately captures the variation in biotransformation rate between chemicals with differing log octanol-water partition coefficient, B, and V, and exhibits a high degree of independence from the choice of training chemicals.
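    The fitted equations above can be chained to estimate a half-life: predict Klipidw and Kprotw from the Abraham descriptors, form the freely dissolved fraction φfish, then invert the HL·φfish regression. The 1/(1 + Σ f·K) closure for φfish, the tissue fractions, and the example descriptor values are illustrative assumptions, not the paper's exact computation:

```python
def log_Klipidw(E, S, A, B, V):
    # Lipid-water partition coefficient LFER from the abstract (log10, Lwat/kglipid).
    return 0.77*E - 1.10*S - 0.47*A - 3.52*B + 3.37*V + 0.84

def log_Kprotw(E, S, A, B, V):
    # Protein-water partition coefficient LFER from the abstract (log10, Lwat/kgprot).
    return 0.74*E - 0.37*S - 0.13*A - 1.37*B + 1.06*V - 0.88

def half_life_days(E, S, A, B, V, f_lipid=0.05, f_prot=0.15):
    # phi_fish: freely dissolved fraction in body fluid. The closure below and
    # the tissue fractions f_lipid/f_prot (kg per L of body fluid) are assumed
    # here for illustration only.
    phi_fish = 1.0 / (1.0 + f_lipid * 10**log_Klipidw(E, S, A, B, V)
                          + f_prot * 10**log_Kprotw(E, S, A, B, V))
    log_HL_phi = 2.2*B - 2.1*V - 0.6  # the abstract's biotransformation LFER
    return 10**log_HL_phi / phi_fish

# Hypothetical chemical descriptors: E=1.0, S=0.8, A=0.0, B=0.2, V=1.2
HL = half_life_days(1.0, 0.8, 0.0, 0.2, 1.2)
```

Note how a large Klipidw drives φfish down, which inflates HL — the "retardation by internal partitioning" mechanism the abstract describes.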

  17. Incorporating single-side sparing in models for predicting parotid dose sparing in head and neck IMRT

    International Nuclear Information System (INIS)

    Yuan, Lulin; Wu, Q. Jackie; Yin, Fang-Fang; Yoo, David; Jiang, Yuliang; Ge, Yaorong

    2014-01-01

    Purpose: Sparing of a single parotid gland is a common practice in head-and-neck (HN) intensity modulated radiation therapy (IMRT) planning. It is a special case of the dose-sparing tradeoff between different organs-at-risk. The authors describe an improved mathematical model for predicting achievable dose sparing in the parotid glands in HN IMRT planning that incorporates single-side sparing considerations based on patient anatomy and learning from prior plan data. Methods: Among 68 HN cases analyzed retrospectively, 35 cases had physician-prescribed single-side parotid sparing preferences. The single-side sparing model was trained with the cases that had single-side sparing preferences, while the standard model was trained with the remainder. A receiver operating characteristics (ROC) analysis was performed to determine the best criterion separating the two case groups, using the physician's single-side sparing prescription as ground truth. The final predictive model (combined model) takes single-side sparing into account by switching between the standard and single-side sparing models according to the single-side sparing criterion. The models were tested with 20 additional cases. The significance of the improvement in prediction accuracy of the combined model over the standard model was evaluated using the Wilcoxon rank-sum test. Results: Using the ROC analysis, the best single-side sparing criterion is (1) the predicted median dose of one parotid is higher than 24 Gy; and (2) that of the other is higher than 7 Gy. This criterion gives a true positive rate of 0.82 and a false positive rate of 0.19. For the bilateral sparing cases, the combined and standard models performed equally well, with the median prediction error for parotid median dose being 0.34 Gy for both models (p = 0.81). For the single-side sparing cases, the standard model overestimates the median dose by 7.8 Gy on average, while the predictions by the combined
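    The combined model's switching logic can be sketched directly from the stated criterion. The function and model names are hypothetical; only the 24 Gy / 7 Gy thresholds come from the abstract:

```python
def is_single_side_sparing(pred_median_doses_gy):
    """ROC-derived criterion from the abstract: the predicted median dose of
    one parotid is higher than 24 Gy and that of the other is higher than 7 Gy."""
    lo, hi = sorted(pred_median_doses_gy)
    return hi > 24.0 and lo > 7.0

def combined_model_predict(case, standard_model, single_side_model):
    # standard_model / single_side_model are hypothetical callables returning
    # (left, right) predicted parotid median doses in Gy for a case.
    preds = standard_model(case)
    if is_single_side_sparing(preds):
        return single_side_model(case)
    return preds
```

The standard model is always evaluated first, since the switching criterion is defined on its predicted doses.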

  18. A Refined Model for the Structure of Acireductone Dioxygenase from Klebsiella ATCC 8724 Incorporating Residual Dipolar Couplings

    Energy Technology Data Exchange (ETDEWEB)

    Pochapsky, Thomas C., E-mail: pochapsk@brandeis.edu; Pochapsky, Susan S.; Ju Tingting [Brandeis University, Department of Chemistry (United States); Hoefler, Chris [Brandeis University, Department of Biochemistry (United States); Liang Jue [Brandeis University, Department of Chemistry (United States)

    2006-02-15

    Acireductone dioxygenase (ARD) from Klebsiella ATCC 8724 is a metalloenzyme that is capable of catalyzing different reactions with the same substrates (acireductone and O₂) depending upon the metal bound in the active site. A model for the solution structure of the paramagnetic Ni²⁺-containing ARD has been refined using residual dipolar couplings (RDCs) measured in two media. Additional dihedral restraints based on chemical shift (TALOS) were included in the refinement, and backbone structure in the vicinity of the active site was modeled from a crystallographic structure of the mouse homolog of ARD. The incorporation of residual dipolar couplings into the structural refinement alters the relative orientations of several structural features significantly, and improves local secondary structure determination. Comparisons between the solution structures obtained with and without RDCs are made, and structural similarities and differences between the mouse and bacterial enzymes are described. Finally, the biological significance of these differences is considered.

  19. Incorporation of a high-roughness lower boundary into a mesoscale model for studies of dry deposition over complex terrain

    Science.gov (United States)

    Physick, W. L.; Garratt, J. R.

    1995-04-01

    For flow over natural surfaces, there exists a roughness sublayer within the atmospheric surface layer near the boundary. In this sublayer (typically 50 z₀ deep in unstable conditions), the Monin-Obukhov (M-O) flux-profile relations for homogeneous surfaces cannot be applied. We have incorporated a modified form of the M-O stability functions (Garratt, 1978, 1980, 1983) in a mesoscale model to take account of this roughness sublayer and examined the diurnal variation of the boundary-layer wind and temperature profiles with and without these modifications. We have also investigated the effect of the modified M-O functions on the aerodynamic and laminar-sublayer resistances associated with the transfer of trace gases to vegetation. Our results show that when an observation height or the lowest level in a model is within the roughness sublayer, neglect of the flux-profile modifications leads to an underestimate of resistances by at most 7%.
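    For reference, the standard (unmodified) M-O form of the aerodynamic resistance that the roughness-sublayer correction acts on looks like this. The numbers and the neutral-stability default are illustrative, and Garratt's modified ψ functions are not implemented here:

```python
import math

K_VON_KARMAN = 0.4

def aerodynamic_resistance(z, z0, u_star, psi_h=0.0, d=0.0):
    """Bulk aerodynamic resistance r_a (s/m) from Monin-Obukhov similarity:
        r_a = [ln((z - d)/z0) - psi_h] / (k * u*)
    psi_h is the integrated stability correction (0 for neutral); within the
    roughness sublayer (~50 z0) it would need Garratt's modified form."""
    return (math.log((z - d) / z0) - psi_h) / (K_VON_KARMAN * u_star)

# Neutral example: z = 10 m, z0 = 0.5 m (rough canopy), u* = 0.4 m/s
r_a_neutral = aerodynamic_resistance(z=10.0, z0=0.5, u_star=0.4)
```

Because r_a depends on ψ_h, any roughness-sublayer modification of the stability functions feeds directly into the deposition resistance, which is the sensitivity the abstract quantifies.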

  20. The Eatwell Guide: Modelling the Health Implications of Incorporating New Sugar and Fibre Guidelines.

    Directory of Open Access Journals (Sweden)

    Linda J Cobiac

    To model the population health impacts of dietary changes associated with the redevelopment of the UK food-based dietary guidelines (the 'Eatwell Guide'). Using multi-state lifetable methods, we modelled the impact of dietary changes on cardiovascular disease, diabetes and cancers over the lifetime of the current UK population. From this model, we determined the change in life expectancy and the disability-adjusted life years (DALYs) that could be averted. Changing the average diet to that recommended in the new Eatwell Guide, without increasing total energy intake, could increase average life expectancy by 5.4 months (95% uncertainty interval: 4.7 to 6.2) for men and 4.0 months (3.4 to 4.6) for women, and avert 17.9 million (17.6 to 18.2) DALYs over the lifetime of the current population. A large proportion of the health benefits come from prevention of type 2 diabetes, with 440,000 (400,000 to 480,000) new cases prevented in men and 340,000 (310,000 to 370,000) new cases prevented in women over the next ten years. Prevention of cardiovascular diseases and colorectal cancer is also large. However, if the diet recommended in the new Eatwell Guide is achieved with an accompanying increase in energy intake (and thus an increase in body mass index), around half of the potential improvements in population health will not be realised. The dietary changes required to meet the recommendations in the Eatwell Guide, which include eating more fruits and vegetables and less red and processed meats and dairy products, are large. However, the potential population health benefits are substantial.

  1. A porcine model of bladder outlet obstruction incorporating radio-telemetered cystometry.

    Science.gov (United States)

    Shaw, Matthew B; Herndon, Claude D; Cain, Mark P; Rink, Richard C; Kaefer, Martin

    2007-07-01

    To present a novel porcine model of bladder outlet obstruction (BOO) with a standardized bladder outlet resistance and real-time ambulatory radio-telemetered cystometry. BOO is a common condition with many causes in both adults and children, carrying significant morbidity and occasional mortality, but attempts to reproduce this condition in animal models have faced the fundamental problem of standardising the degree of outlet resistance. BOO was created in nine castrated male pigs by dividing the mid-urethra; outflow was allowed through an implanted bladder drainage catheter containing a resistance valve, allowing urine to flow across the valve only when a set pressure differential was generated across it. An implantable radio-telemetered pressure sensor monitored the pressure within the bladder and abdominal cavity, and relayed this information to a remote computer. Four control pigs had an occluded bladder drainage catheter and pressure sensor placed, but were allowed to void normally through the native urethra. Intra-vesical pressure was monitored by telemetry while the resistance valve setting was increased weekly, beginning at 2 cmH2O and ultimately reaching 10 cmH2O. The pigs were assessed using conventional cystometry under anaesthesia before death, and samples were preserved in formalin for haematoxylin and eosin staining. The pigs underwent radio-telemetered cystometry for a median of 26 days. All telemetry implants functioned well for the duration of the experiment, but one pig developed a urethral fistula and was excluded from the study. With BOO, the bladder mass index (bladder mass/body mass x 10,000) increased from 9.7 to 20 (P = 0.004), with a significant degree of hypertrophy of the detrusor smooth muscle bundles. Obstructed bladders were significantly less compliant than control bladders (8.3 vs 22.1 mL/cmH2O, P = 0.03).
    Telemetric cystometry showed no statistically significant difference in mean bladder pressure between obstructed and control pigs.
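    The two summary quantities reported above are simple ratios; a minimal sketch (function names and the example masses are ours, not the study's data):

```python
def bladder_mass_index(bladder_mass_g, body_mass_g):
    # As defined in the abstract: (bladder mass / body mass) x 10,000.
    return bladder_mass_g / body_mass_g * 10_000

def compliance_ml_per_cmH2O(delta_volume_ml, delta_pressure_cmH2O):
    # Cystometric compliance: volume change per unit pressure change.
    return delta_volume_ml / delta_pressure_cmH2O

# Illustrative numbers: a 60 g bladder in a 30 kg pig gives the obstructed
# index of 20 reported above.
bmi_obstructed = bladder_mass_index(60.0, 30_000.0)
```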

  2. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    Science.gov (United States)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  3. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    International Nuclear Information System (INIS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-01-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10 deg. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  4. An Additive-Multiplicative Cox-Aalen Regression Model

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Keywords: Aalen model; additive risk model; counting processes; Cox regression; survival analysis; time-varying effects.

  5. Incorporating prior belief in the general path model: A comparison of information sources

    International Nuclear Information System (INIS)

    Coble, Jamie; Hines, Wesley

    2014-01-01

    The general path model (GPM) is one approach for performing degradation-based, or Type III, prognostics. The GPM fits a parametric function to the collected observations of a prognostic parameter and extrapolates the fit to a failure threshold. This approach has been successfully applied to a variety of systems when a sufficient number of prognostic parameter observations are available. However, the parametric fit can suffer significantly when few data are available or the data are very noisy. In these instances, it is beneficial to include additional information to influence the fit to conform to a prior belief about the evolution of system degradation. Bayesian statistical approaches have been proposed to include prior information in the form of distributions of expected model parameters. This requires a number of run-to-failure cases with tracked prognostic parameters; these data may not be readily available for many systems. Reliability information and stressor-based (Type I and Type II, respectively) prognostic estimates can provide the necessary prior belief for the GPM. This article presents the Bayesian updating framework to include prior information in the GPM and compares the efficacy of including different information sources on two data sets.
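    As a sketch of the idea, here is a linear general path model where a Gaussian prior on the path coefficients — centred on values that could come from Type I/II estimates — shrinks the least-squares fit. The linear path form, the prior-precision parameter lam, and all numbers are illustrative assumptions, not the article's formulation:

```python
def gpm_map_fit(ts, ys, a0, b0, lam):
    """MAP estimate of (a, b) in the degradation path y(t) = a + b*t under a
    Gaussian prior centred on (a0, b0); lam is the prior precision relative
    to the observation noise (lam = 0 recovers ordinary least squares)."""
    n = len(ts)
    Sx = sum(ts); Sxx = sum(t * t for t in ts)
    Sy = sum(ys); Sxy = sum(t * y for t, y in zip(ts, ys))
    # Normal equations, ridge-shrunk toward the prior mean:
    A11, A12, A22 = n + lam, Sx, Sxx + lam
    r1, r2 = Sy + lam * a0, Sxy + lam * b0
    det = A11 * A22 - A12 * A12
    a = (r1 * A22 - r2 * A12) / det
    b = (A11 * r2 - A12 * r1) / det
    return a, b

def remaining_useful_life(a, b, threshold, t_now):
    # Extrapolate the fitted degradation path to the failure threshold.
    return (threshold - a) / b - t_now
```

With few or noisy observations a large lam keeps the fit near the prior belief; as run-to-failure data accumulate, lam can be reduced so the data dominate — the tradeoff the article studies across information sources.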

  6. Study of an intraurban travel demand model incorporating commuter preference variables

    Science.gov (United States)

    Holligan, P. E.; Coote, M. A.; Rushmer, C. R.; Fanning, M. L.

    1971-01-01

    The model is based on the substantial travel data base for the nine-county San Francisco Bay Area provided by the Metropolitan Transportation Commission. The model is of the abstract type, and makes use of commuter attitudes towards modes and simple demographic characteristics of zones in a region to predict interzonal travel by mode for the region. A characterization of the STOL/VTOL mode was extrapolated by means of a subjective comparison of its expected characteristics with those of the modes characterized by the survey. Predictions of STOL demand were made for the Bay Area and an aircraft network was developed to serve this demand. When this aircraft system is compared to the base-case system, the demand for STOL service increases fivefold and the resulting economics show considerable benefit from the increased scale of operations. In the previous study, all systems required subsidy in varying amounts. The new system shows a substantial profit at an average fare of $3.55 per trip.

  7. An expanded Notch-Delta model exhibiting long-range patterning and incorporating MicroRNA regulation.

    Directory of Open Access Journals (Sweden)

    Jerry S Chen

    2014-06-01

    Notch-Delta signaling is a fundamental cell-cell communication mechanism that governs the differentiation of many cell types. Most existing mathematical models of Notch-Delta signaling are based on a feedback loop between Notch and Delta leading to lateral inhibition of neighboring cells. These models result in a checkerboard spatial pattern whereby adjacent cells express opposing levels of Notch and Delta, leading to alternate cell fates. However, a growing body of biological evidence suggests that Notch-Delta signaling produces other patterns that are not checkerboard, and therefore a new model is needed. Here, we present an expanded Notch-Delta model that builds upon previous models, adding a local Notch activity gradient, which affects long-range patterning, and the activity of a regulatory microRNA. This model is motivated by our experiments in the ascidian Ciona intestinalis showing that the peripheral sensory neurons, whose specification is in part regulated by the coordinate activity of Notch-Delta signaling and the microRNA miR-124, exhibit a sparse spatial pattern whereby consecutive neurons may be spaced over a dozen cells apart. We perform rigorous stability and bifurcation analyses, and demonstrate that our model is able to accurately explain and reproduce the neuronal pattern in Ciona. Using Monte Carlo simulations of our model along with miR-124 transgene over-expression assays, we demonstrate that the activity of miR-124 can be incorporated into the Notch decay rate parameter of our model. Finally, we motivate the general applicability of our model to Notch-Delta signaling in other animals by providing evidence that microRNAs regulate Notch-Delta signaling in analogous cell types in other organisms, and by discussing evidence in other organisms of sparse spatial patterns in tissues where Notch-Delta signaling is active.
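    A two-cell lateral-inhibition toy model illustrates the abstract's key modelling move: folding miR-124 activity into the Notch decay rate (here mu_n). The Hill forms, parameter values, and the two-cell reduction are illustrative assumptions in the spirit of standard Notch-Delta models, not the paper's equations:

```python
def hill_up(x, k=0.5, n=4):
    # Activation: the neighbor's Delta upregulates a cell's Notch activity.
    return x**n / (k**n + x**n)

def hill_down(x, k=0.5, n=4):
    # Repression: a cell's own Notch activity downregulates its Delta.
    return k**n / (k**n + x**n)

def simulate(mu_n=1.0, steps=20_000, dt=0.01):
    """Euler-integrate two coupled cells; mu_n is the Notch decay rate,
    which the abstract suggests is where miR-124 activity enters."""
    N = [0.9, 0.1]  # Notch activity in cells 0 and 1
    D = [0.1, 0.9]  # Delta level in cells 0 and 1
    for _ in range(steps):
        dN = [hill_up(D[1 - i]) - mu_n * N[i] for i in range(2)]
        dD = [hill_down(N[i]) - D[i] for i in range(2)]
        N = [N[i] + dt * dN[i] for i in range(2)]
        D = [D[i] + dt * dD[i] for i in range(2)]
    return N, D

# With these parameters the cells resolve into the classic alternating
# lateral-inhibition state: one Notch-high/Delta-low, one Notch-low/Delta-high.
N, D = simulate()
```

Varying mu_n in this sketch changes both the Notch steady-state level and the stability of the patterned state, which is the kind of behaviour the paper's bifurcation analysis examines in its full long-range model.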

  8. The effect of intra-abdominal hypertension incorporating severe acute pancreatitis in a porcine model.

    Directory of Open Access Journals (Sweden)

    Lu Ke

    INTRODUCTION: Abdominal compartment syndrome (ACS) and intra-abdominal hypertension (IAH) are common clinical findings in patients with severe acute pancreatitis (SAP). An increased intra-abdominal pressure (IAP) is thought to be associated with poor prognosis in SAP patients, but the detailed effects of IAH/ACS on different organ systems are not clear. The aim of this study was to assess the effect of SAP combined with IAH on hemodynamics, systemic oxygenation, and organ damage in a 12-h porcine model. MEASUREMENTS AND METHODS: Following baseline registrations, a total of 30 animals were divided into 5 groups (6 animals in each group): SAP+IAP30 group, SAP+IAP20 group, SAP group, IAP30 group (sham-operated but without SAP) and sham-operated group. We used an N₂ pneumoperitoneum to induce different levels of IAH and retrograde intra-ductal infusion of sodium taurocholate to induce SAP. The investigation period was 12 h. Hemodynamic parameters (CO, HR, MAP, CVP), urine output, oxygenation parameters (e.g., SvO₂, PO₂, PaCO₂, peak inspiratory pressure), as well as serum parameters (e.g., ALT, amylase, lactate, creatinine) were recorded. Histological examination of liver, intestine, pancreas, and lung was performed. MAIN RESULTS: Cardiac output significantly decreased in the SAP+IAH animals compared with the other groups. Furthermore, AST, creatinine, SUN and lactate showed a similar increasing tendency, paralleled by a profound decrease in SvO₂. The histopathological analyses also revealed higher-grade injury of liver, intestine, pancreas and lung in the SAP+IAH groups. However, few differences were found between the two SAP+IAH groups with different levels of IAP. CONCLUSIONS: Our newly developed porcine SAP+IAH model demonstrated remarkable effects on global hemodynamics, oxygenation and organ function in response to sustained IAH of 12 h combined with SAP. Moreover, our model should be helpful to study the mechanisms of IAH

  9. Teaching For Art Criticism: Incorporating Feldman’s Critical Analysis Learning Model In Students’ Studio Practice

    Directory of Open Access Journals (Sweden)

    Maithreyi Subramaniam

    2016-01-01

    This study analyzed 30 first-year graphic design students' artwork, applying critical analysis using Feldman's model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean scores and frequencies to determine students' performance in critical ability. The Pearson correlation coefficient was used to find the correlation between students' studio practice and art critical ability scores. The findings showed most students performed slightly better than average in the critical analyses and performed best in analysis among the four dimensions assessed. In the context of the students' studio practice and critical ability, the findings showed some connections between students' art critical ability and studio practice.

  10. Etoposide Incorporated into Camel Milk Phospholipids Liposomes Shows Increased Activity against Fibrosarcoma in a Mouse Model

    Directory of Open Access Journals (Sweden)

    Hamzah M. Maswadeh

    2015-01-01

    Phospholipids were isolated from camel milk and identified by using high performance liquid chromatography and gas chromatography-mass spectrometry (GC/MS). The anticancer drug etoposide (ETP) was entrapped in liposomes prepared from camel milk phospholipids to determine its activity against fibrosarcoma in a murine model. Fibrosarcoma was induced in mice by injecting benzopyrene (BAP), and tumor-bearing mice were treated with various formulations of etoposide, including etoposide entrapped in camel milk phospholipid liposomes (ETP-Cam-liposomes) and etoposide-loaded DPPC liposomes (ETP-DPPC-liposomes). The tumor-bearing mice treated with ETP-Cam-liposomes showed slow progression of tumors and increased survival compared to free ETP or ETP-DPPC-liposomes. These results suggest that ETP-Cam-liposomes may prove to be a better drug delivery system for anticancer drugs.

  11. Atomistic modelling study of lanthanide incorporation in the crystal lattice of an apatite

    International Nuclear Information System (INIS)

    Louis-Achille, V.

    1999-01-01

    Studies of natural and synthetic apatites make it possible to propose such crystals as matrices for nuclear waste storage. Neodymium-substituted britholite, Ca₉Nd(PO₄)₅(SiO₄)F₂, is a model for trivalent actinide storage. Neodymium can be substituted in two types of sites. The aim of this thesis is to compare the chemical nature of these two sites, first in fluoro-apatite Ca₉(PO₄)₆F₂ and then in britholite, using ab initio atomistic modeling. Two approaches are used: one considers the infinite crystal and the second considers clusters. The calculations of the electronic structure for both were performed using Kohn-Sham density functional theory in the local approximation. For solids, pseudopotentials were used and the wave functions were expanded in plane waves. For clusters, a frozen-core approximation was used and the wave functions were expanded in a linear combination of Slater-type atomic orbitals. The pseudopotential is semi-relativistic for neodymium, and the Hamiltonian is scalar relativistic for the clusters. The validation of the solid approach was performed using two test cases: YPO₄ and ScPO₄. Two numerical tools were developed to compute electronic deformation density maps and to calculate partial densities of states. A full optimisation of the lattice parameters with relaxation of the atomic coordinates leads to correct structural and thermodynamic properties for the fluoro-apatite, compared with experiment. The electronic deformation density maps do not show any significant differences between the two calcium sites, but Mulliken analysis on the solid and on the clusters points out the more ionic behavior of the calcium in site 2. A neodymium-substituted britholite is then studied. The neodymium location only induces local modifications in the crystalline structure and few changes in the formation enthalpy. The electronic study points out an increase in the covalent character of the bonding involving neodymium compared with that of calcium.

  12. An evaluation of a paediatric radiation oncology teaching programme incorporating a SCORPIO teaching model

    International Nuclear Information System (INIS)

    Ahern, Verity

    2011-01-01

    Many radiation oncology registrars have no exposure to paediatrics during their training. To address this, the Paediatric Special Interest Group of the Royal Australian and New Zealand College of Radiologists has convened a biennial teaching course since 1997. The 2009 course incorporated the use of a Structured, Clinical, Objective-Referenced, Problem-orientated, Integrated and Organized (SCORPIO) teaching model for small group tutorials. This study evaluates whether the paediatric radiation oncology curriculum can be adapted to the SCORPIO teaching model and evaluates the revised course from the registrars' perspective. Methods: Teaching and learning resources included a pre-course reading list, a lecture series programme and a SCORPIO workshop. Three evaluation instruments were developed: an overall Course Evaluation Survey for all participants, a SCORPIO Workshop Survey for registrars and a Teacher's SCORPIO Workshop Survey. Results: Forty-five radiation oncology registrars, 14 radiation therapists and five paediatric oncology registrars attended. Seventy-three per cent (47/64) of all participants completed the Course Evaluation Survey and 95% (38/40) of registrars completed the SCORPIO Workshop Survey. All teachers completed the Teacher's SCORPIO Survey (10/10). The overall educational experience was rated as good or excellent by 93% (43/47) of respondents. Ratings of satisfaction with the lecture sessions were predominantly good or excellent. Registrars gave the SCORPIO workshop high ratings on each of 10 aspects of quality, with 82% allocating an excellent rating overall for the SCORPIO activity. Both registrars and teachers recommended more time for the SCORPIO stations. Conclusions: The 2009 course met the educational needs of the radiation oncology registrars and the SCORPIO workshop was a highly valued educational component.

  13. Lyssavirus infection: 'low dose, multiple exposure' in the mouse model.

    Science.gov (United States)

    Banyard, Ashley C; Healy, Derek M; Brookes, Sharon M; Voller, Katja; Hicks, Daniel J; Núñez, Alejandro; Fooks, Anthony R

    2014-03-06

    The European bat lyssaviruses (EBLV-1 and EBLV-2) are zoonotic pathogens present within bat populations across Europe. The maintenance and transmission of lyssaviruses within bat colonies are poorly understood. Cases of repeated isolation of lyssaviruses from bat roosts have raised questions regarding the maintenance and intraspecies transmissibility of these viruses within colonies. Furthermore, the significance of seropositive bats in colonies remains unclear. Due to the protected nature of European bat species, and hence restrictions on working with the natural host for lyssaviruses, this study analysed the outcome of repeated inoculation of low doses of lyssaviruses in a murine model. A standardized dose of virus, EBLV-1, EBLV-2 or a 'street strain' of rabies (RABV), was administered via a peripheral route to mimic what is hypothesized to be natural infection. Each mouse (n=10/virus/group/dilution) received four inoculations, two doses in each footpad over a period of four months, alternating footpads with each inoculation. Mice were tail-bled between inoculations to evaluate antibody responses to infection. Mice succumbed to infection after each inoculation, with 26.6% of mice developing clinical disease following the initial exposure across all dilutions (RABV, 32.5% (n=13/40); EBLV-1, 35% (n=13/40); EBLV-2, 12.5% (n=5/40)). Interestingly, the lowest dose caused clinical disease in some mice upon first exposure (RABV, 20% (n=2/10) after the first inoculation; RABV, 12.5% (n=1/8) after the second inoculation; EBLV-2, 10% (n=1/10) after the primary inoculation). Furthermore, five mice developed clinical disease following the second exposure to live virus (RABV, n=1; EBLV-1, n=1; EBLV-2, n=3), although histopathological examination indicated that the primary inoculation was the most probable cause of death, based on the levels of inflammation and virus antigen distribution observed.
All the remaining mice (RABV, n=26; EBLV-1, n=26; EBLV-2, n=29) survived the tertiary and

  14. Incorporating 3D-printing technology in the design of head-caps and electrode drives for recording neurons in multiple brain regions.

    Science.gov (United States)

    Headley, Drew B; DeLucca, Michael V; Haufler, Darrell; Paré, Denis

    2015-04-01

    Recent advances in recording and computing hardware have enabled laboratories to record the electrical activity of multiple brain regions simultaneously. Lagging behind these technical advances, however, are the methods needed to rapidly produce microdrives and head-caps that can flexibly accommodate different recording configurations. Indeed, most available designs target single or adjacent brain regions, and, if multiple sites are targeted, specially constructed head-caps are used. Here, we present a novel design style, for both microdrives and head-caps, which takes advantage of three-dimensional printing technology. This design facilitates targeting of multiple brain regions in various configurations. Moreover, the parts are easily fabricated in large quantities, with only minor hand-tooling and finishing required. Copyright © 2015 the American Physiological Society.

  15. Incorporation of cooling-induced crystallisation into a 2-dimensional axisymmetric conduit heat flow model

    Science.gov (United States)

    Heptinstall, D. A.; Neuberg, J. W.; Bouvet de Maisonneuve, C.; Collinson, A.; Taisne, B.; Morgan, D. J.

    2015-12-01

    Heat flow models can bring new insights into the thermal and rheological evolution of volcanic systems. We investigate the thermal processes and timescales in a crystallizing, static magma column, with a heat flow model of Soufriere Hills Volcano (SHV), Montserrat. The latent heat of crystallization is initially computed with MELTS, as a function of pressure and temperature for an andesitic melt (SHV groundmass starting composition). Three fractional crystallization simulations are performed; two with initial pressures of 34MPa (runs 1 & 2) and one of 25MPa (run 3). Decompression rate was varied between 0.1MPa/°C (runs 1 & 3) and 0.2MPa/°C (run 2). Natural and experimental matrix glass compositions are accurately reproduced by all MELTS runs. The cumulative latent heat released for runs 1, 2 and 3 differs by less than 9% (8.69e5 J/kg*K, 9.32e5 J/kg*K, and 9.49e5 J/kg*K respectively). The 2D axisymmetric conductive cooling simulations consider a 30m-diameter conduit that extends from the surface to a depth of 1500m (34MPa). The temporal evolution of temperature is closely tracked at depths of 10m, 750m and 1400m in the center of the conduit, at the conduit walls, and 20m from the walls into the host rock. Following initial cooling by 7-15°C at 10m depth inside the conduit, the magma temperature rebounds through latent heat release by 32-35°C over 85-123 days to a maximum temperature of 1002-1005°C. At 10m depth, it takes 4.1-9.2 years for the magma column to cool by 108-130°C and crystallize to 75wt%, at which point it cannot be easily remobilized. It takes 11-31.5 years to reach the same crystallinity at 750-1400m depth. We find a wide range in cooling timescales, particularly at depths of 750m or greater, attributed to the initial run pressure and the dominant latent-heat-producing crystallizing phases (quartz), where run 1 cools fastest and run 3 cools slowest. Surface cooling by comparison has the strongest influence on the upper tens of meters in all
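
    The latent-heat rebound described in this abstract (cooling stalls, or even reverses, while crystallization releases heat) can be sketched by folding the latent heat into an effective heat capacity in an explicit conduction step. This is a 1D illustrative toy, not the authors' 2D axisymmetric model; the liquidus, solidus, and material constants below are assumed round numbers of roughly the right order, not SHV values.

    ```python
    # 1D explicit conduction across a conduit radius with latent-heat buffering.
    # All parameters are illustrative assumptions, not the SHV model values.

    L_HEAT = 9e5     # latent heat of crystallization, J/kg (order of the MELTS runs)
    CP = 1200.0      # specific heat, J/(kg*K)
    RHO = 2400.0     # density, kg/m^3
    K = 1.5          # thermal conductivity, W/(m*K)
    T_LIQ, T_SOL = 1010.0, 850.0   # assumed liquidus/solidus, degC

    def step(T, dx, dt):
        """One explicit diffusion step. Between solidus and liquidus, latent heat
        enters as an effective heat capacity c_eff = cp + L*|dX/dT| (linear
        crystallinity ramp assumed), which slows cooling while crystallizing."""
        dXdT = 1.0 / (T_LIQ - T_SOL)   # |dX/dT| on the linear ramp
        new = T[:]                      # boundary cells held fixed
        for i in range(1, len(T) - 1):
            lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
            c_eff = CP + (L_HEAT * dXdT if T_SOL < T[i] < T_LIQ else 0.0)
            new[i] = T[i] + dt * K / (RHO * c_eff) * lap
        return new

    # Hot magma column (1000 degC) against cold host rock (300 degC).
    T = [1000.0] * 15 + [300.0] * 15
    for _ in range(5000):
        T = step(T, dx=1.0, dt=200.0)
    ```

    The design choice is that crystallizing cells carry a much larger `c_eff`, so the thermal front advances more slowly through the mushy zone, which is the 1D analogue of the multi-year crystallization timescales reported above.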

  16. Incorporation of cooling-induced crystallization into a 2-dimensional axisymmetric conduit heat flow model

    Science.gov (United States)

    Heptinstall, David; Bouvet de Maisonneuve, Caroline; Neuberg, Jurgen; Taisne, Benoit; Collinson, Amy

    2016-04-01

    Heat flow models can bring new insights into the thermal and rheological evolution of volcanic systems. We investigate the thermal processes and timescales in a crystallizing, static magma column, with a heat flow model of Soufriere Hills Volcano (SHV), Montserrat. The latent heat of crystallization is initially computed with MELTS, as a function of pressure and temperature for an andesitic melt (SHV groundmass starting composition). Three fractional crystallization simulations are performed; two with initial pressures of 34MPa (runs 1 & 2) and one of 25MPa (run 3). Decompression rate was varied between 0.1MPa/°C (runs 1 & 3) and 0.2MPa/°C (run 2). Natural and experimental matrix glass compositions are accurately reproduced by all MELTS runs. The cumulative latent heat released for runs 1, 2 and 3 differs by less than 9% (8.69E5 J/kg*K, 9.32E5 J/kg*K, and 9.49E5 J/kg*K respectively). The 2D axisymmetric conductive cooling simulations consider a 30m-diameter conduit that extends from the surface to a depth of 1500m (34MPa). The temporal evolution of temperature is closely tracked at depths of 10m, 750m and 1400m in the centre of the conduit, at the conduit walls, and 20m from the walls into the host rock. Following initial cooling by 7-15°C at 10m depth inside the conduit, the magma temperature rebounds through latent heat release by 32-35°C over 85-123 days to a maximum temperature of 1002-1005°C. At 10m depth, it takes 4.1-9.2 years for the magma column to cool by 108-131°C and crystallize to 75wt%, at which point it cannot be easily remobilized. It takes 11-31.5 years to reach the same crystallinity at 750-1400m depth. We find a wide range in cooling timescales, particularly at depths of 750m or greater, attributed to the initial run pressure and the dominant latent-heat-producing crystallizing phase, Albite-rich Plagioclase Feldspar. Run 1 is shown to cool fastest and run 3 the slowest, with surface emissivity having the strongest cooling

  17. Incorporating the CALPHAD sublattice approach of ordering into the phase-field model with finite interface dissipation

    International Nuclear Information System (INIS)

    Zhang, Lijun; Stratmann, Matthias; Du, Yong; Sundman, Bo; Steinbach, Ingo

    2015-01-01

    A new approach to incorporate the sublattice models in the CALPHAD (CALculation of PHAse Diagram) formalism directly into the phase-field formalism is developed. In binary alloys, the sublattice models can be classified into two types (i.e., “Type I” and “Type II”), depending on whether a direct one-to-one relation between the element site fraction in the CALPHAD database and the phase concentration in the phase-field model exists (Type I), or not (Type II). For “Type II” sublattice models, the specific site fractions, corresponding to a given mole fraction, have to be established via internal relaxation between different sublattices. Internal minimization of sublattice occupancy and solute evolution during microstructure transformation leads, in general, to a solution superior to the separate solution of the individual problems. The present coupling technique is validated for Fe–C and Ni–Al alloys. Finally, the model is extended into multicomponent alloys and applied to simulate the nucleation process of VC monocarbide from austenite matrix in a steel containing vanadium

  18. Incorporation of expert variability into breast cancer treatment recommendation in designing clinical protocol guided fuzzy rule system models.

    Science.gov (United States)

    Garibaldi, Jonathan M; Zhou, Shang-Ming; Wang, Xiao-Ying; John, Robert I; Ellis, Ian O

    2012-06-01

    It has often been demonstrated that clinicians exhibit both inter-expert and intra-expert variability when making difficult decisions. In contrast, the vast majority of computerized models that aim to provide automated support for such decisions do not explicitly recognize or replicate this variability. Furthermore, the perfect consistency of computerized models is often presented as a de facto benefit. In this paper, we describe a novel approach to incorporate variability within a fuzzy inference system using non-stationary fuzzy sets in order to replicate human variability. We apply our approach to a decision problem concerning the recommendation of post-operative breast cancer treatment; specifically, whether or not to administer chemotherapy based on assessment of five clinical variables: NPI (the Nottingham Prognostic Index), estrogen receptor status, vascular invasion, age and lymph node status. In doing so, we explore whether such explicit modeling of variability provides any performance advantage over a more conventional fuzzy approach, when tested on a set of 1310 unselected cases collected over a fourteen-year period at the Nottingham University Hospitals NHS Trust, UK. The experimental results show that the standard fuzzy inference system (which does not model variability) achieves overall agreement with clinical practice of around 84.6% (95% CI: 84.1-84.9%), while the non-stationary fuzzy model significantly increases performance to around 88.1% (95% CI: 88.0-88.2%), suggesting that explicit modeling of expert variability can benefit fuzzy systems in any application domain. Copyright © 2012 Elsevier Inc. All rights reserved.
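
    The core idea of a non-stationary fuzzy set, as described in this abstract, can be sketched as a membership function whose parameters are perturbed on each evaluation, so that repeated queries of the same input vary the way repeated consultations of the same clinician might. This is a minimal sketch, not the paper's rule base; the class name, the Gaussian membership shape, and the NPI numbers are illustrative assumptions.

    ```python
    # Sketch of a non-stationary fuzzy set: a Gaussian membership function whose
    # centre is perturbed on each evaluation, mimicking intra-expert variability.
    import math
    import random

    def gaussian_mf(x, centre, sigma):
        """Standard Gaussian membership function in [0, 1]."""
        return math.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

    class NonStationaryFuzzySet:
        def __init__(self, centre, sigma, variation, seed=0):
            self.centre, self.sigma, self.variation = centre, sigma, variation
            self.rng = random.Random(seed)   # seeded for reproducibility

        def membership(self, x):
            # Each call instantiates a slightly different (stationary) fuzzy set.
            c = self.centre + self.rng.gauss(0.0, self.variation)
            return gaussian_mf(x, c, self.sigma)

    # Hypothetical "high NPI" set; centre/sigma/variation are invented values.
    high_npi = NonStationaryFuzzySet(centre=5.4, sigma=0.8, variation=0.2)
    # Repeated evaluation of the same NPI value yields varying membership grades.
    grades = [high_npi.membership(4.8) for _ in range(5)]
    ```

    In a full inference system, the same perturbation would be applied across all rule antecedents per "consultation", yielding a distribution of recommendations rather than a single crisp output.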

  19. Computational cardiology: the bidomain based modified Hill model incorporating viscous effects for cardiac defibrillation

    Science.gov (United States)

    Cansız, Barış; Dal, Hüsnü; Kaliske, Michael

    2017-10-01

    Working mechanisms of cardiac defibrillation are still in debate due to limited experimental facilities, and one-third of patients do not even respond to cardiac resynchronization therapy. With the aim of developing a milestone towards revealing the mechanisms of the defibrillation phenomenon, we propose a bidomain-based finite element formulation of cardiac electromechanics that takes into account the viscous effects disregarded by many researchers. To do so, the material is treated as an electro-visco-active material and described by the modified Hill model (Cansız et al. in Comput Methods Appl Mech Eng 315:434-466, 2017). On the numerical side, we utilize a staggered solution method, where the elliptic and parabolic parts of the bidomain equations and the mechanical field are solved sequentially. The comparative simulations demonstrate that the viscoelastic and elastic formulations lead to remarkably different outcomes upon an externally applied electric field to the myocardial tissue. In addition, the proposed framework requires significantly less computational time and memory compared to monolithic schemes, without loss of stability for the presented examples.

  20. Incorporating Vibration Test Results for the Advanced Stirling Convertor into the System Dynamic Model

    Science.gov (United States)

    Meer, David W.; Lewandowski, Edward J.

    2010-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin Corporation (LM), and NASA Glenn Research Center (GRC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. As part of the extended operation testing of this power system, the Advanced Stirling Convertors (ASC) at NASA GRC undergo a vibration test sequence intended to simulate the vibration history that an ASC would experience when used in an ASRG for a space mission. During these tests, a data system collects several performance-related parameters from the convertor under test for health monitoring and analysis. Recently, an additional sensor recorded the slip table position during vibration testing to qualification level. The System Dynamic Model (SDM) integrates Stirling cycle thermodynamics, heat flow, mechanical mass, spring, damper systems, and electrical characteristics of the linear alternator and controller. This paper presents a comparison of the performance of the ASC when exposed to vibration with that predicted by the SDM under the same vibration.

  1. Incorporating the gut microbiota into models of human and non-human primate ecology and evolution.

    Science.gov (United States)

    Amato, Katherine R

    2016-01-01

    The mammalian gut is home to a diverse community of microbes. Advances in technology over the past two decades have allowed us to examine this community, the gut microbiota, in more detail, revealing a wide range of influences on host nutrition, health, and behavior. These host-gut microbe interactions appear to shape host plasticity and fitness in a variety of contexts, and therefore represent a key factor missing from existing models of human and non-human primate ecology and evolution. However, current studies of the gut microbiota tend to include limited contextual data or are clinical, making it difficult to directly test broad anthropological hypotheses. Here, I review what is known about the animal gut microbiota and provide examples of how gut microbiota research can be integrated into the study of human and non-human primate ecology and evolution with targeted data collection. Specifically, I examine how the gut microbiota may impact primate diet, energetics, disease resistance, and cognition. While gut microbiota research is proliferating rapidly, especially in the context of humans, there remain important gaps in our understanding of host-gut microbe interactions that will require an anthropological perspective to fill. Likewise, gut microbiota research will be an important tool for filling remaining gaps in anthropological research. © 2016 Wiley Periodicals, Inc.

  2. Searching for the true diet of marine predators: incorporating Bayesian priors into stable isotope mixing models.

    Directory of Open Access Journals (Sweden)

    André Chiaradia

    Full Text Available Reconstructing the diet of top marine predators is of great significance in several key areas of applied ecology, requiring accurate estimation of their true diet. However, from conventional stomach content analysis to recent stable isotope and DNA analyses, no one method is bias- or error-free. Here, we evaluated the accuracy of recent methods to estimate the actual proportion of a controlled diet fed to a top-predator seabird, the Little penguin (Eudyptula minor). We combined published DNA data from penguin scats with blood plasma δ(15)N and δ(13)C values to reconstruct the diet of individual penguins fed experimentally. Mismatch between the controlled (true) ingested diet and dietary estimates obtained through the separate use of stable isotope and DNA data suggested some degree of difference in prey assimilation (stable isotope) and digestion rates (DNA analysis). In contrast, combining the posterior of the isotope mixing model with DNA-derived Bayesian priors provided the closest match to the true diet. We provide the first evidence suggesting that the combined use of these complementary techniques may provide better estimates of the actual diet of top marine predators, a powerful tool in applied ecology in the search for the true consumed diet.
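
    The combination described above can be sketched in a toy two-source form: a diet proportion gets a Beta prior (standing in for DNA-derived prior information from scats) and a Gaussian likelihood from the predator's isotope value, combined on a grid. The real study used a full multi-source isotope mixing model; the prey signatures, observed value, and prior counts below are invented for illustration.

    ```python
    # Toy two-source isotope mixing model with a DNA-informed Beta prior on the
    # diet proportion p of prey A. All numbers are illustrative assumptions.
    import math

    MU_A, MU_B = 14.0, 10.0    # assumed prey d15N signatures
    SIGMA = 0.6                # assumed measurement/assimilation sd
    OBS = 12.8                 # assumed predator plasma d15N
    ALPHA, BETA = 12.0, 6.0    # assumed Beta prior from DNA detections

    def log_posterior(p):
        """Gaussian isotope likelihood times Beta(ALPHA, BETA) prior (up to a
        constant), for the proportion p of prey A in the diet."""
        mix = p * MU_A + (1 - p) * MU_B
        log_lik = -0.5 * ((OBS - mix) / SIGMA) ** 2
        log_prior = (ALPHA - 1) * math.log(p) + (BETA - 1) * math.log(1 - p)
        return log_lik + log_prior

    # Grid approximation to the posterior of the diet proportion.
    grid = [i / 1000 for i in range(1, 1000)]
    w = [math.exp(log_posterior(p)) for p in grid]
    total = sum(w)
    post_mean = sum(p * wi for p, wi in zip(grid, w)) / total
    ```

    With these numbers the isotope likelihood alone points at p ≈ 0.7 and the DNA prior at p ≈ 0.67, so the posterior mean sits between them, illustrating how the two data sources pull the estimate toward the true diet together.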

  3. Modeling the suppression of boron transient enhanced diffusion in silicon by substitutional carbon incorporation

    Science.gov (United States)

    Ngau, Julie L.; Griffin, Peter B.; Plummer, James D.

    2001-08-01

    Recent work has indicated that the suppression of boron transient enhanced diffusion (TED) in carbon-rich Si is caused by nonequilibrium Si point defect concentrations, specifically the undersaturation of Si self-interstitials, that result from the coupled out-diffusion of carbon interstitials via the kick-out and Frank-Turnbull reactions. This study of boron TED reduction in Si1-x-yGexCy during 750 °C inert anneals has revealed that an additional reaction that further reduces the Si self-interstitial concentration is necessary to accurately describe the time-evolved diffusion behavior of boron. In this article, we present a comprehensive model which includes {311} defects, boron-interstitial clusters, a carbon kick-out reaction, a carbon Frank-Turnbull reaction, and a carbon interstitial-carbon substitutional (CiCs) pairing reaction that successfully simulates carbon suppression of boron TED at 750 °C for anneal times ranging from 10 s to 60 min.
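
    The central mechanism in this abstract, substitutional carbon consuming the Si self-interstitial population via the kick-out reaction C_s + I ⇌ C_i, can be illustrated with toy rate equations. This is a conceptual sketch in normalized units, not the paper's calibrated model: the rate constants, concentrations, and forward-Euler integration are all assumptions.

    ```python
    # Toy rate-equation sketch of the kick-out reaction C_s + I <-> C_i, showing
    # how substitutional carbon (Cs) depletes the Si self-interstitial population
    # (I) that drives boron TED. Normalized units; values are illustrative.

    def evolve(I=10.0, Cs=50.0, Ci=0.0, kf=0.02, kr=0.01, dt=0.01, steps=2000):
        """Forward-Euler integration of the kick-out kinetics.
        Net rate r > 0 converts (Cs, I) into mobile carbon interstitials Ci."""
        for _ in range(steps):
            r = kf * Cs * I - kr * Ci   # net kick-out rate (mass-action)
            I -= r * dt
            Cs -= r * dt
            Ci += r * dt
        return I, Cs, Ci

    I_end, Cs_end, Ci_end = evolve()
    # The interstitial population is driven far below its initial value,
    # which is the undersaturation that suppresses boron TED.
    ```

    The conserved quantities (I + Ci and Cs − I are constant under these kinetics) make the toy easy to sanity-check against the integration.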

  4. Modeling the suppression of boron transient enhanced diffusion in silicon by substitutional carbon incorporation

    International Nuclear Information System (INIS)

    Ngau, Julie L.; Griffin, Peter B.; Plummer, James D.

    2001-01-01

    Recent work has indicated that the suppression of boron transient enhanced diffusion (TED) in carbon-rich Si is caused by nonequilibrium Si point defect concentrations, specifically the undersaturation of Si self-interstitials, that result from the coupled out-diffusion of carbon interstitials via the kick-out and Frank-Turnbull reactions. This study of boron TED reduction in Si1-x-yGexCy during 750 °C inert anneals has revealed that the use of an additional reaction that further reduces the Si self-interstitial concentration is necessary to describe accurately the time-evolved diffusion behavior of boron. In this article, we present a comprehensive model which includes {311} defects, boron-interstitial clusters, a carbon kick-out reaction, a carbon Frank-Turnbull reaction, and a carbon interstitial-carbon substitutional (CiCs) pairing reaction that successfully simulates carbon suppression of boron TED at 750 °C for anneal times ranging from 10 s to 60 min. Copyright 2001 American Institute of Physics.

  5. Calibrating the BOLD signal during a motor task using an extended fusion model incorporating DOT, BOLD and ASL data

    Science.gov (United States)

    Yücel, Meryem A.; Huppert, Theodore J.; Boas, David A.; Gagnon, Louis

    2012-01-01

    Multimodal imaging improves the accuracy of the localization and the quantification of brain activation when measuring different manifestations of the hemodynamic response associated with cerebral activity. In this study, we incorporated cerebral blood flow (CBF) changes measured with arterial spin labeling (ASL), Diffuse Optical Tomography (DOT) and blood oxygen level-dependent (BOLD) recordings to reconstruct changes in oxy- (ΔHbO2) and deoxyhemoglobin (ΔHbR). Using the Grubb relation between relative changes in CBF and cerebral blood volume (CBV), we incorporated the ASL measurement as a prior to the total hemoglobin concentration change (ΔHbT). We applied this ASL fusion model to both synthetic data and experimental multimodal recordings during a 2-sec finger-tapping task. Our results show that the new approach is very powerful in estimating ΔHbO2 and ΔHbR with high spatial and quantitative accuracy. Moreover, our approach allows the computation of baseline total hemoglobin concentration (HbT0) as well as of the BOLD calibration factor M on a single subject basis. We obtained an average HbT0 of 71 μM, an average M value of 0.18 and an average increase of 13 % in cerebral metabolic rate of oxygen (CMRO2), all of which are in agreement with values previously reported in the literature. Our method yields an independent measurement of M, which provides an alternative measurement to validate the hypercapnic calibration of the BOLD signal. PMID:22546318

  6. Rapid installation of numerical models in multiple parent codes

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-10-01

    A set of "model interface guidelines", called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.

  7. Development of Advanced Continuum Models that Incorporate Nanomechanical Deformation into Engineering Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Zimmerman, Jonathan A.; Jones, Reese E.; Templeton, Jeremy Alan; McDowell, David L.; Mayeur, Jason R.; Tucker, Garritt J.; Bammann, Douglas J.; Gao, Huajian

    2008-09-01

    Materials with characteristic structures at nanoscale sizes exhibit significantly different mechanical responses from those predicted by conventional, macroscopic continuum theory. For example, nanocrystalline metals display an inverse Hall-Petch effect whereby the strength of the material decreases with decreasing grain size. The origin of this effect is believed to be a change in deformation mechanisms from dislocation motion across grains and pileup at grain boundaries at microscopic grain sizes to rotation of grains and deformation within grain boundary interface regions for nanostructured materials. These rotational defects are represented by the mathematical concept of disclinations. The ability to capture these effects within continuum theory, thereby connecting nanoscale materials phenomena and macroscale behavior, has eluded the research community. The goal of our project was to develop a consistent theory to model both the evolution of disclinations and their kinetics. Additionally, we sought to develop approaches to extract continuum mechanical information from nanoscale structure to verify any developed continuum theory that includes dislocation and disclination behavior. These approaches yield engineering-scale expressions to quantify elastic and inelastic deformation in all varieties of materials, even those that possess highly directional bonding within their molecular structures such as liquid crystals, covalent ceramics, polymers and biological materials. This level of accuracy is critical when engineering design and thermo-mechanical analysis are performed in micro- and nanosystems. The research proposed here innovates on how these nanoscale deformation mechanisms should be incorporated into a continuum mechanical formulation, and provides the foundation upon which to develop a means for predicting the performance of advanced engineering materials. Acknowledgment: The authors acknowledge helpful discussions with Farid F. Abraham, Youping Chen, Terry J

  8. Incorporating transportation network modeling tools within transportation economic impact studies of disasters

    Directory of Open Access Journals (Sweden)

    Yi Wen

    2014-08-01

    Full Text Available Transportation system disruption due to a disaster results in "ripple effects" throughout the entire transportation system of a metropolitan region. Many researchers have focused on the economic costs of transportation system disruptions in transportation-related industries, specifically within commerce and logistics, in the assessment of the regional economic costs. However, the foundation of an assessment of the regional economic costs of a disaster needs to include the evaluation of consumer surplus in addition to the direct cost for reconstruction of the regional transportation system. The objective of this study is to propose a method to estimate the regional consumer surplus based on indirect economic costs of a disaster on intermodal transportation systems in the context of diverting vehicles and trains. The computational methods used to assess the regional indirect economic costs sustained by the highway and railroad system can utilize readily available state departments of transportation (DOTs) and metropolitan planning organizations (MPOs) traffic models, allowing prioritization of regional recovery plans after a disaster and strengthening of infrastructure before a disaster. Hurricane Katrina is one of the most devastating hurricanes in the history of the United States. Due to the significance of Hurricane Katrina, a case study is presented to evaluate consumer surplus in the Gulf Coast Region of Mississippi. Results from the case study indicate the costs of rerouting and congestion delays in the regional highway system and the rent costs of right-of-way in the regional railroad system are major factors of the indirect costs in the consumer surplus.

  9. A quantitative systems pharmacology approach, incorporating a novel liver model, for predicting pharmacokinetic drug-drug interactions.

    Science.gov (United States)

    Cherkaoui-Rbati, Mohammed H; Paine, Stuart W; Littlewood, Peter; Rauch, Cyril

    2017-01-01

    All pharmaceutical companies are required to assess pharmacokinetic drug-drug interactions (DDIs) of new chemical entities (NCEs), and mathematical prediction helps to select the best NCE candidate with regard to adverse effects resulting from a DDI before any costly clinical studies. Most current models assume that the liver is a homogeneous organ where the majority of the metabolism occurs. However, the circulatory system of the liver has a complex hierarchical geometry which distributes xenobiotics throughout the organ. Nevertheless, the lobule (liver unit), located at the end of each branch, is composed of many sinusoids where the blood flow can vary and therefore creates heterogeneity (e.g. drug concentration, enzyme level). A liver model was constructed by describing the geometry of a lobule, where the blood velocity increases toward the central vein, and by modeling the exchange mechanisms between the blood and hepatocytes. Moreover, the three major DDI mechanisms of metabolic enzymes (competitive inhibition, mechanism-based inhibition and induction) were accounted for with an undefined number of drugs and/or enzymes. The liver model was incorporated into a physiologically-based pharmacokinetic (PBPK) model and simulations were produced that in turn were compared to ten clinical results. The liver model generated a hierarchy of 5 sinusoidal levels and estimated a blood volume of 283 mL and a cell density of 193 × 10⁶ cells/g in the liver. The overall PBPK model predicted the pharmacokinetics of midazolam and the magnitude of the clinical DDI with perpetrator drug(s), including spatial and temporal enzyme level changes. The model presented herein may reduce costs and the use of laboratory animals and gives the opportunity to explore different clinical scenarios, which reduce the risk of adverse events, prior to costly human clinical studies.
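
    Of the three DDI mechanisms named in this abstract, competitive inhibition has the simplest standard form: the perpetrator raises the victim drug's apparent Km by a factor (1 + I/Ki), lowering its metabolic rate. The sketch below shows that textbook term only; whether the paper's liver model uses exactly this parameterization is an assumption, and all numeric values are invented.

    ```python
    # Competitive inhibition in Michaelis-Menten metabolism: each perpetrator
    # (I, Ki) scales the apparent Km, reducing the victim drug's clearance.
    # Values are illustrative assumptions, not fitted parameters.

    def michaelis_menten(S, vmax, km, inhibitors=()):
        """Metabolic rate of substrate at concentration S, with competitive
        inhibitors given as (I, Ki) pairs; Km_app = Km * (1 + sum(I/Ki))."""
        km_app = km * (1.0 + sum(I / Ki for I, Ki in inhibitors))
        return vmax * S / (km_app + S)

    v0 = michaelis_menten(S=1.0, vmax=10.0, km=4.0)                      # victim alone
    v1 = michaelis_menten(S=1.0, vmax=10.0, km=4.0, inhibitors=[(2.0, 1.0)])
    fold_change = v0 / v1   # crude AUC-ratio-like measure of the interaction
    ```

    In a spatially resolved liver model like the one described, I, S and the enzyme level would vary along each sinusoid, so this rate would be evaluated per position rather than once for the whole organ.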

  10. A quantitative systems pharmacology approach, incorporating a novel liver model, for predicting pharmacokinetic drug-drug interactions.

    Directory of Open Access Journals (Sweden)

    Mohammed H Cherkaoui-Rbati

    Full Text Available All pharmaceutical companies are required to assess pharmacokinetic drug-drug interactions (DDIs) of new chemical entities (NCEs), and mathematical prediction helps to select the best NCE candidate with regard to adverse effects resulting from a DDI before any costly clinical studies. Most current models assume that the liver is a homogeneous organ where the majority of the metabolism occurs. However, the circulatory system of the liver has a complex hierarchical geometry which distributes xenobiotics throughout the organ. Nevertheless, the lobule (liver unit), located at the end of each branch, is composed of many sinusoids where the blood flow can vary and therefore creates heterogeneity (e.g. drug concentration, enzyme level). A liver model was constructed by describing the geometry of a lobule, where the blood velocity increases toward the central vein, and by modeling the exchange mechanisms between the blood and hepatocytes. Moreover, the three major DDI mechanisms of metabolic enzymes (competitive inhibition, mechanism-based inhibition and induction) were accounted for with an undefined number of drugs and/or enzymes. The liver model was incorporated into a physiologically-based pharmacokinetic (PBPK) model and simulations were produced that in turn were compared to ten clinical results. The liver model generated a hierarchy of 5 sinusoidal levels and estimated a blood volume of 283 mL and a cell density of 193 × 10⁶ cells/g in the liver. The overall PBPK model predicted the pharmacokinetics of midazolam and the magnitude of the clinical DDI with perpetrator drug(s), including spatial and temporal enzyme level changes. The model presented herein may reduce costs and the use of laboratory animals and gives the opportunity to explore different clinical scenarios, which reduce the risk of adverse events, prior to costly human clinical studies.

  11. Stochastic modeling of pitting corrosion: A new model for initiation and growth of multiple corrosion pits

    International Nuclear Information System (INIS)

    Valor, A.; Caleyo, F.; Alfonso, L.; Rivas, D.; Hallen, J.M.

    2007-01-01

    In this work, a new stochastic model capable of simulating pitting corrosion is developed and validated. Pitting corrosion is modeled as the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time for pit initiation is simulated as the realization of a Weibull process. In this way, the exponential and Weibull distributions can be considered as the possible distributions for pit initiation time. Pit growth is simulated using a nonhomogeneous Markov process. Extreme value statistics is used to find the distribution of maximum pit depths resulting from the combination of the initiation and growth processes for multiple pits. The proposed model is validated using several published experiments on pitting corrosion. It is capable of reproducing the experimental observations with higher quality than the stochastic models available in the literature for pitting corrosion
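
    The two-stage structure described in this record (stochastic pit initiation followed by stochastic growth, with extreme value statistics applied to the deepest pit) can be sketched as a simple Monte Carlo. This is an illustrative toy: Weibull induction times follow the abstract, but a random power-law growth here stands in for the paper's nonhomogeneous Markov growth process, and every parameter value is an assumption.

    ```python
    # Monte Carlo sketch of the two-stage pit model: Weibull-distributed
    # initiation times, then power-law depth growth; the deepest pit per
    # simulation is the extreme-value quantity of interest.
    import random

    rng = random.Random(42)   # seeded for reproducibility

    def simulate_max_depth(n_pits=50, horizon=10.0):
        """Deepest pit (mm, illustrative units) among n_pits after `horizon`
        years of exposure."""
        depths = []
        for _ in range(n_pits):
            t_init = rng.weibullvariate(2.0, 1.5)   # induction time, years
            if t_init >= horizon:
                continue                             # pit never nucleates
            growth_time = horizon - t_init
            k = rng.uniform(0.1, 0.3)                # growth coefficient, mm/yr^0.5
            depths.append(k * growth_time ** 0.5)    # depth ~ k * t^0.5
        return max(depths) if depths else 0.0

    # Empirical distribution of the maximum pit depth across 200 replicate
    # surfaces; this is the sample an extreme value distribution would be fit to.
    maxima = [simulate_max_depth() for _ in range(200)]
    ```

    The design point is that the maximum-depth distribution depends jointly on the initiation process (how many pits nucleate, and when) and the growth law, which is why the paper models the two stages separately before combining them via extreme value statistics.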

  12. Stochastic modeling of pitting corrosion: A new model for initiation and growth of multiple corrosion pits

    Energy Technology Data Exchange (ETDEWEB)

    Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado, 10400 Havana (Cuba); Caleyo, F. [Departamento de Ingenieria, Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico)]. E-mail: fcaleyo@gmail.com; Alfonso, L. [Departamento de Ingenieria, Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico); Rivas, D. [Departamento de Ingenieria, Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico); Hallen, J.M. [Departamento de Ingenieria, Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico)

    2007-02-15

    In this work, a new stochastic model capable of simulating pitting corrosion is developed and validated. Pitting corrosion is modeled as the combination of two stochastic processes: pit initiation and pit growth. Pit generation is modeled as a nonhomogeneous Poisson process, in which induction time for pit initiation is simulated as the realization of a Weibull process. In this way, the exponential and Weibull distributions can be considered as the possible distributions for pit initiation time. Pit growth is simulated using a nonhomogeneous Markov process. Extreme value statistics is used to find the distribution of maximum pit depths resulting from the combination of the initiation and growth processes for multiple pits. The proposed model is validated using several published experiments on pitting corrosion. It is capable of reproducing the experimental observations with higher quality than the stochastic models available in the literature for pitting corrosion.

  13. Incorporating Multiple-Choice Questions into an AACSB Assurance of Learning Process: A Course-Embedded Assessment Application to an Introductory Finance Course

    Science.gov (United States)

    Santos, Michael R.; Hu, Aidong; Jordan, Douglas

    2014-01-01

    The authors offer a classification technique to make a quantitative skills rubric more operational, with the groupings of multiple-choice questions to match the student learning levels in knowledge, calculation, quantitative reasoning, and analysis. The authors applied this classification technique to the mid-term exams of an introductory finance…

  14. Towards a Predictive Thermodynamic Model of Oxidation States of Uranium Incorporated in Fe (hydr) oxides

    Energy Technology Data Exchange (ETDEWEB)

    Bagus, Paul S. [Univ. of North Texas, Denton, TX (United States)

    2013-01-01

    -Level Excited States: Consequences For X-Ray Absorption Spectroscopy”, J. Elec. Spectros. and Related Phenom., 200, 174 (2015) describes our first application of these methods. As well as applications to problems and materials of direct interest for our PNNL colleagues, we have pursued applications of fundamental theoretical significance for the analysis and interpretation of XPS and XAS spectra. These studies are important for the development of the fields of core-level spectroscopies as well as to advance our capabilities for applications of interest to our PNNL colleagues. An excellent example is our study of the surface core-level shifts, SCLS, for the surface and bulk atoms of an oxide that provides a new approach to understanding how the surface electronic structure of oxides differs from that in the bulk of the material. This work has the potential to lead to a new key to understanding the reactivity of oxide surfaces. Our theoretical studies use cluster models with finite numbers of atoms to describe the properties of condensed phases and crystals. This approach has allowed us to focus on the local atomistic, chemical interactions. For these clusters, we obtain orbitals and spinors through the solution of the Hartree-Fock, HF, and the fully relativistic Dirac HF equations. These orbitals are used to form configuration mixing wavefunctions which treat the many-body effects responsible for the open shell angular momentum coupling and for the satellites of the core-level spectra. Our efforts have been in two complementary directions. As well as the applications described above, we have placed major emphasis on the enhancement and extension of our theoretical and computational capabilities so that we can treat complex systems with a greater range of many-body effects.
Noteworthy accomplishments in terms of method development and enhancement have included: (1) An improvement in our treatment of the large matrices that must be handled when many-body effects are treated. (2

  15. A New Paradigm For Modeling Fault Zone Inelasticity: A Multiscale Continuum Framework Incorporating Spontaneous Localization and Grain Fragmentation.

    Science.gov (United States)

    Elbanna, A. E.

    2015-12-01

The brittle portion of the crust contains structural features such as faults, jogs, joints, bends and cataclastic zones that span a wide range of length scales. These features may have a profound effect on earthquake nucleation, propagation and arrest. Incorporating these existing features in modeling, and the ability to spontaneously generate new ones in response to earthquake loading, is crucial for predicting seismicity patterns, the distribution of aftershocks and nucleation sites, earthquake arrest mechanisms, and topological changes in the seismogenic zone structure. Here, we report on our efforts in modeling two important mechanisms contributing to the evolution of fault zone topology: (1) grain comminution at the submeter scale, and (2) secondary faulting/plasticity at the scale of a few to hundreds of meters. We use the finite element software Abaqus to model the dynamic rupture. The constitutive response of the fault zone is modeled using the Shear Transformation Zone theory, a non-equilibrium statistical thermodynamic framework for modeling plastic deformation and localization in amorphous materials such as fault gouge. The gouge layer is modeled as a 2D plane-strain region with a finite thickness and a heterogeneous distribution of porosity. By coupling the amorphous gouge with the surrounding elastic bulk, the model introduces a set of novel features that go beyond the state of the art. These include: (1) self-consistent rate-dependent plasticity with a physically motivated set of internal variables, (2) non-locality that alleviates mesh dependence of shear band formation, (3) spontaneous evolution of fault roughness and strike, which affects ground motion generation and the local stress fields, and (4) spontaneous evolution of grain size and fault zone fabric.

  16. A bi-level integrated generation-transmission planning model incorporating the impacts of demand response by operation simulation

    International Nuclear Information System (INIS)

    Zhang, Ning; Hu, Zhaoguang; Springer, Cecilia; Li, Yanning; Shen, Bo

    2016-01-01

Highlights: • We put forward a novel bi-level integrated power system planning model. • Generation expansion planning and transmission expansion planning are combined. • The effects of two sorts of demand response in reducing peak load are considered. • Operation simulation is conducted to reflect the actual effects of demand response. • The interactions between the two levels can guarantee a reasonably optimal result. - Abstract: If the resources on the power supply side, in the transmission network, and on the power demand side are all considered together, an expansion scheme that is optimal from the perspective of the whole system can be achieved. In this paper, generation expansion planning and transmission expansion planning are combined into one model. Moreover, the effects of demand response in reducing peak load are taken into account in the planning model, which can reduce the required generation and transmission expansion capacity. Existing approaches to considering demand response in planning tend to overestimate the impact of demand response on peak load reduction. These approaches usually focus on power reduction at the moment of peak load, without considering situations in which load demand at another moment may unexpectedly become the new peak load due to demand response. These situations are analyzed in this paper. Accordingly, a novel approach to incorporating demand response in a planning model is proposed. A modified unit commitment model with demand response is utilized. The planning model is thereby a bi-level model with interactions between generation-transmission expansion planning and operation simulation, to reflect the actual effects of demand response and find a reasonably optimal planning result.
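The overestimation problem described in the abstract has a simple mechanism: demand response that shaves the nominal peak often "pays back" the shifted energy at another hour, which can itself become the new peak. A minimal toy illustration (all numbers hypothetical, not from the paper):

```python
# Toy illustration: shaving the nominal peak can create a new peak
# elsewhere once the deferred demand pays back at another hour.

def peak_after_dr(load, shave, payback_hour):
    """Shave `shave` MW off the peak hour and add it back at `payback_hour`."""
    adjusted = list(load)
    peak_hour = max(range(len(load)), key=lambda h: load[h])
    adjusted[peak_hour] -= shave
    adjusted[payback_hour] += shave
    return max(adjusted)

load = [80, 85, 100, 90, 95]   # hypothetical hourly demand, MW
naive_peak = max(load) - 10    # naive planner: DR simply cuts 10 MW off the peak
actual_peak = peak_after_dr(load, shave=10, payback_hour=4)
print(naive_peak, actual_peak)  # 90 105 -- the payback hour became the new peak
```

This is why the paper runs an operation simulation inside the planning loop: only a chronological dispatch reveals where the shifted load actually lands.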

  17. An Additive-Multiplicative Restricted Mean Residual Life Model

    DEFF Research Database (Denmark)

    Mansourvar, Zahra; Martinussen, Torben; Scheike, Thomas H.

    2016-01-01

The authors propose an additive-multiplicative restricted mean residual life model to study the association between the restricted mean residual life function and potential regression covariates in the presence of right censoring. This model extends the proportional mean residual life model by using an additive model as its covariate-dependent baseline. For the suggested model, some covariate effects are allowed to be time-varying. To estimate the model parameters, martingale estimating equations are developed, and the large-sample properties of the resulting estimators are established. In addition, to assess the adequacy of the model, a goodness-of-fit procedure is investigated.
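One plausible reading of the model structure described in the abstract, written out as a sketch (the exact parameterization is an assumption, not taken from the paper): with restricted survival time $\min(T,\tau)$, the restricted mean residual life given covariates $x$ (additive, possibly time-varying effects) and $z$ (multiplicative effects) would take the form

```latex
% Restricted mean residual life at time t, given covariates x and z
% (sketch; the additive part \alpha(t)^{\top}x plays the role of the
%  covariate-dependent baseline, the exponential factor is the
%  proportional / multiplicative part):
m(t \mid x, z)
  = \mathbb{E}\bigl[\min(T,\tau) - t \,\big|\, \min(T,\tau) > t,\, x, z\bigr]
  = \bigl\{\alpha(t)^{\top} x\bigr\}\, \exp\!\bigl(\beta^{\top} z\bigr)
```

Setting $\beta = 0$ recovers a purely additive model, and taking $\alpha(t)^{\top}x = m_0(t)$ recovers the proportional mean residual life model the abstract says is being extended.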

  18. Extending positive CLASS results across multiple instructors and multiple classes of Modeling Instruction

    Science.gov (United States)

    Brewe, Eric; Traxler, Adrienne; de la Garza, Jorge; Kramer, Laird H.

    2013-12-01

    We report on a multiyear study of student attitudes measured with the Colorado Learning Attitudes about Science Survey in calculus-based introductory physics taught with the Modeling Instruction curriculum. We find that five of six instructors and eight of nine sections using Modeling Instruction showed significantly improved attitudes from pre- to postcourse. Cohen’s d effect sizes range from 0.08 to 0.95 for individual instructors. The average effect was d=0.45, with a 95% confidence interval of (0.26-0.64). These results build on previously published results showing positive shifts in attitudes from Modeling Instruction classes. We interpret these data in light of other published positive attitudinal shifts and explore mechanistic explanations for similarities and differences with other published positive shifts.
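The pooled-standard-deviation form of Cohen's d used for effect sizes like those above can be computed as follows; the pre/post scores here are hypothetical, not the study's data:

```python
# Cohen's d with the pooled-standard-deviation denominator.
# The pre/post scores below are made up for illustration.
import statistics

def cohens_d(pre, post):
    """Effect size d = (mean(post) - mean(pre)) / pooled sample SD."""
    n1, n2 = len(pre), len(post)
    s1, s2 = statistics.variance(pre), statistics.variance(post)  # sample variances
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(post) - statistics.mean(pre)) / pooled_sd

pre = [0.55, 0.60, 0.58, 0.62, 0.57]    # hypothetical pre-course scores
post = [0.63, 0.66, 0.61, 0.70, 0.65]   # hypothetical post-course scores
print(cohens_d(pre, post))
```

A d of 0.45, as reported above, means the post-course mean sits a bit under half a pooled standard deviation above the pre-course mean.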

  19. Extending positive CLASS results across multiple instructors and multiple classes of Modeling Instruction

    Directory of Open Access Journals (Sweden)

    Eric Brewe

    2013-10-01

We report on a multiyear study of student attitudes measured with the Colorado Learning Attitudes about Science Survey in calculus-based introductory physics taught with the Modeling Instruction curriculum. We find that five of six instructors and eight of nine sections using Modeling Instruction showed significantly improved attitudes from pre- to postcourse. Cohen’s d effect sizes range from 0.08 to 0.95 for individual instructors. The average effect was d=0.45, with a 95% confidence interval of (0.26–0.64). These results build on previously published results showing positive shifts in attitudes from Modeling Instruction classes. We interpret these data in light of other published positive attitudinal shifts and explore mechanistic explanations for similarities and differences with other published positive shifts.

  20. Genome-wide prediction models that incorporate de novo GWAS are a powerful new tool for tropical rice improvement

    Science.gov (United States)

    Spindel, J E; Begum, H; Akdemir, D; Collard, B; Redoña, E; Jannink, J-L; McCouch, S

    2016-01-01

To address the multiple challenges to food security posed by global climate change, population growth and rising incomes, plant breeders are developing new crop varieties that can enhance both agricultural productivity and environmental sustainability. Current breeding practices, however, are unable to keep pace with demand. Genomic selection (GS) is a new technique that helps accelerate the rate of genetic gain in breeding by using whole-genome data to predict the breeding value of offspring. Here, we describe a new GS model that combines RR-BLUP with markers fit as fixed effects selected from the results of a genome-wide association study (GWAS) on the RR-BLUP training data. We term this model GS + de novo GWAS. In a breeding population of tropical rice, GS + de novo GWAS outperformed six other models for a variety of traits and in multiple environments. On the basis of these results, we propose an extended, two-part breeding design that can be used to efficiently integrate novel variation into elite breeding populations, thus expanding genetic diversity and enhancing the potential for sustainable productivity gains. PMID:26860200
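The core idea of "GS + de novo GWAS" — a few GWAS-flagged markers entering as unshrunken fixed effects while the remaining markers get ridge-shrunken (RR-BLUP-style) effects — can be sketched via the mixed-model equations. This is an illustrative toy, not the authors' implementation; the data, marker indices and shrinkage parameter are all made up:

```python
# Sketch of RR-BLUP with GWAS-selected markers as fixed effects.
# All data and parameter choices below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 500                                  # lines, markers
M = rng.choice([-1.0, 0.0, 1.0], size=(n, p))    # marker genotype matrix
beta_true = rng.normal(0, 0.05, p)               # mostly small effects
beta_true[:3] = [1.0, -0.8, 0.6]                 # three large-effect loci
y = M @ beta_true + rng.normal(0, 1.0, n)        # simulated phenotype

fixed_idx = [0, 1, 2]                # pretend a GWAS flagged these markers
X = M[:, fixed_idx]                  # fixed-effect design (unshrunken)
Z = np.delete(M, fixed_idx, axis=1)  # remaining markers, random effects
lam = 10.0                           # ridge (shrinkage) parameter

# Mixed-model equations: lam on the diagonal shrinks only the random
# marker effects, leaving the GWAS-selected fixed effects unpenalized.
q = X.shape[1]
C = np.block([[X.T @ X,            X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * np.eye(Z.shape[1])]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(C, rhs)
b_fixed, u = sol[:q], sol[q:]
y_hat = X @ b_fixed + Z @ u          # genomic estimated breeding values
print(np.corrcoef(y, y_hat)[0, 1])   # in-sample fit; high by construction
```

In plain RR-BLUP every marker would sit in Z and the large-effect loci would be over-shrunk; moving them into X is what the hybrid model contributes.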