WorldWideScience

Sample records for model incorporates typical

  1. Modelling object typicality in description logics

    CSIR Research Space (South Africa)

    Britz, K

    2009-12-01

    ... in the context under consideration, than those lower down. For any given class C, we assume that all objects in the application domain that are in (the interpretation of) C are more typical of C than those not in C. This is a technical construction which... to be modular partial orders, i.e. reflexive, transitive, anti-symmetric relations such that, for all a, b, c in ∆I, if a and b are incomparable and a is strictly below c, then b is also strictly below c. Modular partial orders have the effect...
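
    A minimal sketch (not from the cited paper) of the modularity condition quoted above: given a finite strict-order relation, check that whenever a and b are incomparable and a lies strictly below c, b also lies strictly below c. The toy domain and relation are illustrative only.

        from itertools import product

        def is_modular(domain, below):
            """Check the modularity condition: if a and b are incomparable
            and a is strictly below c, then b must be strictly below c."""
            def incomparable(a, b):
                return a != b and (a, b) not in below and (b, a) not in below
            return all(not (incomparable(a, b) and (a, c) in below and (b, c) not in below)
                       for a, b, c in product(domain, repeat=3))

        # Toy example: x and y are incomparable and both strictly below z.
        domain = {"x", "y", "z"}
        below = {("x", "z"), ("y", "z")}
        print(is_modular(domain, below))  # True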

  2. Typical NRC inspection procedures for model plant

    International Nuclear Information System (INIS)

    Blaylock, J.

    1984-01-01

    A summary of NRC inspection procedures for a model LEU fuel fabrication plant is presented. Procedures and methods for combining inventory data, seals, measurement techniques, and statistical analysis are emphasized

  3. Analysis and Comparison of Typical Models within Distribution Network Design

    DEFF Research Database (Denmark)

    Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.

    This paper investigates the characteristics of typical optimisation models within Distribution Network Design. In the paper, fourteen models known from the literature are thoroughly analysed. Through this analysis a schematic approach to categorisation of distribution network design models...... for educational purposes. Furthermore, the paper can be seen as a practical introduction to network design modelling as well as being a manual or recipe when constructing such a model....

  4. Incorporating groundwater flow into the WEPP model

    Science.gov (United States)

    William Elliot; Erin Brooks; Tim Link; Sue Miller

    2010-01-01

    The water erosion prediction project (WEPP) model is a physically-based hydrology and erosion model. In recent years, the hydrology prediction within the model has been improved for forest watershed modeling by incorporating shallow lateral flow into watershed runoff prediction. This has greatly improved WEPP's hydrologic performance on small watersheds with...

  5. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely more applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  6. Aeroelastic Calculations Using CFD for a Typical Business Jet Model

    Science.gov (United States)

    Gibbons, Michael D.

    1996-01-01

    Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center where experimental flutter data were obtained from M∞ = 0.628 to M∞ = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects while the TSD and Euler methods used here provide good results at the lower Mach numbers.

  7. Incorporating interfacial phenomena in solidification models

    Science.gov (United States)

    Beckermann, Christoph; Wang, Chao Yang

    1994-01-01

    A general methodology is available for the incorporation of microscopic interfacial phenomena in macroscopic solidification models that include diffusion and convection. The method is derived from a formal averaging procedure and a multiphase approach, and relies on the presence of interfacial integrals in the macroscopic transport equations. In a wider engineering context, these techniques are not new, but their application in the analysis and modeling of solidification processes has largely been overlooked. This article describes the techniques and demonstrates their utility in two examples in which microscopic interfacial phenomena are of great importance.

  8. Incorporating neurophysiological concepts in mathematical thermoregulation models

    Science.gov (United States)

    Kingma, Boris R. M.; Vosselman, M. J.; Frijns, A. J. H.; van Steenhoven, A. A.; van Marken Lichtenbelt, W. D.

    2014-01-01

    Skin blood flow (SBF) is a key player in human thermoregulation during mild thermal challenges. Various numerical models of SBF regulation exist. However, none explicitly incorporates the neurophysiology of thermal reception. This study tested a new SBF model that is in line with experimental data on thermal reception and the neurophysiological pathways involved in thermoregulatory SBF control. Additionally, a numerical thermoregulation model was used as a platform to test the function of the neurophysiological SBF model for skin temperature simulation. The prediction-error of the SBF-model was quantified by root-mean-squared-residual (RMSR) between simulations and experimental measurement data. Measurement data consisted of SBF (abdomen, forearm, hand), core and skin temperature recordings of young males during three transient thermal challenges (1 development and 2 validation). Additionally, ThermoSEM, a thermoregulation model, was used to simulate body temperatures using the new neurophysiological SBF-model. The RMSR between simulated and measured mean skin temperature was used to validate the model. The neurophysiological model predicted SBF with an accuracy of RMSR human thermoregulation models can be equipped with SBF control functions that are based on neurophysiology without loss of performance. The neurophysiological approach in modelling thermoregulation is favourable over engineering approaches because it is more in line with the underlying physiology.

  9. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  10. Incorporating damage mechanics into explosion simulation models

    International Nuclear Information System (INIS)

    Sammis, C.G.

    1993-01-01

    The source region of an underground explosion is commonly modeled as a nested series of shells. In the innermost "hydrodynamic regime" pressures and temperatures are sufficiently high that the rock deforms as a fluid and may be described using a PVT equation of state. Just beyond the hydrodynamic regime is the "non-linear regime" in which the rock has shear strength but the deformation is nonlinear. This regime extends out to the "elastic radius" beyond which the deformation is linear. In this paper, we develop a model for the non-linear regime in crystalline source rock where the nonlinearity is mostly due to fractures. We divide the non-linear regime into a "damage regime" in which the stresses are sufficiently high to nucleate new fractures from preexisting ones and a "crack-sliding" regime where motion on preexisting cracks produces amplitude dependent attenuation and other non-linear effects, but no new cracks are nucleated. The boundary between these two regimes is called the "damage radius". The micromechanical damage mechanics recently developed by Ashby and Sammis (1990) is used to write an analytic expression for the damage radius in terms of the initial fracture spectrum of the source rock, and to develop an algorithm which may be used to incorporate damage mechanics into computer source models for the damage regime. Effects of water saturation and loading rate are also discussed.

  11. Analysis and Comparison of Typical Models within Distribution Network Design

    DEFF Research Database (Denmark)

    Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.

    Efficient and cost effective transportation and logistics plays a vital role in the supply chains of the modern world’s manufacturers. Global distribution of goods is a very complicated matter as it involves many different distinct planning problems. The focus of this presentation is to demonstrate...... a number of important issues which have been identified when addressing the Distribution Network Design problem from a modelling angle. More specifically, we present an analysis of the research which has been performed in utilizing operational research in developing and optimising distribution systems....

  12. Incorporating direct marketing activity into latent attrition models

    NARCIS (Netherlands)

    Schweidel, David A.; Knox, George

    2013-01-01

    When defection is unobserved, latent attrition models provide useful insights about customer behavior and accurate forecasts of customer value. Yet extant models ignore direct marketing efforts. Response models incorporate the effects of direct marketing, but because they ignore latent attrition,

  13. A Financial Market Model Incorporating Herd Behaviour.

    Science.gov (United States)

    Wray, Christopher M; Bishop, Steven R

    2016-01-01

    Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents' accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents' accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the market

  14. A Financial Market Model Incorporating Herd Behaviour.

    Directory of Open Access Journals (Sweden)

    Christopher M Wray

    Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents' accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents' accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock

  15. A Financial Market Model Incorporating Herd Behaviour

    Science.gov (United States)

    2016-01-01

    Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents’ accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents’ accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the

  16. Incorporating territory compression into population models

    NARCIS (Netherlands)

    Ridley, J; Komdeur, J; Sutherland, WJ; Sutherland, William J.

    The ideal despotic distribution, whereby the lifetime reproductive success a territory's owner achieves is unaffected by population density, is a mainstay of behaviour-based population models. We show that the population dynamics of an island population of Seychelles warblers (Acrocephalus

  17. Incorporating Context Dependency of Species Interactions in Species Distribution Models.

    Science.gov (United States)

    Lany, Nina K; Zarnetske, Phoebe L; Gouhier, Tarik C; Menge, Bruce A

    2017-07-01

    Species distribution models typically use correlative approaches that characterize the species-environment relationship using occurrence or abundance data for a single species. However, species distributions are determined by both abiotic conditions and biotic interactions with other species in the community. Therefore, climate change is expected to impact species through direct effects on their physiology and indirect effects propagated through their resources, predators, competitors, or mutualists. Furthermore, the sign and strength of species interactions can change according to abiotic conditions, resulting in context-dependent species interactions that may change across space or with climate change. Here, we incorporated the context dependency of species interactions into a dynamic species distribution model. We developed a multi-species model that uses a time-series of observational survey data to evaluate how abiotic conditions and species interactions affect the dynamics of three rocky intertidal species. The model further distinguishes between the direct effects of abiotic conditions on abundance and the indirect effects propagated through interactions with other species. We apply the model to keystone predation by the sea star Pisaster ochraceus on the mussel Mytilus californianus and the barnacle Balanus glandula in the rocky intertidal zone of the Pacific coast, USA. Our method indicated that biotic interactions between P. ochraceus and B. glandula affected B. glandula dynamics across >1000 km of coastline. Consistent with patterns from keystone predation, the growth rate of B. glandula varied according to the abundance of P. ochraceus in the previous year. The data and the model did not indicate that the strength of keystone predation by P. ochraceus varied with a mean annual upwelling index. Balanus glandula cover increased following years with high phytoplankton abundance measured as mean annual chlorophyll-a. M. californianus exhibited the same

  18. True dose from incorporated activities. Models for internal dosimetry

    International Nuclear Information System (INIS)

    Breustedt, B.; Eschner, W.; Nosske, D.

    2012-01-01

    The assessment of doses after the incorporation of radionuclides cannot rely on direct dose measurements, in contrast to, for example, dosimetry in external radiation fields. The only observables are activities in the body or in excretions. Models are used to calculate the doses based on the measured activities. The incorporated activities and the resulting doses can vary by more than seven orders of magnitude between occupational and medical exposures. Nevertheless the models and calculations applied in both cases are similar. Since the models for the different applications have been developed independently by ICRP and MIRD, different terminologies have been used. A unified terminology is being developed. (orig.)

  19. Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments

    Science.gov (United States)

    Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.

    2009-01-01

    The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…

  20. Incorporating parametric uncertainty into population viability analysis models

    Science.gov (United States)

    McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.

    2011-01-01

    Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
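
    A minimal sketch (illustrative, not the piping plover analysis) of the two-loop structure described above: parameter values are drawn once per replicate to represent parametric uncertainty, and annual environmental variation is drawn inside the time-step loop. All distributions and thresholds are assumptions for demonstration.

        import numpy as np

        rng = np.random.default_rng(0)
        n_reps, n_years = 1000, 50
        extinctions = 0

        for _ in range(n_reps):                     # replication loop: parametric uncertainty
            mean_growth = rng.normal(0.98, 0.03)    # hypothetical uncertainty in mean growth rate
            n = 100.0                               # initial abundance
            for _ in range(n_years):                # time-step loop: temporal variance
                n *= rng.normal(mean_growth, 0.10)
                if n < 1.0:                         # quasi-extinction threshold
                    extinctions += 1
                    break

        print("estimated extinction probability:", extinctions / n_reps)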

  1. "Violent Intent Modeling: Incorporating Cultural Knowledge into the Analytical Process

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Nibbs, Faith G.

    2007-08-24

    While culture has a significant effect on the appropriate interpretation of textual data, the incorporation of cultural considerations into data transformations has not been systematic. Recognizing that the successful prevention of terrorist activities could hinge on knowledge of the subcultures involved, anthropologist and DHS intern Faith Nibbs has been addressing the need to incorporate cultural knowledge into the analytical process. In this Brown Bag she will present how cultural ideology is being used to understand how the rhetoric of group leaders influences the likelihood of their constituents engaging in violent or radicalized behavior, and how violent intent modeling can benefit from understanding that process.

  2. Ex-plant consequence assessment for NUREG-1150: models, typical results, uncertainties

    International Nuclear Information System (INIS)

    Sprung, J.L.

    1988-01-01

    The assessment of ex-plant consequences for NUREG-1150 source terms was performed using the MELCOR Accident Consequence Code System (MACCS). This paper briefly discusses the following elements of MACCS consequence calculations: input data, phenomena modeled, computational framework, typical results, controlling phenomena, and uncertainties. Wherever possible, NUREG-1150 results will be used to illustrate the discussion. 28 references

  3. A comparison of two typical multicyclic models used to forecast the world's conventional oil production

    International Nuclear Information System (INIS)

    Wang Jianliang; Feng Lianyong; Zhao Lin; Snowden, Simon; Wang Xu

    2011-01-01

    This paper introduces two typical multicyclic models: the Hubbert model and the Generalized Weng model. The solution process for each is expounded and provides the basis for an empirical analysis of the world's conventional oil production. The results for both show that the world's conventional oil (crude+NGLs) production will reach its peak in 2011 with a production of 30 billion barrels (Gb). In addition, the forecasting performance of the two models, given the same URR, is compared, and their intrinsic characteristics are analyzed. This demonstrates that, for specific criteria, the multicyclic Generalized Weng model is an improvement on the multicyclic Hubbert model. Finally, based upon the resultant forecast for the world's conventional oil, some suggestions are proposed for China's policy makers. - Highlights: ► The Hubbert model and the Generalized Weng model are introduced and compared. ► Each model's characteristics, scope and conditions of applicability are summarized. ► Both models yield the same peak production and peak timing for the world's oil. ► The multicyclic Generalized Weng model proves slightly better than the Hubbert model.
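
    A minimal sketch of a multicyclic Hubbert production curve of the kind compared above, built as a sum of symmetric logistic-derivative cycles; the cycle parameters are illustrative and are not the paper's fitted values.

        import numpy as np

        def hubbert_cycle(t, peak_rate, peak_year, width):
            """One Hubbert cycle: P(t) = 2*Pm / (1 + cosh((t - tm)/width))."""
            return 2.0 * peak_rate / (1.0 + np.cosh((t - peak_year) / width))

        def multicyclic_production(t, cycles):
            """Total production as the sum of several Hubbert cycles."""
            return sum(hubbert_cycle(t, *c) for c in cycles)

        years = np.arange(1900, 2101)
        cycles = [(20.0, 1975, 12.0), (15.0, 2011, 15.0)]   # (Gb/yr, peak year, width) - hypothetical
        production = multicyclic_production(years, cycles)
        print("modelled peak year:", years[production.argmax()])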

  4. Incorporating nitrogen fixing cyanobacteria in the global biogeochemical model HAMOCC

    Science.gov (United States)

    Paulsen, Hanna; Ilyina, Tatiana; Six, Katharina

    2015-04-01

    Nitrogen fixation by marine diazotrophs plays a fundamental role in the oceanic nitrogen and carbon cycle as it provides a major source of 'new' nitrogen to the euphotic zone that supports biological carbon export and sequestration. Since most global biogeochemical models include nitrogen fixation only diagnostically, they are not able to capture its spatial pattern sufficiently. Here we present the incorporation of an explicit, dynamic representation of diazotrophic cyanobacteria and the corresponding nitrogen fixation in the global ocean biogeochemical model HAMOCC (Hamburg Ocean Carbon Cycle model), which is part of the Max Planck Institute for Meteorology Earth system model (MPI-ESM). The parameterization of diazotrophic growth is based on available knowledge about the cyanobacterium Trichodesmium spp., which is considered the most significant pelagic nitrogen fixer. Evaluation against observations shows that the model successfully reproduces the main spatial distribution of cyanobacteria and nitrogen fixation, covering large parts of the tropical and subtropical oceans. Besides the role of cyanobacteria in marine biogeochemical cycles, their capacity to form extensive surface blooms induces a number of bio-physical feedback mechanisms in the Earth system. The processes driving these interactions, which are related to the alteration of heat absorption, surface albedo and momentum input by wind, are incorporated in the biogeochemical and physical model of the MPI-ESM in order to investigate their impacts on a global scale. First preliminary results will be shown.

  5. Parameterized Finite Element Modeling and Buckling Analysis of Six Typical Composite Grid Cylindrical Shells

    Science.gov (United States)

    Lai, Changliang; Wang, Junbiao; Liu, Chuang

    2014-10-01

    Six typical composite grid cylindrical shells are constructed by superimposing three basic types of ribs. The buckling behavior and structural efficiency of these shells are then analyzed under axial compression, pure bending, torsion and transverse bending using finite element (FE) models. The FE models are created by a parametric FE modeling approach that defines FE models with the original naturally twisted geometry and orients the cross-sections of beam elements exactly. The approach is parameterized and coded in the Patran Command Language (PCL). The FE modeling demonstrations indicate that the program enables efficient generation of FE models and facilitates parametric studies and design of grid shells. Using the program, the effects of helical angles on the buckling behavior of the six typical grid cylindrical shells are determined. The results of these studies indicate that the triangle grid and rotated triangle grid cylindrical shells are more efficient than the others under axial compression and pure bending, whereas under torsion and transverse bending, the hexagon grid cylindrical shell is most efficient. Additionally, buckling mode shapes are compared and provide an understanding of composite grid cylindrical shells that is useful in preliminary design of such structures.

  6. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.; Smith, C.L.; Rasmuson, D.M.

    1994-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methodology, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  7. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    1995-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  8. A review of typical thermal fatigue failure models for solder joints of electronic components

    Science.gov (United States)

    Li, Xiaoyan; Sun, Ruifeng; Wang, Yongdong

    2017-09-01

    For electronic components, cyclic plastic strain accumulates fatigue damage more readily than elastic strain. When solder joints undergo thermal expansion or contraction, the mismatch in thermal expansion coefficients between an electronic component and its substrate produces differential thermal strain, leading to stress concentration. Under repeated cycling, cracks initiate and gradually propagate [1]. In this paper, the typical thermal fatigue failure models for solder joints of electronic components are classified, and the methods for obtaining the model parameters are summarized based on a review of the domestic and foreign literature.
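
    One of the models typically covered by such reviews is the Coffin-Manson low-cycle fatigue relation. The sketch below is an illustrative assumption (with constants often quoted for eutectic SnPb solder), not the specific models classified in this paper.

        def coffin_manson_cycles(delta_gamma, eps_f=0.325, c=-0.442):
            """Cycles to failure from the plastic shear strain range:
            Nf = 0.5 * (delta_gamma / (2 * eps_f)) ** (1 / c).
            eps_f and c are illustrative fatigue ductility constants."""
            return 0.5 * (delta_gamma / (2.0 * eps_f)) ** (1.0 / c)

        # Cycles to failure for a 1% plastic shear strain range per thermal cycle
        print(round(coffin_manson_cycles(0.01)))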

  9. An extended car-following model considering the acceleration derivative in some typical traffic environments

    Science.gov (United States)

    Zhou, Tong; Chen, Dong; Liu, Weining

    2018-03-01

    Based on the full velocity difference and acceleration car-following model, an extended car-following model is proposed that considers the vehicle’s acceleration derivative. The stability condition is derived by applying control theory. For some typical traffic environments, theoretical analysis and numerical simulation show that the extended model produces more realistic accelerations for a string of vehicles than previous models during starting, stopping and sudden braking. Moreover, traffic jams occur more easily as the coefficient of the vehicle’s acceleration derivative increases, as shown by the space-time evolution of the flow. The results confirm that the vehicle’s acceleration derivative plays an important role in the traffic jamming transition and the evolution of traffic congestion.
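
    A minimal numerical sketch of a full-velocity-difference-type follower whose acceleration relaxes toward the model target, so that the acceleration derivative (jerk) enters the dynamics; the optimal-velocity function and all coefficients are illustrative assumptions, not the paper's calibrated model.

        import numpy as np

        def optimal_velocity(headway, v_max=30.0, hc=25.0):
            """Illustrative tanh-shaped optimal-velocity function."""
            return 0.5 * v_max * (np.tanh(headway - hc) + np.tanh(hc))

        kappa, lam, mu, dt = 0.4, 0.5, 0.3, 0.01   # sensitivity, velocity-difference and jerk coefficients
        x_lead, v_lead = 30.0, 0.0                 # leader starts 30 m ahead, at rest
        x, v, a = 0.0, 0.0, 0.0                    # follower position, speed, acceleration

        for _ in range(3000):                      # 30 s starting process
            v_lead = min(v_lead + 1.0 * dt, 15.0)  # leader accelerates to 15 m/s
            x_lead += v_lead * dt
            target = kappa * (optimal_velocity(x_lead - x) - v) + lam * (v_lead - v)
            jerk = (target - a) / mu               # acceleration relaxes toward the FVD target
            a += jerk * dt
            v = max(v + a * dt, 0.0)
            x += v * dt

        print(f"follower speed {v:.1f} m/s, headway {x_lead - x:.1f} m after 30 s")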

  10. Typical entanglement

    Science.gov (United States)

    Deelan Cunden, Fabio; Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio

    2013-05-01

    Let a pure state |ψ⟩ be chosen randomly in an NM-dimensional Hilbert space, and consider the reduced density matrix ρ_A of an N-dimensional subsystem. The bipartite entanglement properties of |ψ⟩ are encoded in the spectrum of ρ_A. By means of a saddle point method and using a "Coulomb gas" model for the eigenvalues, we obtain the typical spectrum of reduced density matrices. We consider the cases of an unbiased ensemble of pure states and of a fixed value of the purity. We finally obtain the eigenvalue distribution by using a statistical mechanics approach based on the introduction of a partition function.
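
    A minimal numerical sketch of the setup described above: draw a random pure state in an NM-dimensional Hilbert space (unbiased ensemble) and inspect the spectrum of the reduced density matrix of the N-dimensional subsystem. The dimensions are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        N, M = 4, 16                                   # subsystem and environment dimensions

        # Random pure state, written as an N x M matrix of amplitudes and normalised
        psi = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
        psi /= np.linalg.norm(psi)

        rho_A = psi @ psi.conj().T                     # reduced density matrix of the N-dim subsystem
        eigenvalues = np.linalg.eigvalsh(rho_A)
        entropy = -np.sum(eigenvalues * np.log(np.clip(eigenvalues, 1e-15, None)))

        print("spectrum of rho_A:", np.round(eigenvalues, 4))
        print("entanglement entropy:", round(float(entropy), 4))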

  11. Incorporating model parameter uncertainty into inverse treatment planning

    International Nuclear Information System (INIS)

    Lian Jun; Xing Lei

    2004-01-01

    Radiobiological treatment planning depends not only on the accuracy of the models describing the dose-response relation of different tumors and normal tissues but also on the accuracy of tissue-specific radiobiological parameters in these models. Whereas the general formalism remains the same, different sets of model parameters lead to different solutions and thus critically determine the final plan. Here we describe an inverse planning formalism with inclusion of model parameter uncertainties. This is made possible by using a statistical analysis-based framework developed by our group. In this formalism, the uncertainties of model parameters, such as the parameter a that describes the tissue-specific effect in the equivalent uniform dose (EUD) model, are expressed by probability density functions and are included in the dose optimization process. We found that the final solution strongly depends on the distribution functions of the model parameters. Considering that currently available models for computing biological effects of radiation are simplistic, and the clinical data used to derive the models are sparse and of questionable quality, the proposed technique provides us with an effective tool to minimize the effect caused by the uncertainties in a statistical sense. With the incorporation of the uncertainties, the technique has the potential to let us maximally utilize the available radiobiology knowledge for better IMRT treatment.
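
    A minimal sketch of how the tissue-specific parameter a enters the generalized equivalent uniform dose (EUD), and of propagating a probability distribution over a through the calculation, in the spirit of the statistical treatment described above; the voxel doses and the distribution of a are illustrative assumptions.

        import numpy as np

        def eud(dose_voxels, a):
            """Generalized EUD: (mean of D_i**a) ** (1/a)."""
            d = np.asarray(dose_voxels, dtype=float)
            return np.mean(d ** a) ** (1.0 / a)

        rng = np.random.default_rng(2)
        doses = rng.uniform(40.0, 70.0, size=1000)              # hypothetical voxel doses (Gy)

        # Express uncertainty in a as a probability density and propagate it through EUD
        a_samples = rng.normal(loc=-10.0, scale=2.0, size=5000)  # illustrative tumour-like a < 0
        eud_samples = np.array([eud(doses, a) for a in a_samples])
        print("EUD mean +/- sd: %.2f +/- %.2f Gy" % (eud_samples.mean(), eud_samples.std()))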

  12. Modeling and simulation of loss of the ultimate heat sink in a typical material testing reactor

    International Nuclear Information System (INIS)

    El-Khatib, Hisham; El-Morshedy, Salah El-Din; Higazy, Maher G.; El-Shazly, Karam

    2013-01-01

    Highlights: ► A thermal–hydraulic model has been developed to simulate loss of the ultimate heat sink in MTR. ► The model involves three coupled sub-models for core, heat exchanger and cooling tower. ► The model is validated against PARET for steady-state and verified by operation data for transients. ► The model is used to simulate the behavior of the reactor under a loss of the ultimate heat sink. ► The model results are analyzed and discussed. -- Abstract: A thermal–hydraulic model has been developed to simulate loss of the ultimate heat sink in a typical material testing reactor (MTR). The model involves three interactively coupled sub-models for reactor core, heat exchanger and cooling tower. The model is validated against PARET code for steady-state operation and verified by the reactor operation records for transients. Then, the model is used to simulate the thermal–hydraulic behavior of the reactor under a loss of the ultimate heat sink event. The simulation is performed for two operation regimes: regime I representing 11 MW power and three cooling tower cells operated, and regime II representing 22 MW power and six cooling tower cells operated. In regime I, the simulation is performed for 1, 2 and 3 cooling tower cells failed while in regime II, it is performed for 1, 2, 3, 4, 5 and 6 cooling tower cells failed. The simulation is performed under protected conditions where the safety action called power reduction is triggered by reactor protection system to decrease the reactor power by 20% when the coolant inlet temperature to the core reaches 43 °C and scram is triggered if the core inlet temperature reaches 44 °C. The model results are analyzed and discussed.

  13. Models for the estimation of diffuse solar radiation for typical cities in Turkey

    International Nuclear Information System (INIS)

    Bakirci, Kadir

    2015-01-01

    In solar energy applications, the diffuse solar radiation component is required. Solar radiation data, particularly the diffuse component, are not readily available because of the high cost of measurements as well as difficulties in instrument maintenance and calibration. In this study, new empirical models for predicting the monthly mean diffuse solar radiation on a horizontal surface for typical cities in Turkey are established. To this end, fifteen empirical models from the literature are used, and eighteen further diffuse solar radiation models are developed using long-term sunshine duration and global solar radiation data. The accuracy of the developed models is evaluated in terms of different statistical indicators. It is found that the best performance is achieved by the third-order polynomial model based on sunshine duration and clearness index. - Highlights: • Diffuse radiation is given as a function of clearness index and sunshine fraction. • Diffuse radiation is an important parameter in solar energy applications. • Diffuse radiation measurements cover limited periods and are rare. • The new models can be used to estimate monthly average diffuse solar radiation. • The accuracy of the models is evaluated on the basis of statistical indicators
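
    A minimal sketch of fitting a third-order polynomial correlation of the diffuse fraction against the clearness index, the functional form singled out above; the synthetic monthly-mean data are purely illustrative, not measurements from the Turkish stations.

        import numpy as np

        # Synthetic monthly means: clearness index Kt and diffuse fraction Hd/H
        kt = np.array([0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70])
        diffuse_fraction = np.array([0.62, 0.55, 0.48, 0.41, 0.35, 0.30, 0.26, 0.23])

        # Least-squares fit of Hd/H = a0 + a1*Kt + a2*Kt**2 + a3*Kt**3
        coefficients = np.polyfit(kt, diffuse_fraction, deg=3)
        model = np.poly1d(coefficients)

        print("fitted coefficients (a3..a0):", np.round(coefficients, 3))
        print("predicted Hd/H at Kt = 0.52:", round(float(model(0.52)), 3))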

  14. A mathematical model for incorporating biofeedback into human postural control

    Directory of Open Access Journals (Sweden)

    Ersal Tulga

    2013-02-01

    Background Biofeedback of body motion can serve as a balance aid and rehabilitation tool. To date, mathematical models considering the integration of biofeedback into postural control have represented this integration as a sensory addition and limited their application to a single degree-of-freedom representation of the body. This study has two objectives: 1) to develop a scalable method for incorporating biofeedback into postural control that is independent of the model’s degrees of freedom, how it handles sensory integration, and the modeling of its postural controller; and 2) to validate this new model using multidirectional perturbation experimental results. Methods Biofeedback was modeled as an additional torque to the postural controller torque. For validation, this biofeedback modeling approach was applied to a vibrotactile biofeedback device and incorporated into a two-link multibody model with full-state-feedback control that represents the dynamics of bipedal stance. Average response trajectories of body sway and center of pressure (COP) to multidirectional surface perturbations of subjects with vestibular deficits were used for model parameterization and validation in multiple perturbation directions and for multiple display resolutions. The quality of fit was quantified using average error and cross-correlation values. Results The mean of the average errors across all tactor configurations and perturbations was 0.24° for body sway and 0.39 cm for COP. The mean of the cross-correlation value was 0.97 for both body sway and COP. Conclusions The biofeedback model developed in this study is capable of capturing experimental response trajectory shapes with low average errors and high cross-correlation values in both the anterior-posterior and medial-lateral directions for all perturbation directions and spatial resolution display configurations considered. The results validate that biofeedback can be modeled as an additional

  15. A mathematical model for incorporating biofeedback into human postural control

    Science.gov (United States)

    2013-01-01

    Background Biofeedback of body motion can serve as a balance aid and rehabilitation tool. To date, mathematical models considering the integration of biofeedback into postural control have represented this integration as a sensory addition and limited their application to a single degree-of-freedom representation of the body. This study has two objectives: 1) to develop a scalable method for incorporating biofeedback into postural control that is independent of the model’s degrees of freedom, how it handles sensory integration, and the modeling of its postural controller; and 2) to validate this new model using multidirectional perturbation experimental results. Methods Biofeedback was modeled as an additional torque to the postural controller torque. For validation, this biofeedback modeling approach was applied to a vibrotactile biofeedback device and incorporated into a two-link multibody model with full-state-feedback control that represents the dynamics of bipedal stance. Average response trajectories of body sway and center of pressure (COP) to multidirectional surface perturbations of subjects with vestibular deficits were used for model parameterization and validation in multiple perturbation directions and for multiple display resolutions. The quality of fit was quantified using average error and cross-correlation values. Results The mean of the average errors across all tactor configurations and perturbations was 0.24° for body sway and 0.39 cm for COP. The mean of the cross-correlation value was 0.97 for both body sway and COP. Conclusions The biofeedback model developed in this study is capable of capturing experimental response trajectory shapes with low average errors and high cross-correlation values in both the anterior-posterior and medial-lateral directions for all perturbation directions and spatial resolution display configurations considered. The results validate that biofeedback can be modeled as an additional torque to the postural

  16. Modeling a typical winter-time dust event over the Arabian Peninsula and the Red Sea

    Directory of Open Access Journals (Sweden)

    S. Kalenderski

    2013-02-01

    We used WRF-Chem, a regional meteorological model coupled with an aerosol-chemistry component, to simulate various aspects of the dust phenomena over the Arabian Peninsula and Red Sea during a typical winter-time dust event that occurred in January 2009. The model predicted that the total amount of emitted dust was 18.3 Tg for the entire dust outburst period and that the two maximum daily rates were ~2.4 Tg day−1 and ~1.5 Tg day−1, corresponding to two periods with the highest aerosol optical depth that were well captured by ground- and satellite-based observations. The model predicted that the dust plume was thick, extensive, and mixed in a deep boundary layer at an altitude of 3–4 km. Its spatial distribution was modeled to be consistent with typical spatial patterns of dust emissions. We utilized MODIS-Aqua and Solar Village AERONET measurements of the aerosol optical depth (AOD) to evaluate the radiative impact of aerosols. Our results clearly indicated that the presence of dust particles in the atmosphere caused a significant reduction in the amount of solar radiation reaching the surface during the dust event. We also found that dust aerosols have significant impact on the energy and nutrient balances of the Red Sea. Our results showed that the simulated cooling under the dust plume reached 100 W m−2, which could have profound effects on both the sea surface temperature and circulation. Further analysis of dust generation and its spatial and temporal variability is extremely important for future projections and for better understanding of the climate and ecological history of the Red Sea.

  17. Modeling a typical winter-time dust event over the Arabian Peninsula and the Red Sea

    KAUST Repository

    Kalenderski, Stoitchko

    2013-02-20

    We used WRF-Chem, a regional meteorological model coupled with an aerosol-chemistry component, to simulate various aspects of the dust phenomena over the Arabian Peninsula and Red Sea during a typical winter-time dust event that occurred in January 2009. The model predicted that the total amount of emitted dust was 18.3 Tg for the entire dust outburst period and that the two maximum daily rates were ~2.4 Tg day-1 and ~1.5 Tg day-1, corresponding to two periods with the highest aerosol optical depth that were well captured by ground- and satellite-based observations. The model predicted that the dust plume was thick, extensive, and mixed in a deep boundary layer at an altitude of 3-4 km. Its spatial distribution was modeled to be consistent with typical spatial patterns of dust emissions. We utilized MODIS-Aqua and Solar Village AERONET measurements of the aerosol optical depth (AOD) to evaluate the radiative impact of aerosols. Our results clearly indicated that the presence of dust particles in the atmosphere caused a significant reduction in the amount of solar radiation reaching the surface during the dust event. We also found that dust aerosols have significant impact on the energy and nutrient balances of the Red Sea. Our results showed that the simulated cooling under the dust plume reached 100 W m-2, which could have profound effects on both the sea surface temperature and circulation. Further analysis of dust generation and its spatial and temporal variability is extremely important for future projections and for better understanding of the climate and ecological history of the Red Sea.

  18. Incorporating modelled subglacial hydrology into inversions for basal drag

    Directory of Open Access Journals (Sweden)

    C. P. Koziol

    2017-12-01

    A key challenge in modelling coupled ice-flow–subglacial hydrology is initializing the state and parameters of the system. We address this problem by presenting a workflow for initializing these values at the start of a summer melt season. The workflow depends on running a subglacial hydrology model for the winter season, when the system is not forced by meltwater inputs, and ice velocities can be assumed constant. Key parameters of the winter run of the subglacial hydrology model are determined from an initial inversion for basal drag using a linear sliding law. The state of the subglacial hydrology model at the end of winter is incorporated into an inversion of basal drag using a non-linear sliding law which is a function of water pressure. We demonstrate this procedure in the Russell Glacier area and compare the output of the linear sliding law with two non-linear sliding laws. Additionally, we compare the modelled winter hydrological state to radar observations and find that it is in line with summer rather than winter observations.
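
    A minimal sketch contrasting a linear sliding law with a water-pressure-dependent non-linear one. The Budd-type form below, in which basal drag depends on the effective pressure N (ice overburden minus water pressure), is used purely for illustration and is an assumption, not necessarily the specific law used in the paper.

        def linear_sliding(u_b, beta2):
            """Linear sliding law: basal drag proportional to sliding speed."""
            return beta2 * u_b

        def budd_sliding(u_b, effective_pressure, coeff=0.05, m=3.0):
            """Illustrative non-linear, water-pressure-dependent law:
            tau_b = coeff * N * u_b**(1/m)."""
            return coeff * effective_pressure * u_b ** (1.0 / m)

        # Compare drag (kPa) at a sliding speed of 100 m/yr for two effective pressures (kPa)
        for n_eff in (500.0, 100.0):
            print(n_eff, round(budd_sliding(100.0, n_eff), 1), round(linear_sliding(100.0, 1.2), 1))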

  19. An electricity generation planning model incorporating demand response

    International Nuclear Information System (INIS)

    Choi, Dong Gu; Thomas, Valerie M.

    2012-01-01

    Energy policies that aim to reduce carbon emissions and change the mix of electricity generation sources, such as carbon cap-and-trade systems and renewable electricity standards, can affect not only the source of electricity generation, but also the price of electricity and, consequently, demand. We develop an optimization model to determine the lowest cost investment and operation plan for the generating capacity of an electric power system. The model incorporates demand response to price change. In a case study for a U.S. state, we show the price, demand, and generation mix implications of a renewable electricity standard, and of a carbon cap-and-trade policy with and without initial free allocation of carbon allowances. This study shows that both the demand moderating effects and the generation mix changing effects of the policies can be the sources of carbon emissions reductions, and also shows that the share of the sources could differ with different policy designs. The case study provides different results when demand elasticity is excluded, underscoring the importance of incorporating demand response in the evaluation of electricity generation policies. - Highlights: ► We develop an electric power system optimization model including demand elasticity. ► Both renewable electricity and carbon cap-and-trade policies can moderate demand. ► Both policies affect the generation mix, price, and demand for electricity. ► Moderated demand can be a significant source of carbon emission reduction. ► For cap-and-trade policies, initial free allowances change outcomes significantly.
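
    A minimal sketch of the demand-response mechanism described above, using a constant-elasticity demand curve so that a policy-induced electricity price increase moderates demand; all numbers are illustrative, not the case-study values.

        def demand(price, ref_price=60.0, ref_demand=100.0, elasticity=-0.3):
            """Constant-elasticity demand: D = D0 * (P / P0) ** elasticity."""
            return ref_demand * (price / ref_price) ** elasticity

        base = demand(60.0)          # demand (e.g. TWh) at the reference price ($/MWh)
        with_policy = demand(75.0)   # demand after a policy-driven price increase
        print("demand moderation: %.1f%%" % (100.0 * (base - with_policy) / base))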

  20. Tantalum strength model incorporating temperature, strain rate and pressure

    Science.gov (United States)

    Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt

    Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high temperature, strain rate and pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and Z machine's high pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  1. Incorporation of chemical kinetic models into process control

    International Nuclear Information System (INIS)

    Herget, C.J.; Frazer, J.W.

    1981-01-01

    An important consideration in chemical process control is to determine the precise rationing of reactant streams, particularly when a large time delay exists between the mixing of the reactants and the measurement of the product. In this paper, a method is described for incorporating chemical kinetic models into the control strategy in order to achieve optimum operating conditions. The system is first characterized by determining a reaction rate surface as a function of all input reactant concentrations over a feasible range. A nonlinear constrained optimization program is then used to determine the combination of reactants which produces the specified yield at minimum cost. This operating condition is then used to establish the nominal concentrations of the reactants. The actual operation is determined through a feedback control system employing a Smith predictor. The method is demonstrated on a laboratory bench scale enzyme reactor
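
    A minimal discrete-time sketch of the Smith-predictor structure mentioned above: the controller acts on a delay-free model prediction, corrected by the mismatch between the measured output and the delayed model output. The first-order plant, dead time and PI gains are illustrative assumptions, not the laboratory reactor's dynamics.

        from collections import deque

        # Plant: first-order lag with dead time, y[k+1] = a*y[k] + b*u[k-d]
        a, b, d = 0.9, 0.1, 10
        Kp, Ki = 2.0, 0.15                      # PI gains (illustrative)
        setpoint = 1.0

        y, ym, integral = 0.0, 0.0, 0.0         # plant output, delay-free model output, integrator
        u_hist = deque([0.0] * d, maxlen=d)     # input delay line
        ym_hist = deque([0.0] * d, maxlen=d)    # model-output delay line

        for _ in range(100):
            ym_delayed = ym_hist[0]
            # Smith predictor: feed back the delay-free prediction plus the model mismatch
            error = setpoint - (ym + (y - ym_delayed))
            integral += error
            u = Kp * error + Ki * integral

            u_delayed = u_hist[0]               # control applied d steps ago
            u_hist.append(u)
            ym_hist.append(ym)
            y = a * y + b * u_delayed           # real plant (with dead time)
            ym = a * ym + b * u                 # internal model without dead time

        print("output after 100 steps:", round(y, 3))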

  2. Enhanced air dispersion modelling at a typical Chinese nuclear power plant site: Coupling RIMPUFF with two advanced diagnostic wind models.

    Science.gov (United States)

    Liu, Yun; Li, Hong; Sun, Sida; Fang, Sheng

    2017-09-01

    An enhanced air dispersion modelling scheme is proposed to cope with the building layout and complex terrain of a typical Chinese nuclear power plant (NPP) site. In this modelling, the California Meteorological Model (CALMET) and the Stationary Wind Fit and Turbulence (SWIFT) are coupled with the Risø Mesoscale PUFF model (RIMPUFF) for refined wind field calculation. The near-field diffusion coefficient correction scheme of the Atmospheric Relative Concentrations in the Building Wakes Computer Code (ARCON96) is adopted to characterize dispersion in building arrays. The proposed method is evaluated by a wind tunnel experiment that replicates the typical Chinese NPP site. For both wind speed/direction and air concentration, the enhanced modelling predictions agree well with the observations. The fraction of the predictions within a factor of 2 and 5 of observations exceeds 55% and 82% respectively in the building area and the complex terrain area. This demonstrates the feasibility of the new enhanced modelling for typical Chinese NPP sites. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Investigating the probability of detection of typical cavity shapes through modelling and comparison of geophysical techniques

    Science.gov (United States)

    James, P.

    2011-12-01

    With a growing need for housing in the U.K., the government has proposed increased development of brownfield sites. However, old mine workings and natural cavities represent a potential hazard before, during and after construction on such sites, and add further complication to subsurface parameters. Cavities are hence a limitation to certain redevelopment, and their detection is an increasingly important consideration. The current standard technique for cavity detection is a borehole grid, which is intrusive, non-continuous, slow and expensive. A new robust investigation standard for the detection of cavities is sought, and geophysical techniques offer an attractive alternative. Geophysical techniques have previously been utilised successfully in the detection of cavities in various geologies, but they still have an uncertain reputation in the engineering industry. Engineers are unsure of the techniques and are more inclined to rely on well-known methods than to utilise new technologies. Bad experiences with geophysics are commonly due to the indiscriminate choice of particular techniques. It is imperative that a geophysical survey is designed with the specific site and target in mind at all times, with the ability and judgement to rule out some, or all, techniques. To this author's knowledge no comparative software exists to aid technique choice. Also, previous modelling software limits the shapes of bodies, and hence typical cavity shapes are not represented. Here, we introduce 3D modelling software (Matlab) which computes and compares the response to various cavity targets from a range of techniques (gravity, gravity gradient, magnetic, magnetic gradient and GPR). Typical near-surface cavity shapes are modelled, including shafts, bellpits, various lining and capping materials, and migrating voids. The probability of cavity detection is assessed in typical subsurface and noise conditions across a range of survey parameters. Techniques can be compared and the limits of detection distance

  4. Improved Algorithm of SCS-CN Model Parameters in Typical Inland River Basin in Central Asia

    Science.gov (United States)

    Wang, Jin J.; Ding, Jian L.; Zhang, Zhe; Chen, Wen Q.

    2017-02-01

    Rainfall-runoff relationships are the most important factor for hydrological structures and for social and economic development against the background of global warming, especially in arid regions. The aim of this paper is to find a suitable method for simulating runoff in arid areas. The Soil Conservation Service Curve Number (SCS-CN) method is the most popular and widely applied model for direct runoff estimation. In this paper, we focus on the Wenquan Basin in the source region of the Boertala River, a typical inland valley in Central Asia. For the first time, 16 m resolution imagery from the high-definition Earth observation satellite “Gaofen-1” is used to provide highly accurate land use classification data for determining the curve number. A surface temperature/vegetation index (TS/VI) 2D scatter plot is constructed and combined with the soil moisture absorption balance principle to calculate the moisture-holding capacity of the soil. Both the original SCS-CN model and the model with the improved parameter algorithm are then used to simulate runoff. The simulation results show that the improved model outperforms the original model: in the calibration and validation periods the Nash-Sutcliffe efficiencies were 0.79 and 0.71 for the improved model versus 0.66 and 0.38 for the original, and the relative errors were 3% and 12% versus 17% and 27%, respectively. Although the simulation accuracy could be further improved, using remote sensing information technology to improve the basic geographic data for the hydrological model has the following advantages: 1) remote sensing data have an areal character and are comprehensive and representative; 2) they help get around the bottleneck of scarce data, providing a reference for runoff simulation in basins with similar conditions and in data-lacking regions.
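
    For reference, the core SCS-CN relations that the parameter algorithm modifies are the standard ones sketched below, using the conventional initial-abstraction ratio of 0.2; the paper's improvement lies in how CN and soil moisture are derived from remote sensing, so the numbers here are purely illustrative:

    ```python
    def scs_cn_runoff(p_mm, cn, lam=0.2):
        """Direct runoff depth Q (mm) from event rainfall P (mm) for curve number CN."""
        s = 25400.0 / cn - 254.0        # potential maximum retention S (mm)
        ia = lam * s                    # initial abstraction Ia (mm)
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    print(scs_cn_runoff(p_mm=40.0, cn=75))   # illustrative event, ~5 mm of direct runoff
    ```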

  5. Digital terrain model generalization incorporating scale, semantic and cognitive constraints

    Science.gov (United States)

    Partsinevelos, Panagiotis; Papadogiorgaki, Maria

    2014-05-01

    Cartographic generalization is a well-known process accommodating spatial data compression, visualization and comprehension under various scales. In the last few years, there have been several international attempts to construct tangible GIS systems, forming real 3D surfaces using a vast number of mechanical parts along a matrix formation (i.e., bars, pistons, vacuums). Usually, moving bars upon a structured grid push a stretching membrane, resulting in a smooth visualization of a given surface. Most of these attempts suffer in their cost, accuracy, resolution and/or speed. Under this perspective, the present study proposes a surface generalization process that incorporates intrinsic constraints of tangible GIS systems, including robotic-motor movement and surface stretching limitations. The main objective is to provide optimized visualizations of 3D digital terrain models with minimum loss of information; that is, to minimize the number of pixels in a raster dataset used to define a DTM while preserving the surface information. This neighborhood type of pixel relations adheres to the basics of Self-Organizing Map (SOM) artificial neural networks, which are often used for information abstraction since they are indicative of intrinsic statistical features contained in the input patterns and provide concise and characteristic representations. Nevertheless, SOM remains more of a black-box procedure, not capable of coping with possible particularities and semantics of the application at hand. E.g., for coastal monitoring applications, the near-coast areas, surrounding mountains and lakes are more important than other features and generalization should be "biased"-stratified to fulfill this requirement. Moreover, according to the application objectives, we extend the SOM algorithm to incorporate special types of information generalization by differentiating the underlying strategy based on topologic information of the objects included in the application. The final
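
    The SOM component referred to above can be sketched in a few lines. The version below is the plain Kohonen algorithm applied to (x, y, z) samples drawn from a DTM raster, without the semantic and topologic extensions the study introduces; grid size, learning rate and neighbourhood width are illustrative:

    ```python
    import numpy as np

    def generalize_dtm_som(samples, grid=(20, 20), epochs=10, lr0=0.5, sigma0=3.0):
        """Abstract N x 3 DTM samples (x, y, z) onto a coarse SOM lattice of grid[0]*grid[1] nodes."""
        rows, cols = grid
        gx, gy = np.meshgrid(np.linspace(samples[:, 0].min(), samples[:, 0].max(), cols),
                             np.linspace(samples[:, 1].min(), samples[:, 1].max(), rows))
        nodes = np.column_stack([gx.ravel(), gy.ravel(),
                                 np.full(rows * cols, samples[:, 2].mean())])
        lattice = np.array([(i, j) for i in range(rows) for j in range(cols)], dtype=float)
        rng = np.random.default_rng(0)
        total, step = epochs * len(samples), 0
        for _ in range(epochs):
            for s in samples[rng.permutation(len(samples))]:
                frac = 1.0 - step / total
                lr, sigma = lr0 * frac, sigma0 * frac + 0.5
                bmu = np.argmin(((nodes - s) ** 2).sum(axis=1))      # best-matching unit
                d2 = ((lattice - lattice[bmu]) ** 2).sum(axis=1)     # squared lattice distance
                h = np.exp(-d2 / (2.0 * sigma ** 2))                 # neighbourhood kernel
                nodes += lr * h[:, None] * (s - nodes)
                step += 1
        return nodes   # generalized surface: one (x, y, z) point per SOM node
    ```

    Biasing the generalization towards near-coast areas, as described above, would amount to weighting or oversampling those samples, which is where the extended strategy departs from this plain version.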

  6. A stochastic MILP energy planning model incorporating power market dynamics

    International Nuclear Information System (INIS)

    Koltsaklis, Nikolaos E.; Nazos, Konstantinos

    2017-01-01

    Highlights: •Stochastic MILP model for the optimal energy planning of a power system. •Power market dynamics (offers/bids) are incorporated in the proposed model. •Monte Carlo method for capturing the uncertainty of some key parameters. •Analytical supply cost composition per power producer and activity. •Clean dark and spark spreads are calculated for each power unit. -- Abstract: This paper presents an optimization-based methodological approach to address the problem of the optimal planning of a power system at an annual level in competitive and uncertain power markets. More specifically, a stochastic mixed integer linear programming model (MILP) has been developed, combining advanced optimization techniques with Monte Carlo method in order to deal with uncertainty issues. The main focus of the proposed framework is the dynamic formulation of the strategy followed by all market participants in volatile market conditions, as well as detailed economic assessment of the power system’s operation. The applicability of the proposed approach has been tested on a real case study of the interconnected Greek power system, quantifying in detail all the relevant technical and economic aspects of the system’s operation. The proposed work identifies in the form of probability distributions the optimal power generation mix, electricity trade at a regional level, carbon footprint, as well as detailed total supply cost composition, according to the assumed market structure. The paper demonstrates that the proposed optimization approach is able to provide important insights into the appropriate energy strategies designed by market participants, as well as on the strategic long-term decisions to be made by investors and/or policy makers at a national and/or regional level, underscoring potential risks and providing appropriate price signals on critical energy projects under real market operating conditions.
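
    The deterministic core of such a model is a unit-commitment-style MILP, and one Monte Carlo draw of the uncertain parameters corresponds to one solve. Below is a toy single-hour sketch with two thermal units, written with the PuLP package; unit names, costs and limits are hypothetical and unrelated to the Greek system data:

    ```python
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

    demand = 350.0                                   # MW; a Monte Carlo draw would vary this
    units = {                                        # hypothetical offer data
        "lignite": {"pmin": 100, "pmax": 300, "cost": 35.0, "start": 500.0},
        "ccgt":    {"pmin": 50,  "pmax": 250, "cost": 60.0, "start": 200.0},
    }

    prob = LpProblem("single_hour_dispatch", LpMinimize)
    u = {k: LpVariable(f"on_{k}", cat="Binary") for k in units}      # commitment decision
    p = {k: LpVariable(f"p_{k}", lowBound=0) for k in units}         # dispatched output (MW)

    prob += lpSum(units[k]["cost"] * p[k] + units[k]["start"] * u[k] for k in units)
    prob += lpSum(p[k] for k in units) == demand                     # energy balance
    for k, d in units.items():
        prob += p[k] <= d["pmax"] * u[k]                             # capacity only if committed
        prob += p[k] >= d["pmin"] * u[k]                             # minimum stable load

    prob.solve()
    print({k: p[k].varValue for k in units}, value(prob.objective))
    ```

    Wrapping the solve in a loop over sampled fuel prices, demand and unit availabilities is essentially the Monte Carlo layer described above; collecting the outputs yields distributions of the kind the paper reports.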

  7. Cooperative Problem-Based Learning (CPBL): A Practical PBL Model for a Typical Course

    Directory of Open Access Journals (Sweden)

    Khairiyah Mohd-Yusof

    2011-09-01

    Full Text Available Problem-Based Learning (PBL) is an inductive learning approach that uses a realistic problem as the starting point of learning. Unlike in medical education, which is more easily adaptable to PBL, implementing PBL in engineering courses in the traditional semester system set-up is challenging. While PBL is normally implemented in small groups of up to ten students with a dedicated tutor during PBL sessions in medical education, this is not plausible in engineering education because of the high enrolment and large class sizes. In a typical course, implementation of PBL consisting of students in small groups in medium to large classes is more practical. However, this type of implementation is more difficult to monitor, and thus requires good support and guidance in ensuring commitment and accountability of each student towards learning in his/her group. To provide the required support, Cooperative Learning (CL) is identified as having the much needed elements to develop the small student groups into functional learning teams. Combining both CL and PBL results in a Cooperative Problem-Based Learning (CPBL) model that provides a step by step guide for students to go through the PBL cycle in their teams, according to CL principles. Suitable for implementation in medium to large classes (approximately 40-60 students for one floating facilitator), with small groups consisting of 3-5 students, the CPBL model is designed to develop the students in the whole class into a learning community. This paper provides a detailed description of the CPBL model. A sample implementation in a third year Chemical Engineering course, Process Control and Dynamics, is also described.

  8. Model for Volatile Incorporation into Soils and Dust on Mars

    Science.gov (United States)

    Clark, B. C.; Yen, A.

    2006-12-01

    Martian soils with high content of compounds of sulfur and chlorine are ubiquitous on Mars, having been found at all five landing sites. Sulfate and chloride salts are implicated by a variety of evidence, but few conclusive specific identifications have been made. Discovery of jarosite and Mg-Ca sulfates in outcrops at Meridiani Planum (MER mission) and regional-scale beds of kieserite and gypsum (Mars Express mission) notwithstanding, the sulfates in soils are uncertain. Chlorides or other Cl-containing minerals have not been uniquely identified directly by any method. Viking and Pathfinder missions found trends in the elemental analytical data consistent with MgSO4, but Viking results are biased by duricrust samples and Pathfinder by soil contamination of rock surfaces. The Mars Exploration Rovers (MER) missions have taken extensive data on soils with no confirmation of trends implicating any particular cation. In our model of martian dust and soil, the S and Cl are initially incorporated by condensation or chemisorption on grains directly from gas phase molecules in the atmosphere. It is shown by modeling that the coatings thus formed cannot quantitatively explain the apparent elemental composition of these materials, and therefore involve the migration of ions and formation of microscopic weathering rinds. Original cation inventories of unweathered particles are isochemically conserved. Exposed rock surfaces should also have micro rinds, depending upon the length of time of exposure. Martian soils may therefore have unusual chemical properties when interacting with aqueous layers or infused fluids. Potential ramifications to the quantitative accuracy of x-ray fluorescence and Moessbauer spectroscopy on unprocessed samples are also assessed.

  9. GLOBAL MODELING OF NEBULAE WITH PARTICLE GROWTH, DRIFT, AND EVAPORATION FRONTS. I. METHODOLOGY AND TYPICAL RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Estrada, Paul R. [Carl Sagan Center, SETI Institute, 189 N. Bernardo Avenue # 100, Mountain View, CA 94043 (United States); Cuzzi, Jeffrey N. [Ames Research Center, NASA, Mail Stop 245-3, Moffett Field, CA 94035 (United States); Morgan, Demitri A., E-mail: Paul.R.Estrada@nasa.gov [USRA, NASA Ames Research Center, Mail Stop 245-3, Moffett Field, CA 94035 (United States)

    2016-02-20

    We model particle growth in a turbulent, viscously evolving protoplanetary nebula, incorporating sticking, bouncing, fragmentation, and mass transfer at high speeds. We treat small particles using a moments method and large particles using a traditional histogram binning, including a probability distribution function of collisional velocities. The fragmentation strength of the particles depends on their composition (icy aggregates are stronger than silicate aggregates). The particle opacity, which controls the nebula thermal structure, evolves as particles grow and mass redistributes. While growing, particles drift radially due to nebula headwind drag. Particles of different compositions evaporate at “evaporation fronts” (EFs) where the midplane temperature exceeds their respective evaporation temperatures. We track the vapor and solid phases of each component, accounting for advection and radial and vertical diffusion. We present characteristic results in evolutions lasting 2 × 10⁵ years. In general, (1) mass is transferred from the outer to the inner nebula in significant amounts, creating radial concentrations of solids at EFs; (2) particle sizes are limited by a combination of fragmentation, bouncing, and drift; (3) “lucky” large particles never represent a significant amount of mass; and (4) restricted radial zones just outside each EF become compositionally enriched in the associated volatiles. We point out implications for millimeter to submillimeter SEDs and the inference of nebula mass, radial banding, the role of opacity on new mechanisms for generating turbulence, the enrichment of meteorites in heavy oxygen isotopes, variable and nonsolar redox conditions, the primary accretion of silicate and icy planetesimals, and the makeup of Jupiter’s core.

  10. GLOBAL MODELING OF NEBULAE WITH PARTICLE GROWTH, DRIFT, AND EVAPORATION FRONTS. I. METHODOLOGY AND TYPICAL RESULTS

    International Nuclear Information System (INIS)

    Estrada, Paul R.; Cuzzi, Jeffrey N.; Morgan, Demitri A.

    2016-01-01

    We model particle growth in a turbulent, viscously evolving protoplanetary nebula, incorporating sticking, bouncing, fragmentation, and mass transfer at high speeds. We treat small particles using a moments method and large particles using a traditional histogram binning, including a probability distribution function of collisional velocities. The fragmentation strength of the particles depends on their composition (icy aggregates are stronger than silicate aggregates). The particle opacity, which controls the nebula thermal structure, evolves as particles grow and mass redistributes. While growing, particles drift radially due to nebula headwind drag. Particles of different compositions evaporate at “evaporation fronts” (EFs) where the midplane temperature exceeds their respective evaporation temperatures. We track the vapor and solid phases of each component, accounting for advection and radial and vertical diffusion. We present characteristic results in evolutions lasting 2 × 10⁵ years. In general, (1) mass is transferred from the outer to the inner nebula in significant amounts, creating radial concentrations of solids at EFs; (2) particle sizes are limited by a combination of fragmentation, bouncing, and drift; (3) “lucky” large particles never represent a significant amount of mass; and (4) restricted radial zones just outside each EF become compositionally enriched in the associated volatiles. We point out implications for millimeter to submillimeter SEDs and the inference of nebula mass, radial banding, the role of opacity on new mechanisms for generating turbulence, the enrichment of meteorites in heavy oxygen isotopes, variable and nonsolar redox conditions, the primary accretion of silicate and icy planetesimals, and the makeup of Jupiter’s core

  11. Dynamic assessment of nonlinear typical section aeroviscoelastic systems using fractional derivative-based viscoelastic model

    Science.gov (United States)

    Sales, T. P.; Marques, Flávio D.; Pereira, Daniel A.; Rade, Domingos A.

    2018-06-01

    Nonlinear aeroelastic systems are prone to the appearance of limit cycle oscillations, bifurcations, and chaos. Such problems are of increasing concern in aircraft design since there is the need to control nonlinear instabilities and improve safety margins, at the same time as aircraft are subjected to increasingly critical operational conditions. On the other hand, in spite of the fact that viscoelastic materials have already been successfully used for the attenuation of undesired vibrations in several types of mechanical systems, a small number of research works have addressed the feasibility of exploring the viscoelastic effect to improve the behavior of nonlinear aeroelastic systems. In this context, the objective of this work is to assess the influence of viscoelastic materials on the aeroelastic features of a three-degrees-of-freedom typical section with hardening structural nonlinearities. The equations of motion are derived accounting for the presence of viscoelastic materials introduced in the resilient elements associated to each degree-of-freedom. A constitutive law based on fractional derivatives is adopted, which allows the modeling of temperature-dependent viscoelastic behavior in time and frequency domains. The unsteady aerodynamic loading is calculated based on the classical linear potential theory for arbitrary airfoil motion. The aeroelastic behavior is investigated through time domain simulations, and subsequent frequency transformations, from which bifurcations are identified from diagrams of limit cycle oscillations amplitudes versus airspeed. The influence of the viscoelastic effect on the aeroelastic behavior, for different values of temperature, is also investigated. The numerical simulations show that viscoelastic damping can increase the flutter speed and reduce the amplitudes of limit cycle oscillations. These results prove the potential that viscoelastic materials have to increase aircraft components safety margins regarding aeroelastic
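
    A commonly used fractional-derivative law of the kind referred to above is the fractional Zener (standard linear solid) model; the exact form, parameters and temperature dependence adopted by the authors may differ, so the expressions below are indicative only:

    ```latex
    % Fractional Zener constitutive law; D^{\alpha} is a fractional derivative of order alpha
    \sigma(t) + \tau^{\alpha} D^{\alpha}\sigma(t)
        = E_{0}\,\varepsilon(t) + E_{\infty}\,\tau^{\alpha} D^{\alpha}\varepsilon(t),
        \qquad 0 < \alpha \le 1
    % Equivalent complex modulus; temperature enters through the relaxation time \tau(T)
    E^{*}(\omega) = \frac{E_{0} + E_{\infty}\,(\mathrm{i}\omega\tau)^{\alpha}}
                         {1 + (\mathrm{i}\omega\tau)^{\alpha}}
    ```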

  12. Computational fluid dynamics modeling of rope-guided conveyances in two typical kinds of shaft layouts.

    Directory of Open Access Journals (Sweden)

    Renyuan Wu

    Full Text Available The behavior of rope-guided conveyances is so complicated that the rope-guided hoisting system has not yet been thoroughly understood. In this paper, with user-defined functions loaded, ANSYS FLUENT 14.5 was employed to simulate the lateral motion of rope-guided conveyances in two typical kinds of shaft layouts. With a rope-guided mine elevator and mine cages taken into account, the results show that the lateral aerodynamic buffeting force is much larger than the Coriolis force, and that the side aerodynamic force has the same order of magnitude as the Coriolis force. The lateral aerodynamic buffeting forces should also be considered, especially when the conveyance moves along the ventilation air direction. The simulation shows that a closer size of the conveyances can weaken the transverse aerodynamic buffeting effect.
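
    For orientation, the Coriolis force mentioned above is small for realistic hoisting parameters, which is why the aerodynamic buffeting terms matter. A rough order-of-magnitude check, with purely illustrative mass and speed values:

    ```python
    omega = 7.292e-5          # Earth's rotation rate, rad/s
    mass = 20_000.0           # loaded conveyance mass, kg (illustrative)
    speed = 12.0              # hoisting speed, m/s (illustrative)

    # Upper bound 2*m*omega*v (worst-case latitude and orientation)
    f_coriolis = 2.0 * mass * omega * speed
    print(f"{f_coriolis:.1f} N")   # roughly 35 N for these numbers
    ```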

  13. 75 FR 20265 - Airworthiness Directives; Liberty Aerospace Incorporated Model XL-2 Airplanes

    Science.gov (United States)

    2010-04-19

    ... Office, 1701 Columbia Avenue, College Park, Georgia 30337; telephone: (404) 474-5524; facsimile: (404... Airworthiness Directives; Liberty Aerospace Incorporated Model XL-2 Airplanes AGENCY: Federal Aviation...-08- 05, which applies to certain Liberty Aerospace Incorporated Model XL-2 airplanes. AD 2009-08-05...

  14. Loss given default models incorporating macroeconomic variables for credit cards

    OpenAIRE

    Crook, J.; Bellotti, T.

    2012-01-01

    Based on UK data for major retail credit cards, we build several models of Loss Given Default based on account level data, including Tobit, a decision tree model, a Beta and fractional logit transformation. We find that Ordinary Least Squares models with macroeconomic variables perform best for forecasting Loss Given Default at the account and portfolio levels on independent hold-out data sets. The inclusion of macroeconomic conditions in the model is important, since it provides a means to m...

  15. A Typical Model Audit Approach: Spreadsheet Audit Methodologies in the City of London

    OpenAIRE

    Croll, Grenville J.

    2007-01-01

    Spreadsheet audit and review procedures are an essential part of almost all City of London financial transactions. Structured processes are used to discover errors in large financial spreadsheets underpinning major transactions of all types. Serious errors are routinely found and are fed back to model development teams generally under conditions of extreme time urgency. Corrected models form the essence of the completed transaction and firms undertaking model audit and review expose themselve...

  16. Looking around houses: attention to a model when drawing complex shapes in Williams syndrome and typical development.

    Science.gov (United States)

    Hudson, Kerry D; Farran, Emily K

    2013-09-01

    Drawings by individuals with Williams syndrome (WS) typically lack cohesion. The popular hypothesis is that this is a result of excessive focus on local-level detail at the expense of global configuration. In this study, we explored a novel hypothesis that inadequate attention might underpin drawing in WS. WS and typically developing (TD) non-verbal ability matched groups copied and traced a house figure comprised of geometric shapes. The house was presented on a computer screen for 5-s periods and participants pressed a key to re-view the model. Frequency of key-presses indexed the looks to the model. The order that elements were replicated was recorded to assess hierarchisation of elements. If a lack of attention to the model explained poor drawing performance, we expected participants with WS to look less frequently to the model than TD children when copying. If a local-processing preference underpins drawing in WS, more local than global elements would be produced. Results supported the first, but not second hypothesis. The WS group looked to the model infrequently, but global, not local, parts were drawn first, scaffolding local-level details. Both groups adopted a similar order of drawing and tracing of parts, suggesting typical, although delayed strategy-use in the WS group. Additionally both groups drew larger elements of the model before smaller elements, suggested a size-bias when drawing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Development of a noise prediction model based on advanced fuzzy approaches in typical industrial workrooms.

    Science.gov (United States)

    Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir

    2014-01-01

    Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among acoustic features affecting noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, East of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. The multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. Prediction errors of all prediction models based on fuzzy approaches were within the acceptable level (lower than one dB). However, the neuro-fuzzy model (RMSE = 0.53 dB and R2 = 0.88) could slightly improve the accuracy of noise prediction compared with the generated fuzzy model. Moreover, fuzzy approaches provided more accurate predictions than did the regression technique. The developed models based on fuzzy approaches, as useful prediction tools, give professionals the opportunity to make an optimal decision about the effectiveness of acoustic treatment scenarios in embroidery workrooms.

  18. Thermohydraulic model for a typical steam generator of PWR Nuclear Power Plants

    International Nuclear Information System (INIS)

    Braga, C.V.M.

    1980-06-01

    A steady-state thermohydraulic simulation model is developed in which the secondary flow is divided into two individually homogeneous parts with heat and mass transfer between them. The quality of the two-phase mixture fed to the turbine is fixed and, based on this value, the feedwater pressure is determined. The recirculation ratio is determined intrinsically. Based on this model, the GEVAP code was developed in Fortran-IV. The model is applied to the steam generator of the Angra II nuclear power plant and the results, compared with KWU's design parameters, are considered satisfactory. (Author) [pt
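
    For natural-circulation steam generators of this type, the recirculation ratio is conventionally related to the steam quality at the riser exit by a simple mass balance. This is the textbook relation, not necessarily the exact closure used in GEVAP, and the numbers are illustrative:

    ```latex
    R \;=\; \frac{\dot{m}_{\mathrm{riser}}}{\dot{m}_{\mathrm{steam}}}
      \;=\; \frac{1}{x_{\mathrm{exit}}},
    \qquad \text{e.g. } x_{\mathrm{exit}} = 0.25 \;\Rightarrow\; R = 4 .
    ```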

  19. Incorporating Contagion in Portfolio Credit Risk Models Using Network Theory

    NARCIS (Netherlands)

    Anagnostou, I.; Sourabh, S.; Kandhai, D.

    2018-01-01

    Portfolio credit risk models estimate the range of potential losses due to defaults or deteriorations in credit quality. Most of these models perceive default correlation as fully captured by the dependence on a set of common underlying risk factors. In light of empirical evidence, the ability of

  20. Incorporating measurement error in n = 1 psychological autoregressive modeling

    Science.gov (United States)

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
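
    The attenuation effect described above is easy to reproduce. The sketch below simulates an AR(1) process observed with additive white measurement noise and compares the lag-1 autocorrelation of the latent and observed series; parameters are illustrative and unrelated to the mood data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, phi = 5000, 0.6
    sd_innov, sd_meas = 1.0, 1.0          # measurement noise comparable to the innovations

    x = np.zeros(n)
    for t in range(1, n):                 # latent AR(1) process
        x[t] = phi * x[t - 1] + rng.normal(0.0, sd_innov)
    y = x + rng.normal(0.0, sd_meas, n)   # observed series with white measurement noise

    def lag1(z):
        z = z - z.mean()
        return (z[:-1] @ z[1:]) / (z @ z)

    print(lag1(x))   # close to 0.6
    print(lag1(y))   # attenuated towards zero, the bias the AR+WN and ARMA models correct
    ```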

  1. A statistical model for aggregating judgments by incorporating peer predictions

    OpenAIRE

    McCoy, John; Prelec, Drazen

    2017-01-01

    We propose a probabilistic model to aggregate the answers of respondents answering multiple-choice questions. The model does not assume that everyone has access to the same information, and so does not assume that the consensus answer is correct. Instead, it infers the most probable world state, even if only a minority vote for it. Each respondent is modeled as receiving a signal contingent on the actual world state, and as using this signal to both determine their own answer and predict the ...

  2. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    Science.gov (United States)

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
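
    As a reminder of the baseline process being extended, the sketch below simulates a plain two-state MMPP with no covariates: an unobserved Markov chain switches the Poisson arrival rate between a quiet and an intense regime. All rates are illustrative; in the paper they are further linked to temperature, pressure and humidity:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_mmpp(t_end, rates=(0.05, 2.0), leave=(1.0 / 40.0, 1.0 / 10.0)):
        """Two-state MMPP: rates = Poisson intensity per state, leave = rate of leaving each state."""
        t, state, arrivals = 0.0, 0, []
        while t < t_end:
            sojourn = min(rng.exponential(1.0 / leave[state]), t_end - t)
            n = rng.poisson(rates[state] * sojourn)              # arrivals during this sojourn
            arrivals.extend(np.sort(t + rng.uniform(0.0, sojourn, n)))
            t += sojourn
            state = 1 - state                                    # switch regime
        return np.array(arrivals)

    tips = simulate_mmpp(t_end=1000.0)
    print(len(tips), "simulated bucket-tip times")
    ```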

  3. Incorporating Responsiveness to Marketing Efforts in Brand Choice Modeling

    Directory of Open Access Journals (Sweden)

    Dennis Fok

    2014-02-01

    Full Text Available We put forward a brand choice model with unobserved heterogeneity that concerns responsiveness to marketing efforts. We introduce two latent segments of households. The first segment is assumed to respond to marketing efforts, while households in the second segment do not do so. Whether a specific household is a member of the first or the second segment at a specific purchase occasion is described by household-specific characteristics and characteristics concerning buying behavior. Households may switch between the two responsiveness states over time. When comparing the performance of our model with alternative choice models that account for various forms of heterogeneity for three different datasets, we find better face validity for our parameters. Our model also forecasts better.

  4. Modeling a typical winter-time dust event over the Arabian Peninsula and the Red Sea

    KAUST Repository

    Kalenderski, Stoitchko; Stenchikov, Georgiy L.; Zhao, C.

    2013-01-01

    2009. The model predicted that the total amount of emitted dust was 18.3 Tg for the entire dust outburst period and that the two maximum daily rates were ∼2.4 Tg day-1 and ∼1.5 Tg day-1, corresponding to two periods with the highest aerosol optical

  5. A Structural Equation Model of the Writing Process in Typically-Developing Sixth Grade Children

    Science.gov (United States)

    Koutsoftas, Anthony D.; Gray, Shelley

    2013-01-01

    The purpose of this study was to evaluate how sixth grade children planned, translated, and revised written narrative stories using a task reflecting current instructional and assessment practices. A modified version of the Hayes and Flower (1980) writing process model was used as the theoretical framework for the study. Two hundred one…

  6. Numerical modeling for longwall pillar design: a case study from a typical longwall panel in China

    Science.gov (United States)

    Zhang, Guangchao; Liang, Saijiang; Tan, Yunliang; Xie, Fuxing; Chen, Shaojie; Jia, Hongguo

    2018-02-01

    This paper presents a new numerical modeling procedure and design principle for longwall pillar design with the assistance of numerical simulation of FLAC3D. A coal mine located in Yanzhou city, Shandong Province, China, was selected for this case study. A meticulously validated numerical model was developed to investigate the stress changes across the longwall pillar with various sizes. In order to improve the reliability of the numerical modeling, a calibration procedure is undertaken to match the Salamon and Munro pillar strength formula for the coal pillar, while a similar calibration procedure is used to estimate the stress-strain response of a gob. The model results demonstrated that when the coal pillar width was 7-8 m, most of the vertical load was carried by the panel rib, whilst the gateroad was overall in a relatively low stress environment and could keep its stability with proper supports. Thus, the rational longwall pillar width was set as 8 m and the field monitoring results confirmed the feasibility of this pillar size. The proposed numerical simulation procedure and design principle presented in this study could be a viable alternative approach for longwall pillar design for other similar projects.
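
    The Salamon and Munro strength formula used in the calibration can be written down directly. The coefficients below are the commonly quoted classic (1967) values, shown for illustration only, since the paper's own calibration may adopt different numbers:

    ```python
    def salamon_munro_strength(width_m, height_m, k=7.176, a=0.46, b=0.66):
        """Pillar strength in MPa; classic Salamon & Munro coefficients, width and height in metres."""
        return k * width_m**a / height_m**b

    for w in (6.0, 8.0, 10.0):            # illustrative widths around the studied pillar size
        print(w, "m :", round(salamon_munro_strength(w, height_m=3.0), 2), "MPa")
    ```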

  7. A multi-period, multi-regional generation expansion planning model incorporating unit commitment constraints

    International Nuclear Information System (INIS)

    Koltsaklis, Nikolaos E.; Georgiadis, Michael C.

    2015-01-01

    Highlights: • A short-term structured investment planning model has been developed. • Unit commitment problem is incorporated into the long-term planning horizon. • Inherent intermittency of renewables is modelled in a comprehensive way. • The impact of CO_2 emission pricing in long-term investment decisions is quantified. • The evolution of system’s marginal price is evaluated for all the planning horizon. - Abstract: This work presents a generic mixed integer linear programming (MILP) model that integrates the unit commitment problem (UCP), i.e., daily energy planning with the long-term generation expansion planning (GEP) framework. Typical daily constraints at an hourly level such as start-up and shut-down related decisions (start-up type, minimum up and down time, synchronization, soak and desynchronization time constraints), ramping limits, system reserve requirements are combined with representative yearly constraints such as power capacity additions, power generation bounds of each unit, peak reserve requirements, and energy policy issues (renewables penetration limits, CO_2 emissions cap and pricing). For modelling purposes, a representative day (24 h) of each month over a number of years has been employed in order to determine the optimal capacity additions, electricity market clearing prices, and daily operational planning of the studied power system. The model has been tested on an illustrative case study of the Greek power system. Our approach aims to provide useful insight into strategic and challenging decisions to be determined by investors and/or policy makers at a national and/or regional level by providing the optimal energy roadmap under real operating and design constraints.
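
    Typical of the hourly constraints listed above are the textbook minimum up-time and ramping restrictions; the generic forms below (u is the commitment binary, y the start-up binary, p the dispatch of unit i) are given for orientation and are not necessarily the paper's exact formulation:

    ```latex
    \sum_{\tau = t - UT_i + 1}^{t} y_{i,\tau} \;\le\; u_{i,t}
        \qquad \text{(unit $i$ stays committed for $UT_i$ hours after a start-up)}
    p_{i,t} - p_{i,t-1} \;\le\; RU_i,
    \qquad
    p_{i,t-1} - p_{i,t} \;\le\; RD_i
        \qquad \text{(ramp-up and ramp-down limits)}
    ```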

  8. Modeling returns volatility: Realized GARCH incorporating realized risk measure

    Science.gov (United States)

    Jiang, Wei; Ruan, Qingsong; Li, Jianfeng; Li, Ye

    2018-06-01

    This study applies realized GARCH models by introducing several risk measures of intraday returns into the measurement equation, to model the daily volatility of E-mini S&P 500 index futures returns. Besides using the conventional realized measures, realized volatility and realized kernel as our benchmarks, we also use generalized realized risk measures, realized absolute deviation, and two realized tail risk measures, realized value-at-risk and realized expected shortfall. The empirical results show that realized GARCH models using the generalized realized risk measures provide better volatility estimation for the in-sample and substantial improvement in volatility forecasting for the out-of-sample. In particular, the realized expected shortfall performs best for all of the alternative realized measures. Our empirical results reveal that future volatility may be more attributable to present losses (risk measures). The results are robust to different sample estimation windows.
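
    For context, the log-linear Realized GARCH(1,1) of Hansen, Huang and Shek, the framework being extended here, couples the return, the conditional variance and the realized measure as shown below; replacing x_t by realized absolute deviation, realized value-at-risk or realized expected shortfall gives the variants studied in the paper (the notation is the standard one, not copied from the article):

    ```latex
    r_t = \sqrt{h_t}\, z_t, \qquad z_t \sim \text{i.i.d.}(0,1)
    \log h_t = \omega + \beta \log h_{t-1} + \gamma \log x_{t-1}
    \log x_t = \xi + \varphi \log h_t + \tau_1 z_t + \tau_2 (z_t^2 - 1) + u_t,
    \qquad u_t \sim N(0, \sigma_u^2)
    ```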

  9. Incorporating pushing in exclusion-process models of cell migration.

    Science.gov (United States)

    Yates, Christian A; Parker, Andrew; Baker, Ruth E

    2015-05-01

    The macroscale movement behavior of a wide range of isolated migrating cells has been well characterized experimentally. Recently, attention has turned to understanding the behavior of cells in crowded environments. In such scenarios it is possible for cells to interact, inducing neighboring cells to move in order to make room for their own movements or progeny. Although the behavior of interacting cells has been modeled extensively through volume-exclusion processes, few models, thus far, have explicitly accounted for the ability of cells to actively displace each other in order to create space for themselves. In this work we consider both on- and off-lattice volume-exclusion position-jump processes in which cells are explicitly allowed to induce movements in their near neighbors in order to create space for themselves to move or proliferate into. We refer to this behavior as pushing. From these simple individual-level representations we derive continuum partial differential equations for the average occupancy of the domain. We find that, for limited amounts of pushing, comparison between the averaged individual-level simulations and the population-level model is nearly as good as in the scenario without pushing. Interestingly, we find that, in the on-lattice case, the diffusion coefficient of the population-level model is increased by pushing, whereas, for the particular off-lattice model that we investigate, the diffusion coefficient is reduced. We conclude, therefore, that it is important to consider carefully the appropriate individual-level model to use when representing complex cell-cell interactions such as pushing.
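
    A minimal on-lattice version of the pushing mechanism described above can be written as a 1D position-jump simulation. The update rule below (move if the target site is empty, otherwise push its occupant onward with some probability and follow into the vacated site) is our own illustrative reading of such a model, not the authors' code, and all rates are arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def sweep(lattice, p_push=0.5):
        """One random-sequential sweep of a 1D exclusion process in which agents may push a neighbour."""
        n = len(lattice)
        for i in rng.permutation(np.flatnonzero(lattice)):
            if lattice[i] == 0:                           # agent already displaced this sweep
                continue
            d = rng.choice((-1, 1))                       # attempted direction
            j = i + d
            if j < 0 or j >= n:
                continue
            if lattice[j] == 0:                           # ordinary exclusion move
                lattice[i], lattice[j] = 0, 1
            elif rng.random() < p_push:                   # try to push the blocking agent
                k = j + d
                if 0 <= k < n and lattice[k] == 0:
                    lattice[k], lattice[j], lattice[i] = 1, 1, 0   # neighbour shifts, agent follows
        return lattice

    sites = np.zeros(200, dtype=int)
    sites[80:120] = 1                                     # initial block of cells
    for _ in range(500):
        sweep(sites)
    print(sites.sum(), "cells after 500 sweeps")          # occupancy is conserved
    ```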

  10. Incorporating spiritual beliefs into a cognitive model of worry.

    Science.gov (United States)

    Rosmarin, David H; Pirutinsky, Steven; Auerbach, Randy P; Björgvinsson, Thröstur; Bigda-Peyton, Joseph; Andersson, Gerhard; Pargament, Kenneth I; Krumrei, Elizabeth J

    2011-07-01

    Cognitive theory and research have traditionally highlighted the relevance of the core beliefs about oneself, the world, and the future to human emotions. For some individuals, however, core beliefs may also explicitly involve spiritual themes. In this article, we propose a cognitive model of worry, in which positive/negative beliefs about the Divine affect symptoms through the mechanism of intolerance of uncertainty. Using mediation analyses, we found support for our model across two studies, in particular, with regards to negative spiritual beliefs. These findings highlight the importance of assessing for spiritual alongside secular convictions when creating cognitive-behavioral case formulations in the treatment of religious individuals. © 2011 Wiley Periodicals, Inc.

  11. Modelling toluene oxidation : Incorporation of mass transfer phenomena

    NARCIS (Netherlands)

    Hoorn, J.A.A.; van Soolingen, J.; Versteeg, G. F.

    The kinetics of the oxidation of toluene have been studied in close interaction with the gas-liquid mass transfer occurring in the reactor. Kinetic parameters for a simple model have been estimated on basis of experimental observations performed under industrial conditions. The conclusions for the

  12. Incorporating pion effects into the naive quark model

    International Nuclear Information System (INIS)

    Nogami, Y.; Ohtuska, N.

    1982-01-01

    A hybrid of the naive nonrelativistic quark model and the Chew-Low model is proposed. The pion is treated as an elementary particle which interacts with the "bare baryon" or "baryon core" via the Chew-Low interaction. The baryon core, which is the source of the pion interaction, is described by the naive nonrelativistic quark model. It turns out that the baryon-core radius has to be as large as 0.8 fm, and consequently the cutoff momentum Λ for the pion interaction is ≲ 3m_π, m_π being the pion mass. Because of this small Λ (as compared with Λ ≈ nucleon mass in the old Chew-Low model) the effects of the pion cloud are strongly suppressed. The baryon masses, baryon magnetic moments, and the nucleon charge radii can be reproduced quite well. However, we found it singularly difficult to fit the axial-vector weak decay constant g_A

  13. Do Knowledge-Component Models Need to Incorporate Representational Competencies?

    Science.gov (United States)

    Rau, Martina Angela

    2017-01-01

    Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…

  14. Making Invasion models useful for decision makers; incorporating uncertainty, knowledge gaps, and decision-making preferences

    Science.gov (United States)

    Denys Yemshanov; Frank H Koch; Mark Ducey

    2015-01-01

    Uncertainty is inherent in model-based forecasts of ecological invasions. In this chapter, we explore how the perceptions of that uncertainty can be incorporated into the pest risk assessment process. Uncertainty changes a decision maker’s perceptions of risk; therefore, the direct incorporation of uncertainty may provide a more appropriate depiction of risk. Our...

  15. Workforce scheduling: A new model incorporating human factors

    Directory of Open Access Journals (Sweden)

    Mohammed Othman

    2012-12-01

    Full Text Available Purpose: The majority of a company’s improvement comes when the right workers with the right skills, behaviors and capacities are deployed appropriately throughout a company. This paper considers a workforce scheduling model including human aspects such as skills, training, workers’ personalities, workers’ breaks and workers’ fatigue and recovery levels. This model helps to minimize the hiring, firing, training and overtime costs, minimize the number of fired workers with high performance, minimize the break time and minimize the average worker’s fatigue level. Design/methodology/approach: To achieve this objective, a multi-objective mixed integer programming model is developed to determine the amount of hiring, firing, training and overtime for each worker type. Findings: The results indicate that worker differences should be considered in workforce scheduling to generate realistic plans with minimum costs. This paper also investigates the effects of human fatigue and recovery on the performance of the production systems. Research limitations/implications: In this research, there are some assumptions that might affect the accuracy of the model, such as the assumption of certainty of the demand in each period, and the assumed linearity of the fatigue accumulation and recovery curves. These assumptions can be relaxed in future work. Originality/value: In this research, a new model for integrating workers’ differences with workforce scheduling is proposed. To the authors' knowledge, this is the first time the effects of important human factors such as personality, skills, and fatigue and recovery have been studied in the workforce scheduling process. This research shows that considering both technical and human factors together can reduce the costs in manufacturing systems and ensure the safety of the workers.

  16. Incorporating grassland management in a global vegetation model

    Science.gov (United States)

    Chang, Jinfeng; Viovy, Nicolas; Vuichard, Nicolas; Ciais, Philippe; Wang, Tao; Cozic, Anne; Lardy, Romain; Graux, Anne-Isabelle; Klumpp, Katja; Martin, Raphael; Soussana, Jean-François

    2013-04-01

    Grassland is a widespread vegetation type, covering nearly one-fifth of the world's land surface (24 million km2), and playing a significant role in the global carbon (C) cycle. Most grasslands in Europe are cultivated to feed animals, either directly by grazing or indirectly by grass harvest (cutting). A better understanding of the C fluxes from grassland ecosystems in response to climate and management requires not only field experiments but also the aid of simulation models. The ORCHIDEE process-based ecosystem model, designed for large-scale applications, treats grasslands as unmanaged, with C and water fluxes subject only to atmospheric CO2 and climate changes. Our study describes how management of grasslands is included in ORCHIDEE, and how management affects modelled grassland-atmosphere CO2 fluxes. The new model, ORCHIDEE-GM (Grassland Management), is equipped with a management module inspired by the grassland model PaSim (version 5.0) and accounts for two grassland management practices (cutting and grazing). The evaluation of ORCHIDEE-GM against ORCHIDEE at 11 European sites equipped with eddy covariance and biometric measurements shows that ORCHIDEE-GM realistically captures the cut-induced seasonal variation in biometric variables (LAI: Leaf Area Index; AGB: Aboveground Biomass) and in CO2 fluxes (GPP: Gross Primary Productivity; TER: Total Ecosystem Respiration; and NEE: Net Ecosystem Exchange). Improvements at grazing sites are only marginal, however, which relates to the difficulty of accounting for continuous grazing disturbance and the complex animal-vegetation interactions it induces. Both NEE and GPP on monthly to annual timescales are better simulated in ORCHIDEE-GM than in ORCHIDEE without management. At some sites, the model-observation misfit in ORCHIDEE-GM is found to be more related to ill-constrained parameter values than to model structure. Additionally, ORCHIDEE-GM is able to simulate

  17. Incorporating Satellite Time-Series Data into Modeling

    Science.gov (United States)

    Gregg, Watson

    2008-01-01

    In situ time series observations have provided a multi-decadal view of long-term changes in ocean biology. These observations are sufficiently reliable to enable discernment of even relatively small changes, and provide continuous information on a host of variables. Their key drawback is their limited domain. Satellite observations from ocean color sensors do not suffer the drawback of domain, and simultaneously view the global oceans. This attribute lends credence to their use in global and regional model validation and data assimilation. We focus on these applications using the NASA Ocean Biogeochemical Model. The enhancement of the satellite data using data assimilation is featured and the limitation of long-term satellite data sets is also discussed.

  18. Incorporating Contagion in Portfolio Credit Risk Models Using Network Theory

    Directory of Open Access Journals (Sweden)

    Ioannis Anagnostou

    2018-01-01

    Full Text Available Portfolio credit risk models estimate the range of potential losses due to defaults or deteriorations in credit quality. Most of these models perceive default correlation as fully captured by the dependence on a set of common underlying risk factors. In light of empirical evidence, the ability of such a conditional independence framework to accommodate for the occasional default clustering has been questioned repeatedly. Thus, financial institutions have relied on stressed correlations or alternative copulas with more extreme tail dependence. In this paper, we propose a different remedy—augmenting systematic risk factors with a contagious default mechanism which affects the entire universe of credits. We construct credit stress propagation networks and calibrate contagion parameters for infectious defaults. The resulting framework is implemented on synthetic test portfolios wherein the contagion effect is shown to have a significant impact on the tails of the loss distributions.
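
    The mechanism can be illustrated with a toy simulation: defaults are first drawn from a one-factor Gaussian threshold model, and each default then infects network neighbours with some probability, which fattens the loss tail. Everything below (portfolio size, factor correlation, exposure network, infection probability) is hypothetical and unrelated to the paper's calibrated credit stress propagation networks:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, rho, p_infect, n_sims = 200, 0.2, 0.05, 20_000
    threshold = -2.054                        # ~ Phi^{-1}(0.02), i.e. a 2% unconditional PD
    adj = rng.random((n, n)) < 0.02           # sparse random exposure network (illustrative)
    np.fill_diagonal(adj, False)

    default_rates = np.empty(n_sims)
    for s in range(n_sims):
        z = rng.standard_normal()                                  # common systematic factor
        eps = rng.standard_normal(n)                               # idiosyncratic shocks
        defaulted = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps < threshold
        frontier = defaulted.copy()
        while frontier.any():                                      # contagion propagation
            exposed = adj[frontier].any(axis=0) & ~defaulted
            newly = exposed & (rng.random(n) < p_infect)
            defaulted |= newly
            frontier = newly
        default_rates[s] = defaulted.mean()
    print(np.quantile(default_rates, [0.5, 0.99, 0.999]))          # tail fattened by contagion
    ```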

  19. Incorporation of intraocular scattering in schematic eye models

    International Nuclear Information System (INIS)

    Navarro, R.

    1985-01-01

    Beckmann's theory of scattering from rough surfaces is applied to obtain, from the experimental veiling glare functions, a diffuser that when placed at the pupil plane would produce the same scattering halo as the ocular media. This equivalent diffuser is introduced in a schematic eye model, and its influence on the point-spread function and the modulation-transfer function of the eye is analyzed

  20. Constitutive modeling of coronary artery bypass graft with incorporated torsion

    Czech Academy of Sciences Publication Activity Database

    Horný, L.; Chlup, Hynek; Žitný, R.; Adámek, T.

    2009-01-01

    Roč. 49, č. 2 (2009), s. 273-277 ISSN 0543-5846 R&D Projects: GA ČR(CZ) GA106/08/0557 Institutional research plan: CEZ:AV0Z20760514 Keywords : coronary artery bypass graft * constitutive model * digital image correlation Subject RIV: BJ - Thermodynamics Impact factor: 0.439, year: 2009 http://web.tuke.sk/sjf-kamam/mmams2009/contents.pdf

  1. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    OpenAIRE

    Naikwad, S. N.; Dudul, S. V.

    2009-01-01

    A focused time lagged recurrent neural network (FTLR NN) with gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear behavior when the reaction is exothermic. A review of the literature shows that process control of the CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes tempora...
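
    The "typical CSTR" referred to here is usually the two-state exothermic benchmark below (mole and energy balances); it is quoted only to indicate the kind of dynamics such an identification model has to learn, since the specific process data used in the paper may differ:

    ```latex
    \frac{dC_A}{dt} = \frac{q}{V}\,(C_{Af} - C_A) - k_0\, e^{-E/(RT)}\, C_A
    \frac{dT}{dt} = \frac{q}{V}\,(T_f - T)
        + \frac{(-\Delta H)}{\rho\, C_p}\, k_0\, e^{-E/(RT)}\, C_A
        + \frac{U A}{V \rho\, C_p}\,(T_c - T)
    ```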

  2. Incorporation of ice sheet models into an Earth system model: Focus on methodology of coupling

    Science.gov (United States)

    Rybak, Oleg; Volodin, Evgeny; Morozova, Polina; Nevecherja, Artiom

    2018-03-01

    Elaboration of a modern Earth system model (ESM) requires incorporation of ice sheet dynamics. Coupling of an ice sheet model (ICM) to an AOGCM is complicated by essential differences in the spatial and temporal scales of the cryospheric, atmospheric and oceanic components. To overcome this difficulty, we apply two different approaches for the incorporation of ice sheets into an ESM. Coupling of the Antarctic ice sheet model (AISM) to the AOGCM is accomplished by resampling and interpolating the annually averaged surface air temperature and precipitation fields generated by the AOGCM and assigning them to the AISM grid points. Surface melting, which takes place mainly on the margins of the Antarctic peninsula and on ice shelves fringing the continent, is currently ignored. AISM returns anomalies of surface topography back to the AOGCM. To couple the Greenland ice sheet model (GrISM) to the AOGCM, we use a simple buffer energy- and water-balance model (EWBM-G) to account for orographically-driven precipitation and other sub-grid AOGCM-generated quantities. The output of the EWBM-G consists of surface mass balance and air surface temperature to force the GrISM, and freshwater run-off to force thermohaline circulation in the oceanic block of the AOGCM. Because the coupling procedure for the GrIS is rather more complex than that for the AIS, the paper mostly focuses on Greenland.

  3. Magnesium degradation influenced by buffering salts in concentrations typical of in vitro and in vivo models

    International Nuclear Information System (INIS)

    Agha, Nezha Ahmad; Feyerabend, Frank; Mihailova, Boriana; Heidrich, Stefanie; Bismayer, Ulrich; Willumeit-Römer, Regine

    2016-01-01

    Magnesium and its alloys have considerable potential for orthopedic applications. During the degradation process the interface between material and tissue is continuously changing. Moreover, too fast or uncontrolled degradation is detrimental for the outcome in vivo. Therefore in vitro setups utilizing physiological conditions are promising for the material/degradation analysis prior to animal experiments. The aim of this study is to elucidate the influence of inorganic salts contributing to the blood buffering capacity on degradation. Extruded pure magnesium samples were immersed under cell culture conditions for 3 and 10 days. Hank's balanced salt solution without calcium and magnesium (HBSS) plus 10% of fetal bovine serum (FBS) was used as the basic immersion medium. Additionally, different inorganic salts were added with respect to concentration in Dulbecco's modified Eagle's medium (DMEM, in vitro model) and human plasma (in vivo model) to form 12 different immersion media. Influences on the surrounding environment were observed by measuring pH and osmolality. The degradation interface was analyzed by electron-induced X-ray emission (EIXE) spectroscopy, including chemical-element mappings and electron microprobe analysis, as well as Fourier transform infrared reflection micro-spectroscopy (FTIR). - Highlights: • Influence of blood buffering salts on magnesium degradation was studied. • CaCl_2 reduced the degradation rate by Ca–PO_4 layer formation. • MgSO_4 influenced the morphology of the degradation interface. • NaHCO_3 induced the formation of MgCO_3 as a degradation product

  4. Magnesium degradation influenced by buffering salts in concentrations typical of in vitro and in vivo models.

    Science.gov (United States)

    Agha, Nezha Ahmad; Feyerabend, Frank; Mihailova, Boriana; Heidrich, Stefanie; Bismayer, Ulrich; Willumeit-Römer, Regine

    2016-01-01

    Magnesium and its alloys have considerable potential for orthopedic applications. During the degradation process the interface between material and tissue is continuously changing. Moreover, too fast or uncontrolled degradation is detrimental for the outcome in vivo. Therefore in vitro setups utilizing physiological conditions are promising for the material/degradation analysis prior to animal experiments. The aim of this study is to elucidate the influence of inorganic salts contributing to the blood buffering capacity on degradation. Extruded pure magnesium samples were immersed under cell culture conditions for 3 and 10 days. Hank's balanced salt solution without calcium and magnesium (HBSS) plus 10% of fetal bovine serum (FBS) was used as the basic immersion medium. Additionally, different inorganic salts were added with respect to concentration in Dulbecco's modified Eagle's medium (DMEM, in vitro model) and human plasma (in vivo model) to form 12 different immersion media. Influences on the surrounding environment were observed by measuring pH and osmolality. The degradation interface was analyzed by electron-induced X-ray emission (EIXE) spectroscopy, including chemical-element mappings and electron microprobe analysis, as well as Fourier transform infrared reflection micro-spectroscopy (FTIR). Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Exergoeconomic performance optimization for a steady-flow endoreversible refrigeration model including six typical cycles

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Lingen; Kan, Xuxian; Sun, Fengrui; Wu, Feng [College of Naval Architecture and Power, Naval University of Engineering, Wuhan 430033 (China)

    2013-07-01

    The operation of a universal steady-flow endoreversible refrigeration cycle model consisting of a constant thermal-capacity heating branch, two constant thermal-capacity cooling branches and two adiabatic branches is viewed as a production process with exergy as its output. The finite-time exergoeconomic performance optimization of the refrigeration cycle is investigated by taking the profit rate as the optimization objective. The relations between the profit rate and the temperature ratio of the working fluid, between the COP (coefficient of performance) and the temperature ratio of the working fluid, as well as the optimal relation between the profit rate and the COP of the cycle are derived. The focus of this paper is the compromise between economics (profit rate) and the utilization factor (COP) for endoreversible refrigeration cycles, obtained by finding the optimum COP at maximum profit, which is termed the finite-time exergoeconomic performance bound. Moreover, performance analysis and optimization of the model are carried out in order to investigate the effect of the cycle process on the performance of the cycles, using a numerical example. The results obtained herein include the performance characteristics of endoreversible Carnot, Diesel, Otto, Atkinson, Dual and Brayton refrigeration cycles.

  6. Accuracy of typical photogrammetric networks in cultural heritage 3D modeling projects

    Directory of Open Access Journals (Sweden)

    E. Nocerino

    2014-06-01

    Full Text Available The easy generation of 3D geometries (point clouds or polygonal models) with fully automated image-based methods poses nontrivial problems on how to check a posteriori the quality of the achieved results. Clear statements and procedures on how to plan the camera network, execute the survey and use automatic tools to achieve the predefined requirements are still an open issue. Although such issues had been discussed and solved some years ago, the importance of camera network geometry is today often underestimated or neglected in the cultural heritage field. In this paper different camera network geometries, with normal and convergent images, are analyzed and the accuracy of the produced results is compared to ground truth measurements.

  7. Magnesium degradation influenced by buffering salts in concentrations typical of in vitro and in vivo models

    Energy Technology Data Exchange (ETDEWEB)

    Agha, Nezha Ahmad; Feyerabend, Frank [Helmholtz-Zentrum Geesthacht, Institute of Material Research, Division of Metallic Biomaterials, Max-Planck-Str. 1, 21502 Geesthacht (Germany); Mihailova, Boriana; Heidrich, Stefanie; Bismayer, Ulrich [University of Hamburg, Department of Earth Sciences, Grindelallee 48, 20146 Hamburg (Germany); Willumeit-Römer, Regine [Helmholtz-Zentrum Geesthacht, Institute of Material Research, Division of Metallic Biomaterials, Max-Planck-Str. 1, 21502 Geesthacht (Germany)

    2016-01-01

    Magnesium and its alloys have considerable potential for orthopedic applications. During the degradation process the interface between material and tissue is continuously changing. Moreover, too fast or uncontrolled degradation is detrimental for the outcome in vivo. Therefore in vitro setups utilizing physiological conditions are promising for the material/degradation analysis prior to animal experiments. The aim of this study is to elucidate the influence of inorganic salts contributing to the blood buffering capacity on degradation. Extruded pure magnesium samples were immersed under cell culture conditions for 3 and 10 days. Hank's balanced salt solution without calcium and magnesium (HBSS) plus 10% of fetal bovine serum (FBS) was used as the basic immersion medium. Additionally, different inorganic salts were added with respect to concentration in Dulbecco's modified Eagle's medium (DMEM, in vitro model) and human plasma (in vivo model) to form 12 different immersion media. Influences on the surrounding environment were observed by measuring pH and osmolality. The degradation interface was analyzed by electron-induced X-ray emission (EIXE) spectroscopy, including chemical-element mappings and electron microprobe analysis, as well as Fourier transform infrared reflection micro-spectroscopy (FTIR). - Highlights: • Influence of blood buffering salts on magnesium degradation was studied. • CaCl_2 reduced the degradation rate by Ca–PO_4 layer formation. • MgSO_4 influenced the morphology of the degradation interface. • NaHCO_3 induced the formation of MgCO_3 as a degradation product.

  8. Models of microbiome evolution incorporating host and microbial selection.

    Science.gov (United States)

    Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen

    2017-09-25

    Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong

  9. Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models

    Science.gov (United States)

    Duffy, Stephen F.

    1997-01-01

    Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve, a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for "graceful" rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of their constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is borne increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, Ni

  10. A neuro-fuzzy model for prediction of the indoor temperature in typical Australian residential buildings

    Energy Technology Data Exchange (ETDEWEB)

    Alasha' ary, Haitham; Moghtaderi, Behdad; Page, Adrian; Sugo, Heber [Priority Research Centre for Energy, Chemical Engineering, School of Engineering, Faculty of Engineering and Built Environment, the University of Newcastle, Callaghan, Newcastle, NSW 2308 (Australia)

    2009-07-15

    The Masonry Research Group at The University of Newcastle, Australia has embarked on an extensive research program to study the thermal performance of common walling systems in Australian residential buildings by studying the thermal behaviour of four representative purpose-built thermal test buildings (referred to as 'test modules' or simply 'modules' hereafter). The modules are situated on the university campus and are constructed from brick veneer (BV), cavity brick (CB) and lightweight (LW) constructions. The program of study has both experimental and analytical strands, including the use of a neuro-fuzzy approach to predict the thermal behaviour. The latter approach employs an experimental adaptive neuro-fuzzy inference system (ANFIS) which is used in this study to predict the room (indoor) temperatures of the modules under a range of climatic conditions pertinent to Newcastle (NSW, Australia). The study shows that this neuro-fuzzy model is capable of accurately predicting the room temperature of such buildings; thus providing a potential computationally efficient and inexpensive predictive tool for the more effective thermal design of housing. (author)

  11. Simplified CFD model of coolant channels typical of a plate-type fuel element: an exhaustive verification of the simulations

    Energy Technology Data Exchange (ETDEWEB)

    Mantecón, Javier González; Mattar Neto, Miguel, E-mail: javier.mantecon@ipen.br, E-mail: mmattar@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The use of parallel plate-type fuel assemblies is common in nuclear research reactors. One of the main problems of this fuel element configuration is the hydraulic instability of the plates caused by the high flow velocities. The current work is focused on the hydrodynamic characterization of coolant channels typical of a flat-plate fuel element, using a numerical model developed with the commercial code ANSYS CFX. Numerical results are compared to accurate analytical solutions, considering two turbulence models and three different fluid meshes. For this study, the results demonstrated that the most suitable turbulence model is the k-ε model. The discretization error is estimated using the Grid Convergence Index method. Despite its simplicity, this model generates precise flow predictions. (author)

  12. Simplified CFD model of coolant channels typical of a plate-type fuel element: an exhaustive verification of the simulations

    International Nuclear Information System (INIS)

    Mantecón, Javier González; Mattar Neto, Miguel

    2017-01-01

    The use of parallel plate-type fuel assemblies is common in nuclear research reactors. One of the main problems of this fuel element configuration is the hydraulic instability of the plates caused by the high flow velocities. The current work is focused on the hydrodynamic characterization of coolant channels typical of a flat-plate fuel element, using a numerical model developed with the commercial code ANSYS CFX. Numerical results are compared to accurate analytical solutions, considering two turbulence models and three different fluid meshes. For this study, the results demonstrated that the most suitable turbulence model is the k-ε model. The discretization error is estimated using the Grid Convergence Index method. Despite its simplicity, this model generates precise flow predictions. (author)
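
    The Grid Convergence Index mentioned in the two records above follows Roache's standard recipe; the sketch below computes the observed order of accuracy and the fine-grid GCI from three mesh solutions. The solution values and refinement ratio are illustrative, not the paper's data.

```python
# Roache's Grid Convergence Index (GCI) for three systematically refined meshes.
import math

def gci(f_fine, f_med, f_coarse, r, Fs=1.25):
    """Return (observed order p, GCI of the fine grid) for a constant refinement ratio r."""
    p = math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)
    e21 = abs((f_med - f_fine) / f_fine)          # relative change fine -> medium
    return p, Fs * e21 / (r ** p - 1.0)

# e.g. pressure drop predicted on three successively refined meshes (made-up numbers)
p_obs, gci_fine = gci(f_fine=101.2, f_med=102.1, f_coarse=104.0, r=2.0)
print(f"observed order ~ {p_obs:.2f}, fine-grid GCI ~ {100 * gci_fine:.2f} %")
```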

  13. Incorporation of particle creation and annihilation into Bohm's Pilot Wave model

    Energy Technology Data Exchange (ETDEWEB)

    Sverdlov, Roman [Raman Research Institute, C.V. Raman Avenue, Sadashiva Nagar, Bangalore, Karnataka, 560080 (India)

    2011-07-08

    The purpose of this paper is to come up with a Pilot Wave model of quantum field theory that incorporates particle creation and annihilation without sacrificing determinism; this theory is subsequently coupled with gravity.

  14. INCORPORATION OF MECHANISTIC INFORMATION IN THE ARSENIC PBPK MODEL DEVELOPMENT PROCESS

    Science.gov (United States)

    INCORPORATING MECHANISTIC INSIGHTS IN A PBPK MODEL FOR ARSENIC. Elaina M. Kenyon, Michael F. Hughes, Marina V. Evans, David J. Thomas, U.S. EPA; Miroslav Styblo, University of North Carolina; Michael Easterling, Analytical Sciences, Inc. A physiologically based phar...

  15. High-Strain Rate Failure Modeling Incorporating Shear Banding and Fracture

    Science.gov (United States)

    2017-11-22

    High Strain Rate Failure Modeling Incorporating Shear Banding and Fracture. The views, opinions and/or findings contained in this report are those of... Report as of 05-Dec-2017. Agreement Number: W911NF-13-1-0238. Organization: Columbia University. Title: High Strain Rate Failure Modeling Incorporating Shear Banding and Fracture.

  16. Incorporating the life course model into MCH nutrition leadership education and training programs.

    Science.gov (United States)

    Haughton, Betsy; Eppig, Kristen; Looney, Shannon M; Cunningham-Sabo, Leslie; Spear, Bonnie A; Spence, Marsha; Stang, Jamie S

    2013-01-01

    Life course perspective, social determinants of health, and health equity have been combined into one comprehensive model, the life course model (LCM), for strategic planning by US Health Resources and Services Administration's Maternal and Child Health Bureau. The purpose of this project was to describe a faculty development process; identify strategies for incorporation of the LCM into nutrition leadership education and training at the graduate and professional levels; and suggest broader implications for training, research, and practice. Nineteen representatives from 6 MCHB-funded nutrition leadership education and training programs and 10 federal partners participated in a one-day session that began with an overview of the models and concluded with guided small group discussions on how to incorporate them into maternal and child health (MCH) leadership training using obesity as an example. Written notes from group discussions were compiled and coded emergently. Content analysis determined the most salient themes about incorporating the models into training. Four major LCM-related themes emerged, three of which were about training: (1) incorporation by training grants through LCM-framed coursework and experiences for trainees, and similarly framed continuing education and skills development for professionals; (2) incorporation through collaboration with other training programs and state and community partners, and through advocacy; and (3) incorporation by others at the federal and local levels through policy, political, and prevention efforts. The fourth theme focused on anticipated challenges of incorporating the model in training. Multiple methods for incorporating the LCM into MCH training and practice are warranted. Challenges to incorporating include the need for research and related policy development.

  17. The effects of typical and atypical antipsychotics on the electrical activity of the brain in a rat model

    Directory of Open Access Journals (Sweden)

    Oytun Erbaş

    2013-09-01

    Full Text Available Objective: Antipsychotic drugs are known to have a strong effect on the bioelectric activity in the brain. However, some studies addressing the changes on electroencephalography (EEG) caused by typical and atypical antipsychotic drugs are conflicting. We aimed to compare the effects of typical and atypical antipsychotics on the electrical activity in the brain via EEG recordings in a rat model. Methods: Thirty-two Sprague Dawley adult male rats were used in the study. The rats were divided into five groups, randomly (n=7 for each group). The first group was used as control group and administered 1 ml/kg saline intraperitoneally (IP). Haloperidol (1 mg/kg) (group 2), chlorpromazine (5 mg/kg) (group 3), olanzapine (1 mg/kg) (group 4), ziprasidone (1 mg/kg) (group 5) were injected IP for five consecutive days. Then, EEG recordings of each group were taken for 30 minutes. Results: The percentages of delta and theta waves in the haloperidol, chlorpromazine, olanzapine and ziprasidone groups were found to have a highly significant difference compared with the saline administration group (p<0.001). The theta waves in the olanzapine and ziprasidone groups were increased compared with the haloperidol and chlorpromazine groups (p<0.05). Conclusion: The typical and atypical antipsychotic drugs may be risk factors for EEG abnormalities. This study shows that antipsychotic drugs should be used with caution. J Clin Exp Invest 2013; 4(3): 279-284. Key words: Haloperidol, chlorpromazine, olanzapine, ziprasidone, EEG, rat

  18. A mathematical model for the performance assessment of engineering barriers of a typical near surface radioactive waste disposal facility

    Energy Technology Data Exchange (ETDEWEB)

    Antonio, Raphaela N.; Rotunno Filho, Otto C. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Hidrologia e Estudos do Meio Ambiente]. E-mail: otto@hidro.ufrj.br; Ruperti Junior, Nerbe J.; Lavalle Filho, Paulo F. Heilbron [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)]. E-mail: nruperti@cnen.gov.br

    2005-07-01

    This work proposes a mathematical model for the performance assessment of a typical radioactive waste disposal facility based on the consideration of a multiple barrier concept. The Generalized Integral Transform Technique is employed to solve the Advection-Dispersion mass transfer equation under the assumption of saturated one-dimensional flow, to obtain solute concentrations at given times and locations within the medium. A test-case is chosen in order to illustrate the performance assessment of several configurations of a multi barrier system adopted for the containment of sand contaminated with Ra-226 within a trench. (author)

  19. A mathematical model for the performance assessment of engineering barriers of a typical near surface radioactive waste disposal facility

    International Nuclear Information System (INIS)

    Antonio, Raphaela N.; Rotunno Filho, Otto C.

    2005-01-01

    This work proposes a mathematical model for the performance assessment of a typical radioactive waste disposal facility based on the consideration of a multiple barrier concept. The Generalized Integral Transform Technique is employed to solve the Advection-Dispersion mass transfer equation under the assumption of saturated one-dimensional flow, to obtain solute concentrations at given times and locations within the medium. A test-case is chosen in order to illustrate the performance assessment of several configurations of a multi barrier system adopted for the containment of sand contaminated with Ra-226 within a trench. (author)
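
    A common way to verify a transport solver such as the GITT model described above is to compare it against a closed-form benchmark. The sketch below evaluates the classical Ogata-Banks solution of the 1D advection-dispersion equation for a constant-concentration inlet; it assumes a single homogeneous saturated layer, ignores sorption and decay, and uses illustrative parameter values.

```python
# Ogata-Banks closed-form solution of the 1D advection-dispersion equation,
# usable as a verification case for numerical or semi-analytical transport models.
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """Concentration c(x, t) for seepage velocity v and dispersion coefficient D."""
    a = 2.0 * np.sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / a) + np.exp(v * x / D) * erfc((x + v * t) / a))

x = np.linspace(0.0, 2.0, 5)                     # depth below the trench [m] (illustrative)
print(ogata_banks(x, t=100.0, v=0.05, D=0.01))   # relative concentrations after 100 days
```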

  20. Developing Baltic cod recruitment models II : Incorporation of environmental variability and species interaction

    DEFF Research Database (Denmark)

    Köster, Fritz; Hinrichsen, H.H.; St. John, Michael

    2001-01-01

    We investigate whether a process-oriented approach based on the results of field, laboratory, and modelling studies can be used to develop a stock-environment-recruitment model for Central Baltic cod (Gadus morhua). Based on exploratory statistical analysis, significant variables influencing survival of early life stages and varying systematically among spawning sites were incorporated into stock-recruitment models, first for major cod spawning sites and then combined for the entire Central Baltic. Variables identified included potential egg production by the spawning stock, abiotic conditions ... cod in these areas, suggesting that key biotic and abiotic processes can be successfully incorporated into recruitment models.

  1. Modeling individual differences in text reading fluency: a different pattern of predictors for typically developing and dyslexic readers

    Directory of Open Access Journals (Sweden)

    Pierluigi eZoccolotti

    2014-11-01

    Full Text Available This study was aimed at predicting individual differences in text reading fluency. The basic proposal included two factors, i.e., the ability to decode letter strings (measured by discrete pseudo-word reading) and integration of the various sub-components involved in reading (measured by Rapid Automatized Naming, RAN). Subsequently, a third factor was added to the model, i.e., naming of discrete digits. In order to use homogeneous measures, all contributing variables considered the entire processing of the item, including pronunciation time. The model, which was based on commonality analysis, was applied to data from a group of 43 typically developing readers (11- to 13-year-olds) and a group of 25 chronologically matched dyslexic children. In typically developing readers, both orthographic decoding and integration of reading sub-components contributed significantly to the overall prediction of text reading fluency. The model prediction was higher (from ca. 37% to 52% of the explained variance) when we included the naming of discrete digits variable, which had a suppressive effect on pseudo-word reading. In the dyslexic readers, the variance explained by the two-factor model was high (69%) and did not change when the third factor was added. The lack of a suppression effect was likely due to the prominent individual differences in poor orthographic decoding of the dyslexic children. Analyses on data from both groups of children were replicated by using patches of colours as stimuli (both in the RAN task and in the discrete naming task) obtaining similar results. We conclude that it is possible to predict much of the variance in text-reading fluency using basic processes, such as orthographic decoding and integration of reading sub-components, even without taking into consideration higher-order linguistic factors such as lexical, semantic and contextual abilities. The approach validity of using proximal vs distal causes to predict reading fluency is
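
    The commonality analysis used in the record above partitions explained variance into unique and shared components from the R² values of all-subset regressions. The sketch below does this for two predictors on simulated placeholder data; a negative common component is the signature of the suppression effect described for discrete digit naming.

```python
# Commonality analysis for two predictors via all-subset regressions.
import numpy as np
from sklearn.linear_model import LinearRegression

def r2(X, y):
    return LinearRegression().fit(X, y).score(X, y)

rng = np.random.default_rng(0)
decoding = rng.normal(size=200)                      # stand-in for discrete pseudo-word reading
ran = 0.5 * decoding + rng.normal(size=200)          # stand-in for rapid automatized naming
fluency = 0.6 * decoding + 0.4 * ran + rng.normal(size=200)

X1, X2 = decoding.reshape(-1, 1), ran.reshape(-1, 1)
X12 = np.column_stack([decoding, ran])

r2_1, r2_2, r2_12 = r2(X1, fluency), r2(X2, fluency), r2(X12, fluency)
unique_1 = r2_12 - r2_2          # variance explained only by decoding
unique_2 = r2_12 - r2_1          # variance explained only by RAN
common = r2_1 + r2_2 - r2_12     # shared variance (negative => suppression)
print(f"unique(decoding)={unique_1:.3f}, unique(RAN)={unique_2:.3f}, common={common:.3f}")
```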

  2. Incorporation of a high-roughness lower boundary into a mesoscale model for studies of dry deposition over complex terrain

    Science.gov (United States)

    Physick, W. L.; Garratt, J. R.

    1995-04-01

    For flow over natural surfaces, there exists a roughness sublayer within the atmospheric surface layer near the boundary. In this sublayer (typically 50 z₀ deep in unstable conditions), the Monin-Obukhov (M-O) flux profile relations for homogeneous surfaces cannot be applied. We have incorporated a modified form of the M-O stability functions (Garratt, 1978, 1980, 1983) in a mesoscale model to take account of this roughness sublayer and examined the diurnal variation of the boundary-layer wind and temperature profiles with and without these modifications. We have also investigated the effect of the modified M-O functions on the aerodynamic and laminar-sublayer resistances associated with the transfer of trace gases to vegetation. Our results show that when an observation height or the lowest level in a model is within the roughness sublayer, neglect of the flux-profile modifications leads to an underestimate of resistances by 7% at the most.

  3. Using AGWA and the KINEROS2 Model to Model Green Infrastructure in Two Typical Residential Lots in Prescott, AZ

    Science.gov (United States)

    The Automated Geospatial Watershed Assessment (AGWA) Urban tool provides a step-by-step process to model subdivisions using the KINEROS2 model, with and without Green Infrastructure (GI) practices. AGWA utilizes the Kinematic Runoff and Erosion (KINEROS2) model, an event driven, ...

  4. Incorporation of the capillary hysteresis model HYSTR into the numerical code TOUGH

    International Nuclear Information System (INIS)

    Niemi, A.; Bodvarsson, G.S.; Pruess, K.

    1991-11-01

    As part of the work performed to model flow in the unsaturated zone at Yucca Mountain Nevada, a capillary hysteresis model has been developed. The computer program HYSTR has been developed to compute the hysteretic capillary pressure -- liquid saturation relationship through interpolation of tabulated data. The code can be easily incorporated into any numerical unsaturated flow simulator. A complete description of HYSTR, including a brief summary of the previous hysteresis literature, detailed description of the program, and instructions for its incorporation into a numerical simulator are given in the HYSTR user's manual (Niemi and Bodvarsson, 1991a). This report describes the incorporation of HYSTR into the numerical code TOUGH (Transport of Unsaturated Groundwater and Heat; Pruess, 1986). The changes made and procedures for the use of TOUGH for hysteresis modeling are documented
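
    The core of HYSTR, as described above, is interpolation of tabulated capillary pressure-saturation data on the appropriate hysteresis branch. The sketch below shows the idea with invented drainage and wetting tables; scanning curves between the two main branches are not modeled here.

```python
# Table interpolation of a hysteretic capillary pressure - saturation relationship.
import numpy as np

S_tab = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.0])               # liquid saturation
pc_drain = np.array([9.0e5, 3.0e5, 1.2e5, 5.0e4, 1.5e4, 0.0])  # Pa, drying branch (assumed)
pc_wet = np.array([6.0e5, 1.8e5, 7.0e4, 2.5e4, 8.0e3, 0.0])    # Pa, wetting branch (assumed)

def capillary_pressure(S_new, S_old):
    """Hysteretic Pc: use the drainage branch when saturation decreases, else wetting."""
    branch = pc_drain if S_new < S_old else pc_wet
    return np.interp(S_new, S_tab, branch)

print(capillary_pressure(0.45, 0.60))   # drying step -> drainage branch
print(capillary_pressure(0.45, 0.40))   # wetting step -> wetting branch
```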

  5. Simulation of Forest Carbon Fluxes Using Model Incorporation and Data Assimilation

    OpenAIRE

    Min Yan; Xin Tian; Zengyuan Li; Erxue Chen; Xufeng Wang; Zongtao Han; Hong Sun

    2016-01-01

    This study improved simulation of forest carbon fluxes in the Changbai Mountains with a process-based model (Biome-BGC) using incorporation and data assimilation. Firstly, the original remote sensing-based MODIS MOD_17 GPP (MOD_17) model was optimized using refined input data and biome-specific parameters. The key ecophysiological parameters of the Biome-BGC model were determined through the Extended Fourier Amplitude Sensitivity Test (EFAST) sensitivity analysis. Then the optimized MOD_17 mo...

  6. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    Directory of Open Access Journals (Sweden)

    S. N. Naikwad

    2009-01-01

    Full Text Available A focused time lagged recurrent neural network (FTLR NN) with gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. The continuous stirred tank reactor exhibits complex nonlinear behaviour, as the reaction is exothermic. It is noticed from the literature review that process control of CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes temporal relationships in the input-output mappings, a time lagged recurrent neural network is particularly suited for the identification purpose. The standard back propagation algorithm with a momentum term has been proposed in this model. The various parameters like number of processing elements, number of hidden layers, training and testing percentage, learning rule and transfer function in hidden and output layer are investigated on the basis of performance measures like MSE, NMSE, and correlation coefficient on the testing data set. Finally, effects of different norms are tested along with variation in the gamma memory filter. It is demonstrated that the dynamic NN model has a remarkable system identification capability for the problems considered in this paper. Thus the FTLR NN with gamma memory filter can be used to learn the underlying highly nonlinear dynamics of the system, which is a major contribution of this paper.

  7. Incorporation of composite defects from ultrasonic NDE into CAD and FE models

    Science.gov (United States)

    Bingol, Onur Rauf; Schiefelbein, Bryan; Grandin, Robert J.; Holland, Stephen D.; Krishnamurthy, Adarsh

    2017-02-01

    Fiber-reinforced composites are widely used in the aerospace industry due to their combined properties of high strength and low weight. However, owing to their complex structure, it is difficult to assess the impact of manufacturing defects and service damage on their residual life. While ultrasonic testing (UT) is the preferred NDE method to identify the presence of defects in composites, there are no reasonable ways to model the damage and evaluate the structural integrity of composites. We have developed an automated framework to incorporate flaws and known composite damage automatically into a finite element analysis (FEA) model of composites, ultimately aiding in assessing the residual life of composites and making informed decisions regarding repairs. The framework can be used to generate a layer-by-layer 3D structural CAD model of the composite laminates replicating their manufacturing process. Outlines of structural defects, such as delaminations, are automatically detected from UT of the laminate and are incorporated into the CAD model between the appropriate layers. In addition, the framework allows for direct structural analysis of the resulting 3D CAD models with defects by automatically applying the appropriate boundary conditions. In this paper, we show a working proof-of-concept for the composite model builder with capabilities of incorporating delaminations between laminate layers and automatically preparing the CAD model for structural analysis using FEA software.

  8. Incorporating Social Anxiety Into a Model of College Problem Drinking: Replication and Extension

    OpenAIRE

    Ham, Lindsay S.; Hope, Debra A.

    2006-01-01

    Although research has found an association between social anxiety and alcohol use in noncollege samples, results have been mixed for college samples. College students face many novel social situations in which they may drink to reduce social anxiety. In the current study, the authors tested a model of college problem drinking, incorporating social anxiety and related psychosocial variables among 228 undergraduate volunteers. According to structural equation modeling (SEM) results, social anxi...

  9. PWR plant operator training used full scope simulator incorporated MAAP model

    International Nuclear Information System (INIS)

    Matsumoto, Y.; Tabuchi, T.; Yamashita, T.; Komatsu, Y.; Tsubouchi, K.; Banka, T.; Mochizuki, T.; Nishimura, K.; Iizuka, H.

    2015-01-01

    NTC works to improve understanding of plant behavior during core damage accidents as part of its advanced training. Following the Fukushima Daiichi Nuclear Power Station accident, we introduced the MAAP model into the PWR operator training full scope simulator and also built the Severe Accident Visual Display unit. From 2014, we will introduce a new training program for core damage accidents using the PWR operator training full scope simulator incorporating the MAAP model and the Severe Accident Visual Display unit. (author)

  10. INCORPORATING MULTIPLE OBJECTIVES IN PLANNING MODELS OF LOW-RESOURCE FARMERS

    OpenAIRE

    Flinn, John C.; Jayasuriya, Sisira; Knight, C. Gregory

    1980-01-01

    Linear goal programming provides a means of formally incorporating the multiple goals of a household into the analysis of farming systems. Using this approach, the set of plans which come as close as possible to achieving a set of desired goals under conditions of land and cash scarcity are derived for a Filipino tenant farmer. A challenge in making LGP models empirically operational is the accurate definition of the goals of the farm household being modelled.
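
    The linear goal programming formulation described above can be written as an ordinary LP by adding under- and over-achievement deviation variables for each goal and minimizing their weighted sum subject to the hard resource constraints. The sketch below solves a two-crop toy version with scipy; crops, coefficients, targets and weights are invented.

```python
# Toy linear goal programming (LGP) farm plan solved as an LP.
from scipy.optimize import linprog

# decision vector: [x_rice, x_veg, d_inc-, d_inc+, d_food-, d_food+]
# (area of two crops in ha, plus under/over-achievement of the two goals)
weights = [0, 0, 1, 0, 1000, 0]        # penalize income and food under-achievement

A_eq = [[400, 300, 1, -1, 0, 0],       # income goal: 400*x_rice + 300*x_veg -> 5000
        [2,   0,   0, 0,  1, -1]]      # food goal:   2 t/ha of rice         -> 3 t
b_eq = [5000, 3]

A_ub = [[1,   1,  0, 0, 0, 0],         # land available: 10 ha
        [100, 60, 0, 0, 0, 0]]         # operating cash: 800 units
b_ub = [10, 800]

res = linprog(weights, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
x_rice, x_veg, d_inc_minus = res.x[0], res.x[1], res.x[2]
print(f"rice: {x_rice:.2f} ha, vegetables: {x_veg:.2f} ha, income shortfall: {d_inc_minus:.0f}")
```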

  11. Incorporating Parameter Uncertainty in Bayesian Segmentation Models: Application to Hippocampal Subfield Volumetry

    DEFF Research Database (Denmark)

    Iglesias, J. E.; Sabuncu, M. R.; Van Leemput, Koen

    2012-01-01

    Many successful segmentation algorithms are based on Bayesian models in which prior anatomical knowledge is combined with the available image information. However, these methods typically have many free parameters that are estimated to obtain point estimates only, whereas a faithful Bayesian anal...

  12. Evaluation of water conservation capacity of loess plateau typical mountain ecosystems based on InVEST model simulation

    Science.gov (United States)

    Lv, Xizhi; Zuo, Zhongguo; Xiao, Peiqing

    2017-06-01

    With increasing demand for water resources and a frequent, general deterioration of local water resources, water conservation by forests has received considerable attention in recent years. To evaluate the water conservation capacities of different forest ecosystems in mountainous areas of the Loess Plateau, the forest landscape of the Loess Plateau was divided into 18 types. Considering factors such as climate, topography, plants, soil and land use, the water conservation of the forest ecosystems was estimated by means of the InVEST model. The results showed that 486,417.7 hm² of forests in typical mountain areas were divided into 18 forest types, with a total water conservation quantity of 1.64×10¹² m³, equaling an average water conservation quantity of 9.09×10¹⁰ m³. There is a great difference in average water conservation capacity among the various forest types. The water conservation function and its evaluation are crucial and complicated issues in the study of ecological service functions in modern times.

  13. A climatological model for risk computations incorporating site- specific dry deposition influences

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.

    1991-07-01

    A gradient-flux dry deposition module was developed for use in a climatological atmospheric transport model, the Multimedia Environmental Pollutant Assessment System (MEPAS). The atmospheric pathway model computes long-term average contaminant air concentration and surface deposition patterns surrounding a potential release site incorporating location-specific dry deposition influences. Gradient-flux formulations are used to incorporate site and regional data in the dry deposition module for this atmospheric sector-average climatological model. Application of these formulations provides an effective means of accounting for local surface roughness in deposition computations. Linkage to a risk computation module resulted in a need for separate regional and specific surface deposition computations. 13 refs., 4 figs., 2 tabs
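
    Gradient-flux dry deposition modules of this kind ultimately reduce to the resistance analogy v_d = 1/(r_a + r_b + r_c), in which local surface roughness enters through the aerodynamic resistance. The sketch below evaluates this for neutral stability only; the roughness lengths, friction velocity and surface resistance are illustrative assumptions rather than MEPAS values.

```python
# Resistance-analogy dry-deposition velocity for neutral stability.
import numpy as np

KAPPA = 0.4          # von Karman constant
PR = 0.72            # Prandtl number of air

def deposition_velocity(z_ref, z0, u_star, schmidt=1.0, r_c=100.0):
    """Deposition velocity [m/s] at reference height z_ref for surface roughness z0."""
    r_a = np.log(z_ref / z0) / (KAPPA * u_star)                   # aerodynamic resistance
    r_b = 2.0 / (KAPPA * u_star) * (schmidt / PR) ** (2.0 / 3.0)  # quasi-laminar sublayer
    return 1.0 / (r_a + r_b + r_c)

for z0 in (0.01, 0.1, 1.0):   # grass-, cropland- and forest-like roughness lengths [m]
    print(f"z0 = {z0:5.2f} m  ->  v_d = {deposition_velocity(10.0, z0, u_star=0.4):.4f} m/s")
```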

  14. In silico investigation of the short QT syndrome, using human ventricle models incorporating electromechanical coupling

    Directory of Open Access Journals (Sweden)

    Ismail eAdeniran

    2013-07-01

    Full Text Available Introduction: Genetic forms of the Short QT Syndrome (SQTS) arise due to cardiac ion channel mutations leading to accelerated ventricular repolarisation, arrhythmias and sudden cardiac death. Results from experimental and simulation studies suggest that changes to refractoriness and tissue vulnerability produce a substrate favourable to re-entry. Potential electromechanical consequences of the SQTS are less well understood. The aim of this study was to utilize electromechanically coupled human ventricle models to explore electromechanical consequences of the SQTS. Methods and results: The Rice et al. mechanical model was coupled to the ten Tusscher et al. ventricular cell model. Previously validated K+ channel formulations for SQT variants 1 and 3 were incorporated. Functional effects of the SQTS mutations on Ca2+ transients, sarcomere length shortening and contractile force at the single cell level were evaluated with and without the consideration of stretch activated channel current (Isac). Without Isac, the SQTS mutations produced dramatic reductions in the amplitude of Ca2+ transients, sarcomere length shortening and contractile force. When Isac was incorporated, there was a considerable attenuation of the effects of SQTS-associated action potential shortening on Ca2+ transients, sarcomere shortening and contractile force. Single cell models were then incorporated into 3D human ventricular tissue models. The timing of maximum deformation was delayed in the SQTS setting compared to control. Conclusion: The incorporation of Isac appears to be an important consideration in modelling functional effects of SQT 1 and 3 mutations on cardiac electro-mechanical coupling. Whilst there is little evidence of profoundly impaired cardiac contractile function in SQTS patients, our 3D simulations correlate qualitatively with reported evidence for dissociation between ventricular repolarization and the end of mechanical systole.

  15. Improving Watershed-Scale Hydrodynamic Models by Incorporating Synthetic 3D River Bathymetry Network

    Science.gov (United States)

    Dey, S.; Saksena, S.; Merwade, V.

    2017-12-01

    Digital Elevation Models (DEMs) have an incomplete representation of river bathymetry, which is critical for simulating river hydrodynamics in flood modeling. Generally, DEMs are augmented with field collected bathymetry data, but such data are available only at individual reaches. Creating a hydrodynamic model covering an entire stream network in the basin requires bathymetry for all streams. This study extends a conceptual bathymetry model, River Channel Morphology Model (RCMM), to estimate the bathymetry for an entire stream network for application in hydrodynamic modeling using a DEM. It is implemented at two large watersheds with different relief and land use characterizations: coastal Guadalupe River basin in Texas with flat terrain and a relatively urban White River basin in Indiana with more relief. After bathymetry incorporation, both watersheds are modeled using HEC-RAS (1D hydraulic model) and Interconnected Pond and Channel Routing (ICPR), a 2-D integrated hydrologic and hydraulic model. A comparison of the streamflow estimated by ICPR at the outlet of the basins indicates that incorporating bathymetry influences streamflow estimates. The inundation maps show that bathymetry has a higher impact on flat terrains of Guadalupe River basin when compared to the White River basin.

  16. Incorporating spatial autocorrelation into species distribution models alters forecasts of climate-mediated range shifts.

    Science.gov (United States)

    Crase, Beth; Liedloff, Adam; Vesk, Peter A; Fukuda, Yusuke; Wintle, Brendan A

    2014-08-01

    Species distribution models (SDMs) are widely used to forecast changes in the spatial distributions of species and communities in response to climate change. However, spatial autocorrelation (SA) is rarely accounted for in these models, despite its ubiquity in broad-scale ecological data. While spatial autocorrelation in model residuals is known to result in biased parameter estimates and the inflation of type I errors, the influence of unmodeled SA on species' range forecasts is poorly understood. Here we quantify how accounting for SA in SDMs influences the magnitude of range shift forecasts produced by SDMs for multiple climate change scenarios. SDMs were fitted to simulated data with a known autocorrelation structure, and to field observations of three mangrove communities from northern Australia displaying strong spatial autocorrelation. Three modeling approaches were implemented: environment-only models (most frequently applied in species' range forecasts), and two approaches that incorporate SA; autologistic models and residuals autocovariate (RAC) models. Differences in forecasts among modeling approaches and climate scenarios were quantified. While all model predictions at the current time closely matched that of the actual current distribution of the mangrove communities, under the climate change scenarios environment-only models forecast substantially greater range shifts than models incorporating SA. Furthermore, the magnitude of these differences intensified with increasing increments of climate change across the scenarios. When models do not account for SA, forecasts of species' range shifts indicate more extreme impacts of climate change, compared to models that explicitly account for SA. Therefore, where biological or population processes induce substantial autocorrelation in the distribution of organisms, and this is not modeled, model predictions will be inaccurate. These results have global importance for conservation efforts as inaccurate
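
    Of the two SA-aware approaches named above, the residuals autocovariate (RAC) model is the simpler to sketch: fit the environment-only model, average its residuals over each site's neighbourhood, and refit with that autocovariate as an extra predictor. The data below are simulated placeholders and the neighbourhood is a fixed square window.

```python
# Residuals autocovariate (RAC) species distribution model on a simulated grid.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50                                                 # 50 x 50 grid of sites
env = uniform_filter(rng.normal(size=(n, n)), 7)       # smooth "environmental" field
env /= env.std()
spatial = uniform_filter(rng.normal(size=(n, n)), 9)   # unexplained spatial process
spatial /= spatial.std()
p_true = 1.0 / (1.0 + np.exp(-(2.0 * env + 3.0 * spatial)))
y = rng.binomial(1, p_true).ravel()                    # presence/absence observations

X_env = env.reshape(-1, 1)
m_env = LogisticRegression().fit(X_env, y)                 # environment-only model
resid = (y - m_env.predict_proba(X_env)[:, 1]).reshape(n, n)
rac = uniform_filter(resid, size=5).ravel()                # neighbourhood-mean residuals

X_rac = np.column_stack([X_env.ravel(), rac])
m_rac = LogisticRegression().fit(X_rac, y)                 # RAC model
print(f"env-only accuracy: {m_env.score(X_env, y):.3f} | RAC accuracy: {m_rac.score(X_rac, y):.3f}")
```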

  17. Making a difference: incorporating theories of autonomy into models of informed consent.

    Science.gov (United States)

    Delany, C

    2008-09-01

    Obtaining patients' informed consent is an ethical and legal obligation in healthcare practice. Whilst the law provides prescriptive rules and guidelines, ethical theories of autonomy provide moral foundations. Models of practice of consent have been developed in the bioethical literature to assist in understanding and integrating the ethical theory of autonomy and legal obligations into the clinical process of obtaining a patient's informed consent to treatment. The aim is to review four models of consent and analyse the way each model incorporates the ethical meaning of autonomy and how, as a consequence, they might change the actual communicative process of obtaining informed consent within clinical contexts. An iceberg framework of consent is used to conceptualise how ethical theories of autonomy are positioned and underpin the above surface, and visible clinical communication, including associated legal guidelines and ethical rules. Each model of consent is critically reviewed from the perspective of how it might shape the process of informed consent. All four models would alter the process of obtaining consent. Two models provide structure and guidelines for the content and timing of obtaining patients' consent. The two other models rely on an attitudinal shift in clinicians. They provide ideas for consent by focusing on underlying values, attitudes and meaning associated with the ethical meaning of autonomy. The paper concludes that models of practice that explicitly incorporate the underlying ethical meaning of autonomy as their basis provide less prescriptive, but more theoretically rich, guidance for healthcare communicative practices.

  18. Incorporation of human factors into ship collision risk models focusing on human centred design aspects

    International Nuclear Information System (INIS)

    Sotiralis, P.; Ventikos, N.P.; Hamann, R.; Golyshev, P.; Teixeira, A.P.

    2016-01-01

    This paper presents an approach that more adequately incorporates human factor considerations into quantitative risk analysis of ship operation. The focus is on the collision accident category, which is one of the main risk contributors in ship operation. The approach is based on the development of a Bayesian Network (BN) model that integrates elements from the Technique for Retrospective and Predictive Analysis of Cognitive Errors (TRACEr) and focuses on the calculation of the collision accident probability due to human error. The model takes into account the human performance in normal, abnormal and critical operational conditions and implements specific tasks derived from the analysis of the task errors leading to the collision accident category. A sensitivity analysis is performed to identify the most important contributors to human performance and ship collision. Finally, the model developed is applied to assess the collision risk of a feeder operating in Dover strait using the collision probability estimated by the developed BN model and an Event tree model for calculation of human, economic and environmental risks. - Highlights: • A collision risk model for the incorporation of human factors into quantitative risk analysis is proposed. • The model takes into account the human performance in different operational conditions leading to the collision. • The most important contributors to human performance and ship collision are identified. • The model developed is applied to assess the collision risk of a feeder operating in Dover strait.

  19. Modeling fraud detection and the incorporation of forensic specialists in the audit process

    DEFF Research Database (Denmark)

    Sakalauskaite, Dominyka

    Financial statement audits are still comparatively poor in fraud detection. Forensic specialists can play a significant role in increasing audit quality. In this paper, based on prior academic research, I develop a model of fraud detection and the incorporation of forensic specialists in the audit process. The intention of the model is to identify the reasons why the audit is weak in fraud detection and to provide the analytical framework to assess whether the incorporation of forensic specialists can help to improve it. The results show that such specialists can potentially improve fraud detection in the audit, but might also cause some negative implications. Overall, even though fraud detection is one of the main topics in research, there are very few studies done on the subject of how auditors co-operate with forensic specialists. Thus, the paper concludes with suggestions for further research.

  20. Gold Incorporated Mesoporous Silica Thin Film Model Surface as a Robust SERS and Catalytically Active Substrate

    Directory of Open Access Journals (Sweden)

    Anandakumari Chandrasekharan Sunil Sekhar

    2016-05-01

    Full Text Available Ultra-small gold nanoparticles incorporated in mesoporous silica thin films with accessible pore channels perpendicular to the substrate are prepared by a modified sol-gel method. The simple and easy spin coating technique is applied here to make homogeneous thin films. The surface characterization using FESEM shows crack-free films with a perpendicular pore arrangement. The applicability of these thin films as catalysts as well as a robust SERS active substrate for model catalysis study is tested. Compared to the bare silica film, our gold incorporated silica, GSM-23F, gave an enhancement factor of 10³ for RhB with a 633 nm laser source. The reduction reaction of p-nitrophenol with sodium borohydride on our thin films shows a decrease in the peak intensity corresponding to the –NO₂ group as time proceeds, confirming the catalytic activity. Such model surfaces can potentially bridge the material gap between a real catalytic system and surface science studies.

  1. Vertical structure of currents in Algeciras Bay (Strait of Gibraltar): implications on oil spill modeling under different typical scenarios

    Science.gov (United States)

    Megías Trujillo, Bárbara; Caballero de Frutos, Isabel; López Comi, Laura; Tejedor Alvarez, Begoña.; Izquierdo González, Alfredo; Gonzales Mejías, Carlos Jose; Alvarez Esteban, Óscar; Mañanes Salinas, Rafael; Comerma, Eric

    2010-05-01

    Algeciras Bay constitutes a physical environment of special characteristics, due to its bathymetric configuration and geographical location at the eastern boundary of the Strait of Gibraltar. Hence, the Bay is subject to the complex hydrodynamics of the Strait of Gibraltar, characterized by a mesotidal, semidiurnal regime and the high density-stratification of the water column due to the presence of the upper Atlantic and the lower Mediterranean (more salty and cold) water layers. In addition, this environment is affected by powerful Easterly and Westerly wind episodes. The intense maritime traffic of oil tankers sailing across the Strait and inside the Bay, together with the presence of an oil refinery at its northern coast, implies a high risk of oil spills in these waters, which have unfortunately occurred repeatedly over the last decades. These conditions make detailed knowledge of the Bay's hydrodynamics, and of the related system of currents, necessary for correct management and contingency planning in case of an oil spill in this environment. In order to evaluate the extent to which oil spills affect the Bay's waters and coasts, the OILMAP oil spill model was used, with the current fields provided by the three-dimensional, nonlinear, finite-differences, sigma-coordinates UCA 3D hydrodynamic model. Numerical simulations were carried out for a grid domain extending from the western Strait boundary to the Alboran Sea, with a horizontal spatial resolution of 500 m and 50 sigma-levels in the vertical dimension. The system was forced by the tidal constituents M2 (main semidiurnal) and Z0 (constant or zero-frequency), considering three different typical wind conditions: Easterlies, Westerlies and calm (no wind). The most remarkable results from the numerical 3D simulations of Algeciras Bay's hydrodynamics were: a) the occurrence of opposite tidal currents between the upper Atlantic and lower Mediterranean

  2. Incorporation of stochastic engineering models as prior information in Bayesian medical device trials.

    Science.gov (United States)

    Haddad, Tarek; Himes, Adam; Thompson, Laura; Irony, Telba; Nair, Rajesh

    2017-01-01

    Evaluation of medical devices via clinical trial is often a necessary step in the process of bringing a new product to market. In recent years, device manufacturers are increasingly using stochastic engineering models during the product development process. These models have the capability to simulate virtual patient outcomes. This article presents a novel method based on the power prior for augmenting a clinical trial using virtual patient data. To properly inform clinical evaluation, the virtual patient model must simulate the clinical outcome of interest, incorporating patient variability, as well as the uncertainty in the engineering model and in its input parameters. The number of virtual patients is controlled by a discount function which uses the similarity between modeled and observed data. This method is illustrated by a case study of cardiac lead fracture. Different discount functions are used to cover a wide range of scenarios in which the type I error rates and power vary for the same number of enrolled patients. Incorporation of engineering models as prior knowledge in a Bayesian clinical trial design can provide benefits of decreased sample size and trial length while still controlling type I error rate and power.
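
    For a binary endpoint, power-prior borrowing of the kind described above amounts to raising the virtual-patient likelihood to a discount weight α0 and folding it into a conjugate Beta posterior. The sketch below uses a Fisher exact p-value as a stand-in discount function and invented fracture counts; the paper's calibrated discount functions and operating-characteristic checks are not reproduced.

```python
# Power-prior borrowing of virtual (engineering-model) patients for a binary endpoint.
from scipy import stats

y_obs, n_obs = 8, 100      # observed fractures in the enrolled cohort (made up)
y_vir, n_vir = 30, 500     # fractures among simulated virtual patients (made up)

# similarity between modeled and observed data (two-sided Fisher exact p-value)
_, p_sim = stats.fisher_exact([[y_obs, n_obs - y_obs], [y_vir, n_vir - y_vir]])
alpha0 = p_sim             # discount weight in [0, 1]; larger when the data agree

a0, b0 = 1.0, 1.0          # vague Beta prior
post = stats.beta(a0 + y_obs + alpha0 * y_vir,
                  b0 + (n_obs - y_obs) + alpha0 * (n_vir - y_vir))
print(f"alpha0 = {alpha0:.2f}, effective virtual patients borrowed = {alpha0 * n_vir:.0f}")
print(f"posterior mean fracture rate = {post.mean():.3f}")
```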

  3. A code reviewer assignment model incorporating the competence differences and participant preferences

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2016-03-01

    Full Text Available A good assignment of code reviewers can effectively utilize intellectual resources, assure code quality and improve programmers' skills in software development. However, little research on reviewer assignment for code review has been found. In this study, a code reviewer assignment model is created based on participants' preferences for review assignments. With a constraint on the smallest size of a review group, the model is optimized to maximize review outcomes and avoid the negative impact of a "mutual admiration society". This study shows that reviewer assignment strategies incorporating either the reviewers' preferences or the authors' preferences yield much greater improvement than a random assignment. The strategy incorporating the authors' preferences yields a higher improvement than the one incorporating the reviewers' preferences. However, when the reviewers' and authors' preference matrices are merged, the improvement becomes moderate. The study indicates that the majority of the participants have a strong wish to work with the reviewers and authors having the highest competence. If we want to satisfy the preferences of both reviewers and authors at the same time, the overall improvement of learning outcomes may not be the best.
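
    A minimal version of preference-based reviewer assignment can be solved as a one-to-one assignment problem on the combined preference matrix, as sketched below. The group-size constraint and the safeguard against a "mutual admiration society" described above are not reproduced, and the preference scores are random placeholders.

```python
# One-to-one reviewer assignment maximizing combined reviewer and author preferences.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(7)
n = 6                                            # participants act as both authors and reviewers
pref_by_reviewers = rng.integers(1, 6, (n, n))   # [r, c]: reviewer r's wish to review author c
pref_by_authors = rng.integers(1, 6, (n, n))     # [r, c]: author c's wish to be reviewed by r

combined = pref_by_reviewers + pref_by_authors
np.fill_diagonal(combined, -1000)                # nobody reviews their own code

rows, cols = linear_sum_assignment(combined, maximize=True)
for reviewer, author in zip(rows, cols):
    print(f"reviewer {reviewer} -> author {author} (score {combined[reviewer, author]})")
```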

  4. Incorporation of Markov reliability models for digital instrumentation and control systems into existing PRAs

    International Nuclear Information System (INIS)

    Bucci, P.; Mangan, L. A.; Kirschenbaum, J.; Mandelli, D.; Aldemir, T.; Arndt, S. A.

    2006-01-01

    Markov models have the ability to capture the statistical dependence between failure events that can arise in the presence of complex dynamic interactions between components of digital instrumentation and control systems. One obstacle to the use of such models in an existing probabilistic risk assessment (PRA) is that most of the currently available PRA software is based on the static event-tree/fault-tree methodology which often cannot represent such interactions. We present an approach to the integration of Markov reliability models into existing PRAs by describing the Markov model of a digital steam generator feedwater level control system, how dynamic event trees (DETs) can be generated from the model, and how the DETs can be incorporated into an existing PRA with the SAPHIRE software. (authors)
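
    A small example of the kind of Markov reliability model referred to above is a duplex digital controller with imperfect fault coverage; the state probabilities follow from the matrix exponential of the generator. The failure rate, repair rate and coverage below are illustrative, not values from the cited feedwater control system.

```python
# Continuous-time Markov reliability model: both-OK / one-failed / system-failed.
import numpy as np
from scipy.linalg import expm

lam, mu, c = 1e-4, 1e-2, 0.95        # failure rate [1/h], repair rate [1/h], coverage (assumed)

Q = np.array([
    [-2 * lam,     2 * lam * c,  2 * lam * (1 - c)],   # both channels healthy
    [      mu,     -(mu + lam),  lam              ],   # one channel failed and detected
    [     0.0,             0.0,  0.0              ],   # system failed (absorbing)
])

p0 = np.array([1.0, 0.0, 0.0])       # start with both channels healthy
for t in (24.0, 720.0, 8760.0):      # one day, one month, one year of operation
    p_t = p0 @ expm(Q * t)
    print(f"t = {t:7.0f} h : P(system failed) = {p_t[2]:.3e}")
```

    State probabilities such as these can then be mapped onto the branch points of a dynamic event tree.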

  5. Global dynamics of a PDE model for Aedes aegypti mosquitoes incorporating female sexual preference

    KAUST Repository

    Parshad, Rana

    2011-01-01

    In this paper we study the long time dynamics of a reaction diffusion system, describing the spread of Aedes aegypti mosquitoes, which are the primary cause of dengue infection. The system incorporates a control attempt via the sterile insect technique. The model incorporates female mosquitoes' sexual preference for wild males over sterile males. We show global existence of a strong solution for the system. We then derive uniform estimates to prove the existence of a global attractor in L²(Ω) for the system. The attractor is shown to be L^∞(Ω) regular and to possess a state of extinction, if the injection of sterile males is large enough. We also provide upper bounds on the Hausdorff and fractal dimensions of the attractor.

  6. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    Science.gov (United States)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, there are several areas of the country with sparse monitoring spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 modeled estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Due to the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is employed to quantify the efficacy of these models through different metrics of model performance. Currently, evaluation is specific only to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains. Error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear. This model performance evaluation leads to error quantification for each CMAQ grid, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than just using observed data by themselves.

  7. Modeling water scarcity over south Asia: Incorporating crop growth and irrigation models into the Variable Infiltration Capacity (VIC) model

    Science.gov (United States)

    Troy, Tara J.; Ines, Amor V. M.; Lall, Upmanu; Robertson, Andrew W.

    2013-04-01

    Large-scale hydrologic models, such as the Variable Infiltration Capacity (VIC) model, are used for a variety of studies, from drought monitoring to projecting the potential impact of climate change on the hydrologic cycle decades in advance. The majority of these models simulate the natural hydrological cycle and neglect the effects of human activities such as irrigation, which can result in streamflow withdrawals and increased evapotranspiration. In some parts of the world, these activities do not significantly affect the hydrologic cycle, but this is not the case in south Asia where irrigated agriculture has a large water footprint. To address this gap, we incorporate a crop growth model and irrigation model into the VIC model in order to simulate the impacts of irrigated and rainfed agriculture on the hydrologic cycle over south Asia (Indus, Ganges, and Brahmaputra basin and peninsular India). The crop growth model responds to climate signals, including temperature and water stress, to simulate the growth of maize, wheat, rice, and millet. For the primarily rainfed maize crop, the crop growth model shows good correlation with observed All-India yields (0.7) with lower correlations for the irrigated wheat and rice crops (0.4). The difference in correlation is because irrigation provides a buffer against climate conditions, so that rainfed crop growth is more tied to climate than irrigated crop growth. The irrigation water demands induce hydrologic water stress in significant parts of the region, particularly in the Indus, with the streamflow unable to meet the irrigation demands. Although rainfall can vary significantly in south Asia, we find that water scarcity is largely chronic due to the irrigation demands rather than being intermittent due to climate variability.

  8. Investigations of incorporating source directivity into room acoustics computer models to improve auralizations

    Science.gov (United States)

    Vigeant, Michelle C.

    Room acoustics computer modeling and auralizations are useful tools when designing or modifying acoustically sensitive spaces. In this dissertation, the input parameter of source directivity has been studied in great detail to determine first its effect in room acoustics computer models and secondly how to better incorporate the directional source characteristics into these models to improve auralizations. To increase the accuracy of room acoustics computer models, the source directivity of real sources, such as musical instruments, must be included in the models. The traditional method for incorporating source directivity into room acoustics computer models involves inputting the measured static directivity data taken every 10° in a sphere-shaped pattern around the source. This data can be entered into the room acoustics software to create a directivity balloon, which is used in the ray tracing algorithm to simulate the room impulse response. The first study in this dissertation shows that using directional sources over an omni-directional source in room acoustics computer models produces significant differences both in terms of calculated room acoustics parameters and auralizations. The room acoustics computer model was also validated in terms of accurately incorporating the input source directivity. A recently proposed technique for creating auralizations using a multi-channel source representation has been investigated with numerous subjective studies, applied to both solo instruments and an orchestra. The method of multi-channel auralizations involves obtaining multi-channel anechoic recordings of short melodies from various instruments and creating individual channel auralizations. These auralizations are then combined to create a total multi-channel auralization. Through many subjective studies, this process was shown to be effective in terms of improving the realism and source width of the auralizations in a number of cases, and also modeling different

  9. A data-driven model for influenza transmission incorporating media effects.

    Science.gov (United States)

    Mitchell, Lewis; Ross, Joshua V

    2016-10-01

    Numerous studies have attempted to model the effect of mass media on the transmission of diseases such as influenza; however, quantitative data on media engagement has until recently been difficult to obtain. With the recent explosion of 'big data' coming from online social media and the like, large volumes of data on a population's engagement with mass media during an epidemic are becoming available to researchers. In this study, we combine an online dataset comprising millions of shared messages relating to influenza with traditional surveillance data on flu activity to suggest a functional form for the relationship between the two. Using this data, we present a simple deterministic model for influenza dynamics incorporating media effects, and show that such a model helps explain the dynamics of historical influenza outbreaks. Furthermore, through model selection we show that the proposed media function fits historical data better than other media functions proposed in earlier studies.
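
    A deterministic influenza model with a media term, in the spirit of the record above, can be written as an SIR system whose transmission rate is scaled down by a saturating function of current infection standing in for media engagement. The functional form and parameters below are illustrative assumptions, not the fitted media function from the study.

```python
# SIR-type model with a saturating media effect on the transmission rate.
import numpy as np
from scipy.integrate import solve_ivp

N, beta, gamma = 1e6, 0.5, 0.25        # population, transmission and recovery rates (assumed)
a, half_sat = 0.6, 5e3                 # max media reduction and half-saturation level (assumed)

def media_sir(t, y):
    S, I, R = y
    media = a * I / (half_sat + I)                 # saturating media effect in [0, a)
    dS = -beta * (1.0 - media) * S * I / N
    dI = -dS - gamma * I
    return [dS, dI, gamma * I]

sol = solve_ivp(media_sir, [0, 180], [N - 10, 10, 0], t_eval=np.linspace(0, 180, 181))
print(f"peak infections with media effect: {sol.y[1].max():,.0f}")
```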

  10. Towards a functional model of mental disorders incorporating the laws of thermodynamics.

    Science.gov (United States)

    Murray, George C; McKenzie, Karen

    2013-05-01

    The current paper presents the hypothesis that the understanding of mental disorders can be advanced by incorporating the laws of thermodynamics, specifically relating to energy conservation and energy transfer. These ideas, along with the introduction of the notion that entropic activities are symptomatic of inefficient energy transfer or disorder, were used to propose a model of understanding mental ill health as resulting from the interaction of entropy, capacity and work (environmental demands). The model was applied to Attention Deficit Hyperactivity Disorder, and was shown to be compatible with current thinking about this condition, as well as emerging models of mental disorders as complex networks. A key implication of the proposed model is that it argues that all mental disorders require a systemic functional approach, with the advantage that it offers a number of routes into the assessment, formulation and treatment for mental health problems. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Incorporating ligament laxity in a finite element model for the upper cervical spine.

    Science.gov (United States)

    Lasswell, Timothy L; Cronin, Duane S; Medley, John B; Rasoulinejad, Parham

    2017-11-01

    Predicting physiological range of motion (ROM) using a finite element (FE) model of the upper cervical spine requires the incorporation of ligament laxity. The effect of ligament laxity can be observed only on a macro level of joint motion and is lost once ligaments have been dissected and preconditioned for experimental testing. As a result, although ligament laxity values are recognized to exist, specific values are not directly available in the literature for use in FE models. The purpose of the current study is to propose an optimization process that can be used to determine a set of ligament laxity values for upper cervical spine FE models. Furthermore, an FE model that includes ligament laxity is applied, and the resulting ROM values are compared with experimental data for physiological ROM, as well as experimental data for the increase in ROM when a Type II odontoid fracture is introduced. The upper cervical spine FE model was adapted from a 50th percentile male full-body model developed with the Global Human Body Models Consortium (GHBMC). FE modeling was performed in LS-DYNA, and LS-OPT (Livermore Software Technology Group) was used for ligament laxity optimization. Ordinate-based curve matching was used to minimize the mean squared error (MSE) between computed load-rotation curves and experimental load-rotation curves under flexion, extension, and axial rotation with pure moment loads from 0 to 3.5 Nm. Lateral bending was excluded from the optimization because the upper cervical spine was considered to be primarily responsible for flexion, extension, and axial rotation. Based on recommendations from the literature, four varying inputs representing laxity in select ligaments were optimized to minimize the MSE. Funding was provided by the Natural Sciences and Engineering Research Council of Canada as well as GHBMC. The present study was funded by the Natural Sciences and Engineering Research Council of Canada to support the work of one graduate student

  12. Incorporating microbiota data into epidemiologic models: examples from vaginal microbiota research.

    Science.gov (United States)

    van de Wijgert, Janneke H; Jespers, Vicky

    2016-05-01

    Next generation sequencing and quantitative polymerase chain reaction technologies are now widely available, and research incorporating these methods is growing exponentially. In the vaginal microbiota (VMB) field, most research to date has been descriptive. The purpose of this article is to provide an overview of different ways in which next generation sequencing and quantitative polymerase chain reaction data can be used to answer clinical epidemiologic research questions using examples from VMB research. We reviewed relevant methodological literature and VMB articles (published between 2008 and 2015) that incorporated these methodologies. VMB data have been analyzed using ecologic methods, methods that compare the presence or relative abundance of individual taxa or community compositions between different groups of women or sampling time points, and methods that first reduce the complexity of the data into a few variables followed by the incorporation of these variables into traditional biostatistical models. To make future VMB research more clinically relevant (such as studying associations between VMB compositions and clinical outcomes and the effects of interventions on the VMB), it is important that these methods are integrated with rigorous epidemiologic methods (such as appropriate study designs, sampling strategies, and adjustment for confounding). Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  13. Fuzzy Case-Based Reasoning in Product Style Acquisition Incorporating Valence-Arousal-Based Emotional Cellular Model

    Directory of Open Access Journals (Sweden)

    Fuqian Shi

    2012-01-01

    Full Text Available An emotional cellular (EC), proposed in our previous works, is a kind of semantic cell that contains a kernel and a shell; the kernel is formalized by a triple L = ⟨P, d, δ⟩, where P denotes a typical set of positive examples relative to the word L, d is a pseudodistance measure on the two-dimensional emotional space valence-arousal, and δ is a probability density function on the positive real number field. The basic idea of the EC model is to assume that the neighborhood radius of each semantic concept is uncertain, and that this uncertainty can be measured by the one-dimensional density function δ. In this paper, product form features were evaluated using ECs to establish a product style database, and a fuzzy case-based reasoning (FCBR) model, under a defined similarity measurement based on fuzzy nearest neighbors (FNN) incorporating ECs, was applied to extract product styles. A mathematically formalized inference system for product style was also proposed, which likewise includes the emotional cellular as an uncertainty measurement tool. A case study of style acquisition for mobile phones illustrated the effectiveness of the proposed methodology.
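
    As a toy illustration of how an emotional cellular might serve as a similarity yardstick, the sketch below scores a sample point in valence-arousal space against a concept's positive examples, using an assumed exponential form for δ. The function names, prototype points and rate parameter are hypothetical and do not reproduce the authors' FCBR formulation.

      # Minimal sketch (assumptions, not the authors' formulation) of scoring how well a
      # product sample in valence-arousal space matches an emotional cellular L = <P, d, delta>.
      import math

      def pseudo_distance(p, q):
          """d: Euclidean distance on the valence-arousal plane."""
          return math.hypot(p[0] - q[0], p[1] - q[1])

      def membership(sample, positives, rate=2.0):
          """Match degree of `sample` to the EC: distance to the nearest positive example,
          weighted via an assumed exponential density delta(r) = rate * exp(-rate * r)."""
          r = min(pseudo_distance(sample, p) for p in positives)
          return math.exp(-rate * r)          # survival function of delta, in (0, 1]

      # 'Exciting' concept anchored by two prototypical valence-arousal points (made up)
      exciting = [(0.7, 0.8), (0.6, 0.9)]
      print(membership((0.65, 0.75), exciting))   # high membership
      print(membership((-0.5, 0.2), exciting))    # low membership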

  14. Teaching Genetic Counseling Skills: Incorporating a Genetic Counseling Adaptation Continuum Model to Address Psychosocial Complexity.

    Science.gov (United States)

    Shugar, Andrea

    2017-04-01

    Genetic counselors are trained health care professionals who effectively integrate both psychosocial counseling and information-giving into their practice. Preparing genetic counseling students for clinical practice is a challenging task, particularly when helping them develop effective and active counseling skills. Resistance to incorporating these skills may stem from decreased confidence, fear of causing harm, or a lack of clarity about psychosocial goals. The author reflects on the personal challenges experienced in teaching genetic counseling students to work with psychological and social complexity, and proposes a Genetic Counseling Adaptation Continuum model and methodology to guide students in the use of advanced counseling skills.

  15. Incorporation of detailed eye model into polygon-mesh versions of ICRP-110 reference phantoms.

    Science.gov (United States)

    Nguyen, Thang Tat; Yeom, Yeon Soo; Kim, Han Sung; Wang, Zhao Jun; Han, Min Cheol; Kim, Chan Hyeong; Lee, Jai Ki; Zankl, Maria; Petoussi-Henss, Nina; Bolch, Wesley E; Lee, Choonsik; Chung, Beom Sun

    2015-11-21

    The dose coefficients for the eye lens reported in ICRP 2010 Publication 116 were calculated using both a stylized model and the ICRP-110 reference phantoms, according to the type of radiation, energy, and irradiation geometry. To maintain consistency of lens dose assessment, in the present study we incorporated the ICRP-116 detailed eye model into the converted polygon-mesh (PM) version of the ICRP-110 reference phantoms. After the incorporation, the dose coefficients for the eye lens were calculated and compared with those of the ICRP-116 data. The results showed generally a good agreement between the newly calculated lens dose coefficients and the values of ICRP 2010 Publication 116. Significant differences were found for some irradiation cases due mainly to the use of different types of phantoms. Considering that the PM version of the ICRP-110 reference phantoms preserve the original topology of the ICRP-110 reference phantoms, it is believed that the PM version phantoms, along with the detailed eye model, provide more reliable and consistent dose coefficients for the eye lens.

  16. Model for incorporating fuel swelling and clad shrinkage effects in diffusion theory calculations (LWBR Development Program)

    International Nuclear Information System (INIS)

    Schick, W.C. Jr.; Milani, S.; Duncombe, E.

    1980-03-01

    A model has been devised for incorporating into the thermal feedback procedure of the PDQ few-group diffusion theory computer program the explicit calculation of depletion and temperature dependent fuel-rod shrinkage and swelling at each mesh point. The model determines the effect on reactivity of the change in hydrogen concentration caused by the variation in coolant channel area as the rods contract and expand. The calculation of fuel temperature, and hence of Doppler-broadened cross sections, is improved by correcting the heat transfer coefficient of the fuel-clad gap for the effects of clad creep, fuel densification and swelling, and release of fission-product gases into the gap. An approximate calculation of clad stress is also included in the model

  17. A Fibrocontractive Mechanochemical Model of Dermal Wound Closure Incorporating Realistic Growth Factor Kinetics

    KAUST Repository

    Murphy, Kelly E.

    2012-01-13

    Fibroblasts and their activated phenotype, myofibroblasts, are the primary cell types involved in the contraction associated with dermal wound healing. Recent experimental evidence indicates that the transformation from fibroblasts to myofibroblasts involves two distinct processes: The cells are stimulated to change phenotype by the combined actions of transforming growth factor β (TGFβ) and mechanical tension. This observation indicates a need for a detailed exploration of the effect of the strong interactions between the mechanical changes and growth factors in dermal wound healing. We review the experimental findings in detail and develop a model of dermal wound healing that incorporates these phenomena. Our model includes the interactions between TGFβ and collagenase, providing a more biologically realistic form for the growth factor kinetics than those included in previous mechanochemical descriptions. A comparison is made between the model predictions and experimental data on human dermal wound healing and all the essential features are well matched. © 2012 Society for Mathematical Biology.

  18. Developing Baltic cod recruitment models II : Incorporation of environmental variability and species interaction

    DEFF Research Database (Denmark)

    Köster, Fritz; Hinrichsen, H.H.; St. John, Michael

    2001-01-01

    We investigate whether a process-oriented approach based on the results of field, laboratory, and modelling studies can be used to develop a stock-environment-recruitment model for Central Baltic cod (Gadus morhua). Based on exploratory statistical analysis, significant variables influencing...... affecting survival of eggs, predation by clupeids on eggs, larval transport, and cannibalism. Results showed that recruitment in the most important spawning area, the Bornholm Basin, during 1976-1995 was related to egg production; however, other factors affecting survival of the eggs (oxygen conditions......, predation) were also significant and when incorporated explained 69% of the variation in 0-group recruitment. In other spawning areas, variable hydrographic conditions did not allow for regular successful egg development. Hence, relatively simple models proved sufficient to predict recruitment of 0-group...

  19. Incorporating excitation-induced dephasing into the Maxwell-Bloch numerical modeling of photon echoes

    International Nuclear Information System (INIS)

    Burr, G.W.; Harris, Todd L.; Babbitt, Wm. Randall; Jefferson, C. Michael

    2004-01-01

    We describe the incorporation of excitation-induced dephasing (EID) into the Maxwell-Bloch numerical simulation of photon echoes. At each time step of the usual numerical integration, stochastic frequency jumps of ions, caused by excitation of neighboring ions, are modeled by convolving each Bloch vector with the Bloch vectors of nearby frequency detunings. The width of this convolution kernel follows the instantaneous change in overall population, integrated over the simulated bandwidth. This approach is validated by extensive comparison against published and original experimental results. The enhanced numerical model is then used to investigate the accuracy of experiments designed to extrapolate to the intrinsic dephasing time T2 from data taken in the presence of EID. Such a modeling capability offers improved understanding of experimental results, and should allow quantitative analysis of engineering tradeoffs in realistic optical coherent transient applications

  20. Affordances perspective and grammaticalization: Incorporation of language, environment and users in the model of semantic paths

    Directory of Open Access Journals (Sweden)

    Alexander Andrason

    2015-12-01

    Full Text Available The present paper demonstrates that insights from the affordances perspective can contribute to developing a more comprehensive model of grammaticalization. The authors argue that the grammaticalization process is afforded differently depending on the values of three contributing parameters: the factor (schematized as a qualitative-quantitative map or a wave of a gram), the environment (understood as the structure of the stream along which the gram travels), and the actor (narrowed to certain cognitive-epistemological capacities of the users, in particular to the fact of being a native speaker). By relating grammaticalization to these three parameters and by connecting it to the theory of optimization, the proposed model offers a better approximation to realistic cases of grammaticalization: the actor and environment are overtly incorporated into the model, and divergences from canonical grammaticalization paths are both tolerated and explicable.

  1. A Fibrocontractive Mechanochemical Model of Dermal Wound Closure Incorporating Realistic Growth Factor Kinetics

    KAUST Repository

    Murphy, Kelly E.; Hall, Cameron L.; Maini, Philip K.; McCue, Scott W.; McElwain, D. L. Sean

    2012-01-01

    Fibroblasts and their activated phenotype, myofibroblasts, are the primary cell types involved in the contraction associated with dermal wound healing. Recent experimental evidence indicates that the transformation from fibroblasts to myofibroblasts involves two distinct processes: The cells are stimulated to change phenotype by the combined actions of transforming growth factor β (TGFβ) and mechanical tension. This observation indicates a need for a detailed exploration of the effect of the strong interactions between the mechanical changes and growth factors in dermal wound healing. We review the experimental findings in detail and develop a model of dermal wound healing that incorporates these phenomena. Our model includes the interactions between TGFβ and collagenase, providing a more biologically realistic form for the growth factor kinetics than those included in previous mechanochemical descriptions. A comparison is made between the model predictions and experimental data on human dermal wound healing and all the essential features are well matched. © 2012 Society for Mathematical Biology.

  2. Incorporating Yearly Derived Winter Wheat Maps Into Winter Wheat Yield Forecasting Model

    Science.gov (United States)

    Skakun, S.; Franch, B.; Roger, J.-C.; Vermote, E.; Becker-Reshef, I.; Justice, C.; Santamaría-Artigas, A.

    2016-01-01

    Wheat is one of the most important cereal crops in the world. Timely and accurate forecasts of wheat yield and production at the global scale are vital in implementing food security policy. Becker-Reshef et al. (2010) developed a generalized empirical model for forecasting winter wheat production using remote sensing data and official statistics. This model was implemented using static wheat maps. In this paper, we analyze the impact of incorporating yearly wheat masks into the forecasting model. We propose a new approach for producing in-season winter wheat maps that exploits satellite data and official statistics on crop area only. Validation on independent data showed that the proposed approach reached omission errors of 6% to 23% and commission errors of 10% to 16% when mapping winter wheat 2-3 months before harvest. In general, we found a limited impact of using yearly winter wheat masks over a static mask for the study regions.

  3. Some considerations concerning the challenge of incorporating social variables into epidemiological models of infectious disease transmission.

    Science.gov (United States)

    Barnett, Tony; Fournié, Guillaume; Gupta, Sunetra; Seeley, Janet

    2015-01-01

    Incorporation of 'social' variables into epidemiological models remains a challenge. Too much detail and models cease to be useful; too little and the very notion of infection - a highly social process in human populations - may be considered with little reference to the social. The French sociologist Émile Durkheim proposed that the scientific study of society required identification and study of 'social currents'. Such 'currents' are what we might today describe as 'emergent properties', specifiable variables appertaining to individuals and groups, which represent the perspectives of social actors as they experience the environment in which they live their lives. Here we review the ways in which one particular emergent property, hope, relevant to a range of epidemiological situations, might be used in epidemiological modelling of infectious diseases in human populations. We also indicate how such an approach might be extended to include a range of other potential emergent properties to represent complex social and economic processes bearing on infectious disease transmission.

  4. Are adverse effects incorporated in economic models? An initial review of current practice.

    Science.gov (United States)

    Craig, D; McDaid, C; Fonseca, T; Stock, C; Duffy, S; Woolacott, N

    2009-12-01

    To identify methodological research on the incorporation of adverse effects in economic models and to review current practice. Major electronic databases (Cochrane Methodology Register, Health Economic Evaluations Database, NHS Economic Evaluation Database, EconLit, EMBASE, Health Management Information Consortium, IDEAS, MEDLINE and Science Citation Index) were searched from inception to September 2007. Health technology assessment (HTA) reports commissioned by the National Institute for Health Research (NIHR) HTA programme and published between 2004 and 2007 were also reviewed. The reviews of methodological research on the inclusion of adverse effects in decision models and of current practice were carried out according to standard methods. Data were summarised in a narrative synthesis. Of the 719 potentially relevant references in the methodological research review, five met the inclusion criteria; however, they contained little information of direct relevance to the incorporation of adverse effects in models. Of the 194 HTA monographs published from 2004 to 2007, 80 were reviewed, covering a range of research and therapeutic areas. In total, 85% of the reports included adverse effects in the clinical effectiveness review and 54% of the decision models included adverse effects in the model; 49% included adverse effects in the clinical review and model. The link between adverse effects in the clinical review and model was generally weak; only 3/80 (manipulation. Of the models including adverse effects, 67% used a clinical adverse effects parameter, 79% used a cost of adverse effects parameter, 86% used one of these and 60% used both. Most models (83%) used utilities, but only two (2.5%) used solely utilities to incorporate adverse effects and were explicit that the utility captured relevant adverse effects; 53% of those models that included utilities derived them from patients on treatment and could therefore be interpreted as capturing adverse effects. In total

  5. A realistic closed-form radiobiological model of clinical tumor-control data incorporating intertumor heterogeneity

    International Nuclear Information System (INIS)

    Roberts, Stephen A.; Hendry, Jolyon H.

    1998-01-01

    Purpose: To investigate the role of intertumor heterogeneity in clinical tumor control datasets and the relationship to in vitro measurements of tumor biopsy samples. Specifically, to develop a modified linear-quadratic (LQ) model incorporating such heterogeneity that it is practical to fit to clinical tumor-control datasets. Methods and Materials: We developed a modified version of the linear-quadratic (LQ) model for tumor control, incorporating a (lagged) time factor to allow for tumor cell repopulation. We explicitly took into account the interpatient heterogeneity in clonogen number, radiosensitivity, and repopulation rate. Using this model, we could generate realistic TCP curves using parameter estimates consistent with those reported from in vitro studies, subject to the inclusion of a radiosensitivity (or dose)-modifying factor. We then demonstrated that the model was dominated by the heterogeneity in α (tumor radiosensitivity) and derived an approximate simplified model incorporating this heterogeneity. This simplified model is expressible in a compact closed form, which it is practical to fit to clinical datasets. Using two previously analysed datasets, we fit the model using direct maximum-likelihood techniques and obtained parameter estimates that were, again, consistent with the experimental data on the radiosensitivity of primary human tumor cells. This heterogeneity model includes the same number of adjustable parameters as the standard LQ model. Results: The modified model provides parameter estimates that can easily be reconciled with the in vitro measurements. The simplified (approximate) form of the heterogeneity model is a compact, closed-form probit function that can readily be fitted to clinical series by conventional maximum-likelihood methodology. This heterogeneity model provides a slightly better fit to the datasets than the conventional LQ model, with the same numbers of fitted parameters. The parameter estimates of the clinically
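
    The following numerical sketch illustrates the general idea of averaging a Poisson/LQ tumour-control probability over an inter-patient distribution of the radiosensitivity alpha, which flattens the population dose-response curve. The parameter values (clonogen number, alpha/beta ratio, repopulation kick-off) are illustrative placeholders, not the fits reported in the paper.

      # Hedged numerical sketch of a tumour-control model with inter-patient heterogeneity
      # in radiosensitivity alpha (parameter values are illustrative, not the paper's fits).
      import numpy as np

      def tcp_single(alpha, D, d, N=1e7, beta_ratio=10.0, T=45.0, Tk=21.0, Td=3.0):
          """Poisson TCP for one patient: LQ cell kill plus lagged repopulation after day Tk."""
          beta = alpha / beta_ratio                       # alpha/beta fixed at 10 Gy here
          log_sf = -alpha * D - beta * d * D + np.log(2.0) * max(T - Tk, 0.0) / Td
          return np.exp(-N * np.exp(log_sf))

      def tcp_population(D, d, alpha_mean=0.3, alpha_sd=0.07, n=20000, seed=0):
          """Average the single-patient TCP over a (truncated) normal alpha distribution."""
          rng = np.random.default_rng(seed)
          alphas = np.clip(rng.normal(alpha_mean, alpha_sd, n), 1e-3, None)
          return float(np.mean(tcp_single(alphas, D, d)))

      for dose in (50.0, 60.0, 70.0, 80.0):
          print(dose, round(tcp_population(dose, d=2.0), 3))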

  6. Incorporating remote sensing-based ET estimates into the Community Land Model version 4.5

    Directory of Open Access Journals (Sweden)

    D. Wang

    2017-07-01

    Full Text Available Land surface models bear substantial biases in simulating surface water and energy budgets despite the continuous development and improvement of model parameterizations. To reduce model biases, Parr et al. (2015) proposed a method incorporating satellite-based evapotranspiration (ET) products into land surface models. Here we apply this bias correction method to the Community Land Model version 4.5 (CLM4.5) and test its performance over the conterminous US (CONUS). We first calibrate a relationship between the observational ET from the Global Land Evaporation Amsterdam Model (GLEAM) product and the model ET from CLM4.5, and assume that this relationship holds beyond the calibration period. During the validation or application period, a simulation using the default CLM4.5 (CLM) is conducted first, and its output is combined with the calibrated observational-vs.-model ET relationship to derive a corrected ET; an experiment (CLMET) is then conducted in which the model-generated ET is overwritten with the corrected ET. Using the observations of ET, runoff, and soil moisture content as benchmarks, we demonstrate that CLMET greatly improves the hydrological simulations over most of the CONUS, and the improvement is stronger in the eastern CONUS than the western CONUS and is strongest over the Southeast CONUS. For any specific region, the degree of the improvement depends on whether the relationship between observational and model ET remains time-invariant (a fundamental hypothesis of the Parr et al. (2015) method) and whether water is the limiting factor in places where ET is underestimated. While the bias correction method improves hydrological estimates without improving the physical parameterization of land surface models, results from this study do provide guidance for physically based model development effort.
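
    A minimal sketch of the calibrate-then-overwrite idea is given below, assuming a simple linear relationship between observational and model ET; the synthetic numbers merely stand in for GLEAM and CLM4.5 output and do not reproduce the Parr et al. (2015) procedure in detail.

      # Minimal sketch (assumed linear form) of the calibrate-then-overwrite ET bias
      # correction: fit obs ET as a function of model ET in a calibration period, then
      # use that relationship to correct model ET in the application period.
      import numpy as np

      def calibrate(et_model_cal, et_obs_cal):
          """Least-squares fit et_obs ~ a * et_model + b over the calibration period."""
          a, b = np.polyfit(et_model_cal, et_obs_cal, 1)
          return a, b

      def correct(et_model, a, b):
          """Corrected ET that would overwrite the model's ET during the run."""
          return np.maximum(a * et_model + b, 0.0)        # keep ET physically non-negative

      # Synthetic example: the model overestimates ET by ~20% plus noise
      rng = np.random.default_rng(1)
      et_obs = rng.uniform(0.5, 4.0, 200)                  # mm/day, stand-in for GLEAM
      et_mod = 1.2 * et_obs + rng.normal(0.0, 0.2, 200)    # stand-in for CLM4.5 output
      a, b = calibrate(et_mod[:100], et_obs[:100])         # calibration period
      print("bias before:", np.mean(et_mod[100:] - et_obs[100:]))
      print("bias after :", np.mean(correct(et_mod[100:], a, b) - et_obs[100:]))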

  7. A constitutive mechanical model for gas hydrate bearing sediments incorporating inelastic mechanisms

    KAUST Repository

    Sánchez, Marcelo

    2016-11-30

    Gas hydrate bearing sediments (HBS) are natural soils formed in permafrost and sub-marine settings where the temperature and pressure conditions are such that gas hydrates are stable. If these conditions shift from the hydrate stability zone, hydrates dissociate and move from the solid to the gas phase. Hydrate dissociation is accompanied by significant changes in sediment structure and strongly affects its mechanical behavior (e.g., sediment stiffness, strength and dilatancy). The mechanical behavior of HBS is very complex and its modeling poses great challenges. This paper presents a new geomechanical model for hydrate bearing sediments. The model incorporates the concept of partition stress, plus a number of inelastic mechanisms proposed to capture the complex behavior of this type of soil. This constitutive model is especially well suited to simulate the behavior of HBS upon dissociation. The model was applied and validated against experimental data from triaxial and oedometric tests conducted on manufactured and natural specimens involving different hydrate saturations, hydrate morphologies, and confinement conditions. Particular attention was paid to modeling the HBS behavior during hydrate dissociation under loading. The model performance was highly satisfactory in all the cases studied. It managed to properly capture the main features of HBS mechanical behavior and it also helped to interpret the behavior of this type of sediment under different loading and hydrate conditions.

  8. Incorporating vehicle mix in stimulus-response car-following models

    Directory of Open Access Journals (Sweden)

    Saidi Siuhi

    2016-06-01

    Full Text Available The objective of this paper is to incorporate vehicle mix in stimulus-response car-following models. Separate models were estimated for acceleration and deceleration responses to account for vehicle mix via both movement state and vehicle type. For each model, three sub-models were developed for different pairs of following vehicles: “automobile following automobile,” “automobile following truck,” and “truck following automobile.” The estimated model parameters were then validated against other data from a similar region and roadway. The results indicated that drivers' behaviors were significantly different among the different pairs of following vehicles. Also, the magnitude of the estimated parameters depends on the type of vehicle being driven and/or followed. These results demonstrated the need to use separate models depending on movement state and vehicle type. The differences in parameter estimates confirmed in this paper highlight traffic safety and operational issues of mixed traffic operation on a single lane. The findings of this paper can assist transportation professionals in improving the traffic simulation models used to evaluate the impact of different strategies on the safety and performance of highways. In addition, driver response time lag estimates can be used in roadway design to calculate important design parameters such as stopping sight distance on horizontal and vertical curves for both automobiles and trucks.
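
    The sketch below shows the shape of a GM-type stimulus-response rule in which the sensitivity term and the response lag are looked up by vehicle pair and movement state, as the paper advocates. The parameter values in the table are placeholders, not the estimates reported by the author.

      # Hedged sketch of a GM-type stimulus-response car-following rule with separate
      # parameter sets by movement state (accelerating/decelerating) and vehicle pair.
      # Parameter values below are placeholders, not the paper's estimates.

      PARAMS = {
          # (pair, state): (lam, m, l, lag_s)
          ("car_following_car",   "accel"): (0.8, 0.2, 1.2, 1.0),
          ("car_following_car",   "decel"): (1.1, 0.1, 1.5, 0.8),
          ("car_following_truck", "accel"): (0.6, 0.2, 1.3, 1.2),
          ("truck_following_car", "decel"): (0.9, 0.1, 1.6, 1.5),
      }

      def response(pair, speed_follower, spacing, rel_speed):
          """Acceleration (m/s^2) of the follower, applied after the pair-specific lag.
          rel_speed = leader speed - follower speed (the stimulus)."""
          state = "accel" if rel_speed >= 0 else "decel"
          lam, m, l, lag = PARAMS[(pair, state)]
          sensitivity = lam * speed_follower**m / spacing**l
          return sensitivity * rel_speed, lag

      acc, lag = response("car_following_truck", speed_follower=20.0, spacing=30.0, rel_speed=2.0)
      print(f"respond with {acc:.3f} m/s^2 after a {lag:.1f} s lag")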

  9. Fuzzy Logic-Based Model That Incorporates Personality Traits for Heterogeneous Pedestrians

    Directory of Open Access Journals (Sweden)

    Zhuxin Xue

    2017-10-01

    Full Text Available Most models designed to simulate pedestrian dynamical behavior are based on the assumption that human decision-making can be described using precise values. This study proposes a new pedestrian model that incorporates fuzzy logic theory into a multi-agent system to address cognitive behavior that introduces uncertainty and imprecision during decision-making. We present a concept of decision preferences to represent the intrinsic control factors of decision-making. To realize the different decision preferences of heterogeneous pedestrians, the Five-Factor (OCEAN) personality model is introduced to model the psychological characteristics of individuals. Then, a fuzzy logic-based approach is adopted for mapping the relationships between the personality traits and the decision preferences. Finally, we have developed an application using our model to simulate pedestrian dynamical behavior in several normal or non-panic scenarios, including a single-exit room, a hallway with obstacles, and a narrowing passage. The effectiveness of the proposed model is validated with a user study. The results show that the proposed model can generate more reasonable and heterogeneous behavior in the simulation and indicate that individual personality has a noticeable effect on pedestrian dynamical behavior.

  10. Discontinuous Galerkin Time-Domain Modeling of Graphene Nano-Ribbon Incorporating the Spatial Dispersion Effects

    KAUST Repository

    Li, Ping; Jiang, Li Jun; Bagci, Hakan

    2018-01-01

    It is well known that graphene demonstrates spatial dispersion properties, i.e., its conductivity is nonlocal and a function of the spectral wave number (momentum operator) q. In this paper, to account for effects of spatial dispersion on transmission of high speed signals along graphene nano-ribbon (GNR) interconnects, a discontinuous Galerkin time-domain (DGTD) algorithm is proposed. The atomically-thick GNR is modeled using a nonlocal transparent surface impedance boundary condition (SIBC) incorporated into the DGTD scheme. Since the conductivity is a complicated function of q (and one cannot find an analytical Fourier transform pair between q and spatial differential operators), an exact time domain SIBC model cannot be derived. To overcome this problem, the conductivity is approximated by its Taylor series in the spectral domain under a low-q assumption. This approach permits expressing the time domain SIBC in the form of a second-order partial differential equation (PDE) in current density and electric field intensity. To permit easy incorporation of this PDE with the DGTD algorithm, three auxiliary variables, which degenerate the second-order (temporal and spatial) differential operators to first-order ones, are introduced. Regarding the temporal dispersion effects, the auxiliary differential equation (ADE) method is utilized to eliminate the expensive temporal convolutions. To demonstrate the applicability of the proposed scheme, numerical results, which involve characterization of spatial dispersion effects on the transfer impedance matrix of GNR interconnects, are presented.

  11. Discontinuous Galerkin Time-Domain Modeling of Graphene Nano-Ribbon Incorporating the Spatial Dispersion Effects

    KAUST Repository

    Li, Ping

    2018-04-13

    It is well known that graphene demonstrates spatial dispersion properties, i.e., its conductivity is nonlocal and a function of the spectral wave number (momentum operator) q. In this paper, to account for effects of spatial dispersion on transmission of high speed signals along graphene nano-ribbon (GNR) interconnects, a discontinuous Galerkin time-domain (DGTD) algorithm is proposed. The atomically-thick GNR is modeled using a nonlocal transparent surface impedance boundary condition (SIBC) incorporated into the DGTD scheme. Since the conductivity is a complicated function of q (and one cannot find an analytical Fourier transform pair between q and spatial differential operators), an exact time domain SIBC model cannot be derived. To overcome this problem, the conductivity is approximated by its Taylor series in the spectral domain under a low-q assumption. This approach permits expressing the time domain SIBC in the form of a second-order partial differential equation (PDE) in current density and electric field intensity. To permit easy incorporation of this PDE with the DGTD algorithm, three auxiliary variables, which degenerate the second-order (temporal and spatial) differential operators to first-order ones, are introduced. Regarding the temporal dispersion effects, the auxiliary differential equation (ADE) method is utilized to eliminate the expensive temporal convolutions. To demonstrate the applicability of the proposed scheme, numerical results, which involve characterization of spatial dispersion effects on the transfer impedance matrix of GNR interconnects, are presented.

  12. Incorporating networks in a probabilistic graphical model to find drivers for complex human diseases.

    Science.gov (United States)

    Mezlini, Aziz M; Goldenberg, Anna

    2017-10-01

    Discovering genetic mechanisms driving complex diseases is a hard problem. Existing methods often lack power to identify the set of responsible genes. Protein-protein interaction networks have been shown to boost power when detecting gene-disease associations. We introduce a Bayesian framework, Conflux, to find disease associated genes from exome sequencing data using networks as a prior. There are two main advantages to using networks within a probabilistic graphical model. First, networks are noisy and incomplete, a substantial impediment to gene discovery. Incorporating networks into the structure of a probabilistic model for gene inference has less impact on the solution than relying on the noisy network structure directly. Second, using a Bayesian framework we can keep track of the uncertainty of each gene being associated with the phenotype rather than returning a fixed list of genes. We first show that using networks clearly improves gene detection compared to individual gene testing. We then show consistently improved performance of Conflux compared to the state-of-the-art diffusion network-based method Hotnet2 and a variety of other network and variant aggregation methods, using randomly generated and literature-reported gene sets. We test Hotnet2 and Conflux on several network configurations to reveal biases and patterns of false positives and false negatives in each case. Our experiments show that our novel Bayesian framework Conflux incorporates many of the advantages of the current state-of-the-art methods, while offering more flexibility and improved power in many gene-disease association scenarios.

  13. Advanced Methods for Incorporating Solar Energy Technologies into Electric Sector Capacity-Expansion Models: Literature Review and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, P.; Eurek, K.; Margolis, R.

    2014-07-01

    Because solar power is a rapidly growing component of the electricity system, robust representations of solar technologies should be included in capacity-expansion models. This is a challenge because modeling the electricity system--and, in particular, modeling solar integration within that system--is a complex endeavor. This report highlights the major challenges of incorporating solar technologies into capacity-expansion models and shows examples of how specific models address those challenges. These challenges include modeling non-dispatchable technologies, determining which solar technologies to model, choosing a spatial resolution, incorporating a solar resource assessment, and accounting for solar generation variability and uncertainty.

  14. Energy system investment model incorporating heat pumps with thermal storage in buildings and buffer tanks

    DEFF Research Database (Denmark)

    Hedegaard, Karsten; Balyk, Olexandr

    2013-01-01

    Individual compression heat pumps constitute a potentially valuable resource in supporting wind power integration due to their economic competitiveness and possibilities for flexible operation. When analysing the system benefits of flexible heat pump operation, effects on investments should be taken into account. In this study, we present a model that facilitates analysing individual heat pumps and complementing heat storages in integration with the energy system, while optimising both investments and operation. The model incorporates thermal building dynamics and covers various heat storage options. It is shown that the model is well qualified for analysing possibilities and system benefits of operating heat pumps flexibly. This includes prioritising heat pump operation for hours with low marginal electricity production costs, and peak load shaving resulting in a reduced need for peak and reserve capacity investments.

  15. A MULTI-RESOLUTION FUSION MODEL INCORPORATING COLOR AND ELEVATION FOR SEMANTIC SEGMENTATION

    Directory of Open Access Journals (Sweden)

    W. Zhang

    2017-05-01

    Full Text Available In recent years, the developments for Fully Convolutional Networks (FCN) have led to great improvements for semantic segmentation in various applications including fused remote sensing data. There is, however, a lack of an in-depth study inside FCN models which would lead to an understanding of the contribution of individual layers to specific classes and their sensitivity to different types of input data. In this paper, we address this problem and propose a fusion model incorporating infrared imagery and Digital Surface Models (DSM) for semantic segmentation. The goal is to utilize heterogeneous data more accurately and effectively in a single model instead of to assemble multiple models. First, the contribution and sensitivity of layers concerning the given classes are quantified by means of their recall in FCN. The contribution of different modalities on the pixel-wise prediction is then analyzed based on visualization. Finally, an optimized scheme for the fusion of layers with color and elevation information into a single FCN model is derived based on the analysis. Experiments are performed on the ISPRS Vaihingen 2D Semantic Labeling dataset. Comprehensive evaluations demonstrate the potential of the proposed approach.

  16. Modelling and Simulation of a Manipulator with Stable Viscoelastic Grasping Incorporating Friction

    Directory of Open Access Journals (Sweden)

    A. Khurshid

    2016-12-01

    Full Text Available The design, dynamics and control of a humanoid robotic hand based on anthropological dimensions, with joint friction, are modelled, simulated and analysed in this paper by using computer aided design and multibody dynamic simulation. A combined joint friction model is incorporated in the joints. Experimental values of the coefficient of friction of grease-lubricated sliding contacts representative of manipulator joints are presented. Human fingers deform to the shape of the grasped object (enveloping grasp) at the area of interaction. A mass-spring-damper model of the grasp is developed. The interaction of the viscoelastic gripper of the arm with objects is analysed by using the Bond Graph modelling method. Simulations were conducted for several material parameters. The results of the simulation are then used to develop a prototype of the proposed gripper. The bond graph model is experimentally validated by using the prototype. The gripper is used to successfully transport soft and fragile objects. This paper provides information on optimisation of friction and its inclusion in both dynamic modelling and simulation to enhance mechanical efficiency.
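
    The lumped mass-spring-damper contact mentioned above can be written down in a few lines; the sketch below integrates such a model for a fingertip pad pressed onto an object by a step grip force. The mass, stiffness and damping values are illustrative and are not taken from the paper.

      # Minimal sketch (illustrative parameters) of the lumped mass-spring-damper contact
      # used to represent a viscoelastic fingertip pressing on a grasped object.
      import numpy as np
      from scipy.integrate import odeint

      def grasp_contact(y, t, m, k, c, f_actuator):
          """x = fingertip pad deflection into the object; classic m*x'' + c*x' + k*x = F(t)."""
          x, v = y
          return [v, (f_actuator(t) - c * v - k * x) / m]

      t = np.linspace(0.0, 2.0, 1000)
      f = lambda t: 2.0 if t > 0.2 else 0.0                 # step grip force of 2 N
      sol = odeint(grasp_contact, [0.0, 0.0], t, args=(0.05, 400.0, 4.0, f))
      contact_force = 400.0 * sol[:, 0] + 4.0 * sol[:, 1]   # force transmitted to the object
      print("steady-state deflection (mm):", 1000.0 * sol[-1, 0])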

  17. Incorporating Social System Dynamics into the Food-Energy-Water System Resilience-Sustainability Modeling Process

    Science.gov (United States)

    Givens, J.; Padowski, J.; Malek, K.; Guzman, C.; Boll, J.; Adam, J. C.; Witinok-Huber, R.

    2017-12-01

    In the face of climate change and multi-scalar governance objectives, achieving resilience of food-energy-water (FEW) systems requires interdisciplinary approaches. Through coordinated modeling and management efforts, we study "Innovations in the Food-Energy-Water Nexus (INFEWS)" through a case-study in the Columbia River Basin. Previous research on FEW system management and resilience includes some attention to social dynamics (e.g., economic, governance); however, more research is needed to better address social science perspectives. Decisions ultimately taken in this river basin would occur among stakeholders encompassing various institutional power structures including multiple U.S. states, tribal lands, and sovereign nations. The social science lens draws attention to the incompatibility between the engineering definition of resilience (i.e., return to equilibrium or a singular stable state) and the ecological and social system realities, more explicit in the ecological interpretation of resilience (i.e., the ability of a system to move into a different, possibly more resilient state). Social science perspectives include but are not limited to differing views on resilience as normative, system persistence versus transformation, and system boundary issues. To expand understanding of resilience and objectives for complex and dynamic systems, concepts related to inequality, heterogeneity, power, agency, trust, values, culture, history, conflict, and system feedbacks must be more tightly integrated into FEW research. We identify gaps in knowledge and data, and the value and complexity of incorporating social components and processes into systems models. We posit that socio-biophysical system resilience modeling would address important complex, dynamic social relationships, including non-linear dynamics of social interactions, to offer an improved understanding of sustainable management in FEW systems. Conceptual modeling that is presented in our study, represents

  18. Developing a stochastic parameterization to incorporate plant trait variability into ecohydrologic modeling

    Science.gov (United States)

    Liu, S.; Ng, G. H. C.

    2017-12-01

    The global plant database has revealed that plant traits can vary more within a plant functional type (PFT) than among different PFTs, indicating that the current paradigm in ecohydrogical models of specifying fixed parameters based solely on plant functional type (PFT) could potentially bias simulations. Although some recent modeling studies have attempted to incorporate this observed plant trait variability, many failed to consider uncertainties due to sparse global observation, or they omitted spatial and/or temporal variability in the traits. Here we present a stochastic parameterization for prognostic vegetation simulations that are stochastic in time and space in order to represent plant trait plasticity - the process by which trait differences arise. We have developed the new PFT parameterization within the Community Land Model 4.5 (CLM 4.5) and tested the method for a desert shrubland watershed in the Mojave Desert, where fixed parameterizations cannot represent acclimation to desert conditions. Spatiotemporally correlated plant trait parameters were first generated based on TRY statistics and were then used to implement ensemble runs for the study area. The new PFT parameterization was then further conditioned on field measurements of soil moisture and remotely sensed observations of leaf-area-index to constrain uncertainties in the sparse global database. Our preliminary results show that incorporating data-conditioned, variable PFT parameterizations strongly affects simulated soil moisture and water fluxes, compared with default simulations. The results also provide new insights about correlations among plant trait parameters and between traits and environmental conditions in the desert shrubland watershed. Our proposed stochastic PFT parameterization method for ecohydrological models has great potential in advancing our understanding of how terrestrial ecosystems are predicted to adapt to variable environmental conditions.
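
    A simplified sketch of the sampling step is shown below: an ensemble of correlated trait parameters for one PFT is drawn from a multivariate normal whose moments stand in for TRY-derived statistics. The spatial and temporal correlation of the fields, and the conditioning on soil moisture and LAI observations, are omitted; all names and numbers are illustrative.

      # Hedged sketch of drawing an ensemble of correlated plant-trait parameters for one
      # PFT (the statistics below are placeholders standing in for TRY-derived values).
      import numpy as np

      trait_names = ["vcmax25", "sla", "leaf_cn"]
      mean = np.array([60.0, 0.012, 25.0])                  # illustrative PFT-level means
      sd   = np.array([15.0, 0.004, 5.0])
      corr = np.array([[ 1.0,  0.5, -0.4],                  # assumed inter-trait correlations
                       [ 0.5,  1.0, -0.3],
                       [-0.4, -0.3,  1.0]])
      cov = np.outer(sd, sd) * corr

      rng = np.random.default_rng(42)
      ensemble = rng.multivariate_normal(mean, cov, size=50)          # 50 ensemble members
      ensemble = np.maximum(ensemble, 0.1 * mean)                     # crude physical bounds

      for name, col in zip(trait_names, ensemble.T):
          print(f"{name}: mean={col.mean():.3g}, sd={col.std():.3g}")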

  19. A model to incorporate organ deformation in the evaluation of dose/volume relationship

    International Nuclear Information System (INIS)

    Yan, D.; Jaffray, D.; Wong, J.; Brabbins, D.; Martinez, A. A.

    1997-01-01

    Purpose: Measurements of internal organ motion have demonstrated that daily organ deformation exists during the course of radiation treatment. However, a model to evaluate the resultant dose delivered to a daily deformed organ remains a difficult challenge. Current methods which model such organ deformation as rigid body motion in the dose calculation for treatment planning evaluation are incorrect and misleading. In this study, a new model for treatment planning evaluation is introduced which incorporates patient-specific information of daily organ deformation and setup variation. The model was also used to retrospectively analyze the actual treatment data measured using daily CT scans for 5 patients undergoing prostate treatment. Methods and Materials: The model assumes that for each patient, the organ of interest can be measured during the first few treatment days. First, the volume of each organ is delineated from each of the daily measurements and cumulated in a 3D bit-map. A tissue occupancy distribution is then constructed with the 50% isodensity representing the mean, or effective, organ volume. During the course of treatment, each voxel in the effective organ volume is assumed to move inside a local 3D neighborhood with a specific distribution function. The neighborhood and the distribution function are deduced from the positions and shapes of the organ in the first few measurements using a biomechanical model of a viscoelastic body. For each voxel, the local distribution function is then convolved with the spatial dose distribution. The latter also includes the variation in dose due to daily setup error. As a result, the cumulative dose to the voxel incorporates the effects of daily setup variation and organ deformation. A ''variation adjusted'' dose volume histogram, aDVH, for the effective organ volume can then be constructed for the purpose of treatment evaluation and optimization. Up to 20 daily CT scans and daily portal images for 5 patients with prostate
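
    The convolution step can be illustrated in one dimension: each organ voxel's displacement probability distribution is folded into the planned dose line to give an expected voxel dose, from which a variation-adjusted DVH is binned. The sketch below assumes a Gaussian displacement kernel and synthetic dose values; it is not the authors' implementation.

      # Simplified 1-D sketch (not the authors' implementation) of folding a per-voxel
      # displacement distribution into the planned dose to get a variation-adjusted DVH.
      import numpy as np

      def adjusted_voxel_dose(dose_line, voxel_idx, displacement_pdf, offsets):
          """Expected dose to a voxel that wanders around its nominal position."""
          positions = np.clip(voxel_idx + offsets, 0, len(dose_line) - 1)
          return np.sum(displacement_pdf * dose_line[positions])

      def cumulative_dvh(doses, bins):
          """Fraction of organ volume receiving at least each dose level."""
          return [(doses >= b).mean() for b in bins]

      dose_line = np.concatenate([np.full(20, 70.0), np.linspace(70.0, 10.0, 10)])  # Gy
      offsets = np.arange(-3, 4)                                   # voxel shifts of +/- 3
      pdf = np.exp(-0.5 * (offsets / 1.5) ** 2); pdf /= pdf.sum()  # assumed Gaussian motion

      organ_voxels = np.arange(15, 28)                             # organ straddles the penumbra
      adjusted = np.array([adjusted_voxel_dose(dose_line, i, pdf, offsets) for i in organ_voxels])
      print(cumulative_dvh(adjusted, bins=[30.0, 50.0, 65.0]))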

  20. Incorporating time-delays in S-System model for reverse engineering genetic networks.

    Science.gov (United States)

    Chowdhury, Ahsan Raja; Chetty, Madhu; Vinh, Nguyen Xuan

    2013-06-18

    In any gene regulatory network (GRN), the complex interactions occurring amongst transcription factors and target genes can be either instantaneous or time-delayed. However, many existing modeling approaches currently applied for inferring GRNs are unable to represent both these interactions simultaneously. As a result, all these approaches cannot detect important interactions of the other type. S-System model, a differential equation based approach which has been increasingly applied for modeling GRNs, also suffers from this limitation. In fact, all S-System based existing modeling approaches have been designed to capture only instantaneous interactions, and are unable to infer time-delayed interactions. In this paper, we propose a novel Time-Delayed S-System (TDSS) model which uses a set of delay differential equations to represent the system dynamics. The ability to incorporate time-delay parameters in the proposed S-System model enables simultaneous modeling of both instantaneous and time-delayed interactions. Furthermore, the delay parameters are not limited to just positive integer values (corresponding to time stamps in the data), but can also take fractional values. Moreover, we also propose a new criterion for model evaluation exploiting the sparse and scale-free nature of GRNs to effectively narrow down the search space, which not only reduces the computation time significantly but also improves model accuracy. The evaluation criterion systematically adapts the max-min in-degrees and also systematically balances the effect of network accuracy and complexity during optimization. The four well-known performance measures applied to the experimental studies on synthetic networks with various time-delayed regulations clearly demonstrate that the proposed method can capture both instantaneous and delayed interactions correctly with high precision. The experiments carried out on two well-known real-life networks, namely IRMA and SOS DNA repair network in
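
    For concreteness, the sketch below integrates a two-gene time-delayed S-System with fixed-step Euler, where production terms use (possibly fractional) delays and delayed states are obtained by linear interpolation. The network, kinetic orders and delays are invented for illustration and are not an inferred GRN.

      # Hedged sketch of a two-gene time-delayed S-System integrated with fixed-step Euler;
      # parameter values are illustrative, not an inferred network.
      import numpy as np

      alpha = np.array([2.0, 1.5]); beta = np.array([1.2, 1.0])
      g = np.array([[0.0, -0.8],            # production kinetic orders g[i][j]
                    [0.7,  0.0]])
      h = np.array([[0.5,  0.0],            # degradation kinetic orders h[i][j]
                    [0.0,  0.6]])
      tau_g = np.array([[0.0, 1.2],         # delays (can be fractional) on production terms
                        [0.4, 0.0]])

      dt, n_steps = 0.01, 2000
      x = np.ones((n_steps + 1, 2)) * 0.5   # history buffer; x[k] approximates x(k*dt)

      def delayed(k, j, delay):
          """Linearly interpolate x_j at time (k*dt - delay), holding the initial history."""
          s = max(k - delay / dt, 0.0)
          lo = int(np.floor(s))
          return np.interp(s, [lo, lo + 1], [x[lo, j], x[min(lo + 1, k), j]])

      for k in range(n_steps):
          for i in range(2):
              prod = alpha[i] * np.prod([delayed(k, j, tau_g[i, j]) ** g[i, j] for j in range(2)])
              deg  = beta[i]  * np.prod([x[k, j] ** h[i, j] for j in range(2)])
              x[k + 1, i] = max(x[k, i] + dt * (prod - deg), 1e-6)

      print("final state:", x[-1])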

  1. Benefits of incorporating spatial organisation of catchments for a semi-distributed hydrological model

    Science.gov (United States)

    Schumann, Andreas; Oppel, Henning

    2017-04-01

    To represent the hydrological behaviour of catchments, a model should reproduce/reflect the hydrologically most relevant catchment characteristics. These are heterogeneously distributed within a watershed but often interrelated and subject to a certain spatial organisation. Since common models are mostly based on fundamental assumptions about hydrological processes, the reduction of variance of catchment properties as well as the incorporation of the spatial organisation of the catchment is desirable. We have developed a method that combines the idea of the width-function used for determination of the geomorphologic unit hydrograph with information about soil or topography. With this method we are able to assess the spatial organisation of selected catchment characteristics. An algorithm was developed that structures a watershed into sub-basins and other spatial units to minimise its heterogeneity. The outcomes of this algorithm are used for the spatial setup of a semi-distributed model. Since the spatial organisation of a catchment is not bound to a single characteristic, we have to embed information on multiple catchment properties. For this purpose we applied a fuzzy-based method to combine the spatial setups for multiple single characteristics into a union, optimal spatial differentiation. Utilizing this method, we are able to propose a spatial structure for a semi-distributed hydrological model, comprising the definition of sub-basins and a zonal classification within each sub-basin. Besides the improved spatial structuring, the performed analysis ameliorates modelling in another way: the spatial variability of catchment characteristics, which is represented by zones of minimal heterogeneity, can be considered in a parameter-constrained calibration scheme. In a case study, both options were used to explore the benefits of incorporating the spatial organisation and derived parameter constraints for the parametrisation of a HBV-96 model. We use two benchmark

  2. Representation and Incorporation of Close Others' Responses: The RICOR Model of Social Influence.

    Science.gov (United States)

    Smith, Eliot R; Mackie, Diane M

    2015-08-03

    We propose a new model of social influence, which can occur spontaneously and in the absence of typically assumed motives. We assume that perceivers routinely construct representations of other people's experiences and responses (beliefs, attitudes, emotions, and behaviors), when observing others' responses or simulating the responses of unobserved others. Like representations made accessible by priming, these representations may then influence the process that generates perceivers' own responses, without intention or awareness, especially when there is a strong social connection to the other. We describe evidence for the basic properties and important moderators of this process, which distinguish it from other mechanisms such as informational, normative, or social identity influence. The model offers new perspectives on the role of others' values in producing cultural differences, the persistence and power of stereotypes, the adaptive reasons for being influenced by others' responses, and the impact of others' views about the self. © 2015 by the Society for Personality and Social Psychology, Inc.

  3. Simulation of Forest Carbon Fluxes Using Model Incorporation and Data Assimilation

    Directory of Open Access Journals (Sweden)

    Min Yan

    2016-07-01

    Full Text Available This study improved simulation of forest carbon fluxes in the Changbai Mountains with a process-based model (Biome-BGC) using model incorporation and data assimilation. Firstly, the original remote sensing-based MODIS MOD_17 GPP (MOD_17) model was optimized using refined input data and biome-specific parameters. The key ecophysiological parameters of the Biome-BGC model were determined through the Extended Fourier Amplitude Sensitivity Test (EFAST) sensitivity analysis. Then the optimized MOD_17 model was used to calibrate the Biome-BGC model by adjusting the sensitive ecophysiological parameters. Once the best match was found for the 10 selected forest plots for the 8-day GPP estimates from the optimized MOD_17 and from the Biome-BGC, the values of the sensitive ecophysiological parameters were determined. The calibrated Biome-BGC model agreed better with the eddy covariance (EC) measurements (R² = 0.87, RMSE = 1.583 gC·m⁻²·d⁻¹) than the original model did (R² = 0.72, RMSE = 2.419 gC·m⁻²·d⁻¹). To provide a best estimate of the true state of the model, the Ensemble Kalman Filter (EnKF) was used to assimilate five years (eight-day periods between 2003 and 2007) of Global LAnd Surface Satellite (GLASS) LAI products into the calibrated Biome-BGC model. The results indicated that LAI simulated through the assimilated Biome-BGC agreed well with GLASS LAI. GPP performances obtained from the assimilated Biome-BGC were further improved and verified by EC measurements at the Changbai Mountains forest flux site (R² = 0.92, RMSE = 1.261 gC·m⁻²·d⁻¹).
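
    The core of the assimilation step is the EnKF analysis update; a minimal sketch with a two-variable state (LAI and a carbon pool) and a single LAI observation is given below. The ensemble size, error variances and observation operator are illustrative assumptions, not the study's configuration.

      # Minimal sketch of the EnKF analysis step used to nudge an ensemble of modelled LAI
      # states towards a satellite LAI observation (dimensions and errors are illustrative).
      import numpy as np

      def enkf_update(ensemble, obs, obs_var, H):
          """ensemble: (n_members, n_state); obs: (n_obs,); H: (n_obs, n_state) observation operator."""
          n, _ = ensemble.shape
          rng = np.random.default_rng(0)
          X = ensemble - ensemble.mean(axis=0)                 # state anomalies
          P = X.T @ X / (n - 1)                                # sample covariance
          R = np.diag(np.full(len(obs), obs_var))
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
          perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), (n, len(obs)))
          return ensemble + (perturbed - ensemble @ H.T) @ K.T

      # State = [LAI, leaf C pool]; only LAI is observed (by GLASS in the study)
      rng = np.random.default_rng(3)
      prior = np.column_stack([rng.normal(3.0, 0.6, 30), rng.normal(150.0, 20.0, 30)])
      H = np.array([[1.0, 0.0]])
      posterior = enkf_update(prior, obs=np.array([4.2]), obs_var=0.2**2, H=H)
      print("prior LAI mean:", prior[:, 0].mean(), "-> posterior:", posterior[:, 0].mean())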

  4. Energy system investment model incorporating heat pumps with thermal storage in buildings and buffer tanks

    International Nuclear Information System (INIS)

    Hedegaard, Karsten; Balyk, Olexandr

    2013-01-01

    Individual compression heat pumps constitute a potentially valuable resource in supporting wind power integration due to their economic competitiveness and possibilities for flexible operation. When analysing the system benefits of flexible heat pump operation, effects on investments should be taken into account. In this study, we present a model that facilitates analysing individual heat pumps and complementing heat storages in integration with the energy system, while optimising both investments and operation. The model incorporates thermal building dynamics and covers various heat storage options: passive heat storage in the building structure via radiator heating, active heat storage in concrete floors via floor heating, and use of thermal storage tanks for space heating and hot water. It is shown that the model is well qualified for analysing possibilities and system benefits of operating heat pumps flexibly. This includes prioritising heat pump operation for hours with low marginal electricity production costs, and peak load shaving resulting in a reduced need for peak and reserve capacity investments. - Highlights: • Model optimising heat pumps and heat storages in integration with the energy system. • Optimisation of both energy system investments and operation. • Heat storage in building structure and thermal storage tanks included. • Model well qualified for analysing system benefits of flexible heat pump operation. • Covers peak load shaving and operation prioritised for low electricity prices

  5. Incorporating social groups' responses in a descriptive model for second- and higher-order impact identification

    International Nuclear Information System (INIS)

    Sutheerawatthana, Pitch; Minato, Takayuki

    2010-01-01

    The response of a social group is a missing element in the formal impact assessment model. Previous discussion of the involvement of social groups in an intervention has mainly focused on the formation of the intervention. This article discusses the involvement of social groups in a different way. A descriptive model is proposed by incorporating a social group's response into the concept of second- and higher-order effects. The model is developed based on a cause-effect relationship through the observation of phenomena in case studies. The model clarifies the process by which social groups interact with a lower-order effect and then generate a higher-order effect in an iterative manner. This study classifies social groups' responses into three forms (opposing, modifying, and advantage-taking actions) and places them in six pathways. The model is expected to be used as an analytical tool for investigating and identifying impacts in the planning stage and as a framework for monitoring social groups' responses during the implementation stage of a policy, plan, program, or project (PPPPs).

  6. Incorporating plant fossil data into species distribution models is not straightforward: Pitfalls and possible solutions

    Science.gov (United States)

    Moreno-Amat, Elena; Rubiales, Juan Manuel; Morales-Molino, César; García-Amorena, Ignacio

    2017-08-01

    The increasing development of species distribution models (SDMs) using palaeodata has created new prospects to address questions of evolution, ecology and biogeography from wider perspectives. Palaeobotanical data provide information on the past distribution of taxa at a given time and place, and their incorporation into modelling has contributed to advancing the SDM field. This has made it possible, for example, to calibrate models under past climate conditions or to validate projected models calibrated on current species distributions. However, these data also bear certain shortcomings when used in SDMs that may hinder the resulting ecological outcomes and eventually lead to misleading conclusions. Palaeodata may not be equivalent to present data, but instead frequently exhibit limitations and biases regarding species representation, taxonomy and chronological control, and their inclusion in SDMs should be carefully assessed. The limitations of palaeobotanical data applied to SDM studies are infrequently discussed and often neglected in the modelling literature; thus, we argue for the more careful selection and control of these data. We encourage authors to use palaeobotanical data in their SDM studies and, for doing so, we propose some recommendations to improve the robustness, reliability and significance of palaeo-SDM analyses.

  7. Incorporating teleconnection information into reservoir operating policies using Stochastic Dynamic Programming and a Hidden Markov Model

    Science.gov (United States)

    Turner, Sean; Galelli, Stefano; Wilcox, Karen

    2015-04-01

    Water reservoir systems are often affected by recurring large-scale ocean-atmospheric anomalies, known as teleconnections, that cause prolonged periods of climatological drought. Accurate forecasts of these events -- at lead times in the order of weeks and months -- may enable reservoir operators to take more effective release decisions to improve the performance of their systems. In practice this might mean a more reliable water supply system, a more profitable hydropower plant or a more sustainable environmental release policy. To this end, climate indices, which represent the oscillation of the ocean-atmospheric system, might be gainfully employed within reservoir operating models that adapt the reservoir operation as a function of the climate condition. This study develops a Stochastic Dynamic Programming (SDP) approach that can incorporate climate indices using a Hidden Markov Model. The model simulates the climatic regime as a hidden state following a Markov chain, with the state transitions driven by variation in climatic indices, such as the Southern Oscillation Index. Time series analysis of recorded streamflow data reveals the parameters of separate autoregressive models that describe the inflow to the reservoir under three representative climate states ("normal", "wet", "dry"). These models then define inflow transition probabilities for use in a classic SDP approach. The key advantage of the Hidden Markov Model is that it allows conditioning the operating policy not only on the reservoir storage and the antecedent inflow, but also on the climate condition, thus potentially allowing adaptability to a broader range of climate conditions. In practice, the reservoir operator would effect a water release tailored to a specific climate state based on available teleconnection data and forecasts. The approach is demonstrated on the operation of a realistic, stylised water reservoir with carry-over capacity in South-East Australia. Here teleconnections relating
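
    A minimal sketch of the kind of backward recursion such an approach uses is given below, with a hypothetical three-state climate Markov chain and a toy reservoir. In the actual study the climate state is hidden and inferred from teleconnection indices via the Hidden Markov Model; the sketch simplifies this by treating the state as known, and the transition matrix, inflows, and reward function are all invented.

```python
import numpy as np

# Hypothetical climate-state transition matrix ("normal", "wet", "dry").
P = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.6, 0.1],
              [0.3, 0.1, 0.6]])
inflow = np.array([2, 4, 1])          # expected inflow per climate state (toy values)
storages = np.arange(0, 11)           # discretised storage levels
S_MAX, T = 10, 12                     # capacity and planning horizon

def reward(release):
    return np.sqrt(release)           # concave benefit of water release (toy)

V = np.zeros((len(storages), 3))      # terminal value function
policy = np.zeros((T, len(storages), 3), dtype=int)
for t in reversed(range(T)):
    V_new = np.zeros_like(V)
    for si, s in enumerate(storages):
        for k in range(3):            # current climate state
            best, best_r = -np.inf, 0
            for r in range(0, s + inflow[k] + 1):      # feasible releases
                s_next = min(s + inflow[k] - r, S_MAX)
                val = reward(r) + P[k] @ V[s_next]     # immediate + expected future value
                if val > best:
                    best, best_r = val, r
            V_new[si, k], policy[t, si, k] = best, best_r
    V = V_new

print(policy[0, 5])   # optimal release at half-full storage for each climate state
```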

  8. A generalized linear-quadratic model incorporating reciprocal time pattern of radiation damage repair

    International Nuclear Information System (INIS)

    Huang, Zhibin; Mayr, Nina A.; Lo, Simon S.; Wang, Jian Z.; Jia Guang; Yuh, William T. C.; Johnke, Roberta

    2012-01-01

    Purpose: It has been conventionally assumed that the repair rate for sublethal damage (SLD) remains constant during the entire radiation course. However, increasing evidence from animal studies suggests that this may not be the case. Rather, it appears that the repair rate for radiation-induced SLD slows down with increasing time. Such a slowdown in repair would suggest that an exponential repair pattern would not necessarily predict the repair process accurately. The purpose of this study was therefore to investigate a new generalized linear-quadratic (LQ) model incorporating a repair pattern with reciprocal time. The new formulas were tested with published experimental data. Methods: The LQ model has been widely used in radiation therapy, and the parameter G in the surviving fraction represents the repair process of sublethal damage, with T_r as the repair half-time. When a reciprocal pattern of the repair process was adopted, a closed form of G was derived analytically for arbitrary radiation schemes. Published animal data were adopted to test the reciprocal formulas. Results: A generalized LQ model describing the repair process in a reciprocal pattern was obtained. Subsequently, formulas for special cases were derived from this general form. The reciprocal model showed a better fit to the animal data than the exponential model, particularly for the ED50 data (reduced χ²_min of 2.0 vs 4.3, p = 0.11 vs 0.006), with the following gLQ parameters: α/β = 2.6-4.8 Gy, T_r = 3.2-3.9 h for rat foot skin, and α/β = 0.9 Gy, T_r = 1.1 h for rat spinal cord. Conclusions: These results suggest that the generalized LQ model incorporating a reciprocal time pattern of sublethal damage repair fits the data better than the exponential repair model. These formulas can be used to analyze experimental and clinical data where a slowing-down repair process appears during the course of radiation therapy.
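
    For context, the standard LQ surviving fraction and the Lea-Catcheside dose-protraction factor with mono-exponential repair are written below; this is the textbook formulation that the study generalises, not the paper's new closed form. Here D is the total dose, Ḋ(t) the dose rate, and μ = ln2/T_r the repair rate; in the reciprocal-repair generalisation the exponential kernel is replaced by a reciprocal-time kernel.

```latex
S = \exp\!\left(-\alpha D - \beta\, G\, D^{2}\right),
\qquad
G = \frac{2}{D^{2}} \int_{-\infty}^{\infty} \dot{D}(t)\,\mathrm{d}t
    \int_{-\infty}^{t} \dot{D}(t')\, e^{-\mu\,(t - t')}\,\mathrm{d}t',
\qquad
\mu = \frac{\ln 2}{T_r}.
```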

  9. Using item response theory to investigate the structure of anticipated affect: do self-reports about future affective reactions conform to typical or maximal models?

    OpenAIRE

    Zampetakis, Leonidas A.; Lerakis, Manolis; Kafetsios, Konstantinos; Moustakis, Vassilis

    2015-01-01

    In the present research we used item response theory (IRT) to examine whether affective predictions (anticipated affect) conform to a typical (i.e., what people usually do) or a maximal behavior process (i.e., what people can do). The former correspond to non-monotonic ideal point IRT models, whereas the latter correspond to monotonic dominance IRT models. A convenience, cross-sectional student sample (N=1624) was used. Participants were asked to report on anticipated positive and negative a...

  10. Petroacoustic Modelling of Heterolithic Sandstone Reservoirs: A Novel Approach to Gassmann Modelling Incorporating Sedimentological Constraints and NMR Porosity data

    Science.gov (United States)

    Matthews, S.; Lovell, M.; Davies, S. J.; Pritchard, T.; Sirju, C.; Abdelkarim, A.

    2012-12-01

    Heterolithic or 'shaly' sandstone reservoirs constitute a significant proportion of hydrocarbon resources. Petroacoustic models (a combination of petrophysics and rock physics) enhance the ability to extract reservoir properties from seismic data, providing a connection between seismic and fine-scale rock properties. By incorporating sedimentological observations, these models can be better constrained and improved. Petroacoustic modelling is complicated by the unpredictable effects of clay minerals and clay-sized particles on geophysical properties. Such effects are responsible for erroneous results when models developed for "clean" reservoirs - such as Gassmann's equation (Gassmann, 1951) - are applied to heterolithic sandstone reservoirs. Gassmann's equation is arguably the most popular petroacoustic modelling technique in the hydrocarbon industry and is used to model elastic effects of changing reservoir fluid saturations. Successful implementation of Gassmann's equation requires well-constrained drained rock frame properties, which in heterolithic sandstones are heavily influenced by reservoir sedimentology, particularly clay distribution. The prevalent approach to categorising clay distribution is based on the Thomas-Stieber model (Thomas & Stieber, 1975); this approach is inconsistent with current understanding of 'shaly sand' sedimentology and omits properties such as sorting and grain size. The novel approach presented here demonstrates that characterising reservoir sedimentology constitutes an important modelling phase. As well as incorporating sedimentological constraints, this novel approach also aims to improve drained frame moduli estimates through more careful consideration of Gassmann's model assumptions and limitations. A key assumption of Gassmann's equation is a pore space in total communication with movable fluids. This assumption is often violated by conventional applications in heterolithic sandstone reservoirs where effective porosity, which
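
    For reference, Gassmann's (1951) fluid-substitution relation referred to above is commonly written as below, where K_dry, K_sat, K_min and K_fl are the dry-frame, saturated-rock, mineral and fluid bulk moduli and φ is porosity; this is the standard textbook form, not the authors' modified workflow.

```latex
K_{\mathrm{sat}} \;=\; K_{\mathrm{dry}} \;+\;
\frac{\left(1 - \dfrac{K_{\mathrm{dry}}}{K_{\mathrm{min}}}\right)^{2}}
     {\dfrac{\phi}{K_{\mathrm{fl}}} + \dfrac{1-\phi}{K_{\mathrm{min}}}
      - \dfrac{K_{\mathrm{dry}}}{K_{\mathrm{min}}^{2}}},
\qquad
\mu_{\mathrm{sat}} = \mu_{\mathrm{dry}}.
```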

  11. A neural population model incorporating dopaminergic neurotransmission during complex voluntary behaviors.

    Directory of Open Access Journals (Sweden)

    Stefan Fürtinger

    2014-11-01

    Full Text Available Assessing brain activity during complex voluntary motor behaviors that require the recruitment of multiple neural sites is a field of active research. Our current knowledge is primarily based on human brain imaging studies that have clear limitations in terms of temporal and spatial resolution. We developed a physiologically informed non-linear multi-compartment stochastic neural model to simulate functional brain activity coupled with neurotransmitter release during complex voluntary behavior, such as speech production. Due to its state-dependent modulation of neural firing, dopaminergic neurotransmission plays a key role in the organization of functional brain circuits controlling speech and language and thus has been incorporated in our neural population model. A rigorous mathematical proof establishing existence and uniqueness of solutions to the proposed model as well as a computationally efficient strategy to numerically approximate these solutions are presented. Simulated brain activity during the resting state and sentence production was analyzed using functional network connectivity, and graph theoretical techniques were employed to highlight differences between the two conditions. We demonstrate that our model successfully reproduces characteristic changes seen in empirical data between the resting state and speech production, and dopaminergic neurotransmission evokes pronounced changes in modeled functional connectivity by acting on the underlying biological stochastic neural model. Specifically, model and data networks in both speech and rest conditions share task-specific network features: both the simulated and empirical functional connectivity networks show an increase in nodal influence and segregation in speech over the resting state. These commonalities confirm that dopamine is a key neuromodulator of the functional connectome of speech control. Based on reproducible characteristic aspects of empirical data, we suggest a number

  12. Incorporating Single-nucleotide Polymorphisms Into the Lyman Model to Improve Prediction of Radiation Pneumonitis

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, Susan L., E-mail: sltucker@mdanderson.org [Department of Bioinformatics and Computational Biology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Li Minghuan [Department of Radiation Oncology, Shandong Cancer Hospital, Jinan, Shandong (China); Xu Ting; Gomez, Daniel [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Yuan Xianglin [Department of Oncology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan (China); Yu Jinming [Department of Radiation Oncology, Shandong Cancer Hospital, Jinan, Shandong (China); Liu Zhensheng; Yin Ming; Guan Xiaoxiang; Wang Lie; Wei Qingyi [Department of Epidemiology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Mohan, Radhe [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Vinogradskiy, Yevgeniy [University of Colorado School of Medicine, Aurora, Colorado (United States); Martel, Mary [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Liao Zhongxing [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2013-01-01

    Purpose: To determine whether single-nucleotide polymorphisms (SNPs) in genes associated with DNA repair, cell cycle, transforming growth factor-β, tumor necrosis factor and receptor, folic acid metabolism, and angiogenesis can significantly improve the fit of the Lyman-Kutcher-Burman (LKB) normal-tissue complication probability (NTCP) model of radiation pneumonitis (RP) risk among patients with non-small cell lung cancer (NSCLC). Methods and Materials: Sixteen SNPs from 10 different genes (XRCC1, XRCC3, APEX1, MDM2, TGFβ, TNFα, TNFR, MTHFR, MTRR, and VEGF) were genotyped in 141 NSCLC patients treated with definitive radiation therapy, with or without chemotherapy. The LKB model was used to estimate the risk of severe (grade ≥3) RP as a function of mean lung dose (MLD), with SNPs and patient smoking status incorporated into the model as dose-modifying factors. Multivariate analyses were performed by adding significant factors to the MLD model in a forward stepwise procedure, with significance assessed using the likelihood-ratio test. Bootstrap analyses were used to assess the reproducibility of results under variations in the data. Results: Five SNPs were selected for inclusion in the multivariate NTCP model based on MLD alone. SNPs associated with an increased risk of severe RP were in genes for TGFβ, VEGF, TNFα, XRCC1 and APEX1. With smoking status included in the multivariate model, the SNPs significantly associated with increased risk of RP were in genes for TGFβ, VEGF, and XRCC3. Bootstrap analyses selected a median of 4 SNPs per model fit, with the 6 genes listed above selected most often. Conclusions: This study provides evidence that SNPs can significantly improve the predictive ability of the Lyman MLD model. With a small number of SNPs, it was possible to distinguish cohorts with >50% risk vs <10% risk of RP when they were exposed to high MLDs.
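
    For orientation, the LKB NTCP model referred to above is usually written in the probit form below; in this study the covariates (risk SNPs and smoking status) enter as dose-modifying factors that scale the effective TD50. This is the generic textbook form, not the fitted model from the paper.

```latex
\mathrm{NTCP} \;=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\,\mathrm{d}x,
\qquad
t \;=\; \frac{\mathrm{MLD} - TD_{50}}{m \cdot TD_{50}}.
```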

  13. Constraining Distributed Catchment Models by Incorporating Perceptual Understanding of Spatial Hydrologic Behaviour

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models tend to contain a large number of poorly defined and spatially varying model parameters which are therefore computationally expensive to calibrate. Insufficient data can result in model parameter and structural equifinality, particularly when calibration is reliant on catchment outlet discharge behaviour alone. Evaluating spatial patterns of internal hydrological behaviour has the potential to reveal simulations that, whilst consistent with measured outlet discharge, are qualitatively dissimilar to our perceptual understanding of how the system should behave. We argue that such understanding, which may be derived from stakeholder knowledge across different catchments for certain process dynamics, is a valuable source of information to help reject non-behavioural models, and therefore identify feasible model structures and parameters. The challenge, however, is to convert different sources of often qualitative and/or semi-qualitative information into robust quantitative constraints of model states and fluxes, and combine these sources of information together to reject models within an efficient calibration framework. Here we present the development of a framework to incorporate different sources of data to efficiently calibrate distributed catchment models. For each source of information, an interval or inequality is used to define the behaviour of the catchment system. These intervals are then combined to produce a hyper-volume in state space, which is used to identify behavioural models. We apply the methodology to calibrate the Penn State Integrated Hydrological Model (PIHM) at the Wye catchment, Plynlimon, UK. Outlet discharge behaviour is successfully simulated when perceptual understanding of relative groundwater levels between lowland peat, upland peat
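
    A minimal sketch of the limits-of-acceptability style rejection such a framework uses: candidate parameter sets are kept only if every simulated state or flux falls inside its prescribed interval or satisfies an inequality. The model function, parameter ranges, intervals, and variable names here are placeholders, not the PIHM or Wye catchment set-up.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_model(params):
    """Placeholder for a distributed model run; returns simulated diagnostics."""
    k, s = params
    return {"outlet_flow": 10 * k + s,           # stand-ins for simulated outputs
            "gw_level_upland": 2 * k,
            "gw_level_lowland": 3 * k + 0.5 * s}

# Interval and inequality constraints expressing perceptual/soft knowledge.
intervals = {"outlet_flow": (8.0, 12.0),
             "gw_level_upland": (0.5, 1.5)}
inequalities = [lambda d: d["gw_level_lowland"] > d["gw_level_upland"]]

behavioural = []
for _ in range(10_000):
    params = rng.uniform([0.1, 0.0], [1.5, 5.0])      # sample the prior parameter ranges
    diag = run_model(params)
    ok = all(lo <= diag[name] <= hi for name, (lo, hi) in intervals.items())
    ok = ok and all(rule(diag) for rule in inequalities)
    if ok:
        behavioural.append(params)

print(len(behavioural), "behavioural parameter sets retained")
```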

  14. Development of a prototype mesoscale computer model incorporating treatment of topography

    International Nuclear Information System (INIS)

    Apsimon, H.; Kitson, K.; Fawcett, M.; Goddard, A.J.H.

    1984-01-01

    Models are available for simulating dispersal of accidental releases, using mass-consistent wind-fields and accounting for site-specific topography. These techniques were examined critically to see if they might be improved, and to assess their limitations. An improved model, windfield adjusted for topography (WAFT), was developed (with advantages over MATHEW used in the Atmospheric Release Advisory Capability - ARAC system). To simulate dispersion in the windfields produced by WAFT and to calculate time-integrated air concentrations and dry and wet deposition, the TOMCATS model was developed. It treats the release as an assembly of pseudo-particles, using Monte Carlo techniques to simulate turbulent displacements. It allows for larger eddy effects in the horizontal turbulence spectrum. Wet deposition is calculated using inhomogeneous rainfields evolving in time and space. The models were assessed by applying them to hypothetical releases in complex terrain, using typical data applicable in accident conditions, and undertaking sensitivity studies. One finds considerable uncertainty in the results produced by these models. Although the models are useful for post-facto analysis, such limitations cast doubt on their advantages, relative to simpler techniques, during an actual emergency
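
    As a rough illustration of the Monte Carlo pseudo-particle technique described above, the sketch below advances particles with a mean wind plus a random turbulent displacement and bins them into a time-integrated concentration field. The wind speed, turbulence intensity, grid, and particle count are invented and no topographic wind-field adjustment is included.

```python
import numpy as np

rng = np.random.default_rng(42)

N, STEPS, DT = 5_000, 200, 10.0          # particles, time steps, step length [s]
U, V = 3.0, 0.5                          # mean wind components [m/s] (invented)
SIGMA = 1.0                              # turbulent velocity scale [m/s] (invented)

pos = np.zeros((N, 2))                   # all particles start at the release point
conc = np.zeros((50, 50))                # time-integrated concentration grid
edges = np.linspace(-1000, 9000, 51)     # grid edges [m], shared by x and y

for _ in range(STEPS):
    # Mean advection plus random-walk turbulent displacement.
    pos[:, 0] += U * DT + rng.normal(0, SIGMA * np.sqrt(DT), N)
    pos[:, 1] += V * DT + rng.normal(0, SIGMA * np.sqrt(DT), N)
    h, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[edges, edges])
    conc += h * DT / N                   # accumulate time-integrated exposure

print(conc.max(), conc.sum())
```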

  15. Research on Soft Reduction Amount Distribution to Eliminate Typical Inter-dendritic Crack in Continuous Casting Slab of X70 Pipeline Steel by Numerical Model

    Science.gov (United States)

    Liu, Ke; Wang, Chang; Liu, Guo-liang; Ding, Ning; Sun, Qi-song; Tian, Zhi-hong

    2017-04-01

    To investigate the formation of one kind of typical inter-dendritic crack around the triple point region in continuous casting (CC) slabs during the operation of soft reduction, fully coupled 3D thermo-mechanical finite element models were developed, and plant trials were carried out on a domestic continuous casting machine. Three possible types of soft reduction amount distribution (SRAD) in the soft reduction region were analyzed. The relationship between the typical inter-dendritic cracks and soft reduction conditions is presented and demonstrated in production practice. Considering the critical strain of internal crack formation, a critical tolerance for the soft reduction amount distribution and related casting parameters has been proposed for a better contribution of soft reduction to the internal quality of slabs. The typical inter-dendritic crack around the triple point region has been eliminated effectively through the application of the proposed suggestions for continuous casting of X70 pipeline steel in industrial practice.

  16. Incorporation of defects into the central atoms model of a metallic glass

    International Nuclear Information System (INIS)

    Lass, Eric A.; Zhu Aiwu; Shiflet, G.J.; Joseph Poon, S.

    2011-01-01

    The central atoms model (CAM) of a metallic glass is extended to incorporate thermodynamically stable defects, similar to vacancies in a crystalline solid, within the amorphous structure. A bond deficiency (BD), which is the proposed defect present in all metallic glasses, is introduced into the CAM equations. Like vacancies in a crystalline solid, BDs are thermodynamically stable entities because of the increase in entropy associated with their creation, and there is an equilibrium concentration present in the glassy phase. When applied to Cu-Zr and Ni-Zr binary metallic glasses, the concentration of thermally induced BDs surrounding Zr atoms reaches a relatively constant value at the glass transition temperature, regardless of composition within a given glass system. Using this 'critical' defect concentration, the predicted temperatures at which the glass transition is expected to occur are in good agreement with the experimentally determined glass transition temperatures for both alloy systems.

  17. A Microdosimetric-Kinetic Model of Cell Killing by Irradiation from Permanently Incorporated Radionuclides.

    Science.gov (United States)

    Hawkins, Roland B

    2018-01-01

    An expression for the surviving fraction of a replicating population of cells exposed to permanently incorporated radionuclide is derived from the microdosimetric-kinetic model. It includes dependency on total implant dose, linear energy transfer (LET), decay rate of the radionuclide, the repair rate of potentially lethal lesions in DNA and the volume doubling time of the target population. This is used to obtain an expression for the biologically effective dose (BED_α/β) based on the minimum survival achieved by the implant that is equivalent to, and can be compared and combined with, the BED_α/β calculated for a fractionated course of radiation treatment. Approximate relationships are presented that are useful in the calculation of BED_α/β for alpha- or beta-emitting radionuclides with half-life significantly greater than, or nearly equal to, the approximately 1-h repair half-life of radiation-induced potentially lethal lesions.
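
    For comparison, the classical (non-proliferating) biologically effective dose for a permanently implanted, exponentially decaying source is often written as below, where R_0 is the initial dose rate, λ the radionuclide decay constant, μ the repair rate, and repopulation is neglected. This is the standard permanent-implant result, not the microdosimetric-kinetic expression derived in the paper, which additionally accounts for LET and the volume doubling time.

```latex
\mathrm{BED}_{\alpha/\beta}
  \;=\; D \left[\, 1 \;+\; \frac{R_0}{(\mu + \lambda)\,(\alpha/\beta)} \,\right],
\qquad
D = \frac{R_0}{\lambda}.
```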

  18. Incorporation of Hydrogen Bond Angle Dependency into the Generalized Solvation Free Energy Density Model.

    Science.gov (United States)

    Ma, Songling; Hwang, Sungbo; Lee, Sehan; Acree, William E; No, Kyoung Tai

    2018-04-23

    To describe the physically realistic solvation free energy surface of a molecule in a solvent, a generalized version of the solvation free energy density (G-SFED) calculation method has been developed. In the G-SFED model, the contribution from the hydrogen bond (HB) between a solute and a solvent to the solvation free energy was calculated as the product of the acidity of the donor and the basicity of the acceptor of an HB pair. The acidity and basicity parameters of a solute were derived using the summation of acidities and basicities of the respective acidic and basic functional groups of the solute, and that of the solvent was experimentally determined. Although the contribution of HBs to the solvation free energy could be evenly distributed to grid points on the surface of a molecule, the G-SFED model was still inadequate to describe the angle dependency of the HB of a solute with a polarizable continuum solvent. To overcome this shortcoming of the G-SFED model, the contribution of HBs was formulated using the geometric parameters of the grid points described in the HB coordinate system of the solute. We propose an HB angle dependency incorporated into the G-SFED model, i.e., the G-SFED-HB model, where the angular-dependent acidity and basicity densities are defined and parametrized with experimental data. The G-SFED-HB model was then applied to calculate the solvation free energies of organic molecules in water, various alcohols and ethers, and the log P values of diverse organic molecules, including peptides and a protein. Both the G-SFED model and the G-SFED-HB model reproduced the experimental solvation free energies with similar accuracy, whereas the distributions of the SFED on the molecular surface calculated by the G-SFED and G-SFED-HB models were quite different, especially for molecules having HB donors or acceptors. Since the angle dependency of HBs was included in the G-SFED-HB model, the SFED distribution of the G-SFED-HB model is well described

  19. Research on the recycling industry development model for typical exterior plastic components of end-of-life passenger vehicle based on the SWOT method.

    Science.gov (United States)

    Zhang, Hongshen; Chen, Ming

    2013-11-01

    In-depth studies on the recycling of typical automotive exterior plastic parts are significant and beneficial for environmental protection, energy conservation, and the sustainable development of China. In the current study, several methods were used to analyze the recycling industry model for typical exterior parts of passenger vehicles in China. The strengths, weaknesses, opportunities, and challenges of the current recycling industry for typical exterior parts of passenger vehicles were analyzed comprehensively based on the SWOT method. The internal factor evaluation matrix and external factor evaluation matrix were used to evaluate the internal and external factors of the recycling industry. The industry was found to respond well to all of these factors and to face good development opportunities. A cross-link strategy analysis for the typical exterior parts of the passenger car industry of China was then conducted based on the SWOT analysis strategies and the established SWOT matrix. Finally, based on the aforementioned research, a recycling industry model led by automobile manufacturers was put forward. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. A Non-Isothermal Chemical Lattice Boltzmann Model Incorporating Thermal Reaction Kinetics and Enthalpy Changes

    Directory of Open Access Journals (Sweden)

    Stuart Bartlett

    2017-08-01

    Full Text Available The lattice Boltzmann method is an efficient computational fluid dynamics technique that can accurately model a broad range of complex systems. As well as single-phase fluids, it can simulate thermohydrodynamic systems and passive scalar advection. In recent years, it also gained attention as a means of simulating chemical phenomena, as interest in self-organization processes increased. This paper will present a widely-used and versatile lattice Boltzmann model that can simultaneously incorporate fluid dynamics, heat transfer, buoyancy-driven convection, passive scalar advection, chemical reactions and enthalpy changes. All of these effects interact in a physically accurate framework that is simple to code and readily parallelizable. As well as a complete description of the model equations, several example systems will be presented in order to demonstrate the accuracy and versatility of the method. New simulations, which analyzed the effect of a reversible reaction on the transport properties of a convecting fluid, will also be described in detail. This extra chemical degree of freedom was utilized by the system to augment its net heat flux. The numerical method outlined in this paper can be readily deployed for a vast range of complex flow problems, spanning a variety of scientific disciplines.
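
    As a rough illustration of the lattice Boltzmann core that such a model builds on, the sketch below implements only the single-phase D2Q9 streaming and BGK collision steps on a periodic domain; the thermal, passive-scalar, reactive, and enthalpy couplings of the paper's model are not included, and the grid size, relaxation time, and initial perturbation are invented.

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
NX, NY, TAU = 64, 64, 0.8                     # grid size and BGK relaxation time (invented)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((NX, NY))
ux = np.zeros((NX, NY)); uy = np.zeros((NX, NY))
ux += 0.05*np.sin(2*np.pi*np.arange(NY)/NY)   # small initial shear perturbation

f = equilibrium(rho, ux, uy)
for step in range(100):
    # Streaming: shift each population along its lattice velocity (periodic domain).
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # Macroscopic moments
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward the local equilibrium distribution.
    f += -(f - equilibrium(rho, ux, uy)) / TAU

print(rho.mean(), ux.max())
```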

  1. Incorporation of a Wind Generator Model into a Dynamic Power Flow Analysis

    Directory of Open Access Journals (Sweden)

    Angeles-Camacho C.

    2011-07-01

    Full Text Available Wind energy is nowadays one of the most cost-effective and practical options for electricity generation from renewable resources. However, increased penetration of wind generation causes power networks to be more dependent on, and vulnerable to, the varying wind speed. Modeling is a tool which can provide valuable information about the interaction between wind farms and the power network to which they are connected. This paper develops a realistic characterization of a wind generator. The wind generator model is incorporated into an algorithm to investigate its contribution to the stability of the power network in the time domain. The resulting tool is termed dynamic power flow. The wind generator model takes into account the wind speed and the reactive power consumption by induction generators. Dynamic power flow analysis is carried out using real wind data at 10-minute time intervals collected for one meteorological station. The generation injected at one point into the network provides active power locally and is found to reduce global power losses. However, the power supplied is time-varying and causes fluctuations in voltage magnitude and power flows in transmission lines.

  2. HLA-B*39:06 Efficiently Mediates Type 1 Diabetes in a Mouse Model Incorporating Reduced Thymic Insulin Expression.

    Science.gov (United States)

    Schloss, Jennifer; Ali, Riyasat; Racine, Jeremy J; Chapman, Harold D; Serreze, David V; DiLorenzo, Teresa P

    2018-04-09

    Type 1 diabetes (T1D) is characterized by T cell-mediated destruction of the insulin-producing β cells of the pancreatic islets. Among the loci associated with T1D risk, those most predisposing are found in the MHC region. HLA-B*39:06 is the most predisposing class I MHC allele and is associated with an early age of onset. To establish an NOD mouse model for the study of HLA-B*39:06, we expressed it in the absence of murine class I MHC. HLA-B*39:06 was able to mediate the development of CD8 T cells, support lymphocytic infiltration of the islets, and confer T1D susceptibility. Because reduced thymic insulin expression is associated with impaired immunological tolerance to insulin and increased T1D risk in patients, we incorporated this in our model as well, finding that HLA-B*39:06-transgenic NOD mice with reduced thymic insulin expression have an earlier age of disease onset and a higher overall prevalence as compared with littermates with typical thymic insulin expression. This was despite virtually indistinguishable blood insulin levels, T cell subset percentages, and TCR Vβ family usage, confirming that reduced thymic insulin expression does not impact T cell development on a global scale. Rather, it will facilitate the thymic escape of insulin-reactive HLA-B*39:06-restricted T cells, which participate in β cell destruction. We also found that in mice expressing either HLA-B*39:06 or HLA-A*02:01 in the absence of murine class I MHC, HLA transgene identity alters TCR Vβ usage by CD8 T cells, demonstrating that some TCR Vβ families have a preference for particular class I MHC alleles. Copyright © 2018 by The American Association of Immunologists, Inc.

  3. Incorporating an extended dendritic growth model into the CAFE model for rapidly solidified non-dilute alloys

    International Nuclear Information System (INIS)

    Ma, Jie; Wang, Bo; Zhao, Shunli; Wu, Guangxin; Zhang, Jieyu; Yang, Zhiliang

    2016-01-01

    We have extended the dendritic growth model first proposed by Boettinger, Coriell and Trivedi (here termed EBCT) for microstructure simulations of rapidly solidified non-dilute alloys. The temperature-dependent distribution coefficient, obtained from calculations of phase equilibria, and the continuous growth model (CGM) were adopted in the present EBCT model to describe the solute trapping behaviors. The temperature dependence of the physical properties, which were not used in previous dendritic growth models, was also considered in the present EBCT model. These extensions allow the present EBCT model to be used for microstructure simulations of non-dilute alloys. The comparison of the present EBCT model with the BCT model proves that the considerations of the distribution coefficient and physical properties are necessary for microstructure simulations, especially for small particles with high undercoolings. Finally, the EBCT model was incorporated into the cellular automaton-finite element (CAFE) model to simulate microstructures of gas-atomized ASP30 high speed steel particles that were then compared with experimental results. Both the simulated and experimental results reveal that a columnar dendritic microstructure preferentially forms in small particles and an equiaxed microstructure forms otherwise. The applications of the present EBCT model provide a convenient way to predict the microstructure of non-dilute alloys. - Highlights: • A dendritic growth model was developed considering non-equilibrium distribution coefficient. • The physical properties with temperature dependence were considered in the extended model. • The extended model can be applied to non-dilute alloys and the extensions are necessary in small particles. • Microstructure of ASP30 steel was investigated using the present model and verified by experiment.

  4. Incorporating an extended dendritic growth model into the CAFE model for rapidly solidified non-dilute alloys

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Jie; Wang, Bo [State Key Laboratory of Advanced Special Steel, Shanghai University, Shanghai 200072 (China); Shanghai Engineering Technology Research Center of Special Casting, Shanghai 201605 (China); Zhao, Shunli [Research Institute, Baoshan Iron & Steel Co., Ltd, Shanghai 201900 (China); Wu, Guangxin [State Key Laboratory of Advanced Special Steel, Shanghai University, Shanghai 200072 (China); Shanghai Engineering Technology Research Center of Special Casting, Shanghai 201605 (China); Zhang, Jieyu, E-mail: zjy6162@staff.shu.edu.cn [State Key Laboratory of Advanced Special Steel, Shanghai University, Shanghai 200072 (China); Shanghai Engineering Technology Research Center of Special Casting, Shanghai 201605 (China); Yang, Zhiliang [State Key Laboratory of Advanced Special Steel, Shanghai University, Shanghai 200072 (China); Shanghai Engineering Technology Research Center of Special Casting, Shanghai 201605 (China)

    2016-05-25

    We have extended the dendritic growth model first proposed by Boettinger, Coriell and Trivedi (here termed EBCT) for microstructure simulations of rapidly solidified non-dilute alloys. The temperature-dependent distribution coefficient, obtained from calculations of phase equilibria, and the continuous growth model (CGM) were adopted in the present EBCT model to describe the solute trapping behaviors. The temperature dependence of the physical properties, which were not used in previous dendritic growth models, was also considered in the present EBCT model. These extensions allow the present EBCT model to be used for microstructure simulations of non-dilute alloys. The comparison of the present EBCT model with the BCT model proves that the considerations of the distribution coefficient and physical properties are necessary for microstructure simulations, especially for small particles with high undercoolings. Finally, the EBCT model was incorporated into the cellular automaton-finite element (CAFE) model to simulate microstructures of gas-atomized ASP30 high speed steel particles that were then compared with experimental results. Both the simulated and experimental results reveal that a columnar dendritic microstructure preferentially forms in small particles and an equiaxed microstructure forms otherwise. The applications of the present EBCT model provide a convenient way to predict the microstructure of non-dilute alloys. - Highlights: • A dendritic growth model was developed considering non-equilibrium distribution coefficient. • The physical properties with temperature dependence were considered in the extended model. • The extended model can be applied to non-dilute alloys and the extensions are necessary in small particles. • Microstructure of ASP30 steel was investigated using the present model and verified by experiment.

  5. Experimental simulation and numerical modeling of vapor shield formation and divertor material erosion for ITER typical plasma disruptions

    International Nuclear Information System (INIS)

    Wuerz, H.; Arkhipov, N.I.; Bakhtin, V.P.; Konkashbaev, I.; Landman, I.; Safronov, V.M.; Toporkov, D.A.; Zhitlukhin, A.M.

    1995-01-01

    The high divertor heat load during a tokamak plasma disruption results in sudden evaporation of a thin layer of divertor plate material, which acts as vapor shield and protects the target from further excessive evaporation. Formation and effectiveness of the vapor shield are theoretically modeled and are experimentally analyzed at the 2MK-200 facility under conditions simulating the thermal quench phase of ITER tokamak plasma disruptions. ((orig.))

  6. Numerical modeling and experimental simulation of vapor shield formation and divertor material erosion for ITER typical plasma disruptions

    International Nuclear Information System (INIS)

    Wuerz, H.; Arkhipov, N.I.; Bakhin, V.P.; Goel, B.; Hoebel, W.; Konkashbaev, I.; Landman, I.; Piazza, G.; Safronov, V.M.; Sherbakov, A.R.; Toporkov, D.A.; Zhitlukhin, A.M.

    1994-01-01

    The high divertor heat load during a tokamak plasma disruption results in sudden evaporation of a thin layer of divertor plate material, which acts as vapor shield and protects the target from further excessive evaporation. Formation and effectiveness of the vapor shield are theoretically modeled and experimentally investigated at the 2MK-200 facility under conditions simulating the thermal quench phase of ITER tokamak plasma disruptions. In the optical wavelength range, C II, C III, C IV emission lines for graphite, Cu I, Cu II lines for copper and continuum radiation for tungsten samples are observed in the target plasma. The plasma expands along the magnetic field lines with velocities of (4±1)×10⁶ cm/s for graphite and 10⁵ cm/s for copper. Modeling was done with a radiation hydrodynamics code in one-dimensional planar geometry. The multifrequency radiation transport is treated in flux limited diffusion and in forward reverse transport approximation. In these first modeling studies, the overall shielding efficiency for carbon and tungsten, defined as the ratio of the incident energy to the vaporization energy, exceeds a factor of 30 for power densities of 10 MW/cm². The vapor shield is established within 2 μs, the power fraction to the target is below 3% after 10 μs and reaches a value of around 1.5% in the stationary state after about 20 μs. ((orig.))

  7. Incorporating H2 Dynamics and Inhibition into a Microbially Based Methanogenesis Model for Restored Wetland Sediments

    Science.gov (United States)

    Pal, David; Jaffe, Peter

    2015-04-01

    Estimates of global CH4 emissions from wetlands indicate that wetlands are the largest natural source of CH4 to the atmosphere. In this paper, we propose that there is a missing component to these models that should be addressed. CH4 is produced in wetland sediments from the microbial degradation of organic carbon through multiple fermentation steps and methanogenesis pathways. There are multiple sources of carbon for methanogenesis; in vegetated wetland sediments, microbial communities consume root exudates as a major source of organic carbon. In many methane models, propionate is used as a model carbon compound: it is fermented into acetate and H2, the acetate is transformed to methane and CO2, and the H2 and CO2 are used to form an additional CH4 molecule. The hydrogenotrophic pathway involves the equilibrium of two dissolved gases, CH4 and H2. In an effort to limit CH4 emissions from wetlands, there has been growing interest in finding ways to limit plant transport of soil gases through root systems. Changing the planted species, or genetically modifying plants, may control this transport of soil gases. While this may decrease the direct emissions of methane, there is little understanding about how H2 dynamics may feed back into overall methane production. The results of an incubation study were combined with a new model of propionate degradation for methanogenesis that also examines other natural parameters (i.e. gas transport through plants). This presentation examines how we would expect this model to behave in a natural field setting with changing sulfate and carbon loading schemes. These changes can be controlled through new plant species and other management practices. Next, we compare the behavior of two variations of this model, with or without the incorporation of H2 interactions, with changing sulfate, carbon loading and root volatilization. Results show that while the models behave similarly there may be a discrepancy of nearly

  8. Estimating disperser abundance using open population models that incorporate data from continuous detection PIT arrays

    Science.gov (United States)

    Dzul, Maria C.; Yackulic, Charles B.; Korman, Josh

    2017-01-01

    Autonomous passive integrated transponder (PIT) tag antenna systems continuously detect individually marked organisms at one or more fixed points over long time periods. Estimating abundance using data from autonomous antennae can be challenging, because these systems do not detect unmarked individuals. Here we pair PIT antennae data from a tributary with mark-recapture sampling data in a mainstem river to estimate the number of fish moving from the mainstem to the tributary. We then use our model to estimate abundance of non-native rainbow trout Oncorhynchus mykiss that move from the Colorado River to the Little Colorado River (LCR), the latter of which is important spawning and rearing habitat for federally-endangered humpback chub Gila cypha. We estimate 226 rainbow trout (95% CI: 127-370) entered the LCR from October 2013-April 2014. We discuss the challenges of incorporating detections from autonomous PIT antenna systems into mark-recapture population models, particularly in regards to using information about spatial location to estimate movement and detection probabilities.

  9. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Science.gov (United States)

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity in solving equations involving trigonometric functions. All inliers found are used to refine the winning solution through minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing it against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
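
    A rough sketch of the kind of motion hypothesis that such a vehicle-model constraint provides: given a yaw rate, side-slip angle, and speed over one frame interval, the generic single-track (bicycle) kinematics below predict the planar pose change that a RANSAC loop would then score against tracked features. The numbers are invented and this is not the authors' exact parameterization or linearization.

```python
import numpy as np

def bicycle_motion(v, yaw_rate, slip_angle, dt):
    """Predict planar pose change (dx, dy, dtheta) of a wheeled vehicle.

    v          : forward speed [m/s]
    yaw_rate   : rate of heading change [rad/s]
    slip_angle : side-slip angle between velocity and heading [rad]
    """
    dtheta = yaw_rate * dt
    course = dtheta / 2.0 + slip_angle           # mean travel direction over the interval
    dx = v * dt * np.cos(course)
    dy = v * dt * np.sin(course)
    return dx, dy, dtheta

def pose_to_transform(dx, dy, dtheta):
    """2-D rigid transform that a RANSAC hypothesis would be scored with."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0,  0, 1.0]])

T = pose_to_transform(*bicycle_motion(v=8.0, yaw_rate=0.2, slip_angle=0.02, dt=0.1))
print(T)
```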

  10. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Science.gov (United States)

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutant setting.
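
    As a rough illustration of the simplest of the listed corrections, regression calibration, the sketch below regresses "true" exposure on the error-prone modelled exposure in a validation subset and then substitutes the calibrated exposure into the health model. The data are simulated, the outcome model is deliberately a simple linear one, and the set-up (validation subset size, error structure) is invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000

true_x = rng.normal(10, 2, n)                              # "true" personal exposure
modelled_x = 1.5 + 0.8 * true_x + rng.normal(0, 1.5, n)    # error-prone model-derived exposure
y = 0.5 + 0.3 * true_x + rng.normal(0, 1, n)               # health outcome (toy linear model)

# Naive analysis: regress the outcome directly on the modelled exposure.
naive_slope = np.polyfit(modelled_x, y, 1)[0]

# Regression calibration: learn E[true_x | modelled_x] in a validation subset
# (here the first 200 subjects, where true exposure is assumed known).
cal = np.polyfit(modelled_x[:200], true_x[:200], 1)
x_calibrated = np.polyval(cal, modelled_x)
corrected_slope = np.polyfit(x_calibrated, y, 1)[0]

print(f"naive={naive_slope:.3f}, corrected={corrected_slope:.3f}, truth=0.300")
```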

  11. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Yanhua Jiang

    2014-09-01

    Full Text Available This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity in solving equations involving trigonometric functions. All inliers found are used to refine the winning solution through minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing it against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments.

  12. Exciton delocalization incorporated drift-diffusion model for bulk-heterojunction organic solar cells

    Science.gov (United States)

    Wang, Zi Shuai; Sha, Wei E. I.; Choy, Wallace C. H.

    2016-12-01

    Modeling the charge-generation process is highly important to understand device physics and optimize power conversion efficiency of bulk-heterojunction organic solar cells (OSCs). Free carriers are generated by both ultrafast exciton delocalization and slow exciton diffusion and dissociation at the heterojunction interface. In this work, we developed a systematic numerical simulation to describe the charge-generation process by a modified drift-diffusion model. The transport, recombination, and collection of free carriers are incorporated to fully capture the device response. The theoretical results match well with the state-of-the-art high-performance organic solar cells. It is demonstrated that the increase of exciton delocalization ratio reduces the energy loss in the exciton diffusion-dissociation process, and thus, significantly improves the device efficiency, especially for the short-circuit current. By changing the exciton delocalization ratio, OSC performances are comprehensively investigated under the conditions of short-circuit and open-circuit. Particularly, bulk recombination dependent fill factor saturation is unveiled and understood. As a fundamental electrical analysis of the delocalization mechanism, our work is important to understand and optimize the high-performance OSCs.
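
    For reference, the core of a drift-diffusion description of an organic solar cell is the coupled Poisson, continuity, and current equations below (standard textbook form). The paper's modification enters through the generation term G, which is split between an ultrafast delocalization channel and slower exciton diffusion/dissociation; that split is paraphrased from the abstract and is not reproduced here.

```latex
\nabla\!\cdot\!\left(\varepsilon \nabla \psi\right) = -q\,(p - n),
\qquad
\frac{\partial n}{\partial t} = \tfrac{1}{q}\,\nabla\!\cdot\! J_n + G - R,
\qquad
\frac{\partial p}{\partial t} = -\tfrac{1}{q}\,\nabla\!\cdot\! J_p + G - R,
```
```latex
J_n = -q\,\mu_n\, n\,\nabla\psi + q\,D_n \nabla n,
\qquad
J_p = -q\,\mu_p\, p\,\nabla\psi - q\,D_p \nabla p.
```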

  13. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    Science.gov (United States)

    Khan, Tanvir R.; Perlinger, Judith A.

    2017-10-01

    the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.

  14. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    Directory of Open Access Journals (Sweden)

    T. R. Khan

    2017-10-01

    µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.

  15. Spinal motor control system incorporates an internal model of limb dynamics.

    Science.gov (United States)

    Shimansky, Y P

    2000-10-01

    The existence and utilization of an internal representation of the controlled object is one of the most important features of the functioning of neural motor control systems. This study demonstrates that this property already exists at the level of the spinal motor control system (SMCS), which is capable of generating motor patterns for reflex rhythmic movements, such as locomotion and scratching, without the aid of peripheral afferent feedback, but substantially modifies the generated activity in response to peripheral afferent stimuli. The SMCS is presented as an optimal control system whose optimality requires that it incorporate an internal model (IM) of the controlled object's dynamics. A novel functional mechanism for the integration of peripheral sensory signals with the corresponding predictive output from the IM, the summation of information precision (SIP), is proposed. In contrast to other models in which the correction of the internal representation of the controlled object's state is based on the calculation of a mismatch between the internal and external information sources, the SIP mechanism merges the information from these sources in order to optimize the precision of the controlled object's state estimate. It is demonstrated, based on scratching in decerebrate cats as an example of the spinal control of goal-directed movements, that the results of computer modeling agree with the experimental observations related to the SMCS's reactions to phasic and tonic peripheral afferent stimuli. It is also shown that the functional requirements imposed by the mathematical model of the SMCS comply with the current knowledge about the related properties of spinal neuronal circuitry. The crucial role of the spinal presynaptic inhibition mechanism in the neuronal implementation of SIP is elucidated. Important differences between the IM and a state predictor employed for compensating for a neural reflex time delay are discussed.
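
    The SIP idea of merging an internal-model prediction with a sensory signal by adding their precisions can be illustrated with the standard precision-weighted fusion of two independent estimates; this is a generic formulation for intuition, not the paper's notation or derivation.

```latex
\hat{x} \;=\; \frac{\pi_{\mathrm{IM}}\, x_{\mathrm{IM}} + \pi_{\mathrm{sens}}\, x_{\mathrm{sens}}}
                   {\pi_{\mathrm{IM}} + \pi_{\mathrm{sens}}},
\qquad
\pi_{\hat{x}} \;=\; \pi_{\mathrm{IM}} + \pi_{\mathrm{sens}},
\qquad
\pi \equiv 1/\sigma^{2}.
```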

  16. Using item response theory to investigate the structure of anticipated affect: do self-reports about future affective reactions conform to typical or maximal models?

    Science.gov (United States)

    Zampetakis, Leonidas A; Lerakis, Manolis; Kafetsios, Konstantinos; Moustakis, Vassilis

    2015-01-01

    In the present research, we used item response theory (IRT) to examine whether affective predictions (anticipated affect) conform to a typical (i.e., what people usually do) or a maximal behavior process (i.e., what people can do). The former correspond to non-monotonic ideal point IRT models, whereas the latter correspond to monotonic dominance IRT models. A convenience, cross-sectional student sample (N = 1624) was used. Participants were asked to report on anticipated positive and negative affect around a hypothetical event (emotions surrounding the start of a new business). We carried out an analysis comparing the graded response model (GRM), a dominance IRT model, against the generalized graded unfolding model, an unfolding IRT model. We found that the GRM provided a better fit to the data. Findings suggest that self-report responses to anticipated affect conform to a dominance response process (i.e., maximal behavior). The paper also discusses implications for a growing literature on anticipated affect.
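
    For reference, the graded response model mentioned above specifies cumulative category probabilities with a monotonic (dominance) item response function, typically written as below, where a_i is the item discrimination and the b_ik are ordered category thresholds; this is the standard GRM form, not a result from the study.

```latex
P\!\left(X_{ij} \ge k \mid \theta_j\right)
  = \frac{1}{1 + \exp\!\left[-a_i\,(\theta_j - b_{ik})\right]},
\qquad
P\!\left(X_{ij} = k \mid \theta_j\right)
  = P\!\left(X_{ij} \ge k \mid \theta_j\right) - P\!\left(X_{ij} \ge k+1 \mid \theta_j\right).
```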

  17. Incorporating Geochemical And Microbial Kinetics In Reactive Transport Models For Generation Of Acid Rock Drainage

    Science.gov (United States)

    Andre, B. J.; Rajaram, H.; Silverstein, J.

    2010-12-01

    diffusion model at the scale of a single rock is developed incorporating the proposed kinetic rate expressions. Simulations of initiation, washout and AMD flows are discussed to gain a better understanding of the role of porosity, effective diffusivity and reactive surface area in generating AMD. Simulations indicate that flow boundary conditions control generation of acid rock drainage as porosity increases.

  18. The Optimal Price Ratio of Typical Energy Sources in Beijing Based on the Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Yongxiu He

    2014-04-01

    Full Text Available In Beijing, China, the rational consumption of energy is affected by the insufficient linkage mechanism of the energy pricing system, unreasonable price ratios and other issues. Combining the characteristics of Beijing's energy market, this paper puts forward maximization of the society-economy equilibrium indicator R, taking the mitigation cost into consideration, to determine a reasonable price ratio range. Based on a computable general equilibrium (CGE) model, and dividing four kinds of energy sources into three groups, the impact of price fluctuations of electricity and natural gas on the Gross Domestic Product (GDP), Consumer Price Index (CPI), energy consumption, and CO2 and SO2 emissions can be simulated for various scenarios. On this basis, the integrated effects of electricity and natural gas price shocks on the Beijing economy and environment can be calculated. The results show that, relative to coal prices, the electricity and natural gas prices in Beijing are currently below reasonable levels; the solution to these unreasonable energy price ratios should begin by improving the energy pricing mechanism, for example through the establishment of a sound dynamic adjustment mechanism between regulated prices and market prices. This provides a new idea for exploring the rationality of energy price ratios in imperfectly competitive energy markets.

  19. Incorporating Prognostic Marine Nitrogen Fixers and Related Bio-Physical Feedbacks in an Earth System Model

    Science.gov (United States)

    Paulsen, H.; Ilyina, T.; Six, K. D.

    2016-02-01

    Marine nitrogen fixers play a fundamental role in the oceanic nitrogen and carbon cycles by providing a major source of 'new' nitrogen to the euphotic zone that supports biological carbon export and sequestration. Furthermore, nitrogen fixers may regionally have a direct impact on ocean physics and hence the climate system as they form extensive surface mats which can increase light absorption and surface albedo and reduce the momentum input by wind. Resulting alterations in temperature and stratification may feed back on nitrogen fixers' growth itself. We incorporate nitrogen fixers as a prognostic 3D tracer in the ocean biogeochemical component (HAMOCC) of the Max Planck Institute Earth system model and assess for the first time the impact of related bio-physical feedbacks on biogeochemistry and the climate system. The model successfully reproduces recent estimates of global nitrogen fixation rates, as well as the observed distribution of nitrogen fixers, covering large parts of the tropical and subtropical oceans. First results indicate that including bio-physical feedbacks has considerable effects on the upper ocean physics in this region. Light absorption by nitrogen fixers leads locally to surface heating, subsurface cooling, and mixed layer depth shoaling in the subtropical gyres. As a result, equatorial upwelling is increased, leading to surface cooling at the equator. This signal is damped by the effect of the reduced wind stress due to the presence of cyanobacteria mats, which causes a reduction in the wind-driven circulation, and hence a reduction in equatorial upwelling. The increase in surface albedo due to nitrogen fixers has only inconsiderable effects. The response of nitrogen fixers' growth to the alterations in temperature and stratification varies regionally. Simulations with the fully coupled Earth system model are in progress to assess the implications of the biologically induced changes in upper ocean physics for the global climate system.

  20. Using a cognitive architecture in educational and recreational games : How to incorporate a model in your App

    NARCIS (Netherlands)

    Taatgen, Niels A.; de Weerd, Harmen; Reitter, David; Ritter, Frank

    2016-01-01

    We present a Swift re-implementation of the ACT-R cognitive architecture, which can be used to quickly build iOS Apps that incorporate an ACT-R model as a core feature. We discuss how this implementation can be used in an example model, and explore the breadth of possibilities by presenting six Apps

  1. Adolescent Decision-Making Processes regarding University Entry: A Model Incorporating Cultural Orientation, Motivation and Occupational Variables

    Science.gov (United States)

    Jung, Jae Yup

    2013-01-01

    This study tested a newly developed model of the cognitive decision-making processes of senior high school students related to university entry. The model incorporated variables derived from motivation theory (i.e. expectancy-value theory and the theory of reasoned action), literature on cultural orientation and occupational considerations. A…

  2. Incorporating genetic variation into a model of budburst phenology of coast Douglas-fir (Pseudotsuga menziesii var

    Science.gov (United States)

    Peter J. Gould; Constance A. Harrington; Bradley J. St Clair

    2011-01-01

    Models to predict budburst and other phenological events in plants are needed to forecast how climate change may impact ecosystems and for the development of mitigation strategies. Differences among genotypes are important to predicting phenological events in species that show strong clinal variation in adaptive traits. We present a model that incorporates the effects...

  3. A diagnostic model incorporating P50 sensory gating and neuropsychological tests for schizophrenia.

    Directory of Open Access Journals (Sweden)

    Jia-Chi Shan

    Full Text Available OBJECTIVES: The use of endophenotypes in schizophrenia research is a contemporary approach to studying this heterogeneous mental illness, and several candidate neurophysiological markers (e.g. P50 sensory gating) and neuropsychological tests (e.g. the Continuous Performance Test (CPT) and Wisconsin Card Sorting Test (WCST)) have been proposed. However, the clinical utility of a single marker appears to be limited. In the present study, we aimed to construct a diagnostic model incorporating P50 sensory gating with other neuropsychological tests in order to improve the clinical utility. METHODS: We recruited clinically stable outpatients meeting DSM-IV criteria for schizophrenia and age- and gender-matched healthy controls. Participants underwent P50 sensory gating experimental sessions and batteries of neuropsychological tests, including the CPT, WCST and Wechsler Adult Intelligence Scale Third Edition (WAIS-III). RESULTS: A total of 106 schizophrenia patients and 74 healthy controls were enrolled. Compared with healthy controls, the patient group had a significantly larger S2 amplitude, and thus a poorer P50 gating ratio (gating ratio = S2/S1). In addition, schizophrenia patients had poorer performance on the neuropsychological tests. We then developed a diagnostic model using multivariable logistic regression analysis to differentiate patients from healthy controls. The final model included the following covariates: abnormal P50 gating (defined as P50 gating ratio >0.4), three subscales derived from the WAIS-III (Arithmetic, Block Design, and Performance IQ), the sensitivity index from the CPT, and smoking status. This model had adequate accuracy (concordant percentage = 90.4%; c-statistic = 0.904; Hosmer-Lemeshow goodness-of-fit test, p = 0.64 > 0.05). CONCLUSION: To the best of our knowledge, this is the largest study to date using P50 sensory gating in subjects of Chinese ethnicity and the first to use P50 sensory gating along with other neuropsychological tests
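
    As a rough illustration of the kind of multivariable logistic regression diagnostic model described above, the sketch below fits a classifier on synthetic data with stand-in covariates (abnormal P50 gating, WAIS-III subscales, CPT sensitivity index, smoking status) and reports the c-statistic. The variable names, scales and sample size are assumptions for illustration, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 180  # illustrative sample size, not the study's
X = np.column_stack([
    (rng.uniform(0.0, 1.2, n) > 0.4).astype(float),  # abnormal P50 gating (ratio > 0.4)
    rng.normal(10, 3, n),      # WAIS-III Arithmetic score (hypothetical scale)
    rng.normal(10, 3, n),      # WAIS-III Block Design score (hypothetical scale)
    rng.normal(0, 1, n),       # CPT sensitivity index (hypothetical scale)
    rng.integers(0, 2, n),     # smoking status
])
y = rng.integers(0, 2, n)      # 1 = patient, 0 = control (placeholder labels)

model = LogisticRegression(max_iter=1000).fit(X, y)
pred = model.predict_proba(X)[:, 1]
print("c-statistic (AUC):", round(roc_auc_score(y, pred), 3))
```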

  4. Incorporation of Fine-Grained Sediment Erodibility Measurements into Sediment Transport Modeling, Capitol Lake, Washington

    Science.gov (United States)

    Stevens, Andrew W.; Gelfenbaum, Guy; Elias, Edwin; Jones, Craig

    2008-01-01

    lab with Sedflume, an apparatus for measuring sediment erosion parameters. In this report, we present results of the characterization of fine-grained sediment erodibility within Capitol Lake. The erodibility data were incorporated into the previously developed hydrodynamic and sediment transport model. Model simulations using the measured erodibility parameters were conducted to provide more robust estimates of the overall magnitudes and spatial patterns of sediment transport resulting from restoration of the Deschutes Estuary.

  5. Incorporating human-water dynamics in a hyper-resolution land surface model

    Science.gov (United States)

    Vergopolan, N.; Chaney, N.; Wanders, N.; Sheffield, J.; Wood, E. F.

    2017-12-01

    The increasing demand for water, energy, and food is leading to unsustainable groundwater and surface water exploitation. As a result, the human interactions with the environment, through alteration of land and water resources dynamics, need to be reflected in hydrologic and land surface models (LSMs). Advancements in representing human-water dynamics still leave challenges related to the lack of water use data, water allocation algorithms, and modeling scales. This leads to an over-simplistic representation of human water use in large-scale models; this in turn leads to an inability to capture the signatures of extreme events and to provide reliable information at stakeholder-level spatial scales. The emergence of hyper-resolution models allows one to address these challenges by simulating the hydrological processes, and their interactions with human impacts, at field scales. We integrated human-water dynamics into HydroBlocks - a hyper-resolution, field-scale resolving LSM. HydroBlocks explicitly resolves the field-scale spatial heterogeneity of land surface processes through interacting hydrologic response units (HRUs); and its HRU-based model parallelization allows computationally efficient long-term simulations as well as ensemble predictions. The implemented human-water dynamics include groundwater and surface water abstraction to meet agricultural, domestic and industrial water demands. Furthermore, a supply-demand water allocation scheme based on relative costs helps to determine sectoral water use requirements and tradeoffs. A set of HydroBlocks simulations over the Midwest United States (daily, at 30-m spatial resolution for 30 years) is used to quantify the irrigation impacts on water availability. The model captures large reductions in total soil moisture and water table levels, as well as spatiotemporal changes in evapotranspiration and runoff peaks, with their intensity related to the adopted water management strategy. By incorporating human-water dynamics in

  6. Incorporation of GRACE Data into a Bayesian Model for Groundwater Drought Monitoring

    Science.gov (United States)

    Slinski, K.; Hogue, T. S.; McCray, J. E.; Porter, A.

    2015-12-01

    Groundwater drought, defined as the sustained occurrence of below average availability of groundwater, is marked by below average water levels in aquifers and reduced flows to groundwater-fed rivers and wetlands. The impact of groundwater drought on ecosystems, agriculture, municipal water supply, and the energy sector is an increasingly important global issue. However, current drought monitors heavily rely on precipitation and vegetative stress indices to characterize the timing, duration, and severity of drought events. The paucity of in situ observations of aquifer levels is a substantial obstacle to the development of systems to monitor groundwater drought in drought-prone areas, particularly in developing countries. Observations from the NASA/German Space Agency's Gravity Recovery and Climate Experiment (GRACE) have been used to estimate changes in groundwater storage over areas with sparse point measurements. This study incorporates GRACE total water storage observations into a Bayesian framework to assess the performance of a probabilistic model for monitoring groundwater drought based on remote sensing data. Overall, it is hoped that these methods will improve global drought preparedness and risk reduction by providing information on groundwater drought necessary to manage its impacts on ecosystems, as well as on the agricultural, municipal, and energy sectors.

  7. Incorporating organizational factors into probabilistic safety assessment of nuclear power plants through canonical probabilistic models

    Energy Technology Data Exchange (ETDEWEB)

    Galan, S.F. [Dpto. de Inteligencia Artificial, E.T.S.I. Informatica (UNED), Juan del Rosal, 16, 28040 Madrid (Spain)]. E-mail: seve@dia.uned.es; Mosleh, A. [2100A Marie Mount Hall, Materials and Nuclear Engineering Department, University of Maryland, College Park, MD 20742 (United States)]. E-mail: mosleh@umd.edu; Izquierdo, J.M. [Area de Modelado y Simulacion, Consejo de Seguridad Nuclear, Justo Dorado, 11, 28040 Madrid (Spain)]. E-mail: jmir@csn.es

    2007-08-15

    The ω-factor approach is a method that explicitly incorporates organizational factors into Probabilistic safety assessment of nuclear power plants. Bayesian networks (BNs) are the underlying formalism used in this approach. They have a structural part formed by a graph whose nodes represent organizational variables, and a parametric part that consists of conditional probabilities, each of them quantifying organizational influences between one variable and its parents in the graph. The aim of this paper is twofold. First, we discuss some important limitations of current procedures in the ω-factor approach for either assessing conditional probabilities from experts or estimating them from data. We illustrate the discussion with an example that uses data from Licensee Events Reports of nuclear power plants for the estimation task. Second, we introduce significant improvements in the way BNs for the ω-factor approach can be constructed, so that parameter acquisition becomes easier and more intuitive. The improvements are based on the use of noisy-OR gates as model of multicausal interaction between each BN node and its parents.
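
    For readers unfamiliar with the noisy-OR gate mentioned above, a minimal sketch of how such a gate quantifies multicausal interaction is given below; the causal-strength and leak values are arbitrary placeholders, not parameters from the ω-factor studies.

```python
from itertools import product

def noisy_or(p_causes, active, leak=0.0):
    """P(effect = 1) under a noisy-OR gate: each active parent i independently
    fails to produce the effect with probability 1 - p_causes[i]."""
    q = 1.0 - leak
    for p, a in zip(p_causes, active):
        if a:
            q *= (1.0 - p)
    return 1.0 - q

# Example: three organizational factors with illustrative causal strengths.
p = [0.7, 0.4, 0.2]
for combo in product([0, 1], repeat=3):
    print(combo, round(noisy_or(p, combo, leak=0.05), 3))
```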

  8. Incorporating organizational factors into probabilistic safety assessment of nuclear power plants through canonical probabilistic models

    International Nuclear Information System (INIS)

    Galan, S.F.; Mosleh, A.; Izquierdo, J.M.

    2007-01-01

    The ω-factor approach is a method that explicitly incorporates organizational factors into Probabilistic safety assessment of nuclear power plants. Bayesian networks (BNs) are the underlying formalism used in this approach. They have a structural part formed by a graph whose nodes represent organizational variables, and a parametric part that consists of conditional probabilities, each of them quantifying organizational influences between one variable and its parents in the graph. The aim of this paper is twofold. First, we discuss some important limitations of current procedures in the ω-factor approach for either assessing conditional probabilities from experts or estimating them from data. We illustrate the discussion with an example that uses data from Licensee Events Reports of nuclear power plants for the estimation task. Second, we introduce significant improvements in the way BNs for the ω-factor approach can be constructed, so that parameter acquisition becomes easier and more intuitive. The improvements are based on the use of noisy-OR gates as model of multicausal interaction between each BN node and its parents

  9. An evaluation of a paediatric radiation oncology teaching programme incorporating a SCORPIO teaching model.

    Science.gov (United States)

    Ahern, Verity; Klein, Linda; Bentvelzen, Adam; Garlan, Karen; Jeffery, Heather

    2011-04-01

    Many radiation oncology registrars have no exposure to paediatrics during their training. To address this, the Paediatric Special Interest Group of the Royal Australian and New Zealand College of Radiologists has convened a biennial teaching course since 1997. The 2009 course incorporated the use of a Structured, Clinical, Objective-Referenced, Problem-orientated, Integrated and Organized (SCORPIO) teaching model for small group tutorials. This study evaluates whether the paediatric radiation oncology curriculum can be adapted to the SCORPIO teaching model and evaluates the revised course from the registrars' perspective. Teaching and learning resources included a pre-course reading list, a lecture series programme and a SCORPIO workshop. Three evaluation instruments were developed: an overall Course Evaluation Survey for all participants, a SCORPIO Workshop Survey for registrars and a Teacher's SCORPIO Workshop Survey. Forty-five radiation oncology registrars, 14 radiation therapists and five paediatric oncology registrars attended. Seventy-three per cent (47/64) of all participants completed the Course Evaluation Survey and 95% (38/40) of registrars completed the SCORPIO Workshop Survey. All teachers completed the Teacher's SCORPIO Survey (10/10). The overall educational experience was rated as good or excellent by 93% (43/47) of respondents. Ratings of satisfaction with lecture sessions were predominantly good or excellent. Registrars gave the SCORPIO workshop high ratings on each of 10 aspects of quality, with 82% allocating an excellent rating overall for the SCORPIO activity. Both registrars and teachers recommended more time for the SCORPIO stations. The 2009 course met the educational needs of the radiation oncology registrars and the SCORPIO workshop was a highly valued educational component. © 2011 The Authors. Journal of Medical Imaging and Radiation Oncology © 2011 The Royal Australian and New Zealand College of Radiologists.

  10. Utility-maximizing model of household time use for independent, shared, and allocated activities incorporating group decision mechanisms

    NARCIS (Netherlands)

    Zhang, J.; Timmermans, H.J.P.; Borgers, A.W.J.

    2002-01-01

    Existing activity-based models of transport demand typically assume an individual decision-making process. The focus on theories of individual decision making may be partially due to the lack of behaviorally oriented modeling methodologies for group decision making. Therefore, an attempt has been

  11. Global dynamics of a PDE model for Aedes aegypti mosquito incorporating female sexual preference

    KAUST Repository

    Parshad, Rana; Agusto, Folashade B.

    2011-01-01

    In this paper we study the long time dynamics of a reaction diffusion system, describing the spread of Aedes aegypti mosquitoes, which are the primary cause of dengue infection. The system incorporates a control attempt via the sterile insect

  12. Incorporating classic adsorption isotherms into modern surface complexation models: implications for sorption of radionuclides

    International Nuclear Information System (INIS)

    Kulik, D.A.

    2005-01-01

    Full text of publication follows: Computer-aided surface complexation models (SCM) tend to replace the classic adsorption isotherm (AI) analysis in describing mineral-water interface reactions such as radionuclide sorption onto (hydr)oxides and clays. Any site-binding SCM based on the mole balance of surface sites, in fact, reproduces the (competitive) Langmuir isotherm, optionally amended with an electrostatic Coulombic non-ideal term. In most SCM implementations, it is difficult to incorporate real-surface phenomena (site heterogeneity, lateral interactions, surface condensation) described in classic AI approaches other than Langmuir's. Thermodynamic relations between SCMs and AIs that remained obscure in the past have been recently clarified using new definitions of standard and reference states of surface species [1,2]. On this basis, a method for separating the Langmuir AI into ideal (linear) and non-ideal parts [2] was applied to multi-dentate Langmuir, Frumkin, and BET isotherms. The aim of this work was to obtain the surface activity coefficient terms that make the SCM site mole balance constraints obsolete and, in this way, extend thermodynamic SCMs to cover sorption phenomena described by the respective AIs. The multi-dentate Langmuir term accounts for site saturation with n-dentate surface species, as illustrated by modeling bi-dentate U(VI) complexes on goethite or SiO2 surfaces. The Frumkin term corrects for the lateral interactions of the mono-dentate surface species; in particular, it has the same form as the Coulombic term of the constant-capacitance EDL combined with the Langmuir term. The BET term (three parameters) accounts for more-than-monolayer adsorption up to surface condensation; it can potentially describe the surface precipitation of nickel and other cations on hydroxides and clay minerals. All three non-ideal terms (in the GEM SCM implementation [1,2]) are currently used for non-competing surface species only. Upon 'surface dilution
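
    For orientation, the sketch below evaluates the textbook forms of the Langmuir, Frumkin, and BET isotherms that the abstract builds on; the parameter values are illustrative, the Frumkin sign convention varies between texts, and none of this reproduces the GEM-based SCM implementation cited in [1,2].

```python
import numpy as np
from scipy.optimize import brentq

def langmuir(c, K):
    """Fractional coverage for the ideal Langmuir isotherm."""
    return K * c / (1.0 + K * c)

def frumkin(c, K, a):
    """Frumkin isotherm with lateral-interaction parameter a (sign convention
    varies between texts); coverage theta is obtained by an implicit solve."""
    f = lambda th: K * c - th / (1.0 - th) * np.exp(-2.0 * a * th)
    return brentq(f, 1e-12, 1.0 - 1e-12)

def bet(x, c_B):
    """BET multilayer isotherm; x = p/p0 is relative saturation and coverage
    is expressed in monolayer equivalents."""
    return c_B * x / ((1.0 - x) * (1.0 - x + c_B * x))

for c in (0.01, 0.1, 1.0):
    print(c, round(langmuir(c, K=5.0), 3), round(frumkin(c, K=5.0, a=0.5), 3))
print("BET coverage at p/p0 = 0.5:", round(bet(0.5, c_B=50.0), 2))
```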

  13. Is our Universe typical?

    International Nuclear Information System (INIS)

    Gurzadyan, V.G.

    1988-01-01

    The problem of the typicality of the Universe, viewed as a dynamical system possessing both regular and chaotic regions of positive measure in phase space, is raised and discussed. Two dynamical systems are considered: 1) the observed Universe as a hierarchy of systems of N gravitating bodies; 2) a (3+1)-manifold with matter evolving according to the Wheeler-DeWitt equation in superspace with the Hawking boundary condition of compact metrics. It is shown that the observed Universe is typical. There is no unambiguous answer for the second system yet. If it is typical too, then the same present state of the Universe could have originated from an infinite number of different initial conditions, the reconstruction of which is practically impossible at present. 35 refs.; 2 refs

  14. Typical Complexity Numbers

    Indian Academy of Sciences (India)

    Typical complexity numbers: say 1000 tones, 100 users, and a transmission every 10 msec. Full crosstalk cancellation requires a matrix multiplication of order 100*100 for all the tones, i.e., 1000*100*100*100 operations every second for the ...
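
    Reading the slide fragments literally, the operation count follows from one 100*100 matrix-vector product per tone on every transmission; a quick check of that arithmetic (under this reading) is sketched below.

```python
tones = 1000
users = 100
transmissions_per_second = 1 / 0.010   # one transmission every 10 msec

# One 100x100 matrix-vector multiply per tone ~ users*users multiply-accumulates.
ops_per_second = tones * users * users * transmissions_per_second
print(f"{ops_per_second:.0e} operations per second")   # 1e+09
```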

  15. Using item response theory to investigate the structure of anticipated affect: Do self-reports about future affective reactions conform to typical or maximal models?

    Directory of Open Access Journals (Sweden)

    Leonidas A Zampetakis

    2015-09-01

    Full Text Available In the present research we used item response theory (IRT) to examine whether affective predictions (anticipated affect) conform to a typical behavior process (i.e., what people usually do) or a maximal behavior process (i.e., what people can do). The former corresponds to non-monotonic ideal-point IRT models, whereas the latter corresponds to monotonic dominance IRT models. A convenience, cross-sectional student sample (N = 1624) was used. Participants were asked to report on anticipated positive and negative affect around a hypothetical event (emotions surrounding the start of a new business). We carried out an analysis comparing the Graded Response Model (GRM), a dominance IRT model, against the Generalized Graded Unfolding Model (GGUM), an unfolding IRT model. We found that the GRM provided a better fit to the data. Findings suggest that self-report responses about anticipated affect conform to a dominance response process (i.e., maximal behavior). The paper also discusses implications for a growing literature on anticipated affect.

  16. Incorporating a Time Horizon in Rate-of-Return Estimations: Discounted Cash Flow Model in Electric Transmission Rate Cases

    International Nuclear Information System (INIS)

    Chatterjee, Bishu; Sharp, Peter A.

    2006-01-01

    Electric transmission and other rate cases use a form of the discounted cash flow model with a single long-term growth rate to estimate rates of return on equity. It cannot incorporate information about the appropriate time horizon over which analysts' estimates of earnings growth have predictive power. Only a non-constant growth model can explicitly recognize the importance of the time horizon in an ROE calculation. (author)
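
    To make the contrast concrete, the sketch below computes the return on equity implied by the single-stage (constant-growth) DCF formula and by a simple two-stage variant with an explicit near-term horizon; the price, dividend and growth figures are hypothetical, and the two-stage layout is a generic textbook form rather than the formulation argued for in the paper.

```python
from scipy.optimize import brentq

def constant_growth_roe(price, d1, g):
    """Single-stage (Gordon) DCF: k = D1/P0 + g."""
    return d1 / price + g

def two_stage_roe(price, d1, g_near, g_long, horizon):
    """Solve for the discount rate that equates price with dividends growing
    at g_near over `horizon` years and at g_long thereafter."""
    def pv(r):
        value, div = 0.0, d1
        for t in range(1, horizon + 1):
            value += div / (1.0 + r) ** t
            if t < horizon:
                div *= (1.0 + g_near)
        # Dividend in year horizon+1 grows at the long-run rate thereafter.
        terminal = div * (1.0 + g_long) / (r - g_long)
        return value + terminal / (1.0 + r) ** horizon - price
    return brentq(pv, g_long + 1e-6, 1.0)

# Hypothetical numbers for illustration only.
print(constant_growth_roe(price=40.0, d1=2.0, g=0.05))                        # 0.10
print(two_stage_roe(price=40.0, d1=2.0, g_near=0.08, g_long=0.04, horizon=5))
```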

  17. The influence of climatic changes on distribution pattern of six typical Kobresia species in Tibetan Plateau based on MaxEnt model and geographic information system

    Science.gov (United States)

    Hu, Zhongjun; Guo, Ke; Jin, Shulan; Pan, Huahua

    2018-01-01

    The issue that climatic change has a great influence on species distribution is currently of great interest in the field of biogeography. Six typical Kobresia species are selected from the alpine grassland of the Tibetan Plateau (TP) as research objects; they are high-quality forage for local husbandry, and their distribution changes are modeled for four periods by using the MaxEnt model and GIS technology. The modeling results show that the distribution of these six typical Kobresia species in the TP was strongly affected by two factors: the annual precipitation and the precipitation in the wettest and driest quarters of the year. The most suitable habitats of K. pygmaea were located in the area around Qinghai Lake, the Hengduan-Himalayan mountain area, and the hinterland of the TP. The most suitable habitats of K. humilis were mainly located in the area around Qinghai Lake and the hinterland of the TP during the Last Interglacial period, and gradually merged into a larger area. The most suitable habitats of K. robusta and K. tibetica were located in the area around Qinghai Lake and the hinterland of the TP, but these areas did not merge into one at any time. Those of K. capillifolia were located in the area around Qinghai Lake and extended to the southwest of the original distribution area, whereas K. macrantha was mainly distributed along the Himalayan mountain chain and had the smallest distribution area of the six species. All six Kobresia species can be divided into four types of "retreat/expansion" styles according to the changes in suitable habitat area over the four periods. All these change styles are the result of long-term adaptations of the different species to local climate changes in the regions of the TP and show the complexity of the relationships between species and climate. The research results have positive reference value for the protection of species diversity and the sustainable development of local husbandry in the TP.

  18. Strategies for Incorporating Women-Specific Sexuality Education into Addiction Treatment Models

    Science.gov (United States)

    James, Raven

    2007-01-01

    This paper advocates for the incorporation of a women-specific sexuality curriculum in the addiction treatment process to aid in sexual healing and provide for aftercare issues. Sexuality in addiction treatment modalities is often approached from a sex-negative stance, or that of sexual victimization. Sexual issues are viewed as addictive in and…

  19. Typicality and reasoning fallacies.

    Science.gov (United States)

    Shafir, E B; Smith, E E; Osherson, D N

    1990-05-01

    The work of Tversky and Kahneman on intuitive probability judgment leads to the following prediction: The judged probability that an instance belongs to a category is an increasing function of the typicality of the instance in the category. To test this prediction, subjects in Experiment 1 read a description of a person (e.g., "Linda is 31, bright, ... outspoken") followed by a category. Some subjects rated how typical the person was of the category, while others rated the probability that the person belonged to that category. For categories like bank teller and feminist bank teller: (1) subjects rated the person as more typical of the conjunctive category (a conjunction effect); (2) subjects rated it more probable that the person belonged to the conjunctive category (a conjunction fallacy); and (3) the magnitudes of the conjunction effect and fallacy were highly correlated. Experiment 2 documents an inclusion fallacy, wherein subjects judge, for example, "All bank tellers are conservative" to be more probable than "All feminist bank tellers are conservative." In Experiment 3, results parallel to those of Experiment 1 were obtained with respect to the inclusion fallacy.

  20. Typicals/Típicos

    Directory of Open Access Journals (Sweden)

    Silvia Vélez

    2004-01-01

    Full Text Available Typicals is a series of 12 colour photographs digitally created from photojournalistic images from Colombia combined with "typical" craft textiles and text from guest writers. Typicals was first exhibited as photographs 50cm x 75cm in size, each with their own magnifying glass, at the Contemporary Art Space at Gorman House in Canberra, Australia, in 2000. It was then exhibited in "Feedback: Art Social Consciousness and Resistance" at Monash University Museum of Art in Melbourne, Australia, from March to May 2003. From May to June 2003 it was exhibited at the Museo de Arte de la Universidad Nacional de Colombia Santa Fé Bogotá, Colombia. In its current manifestation the artwork has been adapted from the catalogue of the museum exhibitions. It is broken up into eight pieces corresponding to the contributions of the writers. The introduction by Sylvia Vélez is the PDF file accessible via a link below this abstract. The other seven PDF files are accessible via the 'Supplementary Files' section to the left of your screen. Please note that these files are around 4 megabytes each, so it may be difficult to access them from a dial-up connection.

  1. Incorporating creditors' seniority into contingent claim models: Application to peripheral euro area countries

    OpenAIRE

    Gómez-Puig, Marta; Singh, Manish Kumar; Sosvilla Rivero, Simón, 1961-

    2018-01-01

    This paper highlights the role of multilateral creditors (i.e., the ECB, IMF, ESM etc.) and their preferred creditor status in explaining the sovereign default risk of peripheral euro area (EA) countries. Incorporating lessons from sovereign debt crises in general, and from the Greek debt restructuring in particular, we define the priority structure of sovereigns' creditors that is most relevant for peripheral EA countries in severe crisis episodes. This new priority structure of creditors, t...

  2. The nonlinear unloading behavior of a typical Ni-based superalloy during hot deformation. A unified elasto-viscoplastic constitutive model

    International Nuclear Information System (INIS)

    Chen, Ming-Song; Lin, Y.C.; Li, Kuo-Kuo; Chen, Jian

    2016-01-01

    In the authors' previous work (Chen et al. in Appl Phys A. doi:10.1007/s00339-016-0371-6, 2016), the nonlinear unloading behavior of a typical Ni-based superalloy was investigated by hot compressive experiments with intermediate unloading-reloading cycles. The characters of the unloading curves were discussed in detail, and a new elasto-viscoplastic constitutive model was proposed to describe the nonlinear unloading behavior of the studied Ni-based superalloy. Still, the functional relationships between the deformation temperature, strain rate, pre-strain and the parameters of the proposed constitutive model need to be established. In this study, the effects of deformation temperature, strain rate and pre-strain on the parameters of the new constitutive model proposed in the authors' previous work (Chen et al. 2016) are analyzed, and a unified elasto-viscoplastic constitutive model is proposed to predict the unloading behavior at arbitrary deformation temperature, strain rate and pre-strain. (orig.)

  3. Incorporation of oxygen contribution by plant roots into classical dissolved oxygen deficit model for a subsurface flow treatment wetland.

    Science.gov (United States)

    Bezbaruah, Achintya N; Zhang, Tian C

    2009-01-01

    It has long been established that plants play major roles in a treatment wetland. However, the role of plants has not been incorporated into wetland models. This study tries to incorporate wetland plants into a biochemical oxygen demand (BOD) model so that the relative contributions of the aerobic and anaerobic processes to meeting BOD can be quantitatively determined. The classical dissolved oxygen (DO) deficit model has been modified to simulate the DO curve for a field subsurface flow constructed wetland (SFCW) treating municipal wastewater. Sensitivities of model parameters have been analyzed. Based on the model, it is predicted that in the SFCW under study about 64% of the BOD is degraded through aerobic routes and 36% is degraded anaerobically. While not exhaustive, this preliminary work should serve as a pointer for further research in wetland model development and in determining the values of some of the parameters used in the modified DO deficit and associated BOD model. It should be noted that the nitrogen cycle and the effects of temperature have not been addressed in these models, for simplicity of model formulation. This paper should be read with this caveat in mind.
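
    The classical DO deficit model referred to above is the Streeter-Phelps pair of equations for BOD decay and oxygen deficit; the sketch below integrates those equations with an extra constant oxygen-input term standing in for plant-root aeration. The rate constants and the form of the plant term are placeholders, not the paper's calibrated SFCW formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def do_deficit(t, y, kd, ka, r_plant):
    """Classical DO-deficit (Streeter-Phelps) equations with an added constant
    oxygen input r_plant from plant roots (placeholder term, not the paper's
    calibrated formulation)."""
    L, D = y                        # remaining BOD and DO deficit (mg/L)
    dL = -kd * L                    # aerobic BOD decay
    dD = kd * L - ka * D - r_plant  # deficit grows with decay, shrinks with reaeration and root oxygen
    return [dL, dD]

sol = solve_ivp(do_deficit, (0.0, 10.0), y0=[20.0, 2.0],
                args=(0.35, 0.60, 0.5), dense_output=True)
t = np.linspace(0.0, 10.0, 6)
for ti, (Li, Di) in zip(t, sol.sol(t).T):
    print(f"day {ti:4.1f}: BOD = {Li:5.2f}  deficit = {max(Di, 0):5.2f} mg/L")
```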

  4. A real case simulation of the air-borne effluent dispersion on a typical summer day under CDA scenario for PFBR using an advanced meteorological and dispersion model

    International Nuclear Information System (INIS)

    Srinivas, C.V; Venkatesan, R.; Bagavath Singh, A.; Somayaji, K.M.

    2003-11-01

    Environmental concentrations and radioactive doses within and beyond the site boundary for the CDA situation of PFBR have been estimated using an Advanced Radiological Impact Prediction system for a real atmospheric situation on a typical summer day in the month of May 2003. The system consists of a meso-scale atmospheric prognostic model, MM5, coupled with a random walk Lagrangian particle dispersion model, FLEXPART, for the simulation of transport, diffusion and deposition of radionuclides. The details of the modeling system, its capabilities and various features are presented. The simulated coastal atmospheric features (land-sea breeze, development of the TIBL, etc.) have been validated against site and regional meteorological observations from IMD. Analysis of the dose distribution in a situation that corresponds to the atmospheric conditions on the chosen day shows that the doses for CDA through different pathways are 8 times less than the earlier estimations made according to regulatory requirements using the Gaussian Plume Model (GPM) approach. However, for stack releases, a higher dose than reported earlier occurred beyond the site boundary in the 2-4 km range under stable and fumigation conditions. The doses due to stack releases under these conditions maintained almost the same value in the 3 to 10 km range and decreased thereafter. Deposition velocities computed from the radionuclide species, wind speed and surface properties were two orders of magnitude lower than the values used earlier and hence gave more realistic estimates of ground-deposited activity. The study has made it possible to simulate the more complex meteorological situation actually present at the site of interest and the associated spatial distribution of radiological impact around Kalpakkam. In order to draw meaningful conclusions that can be compared with regulatory estimates, future studies would simulate the dispersion under extreme meteorological situations which could possibly be worse than
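
    For comparison with the FLEXPART results, the Gaussian Plume Model mentioned above has the standard ground-reflected form sketched below; the stack height, wind speed and dispersion parameters are illustrative values, and real GPM assessments obtain sigma_y and sigma_z from stability-class correlations (e.g. Pasquill-Gifford) rather than fixed numbers.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h_stack, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (g/m^3) for emission rate
    q (g/s), wind speed u (m/s) and effective release height h_stack (m).
    sigma_y and sigma_z are the dispersion parameters at the downwind distance
    of interest; stability-class correlations are not reproduced here."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h_stack)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + h_stack)**2 / (2 * sigma_z**2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative numbers only: 1 g/s release from a 100 m stack, 3 m/s wind,
# dispersion parameters roughly representative of ~2 km downwind.
print(gaussian_plume(q=1.0, u=3.0, y=0.0, z=0.0,
                     h_stack=100.0, sigma_y=150.0, sigma_z=70.0))
```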

  5. A mathematical model to determine incorporated quantities of radioactivity from the measured photometric values of tritium-autoradiographs in neuroanatomy

    International Nuclear Information System (INIS)

    Jennissen, J.J.

    1981-01-01

    The mathematical/empirical model developed in this paper helps to determine the incorporated radioactivity from the measured photometric values and the exposure time T. Possible errors of autoradiography due to the exposure time or the preparation are taken into consideration by the empirical model. It is shown that the error of approximately 400% that appears when only the measured photometric values are compared can be corrected. The model is valid for neuroanatomy as optical nerves, i.e. neuroanatomical material, were used to develop it. Its application also to the other sections of the central nervous system seems to be justified due to the reduction of errors thus achieved. (orig.)

  6. A computational model incorporating neural stem cell dynamics reproduces glioma incidence across the lifespan in the human population.

    Directory of Open Access Journals (Sweden)

    Roman Bauer

    Full Text Available Glioma is the most common form of primary brain tumor. Demographically, the risk of occurrence increases until old age. Here we present a novel computational model to reproduce the probability of glioma incidence across the lifespan. Previous mathematical models explaining glioma incidence are framed in a rather abstract way, and do not directly relate to empirical findings. To decrease this gap between theory and experimental observations, we incorporate recent data on cellular and molecular factors underlying gliomagenesis. Since evidence implicates the adult neural stem cell as the likely cell-of-origin of glioma, we have incorporated empirically-determined estimates of neural stem cell number, cell division rate, mutation rate and oncogenic potential into our model. We demonstrate that our model yields results which match actual demographic data in the human population. In particular, this model accounts for the observed peak incidence of glioma at approximately 80 years of age, without the need to assert differential susceptibility throughout the population. Overall, our model supports the hypothesis that glioma is caused by randomly-occurring oncogenic mutations within the neural stem cell population. Based on this model, we assess the influence of the (experimentally indicated) decrease in the number of neural stem cells and increase of cell division rate during aging. Our model provides multiple testable predictions, and suggests that different temporal sequences of oncogenic mutations can lead to tumorigenesis. Finally, we conclude that four or five oncogenic mutations are sufficient for the formation of glioma.
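
    A deliberately stripped-down calculation in the same spirit as the model described above is sketched below: it asks how likely it is that at least one cell in a fixed pool of stem cells has accumulated k oncogenic hits by a given age. It ignores clonal expansion and the age-dependent changes the paper examines, and all parameter values are placeholders.

```python
import numpy as np

def p_glioma_by_age(age, n_cells, div_rate, mu, k_hits):
    """Cumulative probability that at least one neural stem cell has acquired
    k_hits independent oncogenic mutations by the given age. A deliberately
    simplified calculation, not the paper's model: constant cell number and
    division rate, independent mutations, no clonal expansion."""
    divisions = div_rate * age
    p_hit = 1.0 - (1.0 - mu) ** divisions        # one driver mutated in one cell
    p_cell = p_hit ** k_hits                     # all k drivers mutated in that cell
    return 1.0 - (1.0 - p_cell) ** n_cells       # at least one transformed cell

ages = np.arange(0, 101, 10)
cum = np.array([p_glioma_by_age(a, n_cells=1e6, div_rate=10, mu=1e-6, k_hits=4)
                for a in ages])
incidence = np.diff(cum, prepend=0.0)            # per-decade incidence proxy
for a, inc in zip(ages, incidence):
    print(f"age {a:3d}: {inc:.2e}")
```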

  7. Incorporating wind availability into land use regression modelling of air quality in mountainous high-density urban environment.

    Science.gov (United States)

    Shi, Yuan; Lau, Kevin Ka-Lun; Ng, Edward

    2017-08-01

    Urban air quality serves as an important determinant of the quality of urban life. Land use regression (LUR) modelling of air quality is essential for conducting health impact assessments, but is more challenging in a mountainous, high-density urban scenario due to the complexities of the urban environment. In this study, a total of 21 LUR models are developed for seven air pollutants (the gaseous pollutants CO, NO2, NOx, O3 and SO2, and the particulate pollutants PM2.5 and PM10) with reference to three different time periods (summertime, wintertime and the annual average of 5-year long-term hourly monitoring data from the local air quality monitoring network) in Hong Kong. Under the mountainous high-density urban scenario, we improved the traditional LUR modelling method by incorporating wind availability information into LUR modelling based on surface geomorphometrical analysis. As a result, 269 independent variables were examined to develop the LUR models by using the "ADDRESS" independent variable selection method and stepwise multiple linear regression (MLR). Cross validation has been performed for each resultant model. The results show that wind-related variables are included in most of the resultant models as statistically significant independent variables. Compared with the traditional method, a maximum increase of 20% was achieved in the prediction performance of annual averaged NO2 concentration levels by incorporating wind-related variables into LUR model development. Copyright © 2017 Elsevier Inc. All rights reserved.
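
    A minimal sketch of LUR fitting with a wind-related predictor is given below, using ordinary least squares on synthetic site data plus a cross-validated R2; the predictor names, the synthetic response and the plain linear fit are assumptions for illustration and do not reproduce the "ADDRESS" selection procedure or the Hong Kong dataset.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_sites = 60                                    # hypothetical monitoring sites
predictors = np.column_stack([
    rng.uniform(0, 5e4, n_sites),               # road length within buffer (m)
    rng.uniform(0, 1, n_sites),                 # building coverage ratio
    rng.uniform(0, 1, n_sites),                 # wind availability index (geomorphometric stand-in)
])
no2 = 30 + 4e-4 * predictors[:, 0] + 15 * predictors[:, 1] \
      - 12 * predictors[:, 2] + rng.normal(0, 4, n_sites)   # synthetic response (ug/m3)

ols = sm.OLS(no2, sm.add_constant(predictors)).fit()
print(ols.summary().tables[1])                  # coefficients, incl. the wind term
r2_cv = cross_val_score(LinearRegression(), predictors, no2, cv=5, scoring="r2")
print("cross-validated R2:", r2_cv.mean().round(2))
```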

  8. Evaluation of the AnnAGNPS Model for Predicting Runoff and Nutrient Export in a Typical Small Watershed in the Hilly Region of Taihu Lake

    Directory of Open Access Journals (Sweden)

    Chuan Luo

    2015-09-01

    Full Text Available The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, both during the calibration and validation processes. Additionally, the sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. TP output was most sensitive to the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.

  9. A mechano-regulatory bone-healing model incorporating cell-phenotype specific activity

    NARCIS (Netherlands)

    Isaksson, H.E.; Donkelaar, van C.C.; Huiskes, R.; Ito, K.

    2008-01-01

    Phenomenological computational models of tissue regeneration and bone healing have been only partially successful in predicting experimental observations. This may be a result of simplistic modeling of cellular activity. Furthermore, phenomenological models are limited when considering the effects

  10. [Incorporation of an organic MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources]. [MAGIC Model

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, T.J.

    1992-09-01

    A project was initiated in March, 1992 to (1) incorporate a rigorous organic acid representation, based on empirical data and geochemical considerations, into the MAGIC model of acidification response, and (2) test the revised model using three sets of independent data. After six months of performance, the project is on schedule and the majority of the tasks outlined for Year 1 have been successfully completed. Major accomplishments to date include development of the organic acid modeling approach, using data from the Adirondack Lakes Survey Corporation (ALSC), and coupling the organic acid model with MAGIC for chemical hindcast comparisons. The incorporation of an organic acid representation into MAGIC can account for much of the discrepancy observed earlier between MAGIC hindcasts and paleolimnological reconstructions of preindustrial pH and alkalinity for 33 statistically-selected Adirondack lakes. Additional work is ongoing for model calibration and testing with data from two whole-catchment artificial acidification projects. Results obtained thus far are being prepared as manuscripts for submission to the peer-reviewed scientific literature.

  11. Glucose oxidase incorporated collagen matrices for dermal wound repair in diabetic rat models: a biochemical study.

    Science.gov (United States)

    Arul, V; Masilamoni, J G; Jesudason, E P; Jaji, P J; Inayathullah, M; Dicky John, D G; Vignesh, S; Jayakumar, R

    2012-05-01

    Impaired wound healing in diabetes is a well-documented phenomenon. Emerging data favor the involvement of free radicals in the pathogenesis of diabetic wound healing. We investigated the beneficial role of the sustained release of reactive oxygen species (ROS) in diabetic dermal wound healing. In order to achieve the sustained delivery of ROS in the wound bed, we have incorporated glucose oxidase in the collagen matrix (GOIC), which is applied to the healing diabetic wound. Our in vitro proteolysis studies on incorporated GOIC show increased stability against the proteases in the collagen matrix. In this study, GOIC film and collagen film (CF) are used as dressing material on the wound of streptozotocin-induced diabetic rats. A significant increase in ROS (p < 0.05) was observed in the fibroblasts of the GOIC group during the inflammation period compared to the CF and control groups. This elevated level upregulated the antioxidant status in the granulation tissue and improved cellular proliferation in the GOIC group. Interestingly, our biochemical parameters nitric oxide, hydroxyproline, uronic acid, protein, and DNA content in the healing wound showed that there is an increase in proliferation of cells in the GOIC group when compared to the control and CF groups. In addition, evidence from wound contraction and histology reveals faster healing in the GOIC group. Our observations document that GOIC matrices could be effectively used for diabetic wound healing therapy.

  12. Incorporating Psychological Predictors of Treatment Response into Health Economic Simulation Models: A Case Study in Type 1 Diabetes.

    Science.gov (United States)

    Kruger, Jen; Pollard, Daniel; Basarir, Hasan; Thokala, Praveen; Cooke, Debbie; Clark, Marie; Bond, Rod; Heller, Simon; Brennan, Alan

    2015-10-01

    Health economic modeling has paid limited attention to the effects that patients' psychological characteristics have on the effectiveness of treatments. This case study tests 1) the feasibility of incorporating psychological prediction models of treatment response within an economic model of type 1 diabetes, 2) the potential value of providing treatment to a subgroup of patients, and 3) the cost-effectiveness of providing treatment to a subgroup of responders defined using 5 different algorithms. Multiple linear regressions were used to investigate relationships between patients' psychological characteristics and treatment effectiveness. Two psychological prediction models were integrated with a patient-level simulation model of type 1 diabetes. Expected value of individualized care analysis was undertaken. Five different algorithms were used to provide treatment to a subgroup of predicted responders. A cost-effectiveness analysis compared using the algorithms to providing treatment to all patients. The psychological prediction models had low predictive power for treatment effectiveness. Expected value of individualized care results suggested that targeting education at responders could be of value. The cost-effectiveness analysis suggested, for all 5 algorithms, that providing structured education to a subgroup of predicted responders would not be cost-effective. The psychological prediction models tested did not have sufficient predictive power to make targeting treatment cost-effective. The psychological prediction models are simple linear models of psychological behavior. Collection of data on additional covariates could potentially increase statistical power. By collecting data on psychological variables before an intervention, we can construct predictive models of treatment response to interventions. These predictive models can be incorporated into health economic models to investigate more complex service delivery and reimbursement strategies.

  13. Incorporating population viability models into species status assessment and listing decisions under the U.S. Endangered Species Act

    Directory of Open Access Journals (Sweden)

    Conor P. McGowan

    2017-10-01

    Full Text Available Assessment of a species' status is a key part of management decision making for endangered and threatened species under the U.S. Endangered Species Act. Predicting the future state of the species is an essential part of species status assessment, and projection models can play an important role in developing predictions. We built a stochastic simulation model that incorporated parametric and environmental uncertainty to predict the probable future status of the Sonoran desert tortoise in the southwestern United States and North Central Mexico. Sonoran desert tortoise was a Candidate species for listing under the Endangered Species Act, and decision makers wanted to use model predictions in their decision making process. The model accounted for future habitat loss and possible effects of climate change induced droughts to predict future population growth rates, abundances, and quasi-extinction probabilities. Our model predicts that the population will likely decline over the next few decades, but there is very low probability of quasi-extinction less than 75 years into the future. Increases in drought frequency and intensity may increase extinction risk for the species. Our model helped decision makers predict and characterize uncertainty about the future status of the species in their listing decision. We incorporated complex ecological processes (e.g., climate change effects on tortoises) in transparent and explicit ways tailored to support decision making processes related to endangered species.

  14. Incorporating population viability models into species status assessment and listing decisions under the U.S. Endangered Species Act

    Science.gov (United States)

    McGowan, Conor P.; Allan, Nathan; Servoss, Jeff; Hedwall, Shaula J.; Wooldridge, Brian

    2017-01-01

    Assessment of a species' status is a key part of management decision making for endangered and threatened species under the U.S. Endangered Species Act. Predicting the future state of the species is an essential part of species status assessment, and projection models can play an important role in developing predictions. We built a stochastic simulation model that incorporated parametric and environmental uncertainty to predict the probable future status of the Sonoran desert tortoise in the southwestern United States and North Central Mexico. Sonoran desert tortoise was a Candidate species for listing under the Endangered Species Act, and decision makers wanted to use model predictions in their decision making process. The model accounted for future habitat loss and possible effects of climate change induced droughts to predict future population growth rates, abundances, and quasi-extinction probabilities. Our model predicts that the population will likely decline over the next few decades, but there is very low probability of quasi-extinction less than 75 years into the future. Increases in drought frequency and intensity may increase extinction risk for the species. Our model helped decision makers predict and characterize uncertainty about the future status of the species in their listing decision. We incorporated complex ecological processes (e.g., climate change effects on tortoises) in transparent and explicit ways tailored to support decision making processes related to endangered species.
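
    A bare-bones version of this kind of stochastic projection is sketched below: log-normal population growth with randomly occurring drought years and a quasi-extinction threshold, run as a Monte Carlo. All parameter values are placeholders rather than the tortoise assessment's estimates.

```python
import numpy as np

def quasi_extinction_prob(n0, years, n_sims, mean_r, sd_r,
                          drought_prob, drought_penalty, qe_threshold):
    """Monte Carlo projection of stochastic population growth with randomly
    occurring drought years; returns the probability of falling below the
    quasi-extinction threshold within the projection horizon."""
    rng = np.random.default_rng(42)
    below = 0
    for _ in range(n_sims):
        n = float(n0)
        for _ in range(years):
            r = rng.normal(mean_r, sd_r)       # environmental stochasticity
            if rng.random() < drought_prob:    # drought year lowers growth
                r -= drought_penalty
            n *= np.exp(r)
            if n < qe_threshold:
                below += 1
                break
    return below / n_sims

print(quasi_extinction_prob(n0=2000, years=75, n_sims=5000, mean_r=-0.005,
                            sd_r=0.08, drought_prob=0.2, drought_penalty=0.05,
                            qe_threshold=50))
```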

  15. Simplex network modeling for press-molded ceramic bodies incorporated with granite waste

    International Nuclear Information System (INIS)

    Pedroti, L.G.; Vieira, C.M.F.; Alexandre, J.; Monteiro, S.N.; Xavier, G.C.

    2012-01-01

    Extrusion of a clay body is the most commonly applied process in the ceramic industries for manufacturing structural blocks. Nowadays, the assembly of such blocks through a fitting system that facilitates the final mounting is gaining attention owing to the savings in material and the reduction in building construction costs. In this work, the ideal composition of clay bodies incorporated with granite powder waste was investigated for the production of press-molded ceramic blocks. An experimental design was applied to determine the optimum properties and microstructures, involving not only the precursor compositions but also the pressing and temperature conditions. Press loads from 15 tons and temperatures from 850 to 1050°C were considered. The results indicated mechanical strengths varying from 2 MPa to 20 MPa and water absorption varying from 19% to 30%. (author)

  16. A new general methodology for incorporating physico-chemical transformations into multi-phase wastewater treatment process models.

    Science.gov (United States)

    Lizarralde, I; Fernández-Arévalo, T; Brouckaert, C; Vanrolleghem, P; Ikumi, D S; Ekama, G A; Ayesa, E; Grau, P

    2015-05-01

    This paper introduces a new general methodology for incorporating physico-chemical and chemical transformations into multi-phase wastewater treatment process models in a systematic and rigorous way under a Plant-Wide modelling (PWM) framework. The methodology presented in this paper requires the selection of the relevant biochemical, chemical and physico-chemical transformations taking place and the definition of the mass transport for the co-existing phases. As an example a mathematical model has been constructed to describe a system for biological COD, nitrogen and phosphorus removal, liquid-gas transfer, precipitation processes, and chemical reactions. The capability of the model has been tested by comparing simulated and experimental results for a nutrient removal system with sludge digestion. Finally, a scenario analysis has been undertaken to show the potential of the obtained mathematical model to study phosphorus recovery. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Using stochastic models to incorporate spatial and temporal variability [Exercise 14

    Science.gov (United States)

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...

  18. A Mass Balance Model for Designing Green Roof Systems that Incorporate a Cistern for Re-Use

    Directory of Open Access Journals (Sweden)

    Manoj Chopra

    2012-11-01

    Full Text Available Green roofs, which have been used for several decades in many parts of the world, offer a unique and sustainable approach to stormwater management. Within this paper, evidence is presented on water retention for an irrigated green roof system. The presented green roof design results in a water retention volume on site. A first-principles mass balance computer model is introduced to assist with the design of these green roof systems, which incorporate a cistern to capture and reuse runoff waters for irrigation of the green roof. The model is used to estimate yearly stormwater retention volume for different cistern storage volumes. Additionally, the Blaney and Criddle equation is evaluated for estimation of monthly evapotranspiration rates for irrigated systems and incorporated into the model. This is done so that evapotranspiration rates can be calculated for regions where historical data do not exist, allowing the model to be used anywhere historical weather data are available. This model is developed and discussed within this paper as well as compared to experimental results.
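
    A simple daily mass balance in the spirit of the model described above is sketched below, with a soil store, a cistern that captures drainage and supplies irrigation, and evapotranspiration estimated from the FAO Blaney-Criddle form ETo = p(0.46*T + 8); the storage capacities, irrigation trigger and weather series are illustrative assumptions, not the paper's calibrated design.

```python
def blaney_criddle_et(t_mean_c, p_daylight):
    """FAO Blaney-Criddle reference ET (mm/day); p_daylight is the mean daily
    percentage of annual daytime hours for the month."""
    return p_daylight * (0.46 * t_mean_c + 8.0)

def simulate_green_roof(rain_mm, t_mean_c, p_daylight,
                        soil_capacity=30.0, cistern_capacity=50.0):
    """Daily mass balance over a green roof with a re-use cistern. All storages
    are in mm over the roof area; structure and thresholds are illustrative,
    not the paper's calibrated design."""
    soil, cistern, retained, overflow = soil_capacity / 2, 0.0, 0.0, 0.0
    for rain, t, p in zip(rain_mm, t_mean_c, p_daylight):
        soil += rain
        if soil > soil_capacity:                 # excess drains to the cistern
            cistern += soil - soil_capacity
            soil = soil_capacity
        if cistern > cistern_capacity:           # cistern spills to stormwater
            overflow += cistern - cistern_capacity
            cistern = cistern_capacity
        et = min(blaney_criddle_et(t, p), soil)  # evapotranspiration demand
        soil -= et
        retained += et
        if soil < 0.3 * soil_capacity and cistern > 0:   # irrigate from cistern
            refill = min(0.3 * soil_capacity - soil, cistern)
            soil += refill
            cistern -= refill
    return retained, overflow

rain = [0, 12, 0, 0, 25, 0, 0, 0, 5, 0]          # mm/day, illustrative
temps = [26] * 10                                # deg C, illustrative
p_day = [0.31] * 10                              # typical mid-latitude summer value
print(simulate_green_roof(rain, temps, p_day))
```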

  19. The nonlinear unloading behavior of a typical Ni-based superalloy during hot deformation. A new elasto-viscoplastic constitutive model

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Ming-Song; Li, Kuo-Kuo [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); State Key Laboratory of High Performance Complex Manufacturing, Changsha (China); Lin, Y.C. [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); State Key Laboratory of High Performance Complex Manufacturing, Changsha (China); Central South University, Light Alloy Research Institute, Changsha (China); Chen, Jian [Changsha University of Science and Technology, School of Energy and Power Engineering, Key Laboratory of Efficient and Clean Energy Utilization, Changsha (China)

    2016-09-15

    The nonlinear unloading behavior of a typical Ni-based superalloy is investigated by hot compressive experiments with intermediate unloading-reloading cycles. The experimental results show that there are at least four types of unloading curves. However, it is found that there is no essential difference among the four types of unloading curves. The variation curves of instantaneous Young's modulus with stress for all types of unloading curves include four segments, i.e., three linear elastic segments (segments I, II, and III) and one subsequent nonlinear elastic segment (segment IV). The instantaneous Young's modulus of segments I and III is approximately equal to that of the reloading process, while smaller than that of segment II. In the nonlinear elastic segment, the instantaneous Young's modulus linearly decreases with the decrease in stress. In addition, the relationship between stress and strain rate can be accurately expressed by the hyperbolic sine function. This study includes two parts. In the present part, the characters of unloading curves are discussed in detail, and a new elasto-viscoplastic constitutive model is proposed to describe the nonlinear unloading behavior based on the experimental findings. While in the latter part (Chen et al. in Appl Phys A. doi:10.1007/s00339-016-0385-0, 2016), the effects of deformation temperature, strain rate, and pre-strain on the parameters of this new constitutive model are analyzed, and a unified elasto-viscoplastic constitutive model is proposed to predict the unloading behavior at arbitrary deformation temperature, strain rate, and pre-strain. (orig.)

  20. The nonlinear unloading behavior of a typical Ni-based superalloy during hot deformation. A new elasto-viscoplastic constitutive model

    International Nuclear Information System (INIS)

    Chen, Ming-Song; Li, Kuo-Kuo; Lin, Y.C.; Chen, Jian

    2016-01-01

    The nonlinear unloading behavior of a typical Ni-based superalloy is investigated by hot compressive experiments with intermediate unloading-reloading cycles. The experimental results show that there are at least four types of unloading curves. However, it is found that there is no essential difference among the four types of unloading curves. The variation curves of instantaneous Young's modulus with stress for all types of unloading curves include four segments, i.e., three linear elastic segments (segments I, II, and III) and one subsequent nonlinear elastic segment (segment IV). The instantaneous Young's modulus of segments I and III is approximately equal to that of the reloading process, while smaller than that of segment II. In the nonlinear elastic segment, the instantaneous Young's modulus linearly decreases with the decrease in stress. In addition, the relationship between stress and strain rate can be accurately expressed by the hyperbolic sine function. This study includes two parts. In the present part, the characters of unloading curves are discussed in detail, and a new elasto-viscoplastic constitutive model is proposed to describe the nonlinear unloading behavior based on the experimental findings. While in the latter part (Chen et al. in Appl Phys A. doi:10.1007/s00339-016-0385-0, 2016), the effects of deformation temperature, strain rate, and pre-strain on the parameters of this new constitutive model are analyzed, and a unified elasto-viscoplastic constitutive model is proposed to predict the unloading behavior at arbitrary deformation temperature, strain rate, and pre-strain. (orig.)
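
    The hyperbolic sine relationship between stress and strain rate noted in the abstract is usually written, in hot-deformation work, as the Garofalo law strain_rate = A*sinh(alpha*sigma)^n*exp(-Q/(R*T)); the sketch below inverts it for the flow stress via the Zener-Hollomon parameter. The material constants are placeholders, not the fitted values for this superalloy.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def flow_stress(strain_rate, T, A, alpha, n, Q):
    """Invert the hyperbolic-sine (Garofalo) law
    strain_rate = A * sinh(alpha*sigma)**n * exp(-Q/(R*T)) for the flow stress
    sigma (MPa, since alpha is taken in 1/MPa). Constants are placeholders."""
    Z = strain_rate * np.exp(Q / (R * T))        # Zener-Hollomon parameter
    return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

for rate in (0.001, 0.01, 0.1, 1.0):
    sigma = flow_stress(rate, T=1273.0, A=1.0e13, alpha=0.005, n=4.5, Q=4.0e5)
    print(f"strain rate {rate:>6}: flow stress ~ {sigma:6.1f} MPa")
```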

  1. A Typical Synergy

    Science.gov (United States)

    van Noort, Thomas; Achten, Peter; Plasmeijer, Rinus

    We present a typical synergy between dynamic types (dynamics) and generalised algebraic datatypes (GADTs). The former provides a clean approach to integrating dynamic typing in a statically typed language. It allows values to be wrapped together with their type in a uniform package, deferring type unification until run time using a pattern match annotated with the desired type. The latter allows for the explicit specification of constructor types, so as to enforce their structural validity. In contrast to ADTs, GADTs are heterogeneous structures since each constructor type is implicitly universally quantified. Unfortunately, pattern matching only enforces structural validity and does not provide instantiation information on polymorphic types. Consequently, functions that manipulate such values, such as a type-safe update function, are cumbersome due to boilerplate type representation administration. In this paper we focus on improving such functions by providing a new GADT annotation via a natural synergy with dynamics. We formally define the semantics of the annotation and touch on other novel applications of this technique such as type dispatching and enforcing type equality invariants on GADT values.

  2. Incorporating Cold Cap Behavior in a Joule-heated Waste Glass Melter Model

    Energy Technology Data Exchange (ETDEWEB)

    Varija Agarwal; Donna Post Guillen

    2013-08-01

    In this paper, an overview of Joule-heated waste glass melters used in the vitrification of high level waste (HLW) is presented, with a focus on the cold cap region. This region, in which feed-to-glass conversion reactions occur, is critical in determining the melting properties of any given glass melter. An existing 1D computer model of the cold cap, implemented in MATLAB, is described in detail. This model is a standalone model that calculates cold cap properties based on boundary conditions at the top and bottom of the cold cap. Efforts to couple this cold cap model with a 3D STAR-CCM+ model of a Joule-heated melter are then described. The coupling is being implemented in ModelCenter, a software integration tool. The ultimate goal of this model is to guide the specification of melter parameters that optimize glass quality and production rate.

  3. Incorporating shape constraints in generalized additive modelling of the height-diameter relationship for Norway spruce

    Directory of Open Access Journals (Sweden)

    Natalya Pya

    2016-02-01

    Full Text Available Background: Measurements of tree heights and diameters are essential in forest assessment and modelling. Tree heights are used for estimating timber volume, site index and other important variables related to forest growth and yield, succession and carbon budget models. However, the diameter at breast height (dbh) can be more accurately obtained, and at lower cost, than total tree height. Hence, generalized height-diameter (h-d) models that predict tree height from dbh, age and other covariates are needed. For a more flexible but biologically plausible estimation of covariate effects we use shape constrained generalized additive models as an extension of existing h-d model approaches. We use causal site parameters such as index of aridity to enhance the generality and causality of the models and to enable predictions under projected changeable climatic conditions. Methods: We develop unconstrained generalized additive models (GAM) and shape constrained generalized additive models (SCAM) for investigating the possible effects of tree-specific parameters such as tree age, relative diameter at breast height, and site-specific parameters such as index of aridity and sum of daily mean temperature during the vegetation period, on the h-d relationship of forests in Lower Saxony, Germany. Results: Some of the derived effects, e.g. effects of age, index of aridity and sum of daily mean temperature, have a significantly non-linear pattern. The need for using SCAM results from the fact that some of the model effects show partially implausible patterns, especially at the boundaries of data ranges. The derived model predicts monotonically increasing levels of tree height with increasing age and temperature sum and decreasing aridity and social rank of a tree within a stand. The definition of constraints leads to only a marginal decline in model statistics like AIC. An observed structured spatial trend in tree height is modelled via a 2-dimensional surface
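
    The paper fits shape constrained additive models (the authors work in R, e.g. with the scam/mgcv packages) to impose monotonicity on the h-d effects. As a rough illustration of the same idea in Python, the sketch below uses the pygam package, which also supports monotonicity constraints on smooth terms; the data, variables and constraint choices are invented for demonstration and do not reproduce the authors' model.

```python
import numpy as np
from pygam import LinearGAM, s

# Hypothetical predictors: dbh (cm), age (years), aridity index; response: height (m)
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(5, 60, 500),    # dbh
    rng.uniform(10, 120, 500),  # age
    rng.uniform(20, 60, 500),   # aridity index
])
y = 1.3 + 25 * (1 - np.exp(-0.05 * X[:, 0])) + 0.02 * X[:, 1] + rng.normal(0, 1, 500)

# Monotonically increasing effects of dbh and age, unconstrained effect of aridity
gam = LinearGAM(
    s(0, constraints='monotonic_inc') +
    s(1, constraints='monotonic_inc') +
    s(2)
).fit(X, y)
gam.summary()
```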

  4. A selenium-deficient Caco-2 cell model for assessing differential incorporation of chemical or food selenium into glutathione peroxidase.

    Science.gov (United States)

    Zeng, Huawei; Botnen, James H; Johnson, Luann K

    2008-01-01

    Assessing the ability of a selenium (Se) sample to induce cellular glutathione peroxidase (GPx) activity in Se-deficient animals is the most commonly used method to determine Se bioavailability. Our goal is to establish a Se-deficient cell culture model with differential incorporation of Se chemical forms into GPx, which may complement the in vivo studies. In the present study, we developed a Se-deficient Caco-2 cell model with a gradual serum reduction method. It is well recognized that selenomethionine (SeMet) is the major nutritional source of Se; therefore, SeMet, selenite, or methylselenocysteine (SeMSC) was added to cell culture media at different concentrations and treatment time points. We found that selenite and SeMSC induced GPx more rapidly than SeMet. However, SeMet was better retained because it is incorporated into proteins in place of methionine; compared with 8-, 24-, or 48-h treatment, 72-h Se treatment was a more sensitive time point to measure the potential of GPx induction at all tested concentrations. Based on induction of GPx activity, the cellular bioavailability of Se from an extract of selenobroccoli after a simulated gastrointestinal digestion was comparable with that of SeMSC and SeMet. These in vitro data are, for the first time, consistent with previously published data regarding selenite and SeMet bioavailability in animal models and Se chemical speciation studies with broccoli. Thus, the Se-deficient Caco-2 cell model with differential incorporation of chemical or food forms of Se into GPx provides a new tool to study the cellular mechanisms of Se bioavailability.

  5. Integrative modelling of animal movement: incorporating in situ habitat and behavioural information for a migratory marine predator.

    Science.gov (United States)

    Bestley, Sophie; Jonsen, Ian D; Hindell, Mark A; Guinet, Christophe; Charrassin, Jean-Benoît

    2013-01-07

    A fundamental goal in animal ecology is to quantify how environmental (and other) factors influence individual movement, as this is key to understanding responsiveness of populations to future change. However, quantitative interpretation of individual-based telemetry data is hampered by the complexity of, and error within, these multi-dimensional data. Here, we present an integrative hierarchical Bayesian state-space modelling approach where, for the first time, the mechanistic process model for the movement state of animals directly incorporates both environmental and other behavioural information, and observation and process model parameters are estimated within a single model. When applied to a migratory marine predator, the southern elephant seal (Mirounga leonina), we find the switch from directed to resident movement state was associated with colder water temperatures, relatively short dive bottom time and rapid descent rates. The approach presented here can have widespread utility for quantifying movement-behaviour (diving or other)-environment relationships across species and systems.

  6. Incorporating Latent Variables into Discrete Choice Models - A Simultaneous Estimation Approach Using SEM Software

    Directory of Open Access Journals (Sweden)

    Dirk Temme

    2008-12-01

    Full Text Available Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models to enhance the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism, as well as attitudes such as a desire for flexibility, impact on travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.

  7. Incorporating Protein Biosynthesis into the Saccharomyces cerevisiae Genome-scale Metabolic Model

    DEFF Research Database (Denmark)

    Olivares Hernandez, Roberto

    Based on stoichiometric biochemical equations that occur in the cell, genome-scale metabolic models can quantify the metabolic fluxes, which are regarded as the final representation of the physiological state of the cell. For Saccharomyces cerevisiae, the genome-scale model has been...

  8. Quantitative analysis of CT brain images: a statistical model incorporating partial volume and beam hardening effects

    International Nuclear Information System (INIS)

    McLoughlin, R.F.; Ryan, M.V.; Heuston, P.M.; McCoy, C.T.; Masterson, J.B.

    1992-01-01

    The purpose of this study was to construct and evaluate a statistical model for the quantitative analysis of computed tomographic brain images. Data were derived from standard sections in 34 normal studies. A model representing the intracranial pure tissue and partial volume areas, with allowance for beam hardening, was developed. The average percentage error in estimation of areas, derived from phantom tests using the model, was 28.47%. We conclude that our model is not sufficiently accurate to be of clinical use, even though allowance was made for partial volume and beam hardening effects. (author)

  9. Incorporation of Satellite Data and Uncertainty in a Nationwide Groundwater Recharge Model in New Zealand

    Directory of Open Access Journals (Sweden)

    Rogier Westerhoff

    2018-01-01

    Full Text Available A nationwide model of groundwater recharge for New Zealand (NGRM), as described in this paper, demonstrated the benefits of satellite data and global models to improve the spatial definition of recharge and the estimation of recharge uncertainty. NGRM was inspired by the global-scale WaterGAP model but with the key development of rainfall recharge calculation on scales relevant to national- and catchment-scale studies (i.e., a 1 km × 1 km cell size and a monthly timestep in the period 2000–2014), provided by satellite data (i.e., MODIS-derived evapotranspiration (AET) and vegetation) in combination with national datasets of rainfall, elevation, soil and geology. The resulting nationwide model calculates groundwater recharge estimates, including their uncertainty, consistently across the country, which makes the model unique compared to all other New Zealand estimates targeted towards groundwater recharge. At the national scale, NGRM estimated an average recharge of 2500 m3/s, or 298 mm/year, with a model uncertainty of 17%. Those results were similar to the WaterGAP model, but the improved input data resulted in better spatial characteristics of recharge estimates. Multiple uncertainty analyses led to these main conclusions: the NGRM model could give valuable initial estimates in data-sparse areas, since it compared well to most ground-observed lysimeter data and local recharge models; and the nationwide input data of rainfall and geology caused the largest uncertainty in the model equation, which revealed that the satellite data could improve spatial characteristics without significantly increasing the uncertainty. Clearly the increasing volume and availability of large-scale satellite data is creating more opportunities for the application of national-scale models at the catchment, and smaller, scales. This should result in improved utility of these models including provision of initial estimates in data-sparse areas. Topics for future

  10. A mathematical model for maximizing the value of phase 3 drug development portfolios incorporating budget constraints and risk.

    Science.gov (United States)

    Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa

    2013-05-10

    We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
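
    To make the integer-programming idea concrete, the following is a minimal sketch (not the authors' formulation) of selecting phase 3 programs to maximize expected NPV under a single budget constraint, using the PuLP modelling package; the program names, NPVs, costs and budget are invented, and the real model additionally optimizes sample sizes and trial schedules.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

# Hypothetical candidate phase-3 programs: (expected NPV, trial cost), both in $M
programs = {"A": (350, 120), "B": (220, 80), "C": (410, 150), "D": (90, 40)}
budget = 250  # $M available for phase 3 trials

prob = LpProblem("phase3_portfolio", LpMaximize)
x = {p: LpVariable(f"run_{p}", cat=LpBinary) for p in programs}

prob += lpSum(programs[p][0] * x[p] for p in programs)            # maximize expected NPV
prob += lpSum(programs[p][1] * x[p] for p in programs) <= budget  # budget constraint
prob.solve()

chosen = [p for p in programs if value(x[p]) > 0.5]
print("Selected programs:", chosen)
```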

  11. Group spike-and-slab lasso generalized linear models for disease prediction and associated genes detection by incorporating pathway information.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun

    2018-03-15

    Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces a self-adaptive amount of shrinkage on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method on several simulated scenarios, by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.

  12. Aerofoil broadband and tonal noise modelling using stochastic sound sources and incorporated large scale fluctuations

    Science.gov (United States)

    Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.

    2017-12-01

    The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on the combination of incorporated vortex-shedding resolved flow available from Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via the stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment for broadband and tonal acoustic noise sources at the source level, thus, accounting for linear source interference as well as possible non-linear source interaction effects. When sound sources are determined, for the sound propagation, Acoustic Perturbation Equations (APE-4) are solved in the time-domain. Results of the method's application for two aerofoil benchmark cases, with both sharp and blunt trailing edges are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and brought into the equation. Encouraging results have been obtained for benchmark test cases using the new technique which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.

  13. Incorporating technology buying behaviour into UK-based long term domestic stock energy models to provide improved policy analysis

    International Nuclear Information System (INIS)

    Lee, Timothy; Yao, Runming

    2013-01-01

    The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion due to their higher resolution. The weakness of existing bottom-up models at capturing individual green technology buying behaviour has been identified. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods to incorporate buying behaviour within a domestic energy forecast model. Among the three methods, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, which demonstrates the feasibility of an agent approach. This model shows that an agent-based approach is promising as a means to predict the effectiveness of various policy measures. - Highlights: ► Long term energy models are reviewed with a focus on UK domestic stock models. ► Existing models are found weak in modelling green technology buying behaviour. ► Agent models, Markov chains and neural networks are considered as solutions. ► Agent-based modelling (ABM) is found to be the most promising approach. ► A prototype ABM is developed and testing indicates a lot of potential.
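
    As a toy illustration of the agent-based direction the review recommends, the sketch below simulates threshold-based adoption of a green technology under a policy subsidy; the agent rule, thresholds and subsidy value are hypothetical and far simpler than a calibrated domestic stock model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_years = 1000, 20
threshold = rng.uniform(0.1, 0.9, n_agents)   # willingness-to-adopt thresholds
adopted = np.zeros(n_agents, dtype=bool)
subsidy = 0.15                                # policy lever: effective threshold reduction

for year in range(n_years):
    social_pressure = adopted.mean()          # fraction of peers who already adopted
    # An agent adopts once social pressure plus subsidy exceeds its threshold
    adopted |= (social_pressure + subsidy) > threshold
    print(f"year {year + 1:2d}: adoption = {adopted.mean():.2%}")
```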

  14. Using expert knowledge to incorporate uncertainty in cause-of-death assignments for modeling of cause-specific mortality

    Science.gov (United States)

    Walsh, Daniel P.; Norton, Andrew S.; Storm, Daniel J.; Van Deelen, Timothy R.; Heisy, Dennis M.

    2018-01-01

    Implicit and explicit use of expert knowledge to inform ecological analyses is becoming increasingly common because it often represents the sole source of information in many circumstances. Thus, there is a need to develop statistical methods that explicitly incorporate expert knowledge, and can successfully leverage this information while properly accounting for associated uncertainty during analysis. Studies of cause-specific mortality provide an example of implicit use of expert knowledge when causes-of-death are uncertain and assigned based on the observer's knowledge of the most likely cause. To explicitly incorporate this use of expert knowledge and the associated uncertainty, we developed a statistical model for estimating cause-specific mortality using a data augmentation approach within a Bayesian hierarchical framework. Specifically, for each mortality event, we elicited the observer's belief of cause-of-death by having them specify the probability that the death was due to each potential cause. These probabilities were then used as prior predictive values within our framework. This hierarchical framework permitted a simple and rigorous estimation method that was easily modified to include covariate effects and regularizing terms. Although applied to survival analysis, this method can be extended to any event-time analysis with multiple event types, for which there is uncertainty regarding the true outcome. We conducted simulations to determine how our framework compared to traditional approaches that use expert knowledge implicitly and assume that cause-of-death is specified accurately. Simulation results supported the inclusion of observer uncertainty in cause-of-death assignment in modeling of cause-specific mortality to improve model performance and inference. Finally, we applied the statistical model we developed and a traditional method to cause-specific survival data for white-tailed deer, and compared results. We demonstrate that model selection
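
    A minimal sketch of the data-augmentation idea, under simplifying assumptions: for each mortality event, a latent cause is repeatedly sampled from the observer's elicited probabilities and the cause-specific fractions are summarized with uncertainty. The elicited probabilities and cause labels below are invented, and the sketch omits the survival-model and covariate structure of the authors' hierarchical framework.

```python
import numpy as np

rng = np.random.default_rng(1)
causes = ["predation", "harvest", "vehicle"]

# Observer-elicited cause-of-death probabilities for each mortality event (hypothetical)
elicited = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.4, 0.4, 0.2],
    [0.9, 0.05, 0.05],
])

# Monte Carlo data augmentation: repeatedly sample latent causes from the elicited priors
n_draws = 5000
draws = np.array([
    [rng.choice(3, p=row) for row in elicited] for _ in range(n_draws)
])
fractions = np.stack([(draws == k).mean(axis=1) for k in range(3)], axis=1)

for k, name in enumerate(causes):
    lo, hi = np.percentile(fractions[:, k], [2.5, 97.5])
    print(f"{name:10s}: mean {fractions[:, k].mean():.2f}  95% interval [{lo:.2f}, {hi:.2f}]")
```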

  15. Incorporating NDVI in a gravity model setting to describe spatio-temporal patterns of Lyme borreliosis incidence

    Science.gov (United States)

    Barrios, J. M.; Verstraeten, W. W.; Farifteh, J.; Maes, P.; Aerts, J. M.; Coppin, P.

    2012-04-01

    Lyme borreliosis (LB) is the most common tick-borne disease in Europe, and incidence growth has been reported in several European countries during the last decade. LB is caused by the bacterium Borrelia burgdorferi, and the main vector of this pathogen in Europe is the tick Ixodes ricinus. LB incidence and spatial spread are greatly dependent on environmental conditions impacting the habitat, demography and trophic interactions of ticks and the wide range of organisms that ticks parasitize. The landscape configuration is also a major determinant of tick habitat conditions and - very importantly - of the manner and intensity of human interaction with vegetated areas, i.e. human exposure to the pathogen. Hence, spatial notions such as distance and adjacency between urban and vegetated environments are related to human exposure to tick bites and, thus, to risk. This work tested the adequacy of a gravity model setting to model the observed spatio-temporal pattern of LB as a function of the location and size of urban and vegetated areas and the seasonal and annual change in vegetation dynamics as expressed by MODIS NDVI. Opting for this approach implies an analogy with Newton's law of universal gravitation, in which the attraction force between two bodies is directly proportional to the bodies' masses and inversely proportional to distance. Similar implementations have proven useful in fields like trade modeling, health care service planning and disease mapping, among others. In our implementation, the size of human settlements and vegetated systems and the distance separating these landscape elements are considered the 'bodies', and the 'attraction' between them is an indicator of exposure to the pathogen. A novel element of this implementation is the incorporation of NDVI to account for the seasonal and annual variation in risk. The importance of incorporating this indicator of vegetation activity resides in the fact that alterations of the LB incidence pattern observed during the last decade have been ascribed
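
    To make the gravity-model analogy concrete, the sketch below computes a hypothetical exposure index that grows with settlement size and with the NDVI-weighted "mass" of a vegetated patch, and decays with distance; the functional form, exponent and numbers are illustrative assumptions, not the fitted model from the study.

```python
import numpy as np

def exposure_index(pop, veg_area, ndvi, distance_km, gamma=2.0):
    """Gravity-style exposure: (settlement size x vegetated 'mass') / distance**gamma.

    All inputs are hypothetical; NDVI scales the effective mass of the
    vegetated patch to capture seasonal and annual vegetation dynamics.
    """
    return pop * (veg_area * ndvi) / np.power(distance_km, gamma)

# One settlement evaluated against three forest patches for a given month
pop = 25_000
veg_area = np.array([1.2, 4.5, 0.8])       # km^2
ndvi = np.array([0.55, 0.72, 0.40])        # MODIS NDVI for that month
distance = np.array([2.0, 6.5, 1.1])       # km

print(exposure_index(pop, veg_area, ndvi, distance))
```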

  16. Modeling the fate of p,p'-DDT in water and sediment of two typical estuarine bays in South China: Importance of fishing vessels' inputs.

    Science.gov (United States)

    Fang, Shu-Ming; Zhang, Xianming; Bao, Lian-Jun; Zeng, Eddy Y

    2016-05-01

    Antifouling paint applied to fishing vessels is the primary source of dichloro-diphenyl-trichloroethane (DDT) to the coastal marine environments of China. With the aim of providing science-based support for potential regulations on DDT use in antifouling paint, we utilized a fugacity-based model to evaluate the fate and impact of p,p'-DDT, the dominant component of the DDT mixture, in Daya Bay and Hailing Bay, two typical estuarine bays in South China. The emissions of p,p'-DDT from fishing vessels to the aquatic environments of Hailing Bay and Daya Bay were estimated as 9.3 and 7.7 kg/yr, respectively. Uncertainty analysis indicated that the temporal variability of p,p'-DDT was well described by the model if fishing vessels were considered as the only direct source, i.e., fishing vessels should be the dominant source of p,p'-DDT in coastal bay areas of China. Estimated hazard quotients indicated that sediment in Hailing Bay posed a high risk to the aquatic system, and it would take at least 21 years to reduce the hazards to a safe level. Moreover, p,p'-DDT tends to migrate from water to sediment in the entire Hailing Bay and Daya Bay. On the other hand, our previous research indicated that p,p'-DDT was more likely to migrate from sediment to water in the maricultured zones located in shallow waters of these two bays, where fishing vessels frequently remain. These findings suggest that relocating mariculture zones to deeper waters would reduce the likelihood of farmed fish contamination by p,p'-DDT. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Suspended Sediment Dynamics in the Macrotidal Seine Estuary (France): 2. Numerical Modeling of Sediment Fluxes and Budgets Under Typical Hydrological and Meteorological Conditions

    Science.gov (United States)

    Schulz, E.; Grasso, F.; Le Hir, P.; Verney, R.; Thouvenin, B.

    2018-01-01

    Understanding the sediment dynamics in an estuary is important for its morphodynamic and ecological assessment as well as, in case of an anthropogenically controlled system, for its maintenance. However, the quantification of sediment fluxes and budgets is extremely difficult from in-situ data and requires thoroughly validated numerical models. In the study presented here, sediment fluxes and budgets in the lower Seine Estuary were quantified and investigated from seasonal to annual time scales with respect to realistic hydro- and meteorological conditions. A realistic three-dimensional process-based hydro- and sediment-dynamic model was used to quantify mud and sand fluxes through characteristic estuarine cross-sections. In addition to a reference experiment with typical forcing, three experiments were carried out and analyzed, each differing from the reference experiment in either river discharge or wind and waves so that the effects of these forcings could be separated. Hydro- and meteorological conditions affect the sediment fluxes and budgets in different ways and at different locations. Single storm events induce strong erosion in the lower estuary and can have a significant effect on the sediment fluxes offshore of the Seine Estuary mouth, with the flux direction depending on the wind direction. Spring tides cause significant up-estuary fluxes at the mouth. A high river discharge drives barotropic down-estuary fluxes at the upper cross-sections, but baroclinic up-estuary fluxes at the mouth and offshore so that the lower estuary gains sediment during wet years. This behavior is likely to be observed worldwide in estuaries affected by density gradients and turbidity maximum dynamics.

  18. On Rationality of Decision Models Incorporating Emotion-Related Valuing and Hebbian Learning

    NARCIS (Netherlands)

    Treur, J.; Umair, M.

    2011-01-01

    In this paper an adaptive decision model based on predictive loops through feeling states is analysed from the perspective of rationality. Four different variations of Hebbian learning are considered for different types of connections in the decision model. To assess the extent of rationality, a
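
    As a minimal illustration of one Hebbian-learning variant of the kind the paper compares, the sketch below updates a single connection weight with a Hebbian term plus decay; the activation values and learning rates are invented and do not correspond to the paper's specific model.

```python
# Minimal sketch of one Hebbian-learning variant (with decay), assuming
# activation levels in [0, 1]; eta and zeta are illustrative rates only.
def hebbian_update(w, pre, post, eta=0.05, zeta=0.01):
    """Strengthen the connection when pre- and post-synaptic states co-activate."""
    return w + eta * pre * post - zeta * w

w = 0.2
for step in range(50):
    pre, post = 0.9, 0.8        # co-active feeling/valuation states
    w = hebbian_update(w, pre, post)
print(f"learned connection weight: {w:.3f}")
```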

  19. Approaches to incorporating climate change effects in state and transition simulation models of vegetation

    Science.gov (United States)

    Becky K. Kerns; Miles A. Hemstrom; David Conklin; Gabriel I. Yospin; Bart Johnson; Dominique Bachelet; Scott Bridgham

    2012-01-01

    Understanding landscape vegetation dynamics often involves the use of scientifically-based modeling tools that are capable of testing alternative management scenarios given complex ecological, management, and social conditions. State-and-transition simulation model (STSM) frameworks and software such as PATH and VDDT are commonly used tools that simulate how landscapes...

  20. Incorporating additional tree and environmental variables in a lodgepole pine stem profile model

    Science.gov (United States)

    John C. Byrne

    1993-01-01

    A new variable-form segmented stem profile model is developed for lodgepole pine (Pinus contorta) trees from the northern Rocky Mountains of the United States. I improved estimates of stem diameter by predicting two of the model coefficients with linear equations using a measure of tree form, defined as a ratio of dbh and total height. Additional improvements were...

  1. Mathematical Modelling in the Junior Secondary Years: An Approach Incorporating Mathematical Technology

    Science.gov (United States)

    Lowe, James; Carter, Merilyn; Cooper, Tom

    2018-01-01

    Mathematical models are conceptual processes that use mathematics to describe, explain, and/or predict the behaviour of complex systems. This article is written for teachers of mathematics in the junior secondary years (including out-of-field teachers of mathematics) who may be unfamiliar with mathematical modelling, to explain the steps involved…

  2. Incorporating Response Times in Item Response Theory Models of Reading Comprehension Fluency

    Science.gov (United States)

    Su, Shiyang

    2017-01-01

    With the online assessment becoming mainstream and the recording of response times becoming straightforward, the importance of response times as a measure of psychological constructs has been recognized and the literature of modeling times has been growing during the last few decades. Previous studies have tried to formulate models and theories to…

  3. Incorporating Video Modeling into a School-Based Intervention for Students with Autism Spectrum Disorders

    Science.gov (United States)

    Wilson, Kaitlyn P.

    2013-01-01

    Purpose: Video modeling is an intervention strategy that has been shown to be effective in improving the social and communication skills of students with autism spectrum disorders, or ASDs. The purpose of this tutorial is to outline empirically supported, step-by-step instructions for the use of video modeling by school-based speech-language…

  4. LINKING MICROBES TO CLIMATE: INCORPORATING MICROBIAL ACTIVITY INTO CLIMATE MODELS COLLOQUIUM

    Energy Technology Data Exchange (ETDEWEB)

    DeLong, Edward; Harwood, Caroline; Reid, Ann

    2011-01-01

    This report explains the connection between microbes and climate, discusses in general terms what modeling is and how it applied to climate, and discusses the need for knowledge in microbial physiology, evolution, and ecology to contribute to the determination of fluxes and rates in climate models. It recommends with a multi-pronged approach to address the gaps.

  5. Radiation-induced mucositis as a space flight risk. Model studies on X-ray and heavy-ion irradiated typical oral mucosa models

    International Nuclear Information System (INIS)

    Tschachojan, Viktoria

    2014-01-01

    Humans in exomagnetospheric space are exposed to highly energetic heavy ion radiation which can hardly be shielded. Since radiation-induced mucositis constitutes a severe complication of heavy ion radiotherapy, it would also pose a serious medical safety risk for crew members during prolonged space flights such as missions to the Moon or Mars. For assessment of the risk of developing radiation-induced mucositis, three-dimensional organotypic cultures of immortalized human keratinocytes and fibroblasts were irradiated with a 12C particle beam at high energies or with X-rays. Immunofluorescence staining was performed on cryosections, and the radiation-induced release of cytokines and chemokines was quantified by ELISA from culture supernatants. The major focus of this study was on time points 4, 8, 24 and 48 hours after irradiation. The analyses of our mucosa model showed many structural similarities with native oral mucosa and authentic immunological responses to radiation exposure. Quantification of the DNA damage in irradiated mucosa models revealed about twice as many double-strand breaks (DSB) after heavy-ion irradiation as after X-rays at given doses and time points, suggesting a higher genotoxicity of heavy ions. Nuclear factor κB (NF-κB) activation was observed after treatment with X-rays or 12C particles. An activation of NF-κB p65 in irradiated samples could not be detected. ELISA analyses showed significantly higher interleukin 6 and interleukin 8 levels after irradiation with X-rays and 12C particles compared to non-irradiated controls. However, only X-rays induced significantly higher levels of interleukin 1β. Analyses of TNF-α and IFN-γ showed no radiation-induced effects. Further analyses revealed a radiation-induced reduction in proliferation and loss of compactness in the irradiated oral mucosa model, which would lead to local lesions in vivo. In this study we revealed that several pro-inflammatory markers and structural changes are induced by X-rays and heavy-ion irradiation

  6. A model for arsenic anti-site incorporation in GaAs grown by hydride vapor phase epitaxy

    Energy Technology Data Exchange (ETDEWEB)

    Schulte, K. L.; Kuech, T. F. [Department of Chemical and Biological Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States)

    2014-12-28

    GaAs growth by hydride vapor phase epitaxy (HVPE) has regained interest as a potential route to low cost, high efficiency thin film photovoltaics. In order to attain the highest efficiencies, deep level defect incorporation in these materials must be understood and controlled. The arsenic anti-site defect, As_Ga or EL2, is the predominant deep level defect in HVPE-grown GaAs. In the present study, the relationships between HVPE growth conditions and incorporation of EL2 in GaAs epilayers were determined. Epitaxial n-GaAs layers were grown under a wide range of deposition temperatures (T_D) and gallium chloride partial pressures (P_GaCl), and the EL2 concentration, [EL2], was determined by deep level transient spectroscopy. [EL2] agreed with equilibrium thermodynamic predictions in layers grown under conditions in which the growth rate, R_G, was controlled by conditions near thermodynamic equilibrium. [EL2] fell below equilibrium levels when R_G was controlled by surface kinetic processes, with the disparity increasing as R_G decreased. The surface chemical composition during growth was determined to have a strong influence on EL2 incorporation. Under thermodynamically limited growth conditions, e.g., high T_D and/or low P_GaCl, the surface vacancy concentration was high and the bulk crystal was close to equilibrium with the vapor phase. Under kinetically limited growth conditions, e.g., low T_D and/or high P_GaCl, the surface attained a high GaCl coverage, blocking As adsorption. This competitive adsorption process reduced the growth rate and also limited the amount of arsenic that incorporated as As_Ga. A defect incorporation model, which accounted for the surface concentration of arsenic as a function of the growth conditions, was developed. This model was used to identify optimal growth parameters for the growth of thin films for photovoltaics, conditions in which a high growth rate and low [EL2] could be

  7. A LabVIEW model incorporating an open-loop arterial impedance and a closed-loop circulatory system.

    Science.gov (United States)

    Cole, R T; Lucas, C L; Cascio, W E; Johnson, T A

    2005-11-01

    While numerous computer models exist for the circulatory system, many are limited in scope, contain unwanted features or incorporate complex components specific to unique experimental situations. Our purpose was to develop a basic, yet multifaceted, computer model of the left heart and systemic circulation in LabVIEW having universal appeal without sacrificing crucial physiologic features. The program we developed employs Windkessel-type impedance models in several open-loop configurations and a closed-loop model coupling a lumped impedance and ventricular pressure source. The open-loop impedance models demonstrate afterload effects on arbitrary aortic pressure/flow inputs. The closed-loop model catalogs the major circulatory waveforms with changes in afterload, preload, and left heart properties. Our model provides an avenue for expanding the use of the ventricular equations through closed-loop coupling that includes a basic coronary circuit. Tested values used for the afterload components and the effects of afterload parameter changes on various waveforms are consistent with published data. We conclude that this model offers the ability to alter several circulatory factors and digitally catalog the most salient features of the pressure/flow waveforms employing a user-friendly platform. These features make the model a useful instructional tool for students as well as a simple experimental tool for cardiovascular research.
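
    For readers unfamiliar with Windkessel-type afterload models, the sketch below integrates a generic three-element Windkessel in Python; it is not the authors' LabVIEW program, and the inflow waveform and parameter values are illustrative assumptions only.

```python
import numpy as np

# Three-element Windkessel: characteristic impedance Zc, peripheral resistance R,
# compliance C; values are illustrative only (mmHg*s/mL and mL/mmHg).
Zc, R, C = 0.05, 1.0, 1.5
dt, t_end = 1e-3, 2.0
t = np.arange(0.0, t_end, dt)

# Simple half-sine aortic inflow during systole (hypothetical waveform), 75 bpm
period, systole = 0.8, 0.3
q = np.where((t % period) < systole,
             300.0 * np.sin(np.pi * (t % period) / systole), 0.0)   # mL/s

p_c = np.zeros_like(t)   # pressure across the compliance
for i in range(1, len(t)):
    # C dPc/dt = Q - Pc/R  (forward Euler integration)
    p_c[i] = p_c[i - 1] + dt * (q[i - 1] - p_c[i - 1] / R) / C

p_aortic = p_c + Zc * q   # proximal (aortic) pressure includes the Zc drop
print(f"peak aortic pressure ~ {p_aortic.max():.1f} mmHg (illustrative units)")
```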

  8. Incorporating Daily Flood Control Objectives Into a Monthly Stochastic Dynamic Programing Model for a Hydroelectric Complex

    Science.gov (United States)

    Druce, Donald J.

    1990-01-01

    A monthly stochastic dynamic programing model was recently developed and implemented at British Columbia (B.C.) Hydro to provide decision support for short-term energy exports and, if necessary, for flood control on the Peace River in northern British Columbia. The model establishes the marginal cost of supplying energy from the B.C. Hydro system, as well as a monthly operating policy for the G.M. Shrum and Peace Canyon hydroelectric plants and the Williston Lake storage reservoir. A simulation model capable of following the operating policy then determines the probability of refilling Williston Lake and possible spill rates and volumes. Reservoir inflows are input to both models in daily and monthly formats. The results indicate that flood control can be accommodated without sacrificing significant export revenue.

  9. Incorporating daily flood control objectives into a monthly stochastic dynamic programming model for a hydroelectric complex

    Energy Technology Data Exchange (ETDEWEB)

    Druce, D.J. (British Columbia Hydro and Power Authority, Vancouver, British Columbia (Canada))

    1990-01-01

    A monthly stochastic dynamic programing model was recently developed and implemented at British Columbia (B.C.) Hydro to provide decision support for short-term energy exports and, if necessary, for flood control on the Peace River in northern British Columbia. The model established the marginal cost of supplying energy from the B.C. Hydro system, as well as a monthly operating policy for the G.M. Shrum and Peace Canyon hydroelectric plants and the Williston Lake storage reservoir. A simulation model capable of following the operating policy then determines the probability of refilling Williston Lake and possible spill rates and volumes. Reservoir inflows are input to both models in daily and monthly formats. The results indicate that flood control can be accommodated without sacrificing significant export revenue.

  10. Incorporating Pass-Phrase Dependent Background Models for Text-Dependent Speaker verification

    DEFF Research Database (Denmark)

    Sarkar, Achintya Kumar; Tan, Zheng-Hua

    2018-01-01

    In this paper, we propose pass-phrase dependent background models (PBMs) for text-dependent (TD) speaker verification (SV) to integrate the pass-phrase identification process into the conventional TD-SV system, where a PBM is derived from a text-independent background model through adaptation using the utterances of a particular pass-phrase. During training, pass-phrase specific target speaker models are derived from the particular PBM using the training data for the respective target model. While testing, the best PBM is first selected for the test utterance in the maximum likelihood (ML) sense... We show that the proposed method significantly reduces the error rates of text-dependent speaker verification for the non-target types: target-wrong and impostor-wrong, while it maintains comparable TD-SV performance when impostors speak a correct utterance, with respect to the conventional system...

  11. Incorporation of sedimentological data into a calibrated groundwater flow and transport model

    International Nuclear Information System (INIS)

    Williams, N.J.; Young, S.C.; Barton, D.H.; Hurst, B.T.

    1997-01-01

    Analysis suggests that a high hydraulic conductivity (K) zone is associated with a former river channel at the Portsmouth Gaseous Diffusion Plant (PORTS). A two-dimensional (2-D) and three-dimensional (3-D) groundwater flow model was developed based on a sedimentological model to demonstrate the performance of a horizontal well for plume capture. The model produced a flow field with magnitudes and directions consistent with flow paths inferred from historical trichloroethylene (TCE) plume data. The most dominant feature affecting the well's performance was the presence of preferential high- and low-K zones. Based on results from the calibrated flow and transport model, a passive groundwater collection system was designed and built. Initial flow rates and concentrations measured from a gravity-drained horizontal well agree closely with predicted values

  12. Improving the phenotype predictions of a yeast genome-scale metabolic model by incorporating enzymatic constraints

    DEFF Research Database (Denmark)

    Sanchez, Benjamin J.; Zhang, Xi-Cheng; Nilsson, Avlant

    2017-01-01

    , which act as limitations on metabolic fluxes, are not taken into account. Here, we present GECKO, a method that enhances a GEM to account for enzymes as part of reactions, thereby ensuring that each metabolic flux does not exceed its maximum capacity, equal to the product of the enzyme's abundance and turnover number. We applied GECKO to a Saccharomyces cerevisiae GEM and demonstrated that the new model could correctly describe phenotypes that the previous model could not, particularly under high enzymatic pressure conditions, such as yeast growing on different carbon sources in excess, coping with stress, or overexpressing a specific pathway. GECKO also allows direct integration of quantitative proteomics data; by doing so, we significantly reduced the flux variability of the model in over 60% of metabolic reactions. Additionally, the model gives insight into the distribution of enzyme usage between...

  13. Radmap: "as-built" CAD models incorporating geometrical, radiological and material information

    International Nuclear Information System (INIS)

    Piotrowski, L.; Lubawy, J.L.

    2001-01-01

    EDF intends to achieve successful and cost-effective dismantling of its obsolete nuclear plants. To reach this goal, EDF is currently extending its "as-built" 3-D modelling system to also include the location and characteristics of gamma sources in the geometrical models of its nuclear installations. The resulting system (called RADMAP) is a complete CAD chain covering 3-D and gamma data acquisitions, CAD modelling and exploitation of the final model. Its aim is to describe completely the geometrical and radiological state of a particular nuclear environment. This paper presents an overall view of RADMAP. The technical and functional characteristics of each element of the chain are indicated and illustrated using real (EDF) environments/applications. (author)

  14. PTL: A Propositional Typicality Logic

    CSIR Research Space (South Africa)

    Booth, R

    2012-09-01

    Full Text Available Consequence relations first studied by Lehmann and colleagues in the 90s play a central role in nonmonotonic reasoning [13, 14]. This has been the case due to at least three main reasons. Firstly, they are based on semantic constructions that are elegant... The semantics of (propositional) rational consequence is in terms of ranked models. These are partially ordered structures in which the ordering is modular. Definition 1. Given a set S...

  15. Incorporation of the time aspect into the liability-threshold model for case-control-family data

    DEFF Research Database (Denmark)

    Cederkvist, Luise; Holst, Klaus K.; Andersen, Klaus K.

    2017-01-01

    to estimates that are difficult to interpret and are potentially biased. We incorporate the time aspect into the liability-threshold model for case-control-family data following the same approach that has been applied in the twin setting. Thus, the data are considered as arising from a competing risks setting...... approach using simulation studies and apply it in the analysis of two Danish register-based case-control-family studies: one on cancer diagnosed in childhood and adolescence, and one on early-onset breast cancer....

  16. On Optimizing H.264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics

    Directory of Open Access Journals (Sweden)

    Jiang Gangyi

    2010-01-01

    Full Text Available The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved in two respects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, based on the existing physiological and psychological research findings of human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS) such as contrast sensitivity, multichannel theory, and masking effect. Experiments are conducted, and experimental results show that the improved algorithm can simultaneously enhance the overall subjective visual quality and improve the rate control precision effectively.
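
    The quadratic R-D model referred to here relates texture bits to the predicted mean absolute difference (MAD) and the quantization step, roughly R = c1*MAD/Qstep + c2*MAD/Qstep^2. A hedged sketch of solving this relation for Qstep given a per-frame bit target is shown below; the coefficients and numbers are invented, and the full JVT-G012 algorithm involves additional buffer and basic-unit logic.

```python
import math

def qstep_from_target(target_bits, mad, c1, c2):
    """Solve target_bits = c1*MAD/Qstep + c2*MAD/Qstep^2 for Qstep (positive root)."""
    # Rearranged: target_bits*Qstep^2 - c1*MAD*Qstep - c2*MAD = 0
    a, b, c = target_bits, -c1 * mad, -c2 * mad
    disc = b * b - 4 * a * c
    return (-b + math.sqrt(disc)) / (2 * a)

# Illustrative numbers only: predicted MAD and per-frame texture-bit budget
mad, target_bits = 4.2, 24_000
c1, c2 = 3500.0, 12_000.0       # hypothetical model coefficients
print(f"Qstep ~ {qstep_from_target(target_bits, mad, c1, c2):.2f}")
```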

  17. A qualitative comparison of fire spread models incorporating wind and slope effects

    Science.gov (United States)

    David R. Weise; Gregory S. Biging

    1997-01-01

    Wind velocity and slope are two critical variables that affect wildland fire rate of spread. The effects of these variables on rate of spread are often combined in rate-of-spread models using vector addition. The various methods used to combine wind and slope effects have seldom been validated or compared due to differences in the models or to lack of data. In this...
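
    As a small illustration of the vector-addition approach mentioned above, the sketch below combines a wind-driven and a slope-driven rate-of-spread component into a resultant spread rate and direction; the magnitudes and directions are invented, and real spread models derive these components from fuel, wind and slope submodels.

```python
import math

def combine_spread_vectors(r_wind, wind_dir_deg, r_slope, upslope_dir_deg):
    """Vector-add wind- and slope-driven rate-of-spread components (m/min, degrees)."""
    x = (r_wind * math.cos(math.radians(wind_dir_deg)) +
         r_slope * math.cos(math.radians(upslope_dir_deg)))
    y = (r_wind * math.sin(math.radians(wind_dir_deg)) +
         r_slope * math.sin(math.radians(upslope_dir_deg)))
    rate = math.hypot(x, y)
    direction = math.degrees(math.atan2(y, x)) % 360.0
    return rate, direction

# Illustrative values: 6 m/min driven by wind toward 40 deg, 2 m/min upslope toward 90 deg
print(combine_spread_vectors(6.0, 40.0, 2.0, 90.0))
```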

  18. Simulation of a severe convective storm using a numerical model with explicitly incorporated aerosols

    Science.gov (United States)

    Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje

    2017-09-01

    Despite an important role the aerosols play in all stages of cloud lifecycle, their representation in numerical weather prediction models is often rather crude. This paper investigates the effects the explicit versus implicit inclusion of aerosols in a microphysics parameterization scheme in Weather Research and Forecasting (WRF) - Advanced Research WRF (WRF-ARW) model has on cloud dynamics and microphysics. The testbed selected for this study is a severe mesoscale convective system with supercells that struck west and central parts of Serbia in the afternoon of July 21, 2014. Numerical products of two model runs, i.e. one with aerosols explicitly (WRF-AE) included and another with aerosols implicitly (WRF-AI) assumed, are compared against precipitation measurements from surface network of rain gauges, as well as against radar and satellite observations. The WRF-AE model accurately captured the transportation of dust from the north Africa over the Mediterranean and to the Balkan region. On smaller scales, both models displaced the locations of clouds situated above west and central Serbia towards southeast and under-predicted the maximum values of composite radar reflectivity. Similar to satellite images, WRF-AE shows the mesoscale convective system as a merged cluster of cumulonimbus clouds. Both models over-predicted the precipitation amounts; WRF-AE over-predictions are particularly pronounced in the zones of light rain, while WRF-AI gave larger outliers. Unlike WRF-AI, the WRF-AE approach enables the modelling of time evolution and influx of aerosols into the cloud which could be of practical importance in weather forecasting and weather modification. Several likely causes for discrepancies between models and observations are discussed and prospects for further research in this field are outlined.

  19. Debris flow analysis with a one dimensional dynamic run-out model that incorporates entrained material

    Science.gov (United States)

    Luna, Byron Quan; Remaître, Alexandre; van Asch, Theo; Malet, Jean-Philippe; van Westen, Cees

    2010-05-01

    Estimating the magnitude and the intensity of rapid landslides like debris flows is fundamental for quantitatively evaluating the hazard at a specific location. Intensity varies through the travelled course of the flow and can be described by physical features such as deposited volume, velocities, height of the flow, impact forces and pressures. Dynamic run-out models are able to characterize the distribution of the material, its intensity and define the zone where the elements will experience an impact. These models can provide valuable inputs for vulnerability and risk calculations. However, most dynamic run-out models assume a constant volume during the motion of the flow, ignoring the important role of material entrained along its path. Consequently, they neglect the fact that the increase in volume enhances the mobility of the flow and can significantly influence the size of the potential impact area. An appropriate erosion mechanism needs to be established in the analyses of debris flows that will improve the results of dynamic modeling and consequently the quantitative evaluation of risk. The objective is to present and test a simple 1D debris flow model with a material entrainment concept based on limit equilibrium considerations and the generation of excess pore water pressure through undrained loading of the in situ bed material. The debris flow propagation model is based on a one-dimensional finite difference solution of a depth-averaged form of the Navier-Stokes equations of fluid motion. The flow is treated as a laminar one-phase material whose behavior is controlled by a visco-plastic Coulomb-Bingham rheology. The model parameters are evaluated and the model performance is tested on a debris flow event that occurred in 2003 in the Faucon torrent (Southern French Alps).

  20. Creating a process for incorporating epidemiological modelling into outbreak management decisions.

    Science.gov (United States)

    Akselrod, Hana; Mercon, Monica; Kirkeby Risoe, Petter; Schlegelmilch, Jeffrey; McGovern, Joanne; Bogucki, Sandy

    2012-01-01

    Modern computational models of infectious diseases greatly enhance our ability to understand new infectious threats and assess the effects of different interventions. The recently-released CDC Framework for Preventing Infectious Diseases calls for increased use of predictive modelling of epidemic emergence for public health preparedness. Currently, the utility of these technologies in preparedness and response to outbreaks is limited by gaps between modelling output and information requirements for incident management. The authors propose an operational structure that will facilitate integration of modelling capabilities into action planning for outbreak management, using the Incident Command System (ICS) and Synchronization Matrix framework. It is designed to be adaptable and scalable for use by state and local planners under the National Response Framework (NRF) and Emergency Support Function #8 (ESF-8). Specific epidemiological modelling requirements are described, and integrated with the core processes for public health emergency decision support. These methods can be used in checklist format to align prospective or real-time modelling output with anticipated decision points, and guide strategic situational assessments at the community level. It is anticipated that formalising these processes will facilitate translation of the CDC's policy guidance from theory to practice during public health emergencies involving infectious outbreaks.

  1. A new mathematical model of gastrointestinal transit incorporating age- and gender-dependent physiological parameters

    International Nuclear Information System (INIS)

    Stubbs, J.B.

    1992-01-01

    As part of the revision by the International Commission on Radiological Protection (ICRP) of its report on Reference Man, an extensive review of the literature regarding anatomy and morphology of the gastrointestinal (GI) tract has been completed. Data on age- and gender-dependent GI physiology and motility may be included in the proposed ICRP report. A new mathematical model describing the transit of substances through the GI tract as well as the absorption and secretion of material in the GI tract has been developed. This mathematical description of GI tract kinetics utilizes more physiologically accurate transit processes than the mathematically simple, but nonphysiological, GI tract model that was used in ICRP Report 30. The proposed model uses a combination of zero- and first-order kinetics to describe motility. Some of the physiological parameters that the new model accounts for include sex, age, pathophysiological condition and meal phase (solid versus liquid). A computer algorithm, written in BASIC, based on this new model has been derived and results are compared to those of the ICRP-30 model
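
    As a minimal illustration of mixing zero- and first-order kinetics in a GI transit model, the sketch below empties the stomach at a constant (zero-order) rate into a small-intestine compartment that clears by first-order kinetics; the rate constants are invented, and the actual proposed model distinguishes more segments and age- and gender-dependent parameters.

```python
import numpy as np

# Hypothetical rates: zero-order gastric emptying (fraction of dose per hour)
# and first-order small-intestine transit (1/h).
k0_stomach, k_si = 0.5, 0.7
dt, t_end = 0.01, 12.0                 # hours
t = np.arange(0.0, t_end, dt)

stomach = np.zeros_like(t); stomach[0] = 1.0   # unit dose swallowed at t = 0
small_int = np.zeros_like(t)

for i in range(1, len(t)):
    emptying = min(k0_stomach * dt, stomach[i - 1])      # zero-order, cannot go negative
    stomach[i] = stomach[i - 1] - emptying
    small_int[i] = small_int[i - 1] + emptying - k_si * small_int[i - 1] * dt

print(f"fraction remaining in stomach after 2 h: {stomach[t <= 2.0][-1]:.2f}")
```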

  2. Enhanced stability of car-following model upon incorporation of short-term driving memory

    Science.gov (United States)

    Liu, Da-Wei; Shi, Zhong-Ke; Ai, Wen-Huan

    2017-06-01

    Based on the full velocity difference model, a new car-following model is developed in this paper to investigate the effect of short-term driving memory on traffic flow. Short-term driving memory is introduced as an influence factor of the driver's anticipation behavior. The stability condition of the newly developed model is derived, and the modified Korteweg-de Vries (mKdV) equation is constructed to describe the traffic behavior near the critical point. Using numerical simulation, the evolution of a small perturbation is investigated first. The results show that the improvement of this new car-following model over the previous ones lies in the fact that the new model improves traffic flow stability. Starting and braking processes of vehicles at a signalized intersection are also investigated. The numerical simulations illustrate that the new model can successfully describe the driver's anticipation behavior, and that the efficiency and safety of the vehicles passing through the signalized intersection are improved by considering short-term driving memory.
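
    For context, the full velocity difference (FVD) model that the new model extends updates each follower's acceleration from an optimal-velocity term and a velocity-difference term. The sketch below simulates a small platoon with a standard FVD rule and a typical tanh-shaped optimal-velocity function; all parameter values are illustrative, and the paper's short-term memory term is not included.

```python
import numpy as np

def optimal_velocity(gap, v_max=33.0, safe_gap=25.0, width=10.0):
    """Typical tanh-shaped optimal velocity function (illustrative parameters)."""
    return 0.5 * v_max * (np.tanh((gap - safe_gap) / width) + np.tanh(safe_gap / width))

# Full velocity difference model: a_n = kappa*(V(gap) - v_n) + lam*(v_lead - v_n)
kappa, lam, dt = 0.4, 0.5, 0.1
n_cars, steps = 10, 2000
x = np.arange(n_cars)[::-1] * 30.0     # initial positions, 30 m headways; car 0 leads
v = np.full(n_cars, 15.0)              # initial speeds (m/s)

for _ in range(steps):
    gap = np.empty(n_cars); gap[0] = 1e9                 # leader sees open road
    gap[1:] = x[:-1] - x[1:]
    dv = np.empty(n_cars); dv[0] = 0.0
    dv[1:] = v[:-1] - v[1:]
    a = kappa * (optimal_velocity(gap) - v) + lam * dv
    v = np.maximum(v + a * dt, 0.0)
    x = x + v * dt

print("steady-state speeds (m/s):", np.round(v, 2))
```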

  3. Incorporating rainfall uncertainty in a SWAT model: the river Zenne basin (Belgium) case study

    Science.gov (United States)

    Tolessa Leta, Olkeba; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2013-04-01

    The European Union Water Framework Directive (EU-WFD) called on its member countries to achieve a good ecological status for all inland and coastal water bodies by 2015. According to recent studies, the river Zenne (Belgium) is far from this objective. Therefore, an interuniversity and multidisciplinary project "Towards a Good Ecological Status in the river Zenne (GESZ)" was launched to evaluate the effects of wastewater management plans on the river. In this project, different models have been developed and integrated using the Open Modelling Interface (OpenMI). The hydrologic, semi-distributed Soil and Water Assessment Tool (SWAT) is hereby used as one of the model components in the integrated modelling chain in order to model the upland catchment processes. The assessment of the uncertainty of SWAT is an essential aspect of the decision making process, in order to design robust management strategies that take the predicted uncertainties into account. Model uncertainty stems from the uncertainties in the model parameters, the input data (e.g., rainfall), the calibration data (e.g., stream flows) and the model structure itself. The objective of this paper is to assess the first three sources of uncertainty in a SWAT model of the river Zenne basin. For the assessment of rainfall measurement uncertainty, first, we identified independent rainfall periods, based on the daily precipitation and stream flow observations and using the Water Engineering Time Series PROcessing tool (WETSPRO). Secondly, we assigned a rainfall multiplier parameter for each of the independent rainfall periods, which serves as a multiplicative input error corruption. Finally, we treated these multipliers as latent parameters in the model optimization and uncertainty analysis (UA). For parameter uncertainty assessment, due to the high number of parameters of the SWAT model, first, we screened out its most sensitive parameters using the Latin Hypercube One-factor-At-a-Time (LH-OAT) technique
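
    A minimal sketch of the rainfall-multiplier idea, under the assumption that each day has already been assigned to an independent rainfall period: one multiplier per period scales the daily rainfall before it enters the model, and these multipliers are then treated as latent parameters during calibration and uncertainty analysis. The values below are invented.

```python
import numpy as np

def apply_storm_multipliers(rain_mm, period_id, multipliers):
    """Scale each day's rainfall by the multiplier of the storm period it belongs to."""
    return rain_mm * np.asarray(multipliers)[period_id]

# Hypothetical 10-day record split into three independent rainfall periods (0, 1, 2)
rain_mm     = np.array([0.0, 12.0, 8.0, 0.0, 0.0, 25.0, 5.0, 0.0, 3.0, 1.0])
period_id   = np.array([0,   0,    0,   1,   1,   1,    1,   2,   2,   2  ])
multipliers = [1.15, 0.80, 1.05]     # latent parameters sampled during calibration/UA

print(apply_storm_multipliers(rain_mm, period_id, multipliers))
```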

  4. A durability model incorporating safe life methodology and damage tolerance approach to assess first inspection and maintenance period for structures

    International Nuclear Information System (INIS)

    Xiong, J.J.; Shenoi, R.A.

    2009-01-01

    This paper outlines a new durability model to assess the first inspection and maintenance period for structures. Practical scatter factor formulae are presented to determine the safe fatigue crack initiation and propagation lives from the results of a single full-scale test of a complete structure. New theoretical solutions are proposed to determine the s_a-s_m-N surfaces of fatigue crack initiation and propagation. Prediction techniques are then developed to establish the relationship equation between safe fatigue crack initiation and propagation lives with a specific reliability level using a two-stage fatigue damage cumulative rule. A new durability model incorporating safe life and damage tolerance design approaches is derived to assess the first inspection and maintenance period. Finally, the proposed models are applied to assess the first inspection and maintenance period of a fastening structure at the root of a helicopter blade.

  5. A durability model incorporating safe life methodology and damage tolerance approach to assess first inspection and maintenance period for structures

    Energy Technology Data Exchange (ETDEWEB)

    Xiong, J.J. [Aircraft Department, Beihang University, Beijing 100083 (China); Shenoi, R.A. [School of Engineering Sciences, University of Southampton, Southampton SO17 1BJ (United Kingdom)], E-mail: r.a.shenoi@ship.soton.ac.uk

    2009-08-15

    This paper outlines a new durability model to assess the first inspection and maintenance period for structures. Practical scatter factor formulae are presented to determine the safe fatigue crack initiation and propagation lives from the results of a single full-scale test of a complete structure. New theoretical solutions are proposed to determine the s_a-s_m-N surfaces of fatigue crack initiation and propagation. Prediction techniques are then developed to establish the relationship equation between safe fatigue crack initiation and propagation lives with a specific reliability level using a two-stage fatigue damage cumulative rule. A new durability model incorporating safe life and damage tolerance design approaches is derived to assess the first inspection and maintenance period. Finally, the proposed models are applied to assess the first inspection and maintenance period of a fastening structure at the root of a helicopter blade.

  6. Computer calculation of neutron cross sections with Hauser-Feshbach code STAPRE incorporating the hybrid pre-compound emission model

    International Nuclear Information System (INIS)

    Ivascu, M.

    1983-10-01

    Computer codes incorporating advanced nuclear models (optical, statistical and pre-equilibrium decay nuclear reaction models) were used to calculate neutron cross sections needed for fusion reactor technology. The elastic and inelastic scattering, (n,2n), (n,p), (n,n'p), (n,d) and (n,γ) cross sections for the stable molybdenum isotopes Mo-92, 94, 95, 96, 97, 98 and 100, for incident neutron energies from about 100 keV (or the reaction threshold) to 20 MeV, were calculated using a consistent set of input parameters. The hydrogen production cross section, which determines the radiation damage in structural materials of fusion reactors, can be deduced simply from the presented results. More elaborate microscopic models of nuclear level density are required for high-accuracy calculations.

  7. A model for determination of human foetus irradiation during intrauterine development when the mother incorporates iodine 131

    International Nuclear Information System (INIS)

    Vasilev, V.; Doncheva, B.

    1989-01-01

    A model is presented for calculating the irradiation of the human foetus during weeks 8-15 of intrauterine development, when the mother chronically incorporates iodine-131. This period is critical for the nervous system of the foetus. Compared with some other authors' models, the proposed method eliminates some uncertainties and takes into account the changes in the activity of the mother's thyroid over time. The model is built on the basis of data on iodine-131 kinetics in pregnant women and experimental mice. A formula is proposed for calculating the total foetus irradiation, including: the internal γ and β irradiation; the external γ and β irradiation from the mother as a whole; and the external γ irradiation from the mother's thyroid.

  8. The Isinglass Auroral Sounding Rocket Campaign: data synthesis incorporating remote sensing, in situ observations, and modelling

    Science.gov (United States)

    Lynch, K. A.; Clayton, R.; Roberts, T. M.; Hampton, D. L.; Conde, M.; Zettergren, M. D.; Burleigh, M.; Samara, M.; Michell, R.; Grubbs, G. A., II; Lessard, M.; Hysell, D. L.; Varney, R. H.; Reimer, A.

    2017-12-01

    The NASA auroral sounding rocket mission Isinglass was launched from Poker Flat, Alaska, in winter 2017. This mission consists of two separate multi-payload sounding rockets flown over an array of ground-based observations, including radars and filtered cameras. The science goal is to collect two case studies, in two different auroral events, of the gradient scale sizes of auroral disturbances in the ionosphere. Data from the in situ payloads and the ground-based observations will be synthesized and fed into an ionospheric model, and the results will be studied to learn which scale sizes of ionospheric structuring have significance for magnetosphere-ionosphere auroral coupling. The in situ instrumentation includes thermal ion sensors (at 5 points on the second flight), thermal electron sensors (at 2 points), DC magnetic fields (2 points), DC electric fields (one point, plus the 4 low-resource thermal ion RPA observations of drift on the second flight), and an auroral precipitation sensor (one point). The ground-based array includes filtered auroral imagers, the PFISR and SuperDARN radars, a coherent scatter radar, and a Fabry-Perot interferometer array. The ionospheric model to be used is a 3-D electrostatic model including the effects of ionospheric chemistry. One observational and modelling goal for the mission is to move both observations and models of auroral arc systems into the third (along-arc) dimension. Modern assimilative tools combined with multipoint but low-resource observations allow a new view of the auroral ionosphere that should allow us to learn more about the auroral zone as a coupled system. Conjugate case studies such as the Isinglass rocket flights allow for a test of the models' interpretation by comparison with in situ data. We aim to develop and improve ionospheric models to the point where they can be used to interpret remote sensing data with confidence without the checkpoint of in situ comparison.

  9. Incorporating harvest rates into the sex-age-kill model for white-tailed deer

    Science.gov (United States)

    Norton, Andrew S.; Diefenbach, Duane R.; Rosenberry, Christopher S.; Wallingford, Bret D.

    2013-01-01

    Although monitoring population trends is an essential component of game species management, wildlife managers rarely have complete counts of abundance. Often, they rely on population models to monitor population trends. As imperfect representations of real-world populations, models must be rigorously evaluated to be applied appropriately. Previous research has evaluated population models for white-tailed deer (Odocoileus virginianus); however, the precision and reliability of these models have largely not been tested against empirical measures of variability and bias. We were able to statistically evaluate the Pennsylvania sex-age-kill (PASAK) population model using realistic error measured with data from 1,131 radiocollared white-tailed deer in Pennsylvania from 2002 to 2008. We used these data and harvest data (number killed, age-sex structure, etc.) to estimate precision of abundance estimates, identify the most efficient harvest data collection with respect to precision of parameter estimates, and evaluate PASAK model robustness to violation of assumptions. Median coefficient of variation (CV) estimates by Wildlife Management Unit, 13.2% in the most recent year, were slightly above benchmarks recommended for managing game species populations. Doubling reporting rates by hunters or doubling the number of deer checked by personnel in the field reduced median CVs to recommended levels. The PASAK model was robust to errors in estimates for adult male harvest rates but was sensitive to errors in subadult male harvest rates, especially in populations with lower harvest rates. In particular, an error in subadult (1.5-yr-old) male harvest rates resulted in the opposite error in subadult male, adult female, and juvenile population estimates. Also, evidence of a greater harvest probability for subadult female deer when compared with adult (≥2.5-yr-old) female deer resulted in a 9.5% underestimate of the population using the PASAK model. Because obtaining

  10. Incorporation of lysosomal sequestration in the mechanistic model for prediction of tissue distribution of basic drugs.

    Science.gov (United States)

    Assmus, Frauke; Houston, J Brian; Galetin, Aleksandra

    2017-11-15

    The prediction of tissue-to-plasma water partition coefficients (Kpu) from in vitro and in silico data using the tissue-composition based model (Rodgers & Rowland, J Pharm Sci. 2005, 94(6):1237-48) is well established. However, distribution of basic drugs, in particular into lysosome-rich lung tissue, tends to be under-predicted by this approach. The aim of this study was to develop an extended mechanistic model for the prediction of Kpu which accounts for lysosomal sequestration and the contribution of different cell types in the tissue of interest. The extended model is based on compound-specific physicochemical properties and tissue composition data to describe drug ionization, distribution into tissue water and drug binding to neutral lipids, neutral phospholipids and acidic phospholipids in tissues, including lysosomes. Physiological data on the types of cells contributing to lung, kidney and liver, their lysosomal content and lysosomal pH were collated from the literature. The predictive power of the extended mechanistic model was evaluated using a dataset of 28 basic drugs (pKa ≥ 7.8; 17 β-blockers, 11 structurally diverse drugs) for which experimentally determined Kpu data in rat tissue have been reported. Accounting for the lysosomal sequestration in the extended mechanistic model improved the accuracy of Kpu predictions in lung compared to the original Rodgers model (56% of drugs within 2-fold or 88% within 3-fold of observed values). Reduction in the extent of Kpu under-prediction was also evident in liver and kidney. However, consideration of lysosomal sequestration increased the occurrence of over-predictions, yielding overall comparable model performances for kidney and liver, with 68% and 54% of Kpu values within 2-fold error, respectively. High lysosomal concentration ratios relative to cytosol (>1000-fold) were predicted for the drugs investigated; the extent differed depending on the lysosomal pH and concentration of acidic phospholipids among
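
    The lysosomal term in such a model is essentially pH-driven ion trapping. The sketch below shows only that ingredient: the steady-state lysosome-to-cytosol concentration ratio for a monoprotic base when only the neutral species crosses the membrane, using Henderson-Hasselbalch ionization. It illustrates the principle rather than the full extended Rodgers-and-Rowland-type model evaluated in the record, and the pH values are typical literature figures.

        def lysosome_to_cytosol_ratio(pKa, pH_lys=5.0, pH_cyt=7.2):
            """Steady-state total-concentration ratio for a monoprotic base, assuming
            only the un-ionized species permeates the lysosomal membrane."""
            ionized_lys = 1.0 + 10.0 ** (pKa - pH_lys)
            ionized_cyt = 1.0 + 10.0 ** (pKa - pH_cyt)
            return ionized_lys / ionized_cyt

        # A strong base such as a typical beta-blocker (pKa ~ 9.5) is heavily trapped:
        print(lysosome_to_cytosol_ratio(pKa=9.5))   # roughly two orders of magnitude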

  11. A kinematic wave model in Lagrangian coordinates incorporating capacity drop: Application to homogeneous road stretches and discontinuities

    Science.gov (United States)

    Yuan, Kai; Knoop, Victor L.; Hoogendoorn, Serge P.

    2017-01-01

    On freeways, congestion always leads to capacity drop. This means the queue discharge rate is lower than the pre-queue capacity. Our recent research findings indicate that the queue discharge rate increases with the speed in congestion, that is, the capacity drop is strongly correlated with the congestion state. Incorporating this varying capacity drop into a kinematic wave model is essential for assessing the consequences of control strategies. However, to the best of the authors' knowledge, no such model exists. This paper fills the research gap by presenting a Lagrangian kinematic wave model. "Lagrangian" denotes that the new model is solved in Lagrangian coordinates. The new model captures the capacity drops accompanying both stop-and-go waves (on homogeneous freeway sections) and standing queues (at nodes) in a network, and can be applied to network operations. In this Lagrangian kinematic wave model, the queue discharge rate (or the capacity drop) is a function of vehicular speed in traffic jams. Four case studies on links as well as at lane-drop and on-ramp nodes show that the Lagrangian kinematic wave model reproduces capacity drops well, consistent with empirical observations.
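
    To illustrate what solving the kinematic wave model in Lagrangian coordinates looks like, the sketch below advances platoons of vehicles with the conservation law ds/dt = (v_predecessor - v)/dn, using a triangular fundamental diagram written in spacing-speed form. It is a generic Lagrangian LWR scheme with made-up parameters; the speed-dependent queue discharge rate that produces the capacity drop in the paper is only indicated by a comment.

        import numpy as np

        v_f, w, s_min = 30.0, 5.0, 7.5      # free speed (m/s), wave speed (m/s), jam spacing (m)
        dn = 1.0                            # platoon size (vehicles per Lagrangian cell)
        dt = 0.2                            # time step (s); must satisfy a CFL-like condition

        def V(s):
            """Spacing-speed form of a triangular fundamental diagram."""
            return np.clip(w * (s / s_min - 1.0), 0.0, v_f)

        n_veh, n_steps = 50, 600
        s = np.full(n_veh, 40.0)            # initial spacings (m/veh)
        x = -np.cumsum(s)                   # positions behind a leader at x = 0

        for t in range(n_steps):
            v = V(s)
            # leader decelerates briefly to trigger a stop-and-go wave
            v_lead = 5.0 if 100 < t < 200 else v_f
            v_pred = np.concatenate(([v_lead], v[:-1]))   # speed of each platoon's predecessor
            # Lagrangian conservation law: ds/dt = (v_predecessor - v) / dn
            s = s + dt * (v_pred - v) / dn
            x = x + dt * v
            # A capacity drop could be added here by lowering the congested branch of V(s)
            # as a function of the speed inside the jam, as proposed in the paper.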

  12. Ultrasonically assisted drilling: A finite-element model incorporating acoustic softening effects

    International Nuclear Information System (INIS)

    Phadnis, V A; Roy, A; Silberschmidt, V V

    2013-01-01

    Ultrasonically assisted drilling (UAD) is a novel machining technique suitable for drilling in hard-to-machine quasi-brittle materials such as carbon fibre reinforced polymer composites (CFRP). UAD has been shown to possess several advantages compared to conventional drilling (CD), including reduced thrust forces, diminished burr formation at drill exit and an overall improvement in roundness and surface finish of the drilled hole. Recently, our in-house experiments of UAD in CFRP composites demonstrated remarkable reductions in thrust-force and torque measurements (average force reductions in excess of 80%) when compared to CD with the same machining parameters. In this study, a 3D finite-element model of drilling in CFRP is developed. In order to model acoustic (ultrasonic) softening effects, a phenomenological model, which accounts for ultrasonically induced plastic strain, was implemented in ABAQUS/Explicit. The model also accounts for dynamic frictional effects, which also contribute to the overall improved machining characteristics in UAD. The model is validated with experimental findings, where an excellent correlation between the reduced thrust force and torque magnitude was achieved.

  13. High-resolution Continental Scale Land Surface Model incorporating Land-water Management in United States

    Science.gov (United States)

    Shin, S.; Pokhrel, Y. N.

    2016-12-01

    Land surface models have been used to assess water resources sustainability under a changing Earth environment and increasing human water needs. Overwhelming observational records indicate that human activities have ubiquitous and significant effects on the hydrologic cycle; however, these activities have been crudely represented in large-scale land surface models. In this study, we enhance an integrated continental-scale land hydrology model named Leaf-Hydro-Flood to better represent land-water management. The model is implemented at high resolution (5 km grid) over the continental US. Surface water and groundwater are withdrawn based on actual practices. Newly added irrigation, water diversion, and dam operation schemes allow better simulations of stream flows, evapotranspiration, and infiltration. Results for various hydrologic fluxes and stores from two sets of simulations (one with and the other without human activities) are compared over a range of river basin and aquifer scales. The improved simulations of land hydrology have the potential to provide a consistent modeling framework for human-water-climate interactions.

  14. New systematic methodology for incorporating dynamic heat transfer modelling in multi-phase biochemical reactors.

    Science.gov (United States)

    Fernández-Arévalo, T; Lizarralde, I; Grau, P; Ayesa, E

    2014-09-01

    This paper presents a new modelling methodology for dynamically predicting the heat produced or consumed in the transformations of any biological reactor using Hess's law. Starting from a complete description of model component stoichiometry and formation enthalpies, the proposed modelling methodology has successfully integrated the simultaneous calculation of both the conventional mass balances and the enthalpy change of reaction in an expandable multi-phase matrix structure, which facilitates a detailed prediction of the main heat fluxes in the biochemical reactors. The methodology has been implemented in a plant-wide modelling framework in order to facilitate the dynamic description of mass and heat throughout the plant. After validation with literature data, two case studies have been described as illustrative examples of the capability of the methodology. In the first one, a predenitrification-nitrification dynamic process has been analysed, with the aim of demonstrating the easy integration of the methodology in any system. In the second case study, the simulation of a thermal model for an ATAD has shown the potential of the proposed methodology for analysing the effect of ventilation and influent characterization. Copyright © 2014 Elsevier Ltd. All rights reserved.
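
    The core of the methodology, computing the enthalpy change of each transformation from component formation enthalpies and the stoichiometric matrix (Hess's law), can be sketched in a few lines. The components, stoichiometric coefficients and enthalpy values below are illustrative placeholders (not elementally balanced), not the plant-wide model's actual matrix.

        import numpy as np

        # Formation enthalpies of the model components (kJ/mol); illustrative values.
        components = ["substrate", "O2", "biomass", "CO2", "H2O"]
        dHf = np.array([-290.0, 0.0, -91.0, -393.5, -285.8])

        # Stoichiometric matrix: one row per transformation, one column per component
        # (negative = consumed, positive = produced); illustrative aerobic growth row.
        S = np.array([[-1.0, -0.6, 0.4, 0.6, 0.8]])

        # Hess's law: enthalpy change of each transformation (kJ per mol of reference substrate).
        dH_reaction = S @ dHf

        # Heat released per unit time follows from the transformation rates (mol/h):
        rates = np.array([2.5])
        heat_flux = -(dH_reaction * rates)      # kJ/h, positive = heat released to the reactor
        print(dH_reaction, heat_flux)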

  15. Incorporating microbial dormancy dynamics into soil decomposition models to improve quantification of soil carbon dynamics of northern temperate forests

    Energy Technology Data Exchange (ETDEWEB)

    He, Yujie [Purdue Univ., West Lafayette, IN (United States). Dept. of Earth, Atmospheric, and Planetary Sciences; Yang, Jinyan [Univ. of Georgia, Athens, GA (United States). Warnell School of Forestry and Natural Resources; Northeast Forestry Univ., Harbin (China). Center for Ecological Research; Zhuang, Qianlai [Purdue Univ., West Lafayette, IN (United States). Dept. of Earth, Atmospheric, and Planetary Sciences; Purdue Univ., West Lafayette, IN (United States). Dept. of Agronomy; Harden, Jennifer W. [U.S. Geological Survey, Menlo Park, CA (United States); McGuire, Anthony D. [Alaska Cooperative Fish and Wildlife Research Unit, U.S. Geological Survey, Univ. of Alaska, Fairbanks, AK (United States). U.S. Geological Survey, Alaska Cooperative Fish and Wildlife Research Unit; Liu, Yaling [Purdue Univ., West Lafayette, IN (United States). Dept. of Earth, Atmospheric, and Planetary Sciences; Wang, Gangsheng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Climate Change Science Inst. and Environmental Sciences Division; Gu, Lianhong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Environmental Sciences Division

    2015-11-20

    Soil carbon dynamics of terrestrial ecosystems play a significant role in the global carbon cycle. Microbial-based decomposition models have seen much growth recently for quantifying this role, yet dormancy as a common strategy used by microorganisms has not usually been represented and tested in these models against field observations. In this study we developed an explicit microbial-enzyme decomposition model and examined model performance with and without representation of microbial dormancy at six temperate forest sites of different forest types. We then extrapolated the model to global temperate forest ecosystems to investigate biogeochemical controls on soil heterotrophic respiration and microbial dormancy dynamics at different temporal-spatial scales. The dormancy model consistently produced a better match with field-observed heterotrophic soil CO2 efflux (RH) than the no-dormancy model. Our regional modeling results further indicated that models with dormancy were able to produce a more realistic magnitude of microbial biomass (<2% of soil organic carbon) and soil RH (7.5 ± 2.4 Pg C yr-1). Spatial correlation analysis showed that soil organic carbon content was the dominating factor (correlation coefficient = 0.4-0.6) in the simulated spatial pattern of soil RH with both models. In contrast to strong temporal and local controls of soil temperature and moisture on microbial dormancy, our modeling results showed that soil carbon-to-nitrogen ratio (C:N) was a major regulating factor at regional scales (correlation coefficient = -0.43 to -0.58), indicating scale-dependent biogeochemical controls on microbial dynamics. Our findings suggest that incorporating microbial dormancy could improve the realism of microbial-based decomposition models and enhance the integration of soil experiments and mechanistically based modeling.

  16. Incorporating microbial dormancy dynamics into soil decomposition models to improve quantification of soil carbon dynamics of northern temperate forests

    Science.gov (United States)

    He, Yujie; Yang, Jinyan; Zhuang, Qianlai; Harden, Jennifer W.; McGuire, A. David; Liu, Yaling; Wang, Gangsheng; Gu, Lianhong

    2015-01-01

    Soil carbon dynamics of terrestrial ecosystems play a significant role in the global carbon cycle. Microbial-based decomposition models have seen much growth recently for quantifying this role, yet dormancy as a common strategy used by microorganisms has not usually been represented and tested in these models against field observations. Here we developed an explicit microbial-enzyme decomposition model and examined model performance with and without representation of microbial dormancy at six temperate forest sites of different forest types. We then extrapolated the model to global temperate forest ecosystems to investigate biogeochemical controls on soil heterotrophic respiration and microbial dormancy dynamics at different temporal-spatial scales. The dormancy model consistently produced a better match with field-observed heterotrophic soil CO2 efflux (RH) than the no-dormancy model. Our regional modeling results further indicated that models with dormancy were able to produce a more realistic magnitude of microbial biomass (<2% of soil organic carbon) and soil RH (7.5 ± 2.4 Pg C yr-1). Spatial correlation analysis showed that soil organic carbon content was the dominating factor (correlation coefficient = 0.4–0.6) in the simulated spatial pattern of soil RH with both models. In contrast to strong temporal and local controls of soil temperature and moisture on microbial dormancy, our modeling results showed that soil carbon-to-nitrogen ratio (C:N) was a major regulating factor at regional scales (correlation coefficient = −0.43 to −0.58), indicating scale-dependent biogeochemical controls on microbial dynamics. Our findings suggest that incorporating microbial dormancy could improve the realism of microbial-based decomposition models and enhance the integration of soil experiments and mechanistically based modeling.
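
    A minimal flavour of the dormancy mechanism can be sketched as a two-pool microbial model in which biomass switches between active and dormant states depending on substrate availability, with only the active pool carrying significant maintenance costs. The equations and parameters below are a generic illustration, not the explicit microbial-enzyme model developed in the study.

        import numpy as np

        # Generic parameters (per day); illustrative only.
        Vmax, Km, Y = 2.0, 10.0, 0.4       # uptake kinetics and growth yield
        m_a, m_d = 0.05, 0.001             # maintenance of active vs dormant microbes
        k_ad, k_da = 0.3, 0.1              # dormancy and reactivation rate constants

        S, Ba, Bd = 50.0, 1.0, 1.0         # substrate, active and dormant biomass (gC/m2)
        dt, days = 0.01, 365
        RH_annual = 0.0

        for _ in range(int(days / dt)):
            uptake = Vmax * Ba * S / (Km + S)
            switch = k_ad * Ba * Km / (Km + S) - k_da * Bd * S / (Km + S)  # to dormancy when S is scarce
            dS = -uptake
            dBa = Y * uptake - m_a * Ba - switch
            dBd = switch - m_d * Bd
            RH = (1.0 - Y) * uptake + m_a * Ba + m_d * Bd                  # heterotrophic respiration
            S, Ba, Bd = S + dt * dS, Ba + dt * dBa, Bd + dt * dBd
            RH_annual += dt * RH

        print(f"annual RH ~ {RH_annual:.1f} gC/m2, active fraction ~ {Ba / (Ba + Bd):.2f}")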

  17. Periglacial processes incorporated into a long-term landscape evolution model

    DEFF Research Database (Denmark)

    Andersen, Jane Lund; Egholm, D.L.; Knudsen, Mads Faurschou

    Little is known about the long-term influence of periglacial processes on landscape evolution in cold areas, even though the efficiency of frost cracking in the breakdown of rocks has been documented by observations and experiments. Cold-room laboratory experiments show that a continuous water supply and sustained sub-zero temperatures are essential to develop fractures in porous rocks (e.g. Murton, 2006), but the cracking efficiency for harder rock types under natural conditions is less clear. However, based on experimental results for porous rocks, Hales and Roering (2007) proposed a model ... by their model and the elevation of scree deposits in the Southern Alps, New Zealand. This result suggests a link between frost-cracking efficiency and long-term landscape evolution and thus merits further investigation. Anderson et al. (2012) expanded this early model by including the effects of latent heat...

  18. Modeling & Informatics at Vertex Pharmaceuticals Incorporated: our philosophy for sustained impact.

    Science.gov (United States)

    McGaughey, Georgia; Patrick Walters, W

    2017-03-01

    Molecular modelers and informaticians have the unique opportunity to integrate cross-functional data using a myriad of tools, methods and visuals to generate information. Using their drug discovery expertise, they transform this information into knowledge that impacts drug discovery. These insights are often formulated locally and then applied more broadly, influencing the discovery of new medicines. This is particularly true in an organization where members are exposed to projects throughout the organization, as in the case of the global Modeling & Informatics group at Vertex Pharmaceuticals. From its inception, Vertex has been a leader in the development and use of computational methods for drug discovery. In this paper, we describe the Modeling & Informatics group at Vertex and the underlying philosophy which has driven this team to sustain impact on the discovery of first-in-class transformative medicines.

  19. ISG hybrid powertrain: a rule-based driver model incorporating look-ahead information

    Science.gov (United States)

    Shen, Shuiwen; Zhang, Junzhi; Chen, Xiaojiang; Zhong, Qing-Chang; Thornton, Roger

    2010-03-01

    According to European regulations, if the amount of regenerative braking is determined by the travel of the brake pedal, more stringent standards must be applied; otherwise it may adversely affect the existing vehicle safety system. The use of engine or vehicle speed to derive regenerative braking is one way to avoid the stricter design standards, but this introduces a discontinuity in powertrain torque when the driver releases the acceleration pedal or applies the brake pedal. This is shown to cause oscillations in the pedal input and powertrain torque when a conventional driver model is adopted. Look-ahead information, together with other predicted vehicle states, is adopted to control the vehicle speed, in particular during deceleration, and to improve the driver model so that oscillations can be avoided. The improved driver model makes analysis and validation of the control strategy for an integrated starter generator (ISG) hybrid powertrain possible.

  20. The design of a wind tunnel VSTOL fighter model incorporating turbine powered engine simulators

    Science.gov (United States)

    Bailey, R. O.; Maraz, M. R.; Hiley, P. E.

    1981-01-01

    A wind-tunnel model of a supersonic VSTOL fighter aircraft configuration has been developed for use in the evaluation of airframe-propulsion system aerodynamic interactions. The model may be employed with conventional test techniques, where configuration aerodynamics are measured in a flow-through mode and incremental nozzle-airframe interactions are measured in a jet-effects mode, and with the Compact Multimission Aircraft Propulsion Simulator which is capable of the simultaneous simulation of inlet and exhaust nozzle flow fields so as to allow the evaluation of the extent of inlet and nozzle flow field coupling. The basic configuration of the twin-engine model has a geometrically close-coupled canard and wing, and a moderately short nacelle with nonaxisymmetric vectorable exhaust nozzles near the wing trailing edge, and may be converted to a canardless configuration with an extremely short nacelle. Testing is planned to begin in the summer of 1982.

  1. Incorporating photon recycling into the analytical drift-diffusion model of high efficiency solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Lumb, Matthew P. [The George Washington University, 2121 I Street NW, Washington, DC 20037 (United States); Naval Research Laboratory, Washington, DC 20375 (United States); Steiner, Myles A.; Geisz, John F. [National Renewable Energy Laboratory, Golden, Colorado 80401 (United States); Walters, Robert J. [Naval Research Laboratory, Washington, DC 20375 (United States)

    2014-11-21

    The analytical drift-diffusion formalism is able to accurately simulate a wide range of solar cell architectures and was recently extended to include those with back surface reflectors. However, as solar cells approach the limits of material quality, photon recycling effects become increasingly important in predicting the behavior of these cells. In particular, the minority carrier diffusion length is significantly affected by photon recycling, with consequences for the solar cell performance. In this paper, we outline an approach to account for photon recycling in the analytical Hovel model and compare analytical model predictions to GaAs-based experimental devices operating close to the fundamental efficiency limit.
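
    A common way to fold photon recycling into a drift-diffusion description is to reduce the effective radiative recombination rate by the probability that an emitted photon is reabsorbed, which in turn lengthens the minority carrier diffusion length. The sketch below shows that bookkeeping with made-up numbers; it is a generic illustration of the effect, not the specific extension of the Hovel model presented in the record.

        import numpy as np

        def effective_diffusion_length(D, tau_rad, tau_nr, gamma_pr):
            """Minority-carrier diffusion length when a fraction gamma_pr of radiative
            emission is reabsorbed (photon recycling), lengthening the radiative lifetime."""
            tau_rad_eff = tau_rad / (1.0 - gamma_pr)
            tau_eff = 1.0 / (1.0 / tau_rad_eff + 1.0 / tau_nr)
            return np.sqrt(D * tau_eff)

        D = 80.0e-4                        # diffusion coefficient, m^2/s (80 cm^2/s, GaAs-like)
        tau_rad, tau_nr = 20e-9, 200e-9    # radiative and non-radiative lifetimes (s), illustrative

        for gamma in (0.0, 0.5, 0.9):
            print(gamma, effective_diffusion_length(D, tau_rad, tau_nr, gamma))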

  2. Experimental validation of a Monte Carlo proton therapy nozzle model incorporating magnetically steered protons

    International Nuclear Information System (INIS)

    Peterson, S W; Polf, J; Archambault, L; Beddar, S; Bues, M; Ciangaru, G; Smith, A

    2009-01-01

    The purpose of this study is to validate the accuracy of a Monte Carlo calculation model of a proton magnetic beam scanning delivery nozzle developed using the Geant4 toolkit. The Monte Carlo model was used to produce depth dose and lateral profiles, which were compared to data measured in the clinical scanning treatment nozzle at several energies. Comparisons were also made between measured and simulated off-axis profiles to test the accuracy of the model's magnetic steering. The 80% distal dose fall-off values for the measured and simulated depth dose profiles agreed to within 1 mm for the beam energies evaluated. Agreement of the full width at half maximum values for the measured and simulated lateral fluence profiles was within 1.3 mm for all energies. The measured and simulated spot positions for the magnetically steered beams agreed to within 0.7 mm of each other. Based on these results, we found that the Geant4 Monte Carlo model of the beam scanning nozzle has the ability to accurately predict depth dose profiles, lateral profiles perpendicular to the beam axis and magnetic steering of a proton beam during beam scanning proton therapy.

  3. Incorporating implementation overheads in the analysis for the flexible spin-lock model

    NARCIS (Netherlands)

    Balasubramanian, S.M.N.; Afshar, S.; Gai, P.; Behnam, M.; Bril, R.J.

    2017-01-01

    The flexible spin-lock model (FSLM) unifies suspension-based and spin-based resource sharing protocols for partitioned fixed-priority preemptive scheduling based real-time multiprocessor platforms. Recent work has been done in defining the protocol for FSLM and providing a schedulability analysis

  4. Incorporation of leaf nitrogen observations for biochemical and environmental modeling of photosynthesis and evapotranspiration

    DEFF Research Database (Denmark)

    Bøgh, E.; Gjettermann, Birgitte; Abrahamsen, Per

    2007-01-01

    ... While most canopy photosynthesis models assume an exponential vertical profile of leaf N contents in the canopy, the field measurements showed that well-fertilized fields may have a uniform or exponential profile, and senescent canopies have reduced levels of N contents in upper leaves. The sensitivity...

  5. Improved Path Loss Simulation Incorporating Three-Dimensional Terrain Model Using Parallel Coprocessors

    Directory of Open Access Journals (Sweden)

    Zhang Bin Loo

    2017-01-01

    Current network simulators abstract out wireless propagation models due to the high computation requirements of realistic modeling. As such, there is still a large gap between the results obtained from simulators and real-world scenarios. In this paper, we present a framework for improved path loss simulation built on top of an existing network simulation software, NS-3. Different from the conventional disk model, the proposed simulation also considers the diffraction loss computed using Epstein and Peterson's model through the use of actual terrain elevation data to give an accurate estimate of the path loss between a transmitter and a receiver. The drawback of high computation requirements is relaxed by offloading the computationally intensive components onto an inexpensive off-the-shelf parallel coprocessor, an NVIDIA GPU. Experiments are performed using actual terrain elevation data provided by the United States Geological Survey. Compared to the conventional CPU architecture, the experimental results show that a speedup of 20x to 42x is achieved by exploiting the parallel processing of the GPU to compute the path loss between two nodes using terrain elevation data. The results show that the path loss between two nodes is greatly affected by the terrain profile between them. Besides this, the results also suggest that the common strategy of placing the transmitter at the highest position may not always work.
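
    The diffraction component described above can be sketched compactly: each terrain obstacle contributes a single knife-edge loss evaluated with its two neighbouring points (terminal or obstacle) as effective endpoints, and the losses are summed, which is the Epstein-Peterson construction. The single-edge approximation below follows the common ITU-R P.526-style formula; the terrain points are made up, and the GPU offloading of the real framework is omitted.

        import numpy as np

        def knife_edge_loss_db(nu):
            """Approximate single knife-edge diffraction loss (ITU-R P.526-style)."""
            if nu <= -0.78:
                return 0.0
            return 6.9 + 20.0 * np.log10(np.sqrt((nu - 0.1) ** 2 + 1.0) + nu - 0.1)

        def epstein_peterson_loss_db(d, h, freq_hz):
            """d: along-path distances (m) of Tx, obstacles..., Rx; h: their heights (m)."""
            lam = 3e8 / freq_hz
            total = 0.0
            for i in range(1, len(d) - 1):            # each obstacle, neighbours as endpoints
                d1, d2 = d[i] - d[i - 1], d[i + 1] - d[i]
                # obstacle height above the line joining the two neighbouring points
                h_line = h[i - 1] + (h[i + 1] - h[i - 1]) * d1 / (d1 + d2)
                h_eff = h[i] - h_line
                nu = h_eff * np.sqrt(2.0 * (d1 + d2) / (lam * d1 * d2))
                total += knife_edge_loss_db(nu)
            return total

        # two terrain obstacles between a transmitter and receiver, 900 MHz
        print(epstein_peterson_loss_db(d=[0, 2000, 5000, 8000], h=[30, 80, 60, 25], freq_hz=900e6))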

  6. A quantitative model of the cardiac ventricular cell incorporating the transverse-axial tubular system

    Czech Academy of Sciences Publication Activity Database

    Pásek, Michal; Christé, G.; Šimurda, J.

    2003-01-01

    Vol. 22, No. 3 (2003), pp. 355-368 ISSN 0231-5882 R&D Projects: GA ČR GP204/02/D129 Institutional research plan: CEZ:AV0Z2076919 Keywords: cardiac cell * tubular system * quantitative modelling Subject RIV: BO - Biophysics Impact factor: 0.794, year: 2003

  7. Development of a prototype mesoscale computer model incorporating treatment of topography

    International Nuclear Information System (INIS)

    Apsimon, H.M.; Goddard, A.J.H.; Kitson, K.; Fawcett, M.

    1985-01-01

    More sophisticated models are now available to simulate dispersal of accidental radioactive releases to the atmosphere; these use mass-consistent windfields and attempt allowance for site-specific topographical features. Our aim has been to examine these techniques critically, develop where possible, and assess limitations and accuracy. The resulting windfield model WAFT uses efficient numerical techniques with improved orographic resolution and treatment of meteorological conditions. Time integrated air concentrations, dry and wet deposition are derived from TOMCATS, which applies Monte-Carlo techniques to an assembly of pseudo-particles representing the release, with specific attention to the role of large eddies and evolving inhomogeneous rainfields. These models have been assessed by application to hypothetical releases in complex terrain using data which would have been available in the event of an accident, and undertaking sensitivity studies. It is concluded that there is considerable uncertainty in results produced by such models; although they may be useful in post-facto analysis, such limitations cast doubt on their advantages relative to simpler techniques, with more modest requirements, during an actual emergency. (author)

  8. Teaching for Art Criticism: Incorporating Feldman's Critical Analysis Learning Model in Students' Studio Practice

    Science.gov (United States)

    Subramaniam, Maithreyi; Hanafi, Jaffri; Putih, Abu Talib

    2016-01-01

    This study adopted 30 first year graphic design students' artwork, with critical analysis using Feldman's model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean score and frequencies to determine students' performances in their critical ability.…

  9. Incorporating stakeholder perspectives into model-based scenarios : Exploring the futures of the Dutch gas sector

    NARCIS (Netherlands)

    Eker, S.; van Daalen, C.; Thissen, W.A.H.

    2017-01-01

    Several model-based, analytical approaches have been developed recently to deal with the deep uncertainty present in situations for which futures studies are conducted. These approaches focus on covering a wide variety of scenarios and searching for robust strategies. However, they generally do

  10. A Probabilistic Model of Visual Working Memory: Incorporating Higher Order Regularities into Working Memory Capacity Estimates

    Science.gov (United States)

    Brady, Timothy F.; Tenenbaum, Joshua B.

    2013-01-01

    When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…

  11. Process model for ammonia volatilization from anaerobic swine lagoons incorporating varying wind speeds and biogas bubbling

    Science.gov (United States)

    Ammonia volatilization from treatment lagoons varies widely with the total ammonia concentration, pH, temperature, suspended solids, atmospheric ammonia concentration above the water surface, and wind speed. Ammonia emissions were estimated with a process-based mechanistic model integrating ammonia ...

  12. Development of a mission-based funding model for undergraduate medical education: incorporation of quality.

    Science.gov (United States)

    Stagnaro-Green, Alex; Roe, David; Soto-Greene, Maria; Joffe, Russell

    2008-01-01

    Increasing financial pressures, along with a desire to realign resources with institutional priorities, have resulted in the adoption of mission-based funding (MBF) at many medical schools. The lack of inclusion of quality and the time and expense of developing and implementing mission-based funding are major deficiencies in the models reported to date. In academic year 2002-2003, New Jersey Medical School developed a model that included both quantity and quality in the education metric and that was departmentally based. Eighty percent of the undergraduate medical education allocation was based on the quantity of undergraduate medical education taught by the department ($7.35 million), and 20% ($1.89 million) was allocated based on the quality of the education delivered. Quality determinations were made by the educational leadership based on student evaluations and departmental compliance with educational administrative requirements. Evolution of the model has included the development of a faculty oversight committee and the integration of peer evaluation in the determination of educational quality. Six departments had a documented increase in quality over time, and one department had a transient decrease in quality. The MBF model has been well accepted by chairs, educational leaders, and faculty and has been instrumental in enhancing the stature of education at our institution.

  13. D1.4 -- Short Report on Models That Incorporate Non-stationary Time Variant Effects

    DEFF Research Database (Denmark)

    Lostanlen, Yves; Pedersen, Troels; Steinboeck, Gerhard

    ...-to-indoor environments are presented. Furthermore, the impact of human activity on the time variation of the radio channel is investigated and first simulation results are presented. Movement models, which include realistic interaction between nodes, are part of current research activities....

  14. Incorporating Solid Modeling and Team-Based Design into Freshman Engineering Graphics.

    Science.gov (United States)

    Buchal, Ralph O.

    2001-01-01

    Describes the integration of these topics through a major team-based design and computer aided design (CAD) modeling project in freshman engineering graphics at the University of Western Ontario. Involves n=250 students working in teams of four to design and document an original Lego toy. Includes 12 references. (Author/YDS)

  15. Incorporating unreliability of transit in transport demand models: theoretical and practical approach

    NARCIS (Netherlands)

    van Oort, N.; Brands, Ties; de Romph, E.; Aceves Flores, J.

    2014-01-01

    Nowadays, transport demand models do not explicitly evaluate the impacts of service reliability of transit. Service reliability of transit systems is adversely experienced by users, as it causes additional travel time and unsecure arrival times. Because of this, travelers are likely to perceive a

  16. Incorporating Lightning Flash Data into the WRF-CMAQ Modeling System: Algorithms and Evaluations

    Science.gov (United States)

    We describe the use of lightning flash data from the National Lightning Detection Network (NLDN) to constrain and improve the performance of coupled meteorology-chemistry models. We recently implemented a scheme in which lightning data is used to control the triggering of conve...

  17. Incorporating Logistics in Freight Transport Demand Models: State-of-the-Art and Research Opportunities

    NARCIS (Netherlands)

    Tavasszy, L.A.; Ruijgrok, K.; Davydenko, I.

    2012-01-01

    Freight transport demand is a demand derived from all the activities needed to move goods between locations of production to locations of consumption, including trade, logistics and transportation. A good representation of logistics in freight transport demand models allows us to predict the effects

  18. Incorporating prior information into differential network analysis using non-paranormal graphical models.

    Science.gov (United States)

    Zhang, Xiao-Fei; Ou-Yang, Le; Yan, Hong

    2017-08-15

    Understanding how gene regulatory networks change under different cellular states is important for revealing insights into network dynamics. Gaussian graphical models, which assume that the data follow a joint normal distribution, have been used recently to infer differential networks. However, the distributions of the omics data are non-normal in general. Furthermore, although much biological knowledge (or prior information) has been accumulated, most existing methods ignore the valuable prior information. Therefore, new statistical methods are needed to relax the normality assumption and make full use of prior information. We propose a new differential network analysis method to address the above challenges. Instead of using Gaussian graphical models, we employ a non-paranormal graphical model that can relax the normality assumption. We develop a principled model to take into account the following prior information: (i) a differential edge less likely exists between two genes that do not participate together in the same pathway; (ii) changes in the networks are driven by certain regulator genes that are perturbed across different cellular states and (iii) the differential networks estimated from multi-view gene expression data likely share common structures. Simulation studies demonstrate that our method outperforms other graphical model-based algorithms. We apply our method to identify the differential networks between platinum-sensitive and platinum-resistant ovarian tumors, and the differential networks between the proneural and mesenchymal subtypes of glioblastoma. Hub nodes in the estimated differential networks rediscover known cancer-related regulator genes and contain interesting predictions. Availability: the source code is at https://github.com/Zhangxf-ccnu/pDNA. Contact: szuouyl@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  19. Decision model incorporating utility theory and measurement of social values applied to nuclear waste management

    International Nuclear Information System (INIS)

    Litchfield, J.W.; Hansen, J.V.; Beck, L.C.

    1975-07-01

    A generalized computer-based decision analysis model was developed and tested. Several alternative concepts for ultimate disposal have already been developed; however, significant research is still required before any of these can be implemented. Making a choice based on technical estimates of the costs, short-term safety, long-term safety, and accident detection and recovery requires estimating the relative importance of each of these factors or attributes. These relative-importance estimates primarily involve social values and therefore vary from one individual to the next. The approach used was to sample various public groups to determine the relative importance of each of the factors to the public. These estimates of importance weights were combined in a decision analysis model with estimates, furnished by technical experts, of the degree to which each alternative concept achieves each of the criteria. This model then integrates the two separate and unique sources of information and provides the decision maker with information on the preferences and concerns of the public as well as the technical areas within each concept which need further research. The model can rank the alternatives using sampled public opinion and techno-economic data. This model provides a decision maker with a structured approach to subdividing complex alternatives into a set of more easily considered attributes, measuring the technical performance of each alternative relative to each attribute, estimating relevant social values, and assimilating quantitative information in a rational manner to estimate total value for each alternative. Because of the explicit nature of this decision analysis, the decision maker can select a specific alternative supported by clear documentation and justification for his assumptions and estimates. (U.S.)
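
    At its core this kind of decision analysis is a weighted multi-attribute value calculation: publicly elicited importance weights are combined with expert performance scores for each alternative. The sketch below shows that aggregation with invented weights, scores and alternative names; the actual attributes, elicitation procedure and scaling used in the report are not reproduced.

        import numpy as np

        attributes = ["cost", "short-term safety", "long-term safety", "detection & recovery"]

        # Relative importance weights elicited from public groups (sum to 1); invented numbers.
        weights = np.array([0.20, 0.30, 0.35, 0.15])

        # Expert ratings of how well each disposal alternative performs on each
        # attribute, scaled 0 (worst) to 1 (best); invented numbers.
        alternatives = ["geologic disposal", "seabed disposal", "transmutation"]
        scores = np.array([
            [0.6, 0.8, 0.9, 0.7],
            [0.7, 0.6, 0.5, 0.4],
            [0.3, 0.7, 0.8, 0.6],
        ])

        total_value = scores @ weights
        for name, v in sorted(zip(alternatives, total_value), key=lambda t: -t[1]):
            print(f"{name}: {v:.2f}")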

  20. Incorporating Ecosystem Processes Controlling Carbon Balance Into Models of Coupled Human-Natural Systems

    Science.gov (United States)

    Currie, W.; Brown, D. G.; Brunner, A.; Fouladbash, L.; Hadzick, Z.; Hutchins, M.; Kiger, S. E.; Makino, Y.; Nassauer, J. I.; Robinson, D. T.; Riolo, R. L.; Sun, S.

    2012-12-01

    A key element in the study of coupled human-natural systems is the interaction of human populations with vegetation and soils. In human-dominated landscapes, vegetation production and change results from a combination of ecological processes and human decision-making and behavior. Vegetation is often dramatically altered, whether to produce food for humans and livestock, to harvest fiber for construction and other materials, to harvest fuel wood or feedstock for biofuels, or simply for cultural preferences as in the case of residential lawns with sparse trees in the exurban landscape. This alteration of vegetation and its management has a substantial impact on the landscape carbon balance. Models can be used to simulate scenarios in human-natural systems and to examine the integration of processes that determine future trajectories of carbon balance. However, most models of human-natural systems include little integration of the human alteration of vegetation with the ecosystem processes that regulate carbon balance. Here we illustrate a few case studies of pilot-study models from our research across various types of landscapes that strive for this integration. We focus in greater detail on a fully developed research model linked to a field study of vegetation and soils in the exurban residential landscape of Southeastern Michigan, USA. The field study characterized vegetation and soil carbon storage in 5 types of ecological zones. Field-observed carbon storage in the vegetation in these zones ranged widely, from 150 g C/m2 in turfgrass zones, to 6,000 g C/m2 in zones defined as turfgrass with sparse woody vegetation, to 16,000 g C/m2 in a zone defined as dense trees and shrubs. Use of these zones facilitated the scaling of carbon pools to the landscape, where the areal mixtures of zone types had a significant impact on landscape C storage. Use of these zones also facilitated the use of the ecosystem process model Biome-BGC to simulate C trajectories and also

  1. Incorporating imperfect detection into joint models of communities: A response to Warton et al.

    Science.gov (United States)

    Beissinger, Steven R.; Iknayan, Kelly J.; Guillera-Arroita, Gurutzeta; Zipkin, Elise; Dorazio, Robert; Royle, Andy; Kery, Marc

    2016-01-01

    Warton et al. [1] advance community ecology by describing a statistical framework that can jointly model abundances (or distributions) across many taxa to quantify how community properties respond to environmental variables. This framework specifies the effects of both measured and unmeasured (latent) variables on the abundance (or occurrence) of each species. Latent variables are random effects that capture the effects of both missing environmental predictors and correlations in parameter values among different species. As presented in Warton et al., however, the joint modeling framework fails to account for the common problem of detection or measurement errors that always accompany field sampling of abundance or occupancy, and are well known to obscure species- and community-level inferences.
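
    The point about detection error can be made concrete with the simplest single-species occupancy model, in which the probability of a detection history at a site mixes occupancy probability psi with per-visit detection probability p. The sketch below writes that likelihood and fits it by maximum likelihood to simulated data; it illustrates the general occupancy-modelling idea referred to in this response, not the joint multi-species framework itself.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        n_sites, n_visits = 200, 4
        psi_true, p_true = 0.6, 0.4

        z = rng.binomial(1, psi_true, n_sites)                          # true occupancy states
        y = rng.binomial(1, p_true * z[:, None], (n_sites, n_visits))   # detection histories

        def negloglik(params):
            psi, p = 1 / (1 + np.exp(-params))                  # logit scale for stability
            d = y.sum(axis=1)
            lik_occupied = psi * p**d * (1 - p) ** (n_visits - d)
            lik_never_detected = (1 - psi) * (d == 0)           # only possible if nothing was seen
            return -np.sum(np.log(lik_occupied + lik_never_detected))

        fit = minimize(negloglik, x0=np.zeros(2))
        psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
        print(f"psi ~ {psi_hat:.2f}, p ~ {p_hat:.2f}")          # close to 0.6 and 0.4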

  2. Incorporating Floating Surface Objects into a Fully Dispersive Surface Wave Model

    Science.gov (United States)

    2016-04-19

    [Abstract not recovered; only author, affiliation and reference fragments were extracted for this record.] Authors include S. Bateman and Joseph Calantoni (NRL Code 7320, 1009 Balch Blvd, Stennis Space Center, MS 39529, USA) and James T. Kirby (Center for Applied Coastal...). Cited works include J. Waterway Port Coast. Ocean Eng. 119, 618-638; Orzech, M., Shi, F., Calantoni, J., Bateman, S., Veeramony, J., 2014, "Small-scale..."; and Shi, F., Bateman, S., Calantoni, J., 2016, "Modeling small-scale physics of waves and ice in the MIZ," AGU 2016 Ocean Sciences Meeting, Session 9483.

  3. Teaching For Art Criticism: Incorporating Feldman’s Critical Analysis Learning Model In Students’ Studio Practice

    OpenAIRE

    Maithreyi Subramaniam; Jaffri Hanafi; Abu Talib Putih

    2016-01-01

    This study adopted 30 first year graphic design students’ artwork, with critical analysis using Feldman’s model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean score and frequencies to determine students’ performances in their critical ability. Pearson Correlation Coefficient was used to find out the correlation between students’ studio practice and art critical ability scores. The...

  4. Incorporating social anxiety into a model of college student problematic drinking

    OpenAIRE

    Ham, Lindsay S.; Hope, Debra A.

    2005-01-01

    College problem drinking and social anxiety are significant public health concerns with highly negative consequences. College students are faced with a variety of novel social situations and situations encouraging alcohol consumption. The current study involved developing a path model of college problem drinking, including social anxiety, in 316 college students referred to an alcohol intervention due to a campus alcohol violation. Contrary to hypotheses, social anxiety generally had an inver...

  5. Incorporating driver distraction in car-following models: Applying the TCI to the IDM

    OpenAIRE

    Hoogendoorn, R.G.; van Arem, B.; Hoogendoorn, S.P.

    2013-01-01

    ITS can play a significant role in improving traffic flow and traffic safety and in reducing greenhouse gas emissions. However, the implementation of Advanced Driver Assistance Systems may lead to adaptation effects in longitudinal driving behavior following driver distraction. It was, however, not yet clear how to model these adaptation effects in driving behavior mathematically, nor on which theoretical framework this should be grounded. To this end, in this contribution we introduce a theoretical fr...

  6. Reliability constrained decision model for energy service provider incorporating demand response programs

    International Nuclear Information System (INIS)

    Mahboubi-Moghaddam, Esmaeil; Nayeripour, Majid; Aghaei, Jamshid

    2016-01-01

    Highlights:
    • The operation of Energy Service Providers (ESPs) in electricity markets is modeled.
    • Demand response as a cost-effective solution is used for the energy service provider.
    • The market price uncertainty is modeled using the robust optimization technique.
    • The reliability of the distribution network is embedded into the framework.
    • The simulation results demonstrate the benefits of the robust framework for ESPs.
    Abstract: Demand response (DR) programs are becoming a critical concept for the efficiency of current electric power industries. Therefore, their various capabilities and barriers have to be investigated. In this paper, an effective decision model is presented for the strategic behavior of energy service providers (ESPs) to demonstrate how to participate in the day-ahead electricity market and how to allocate demand in the smart distribution network. Since market price affects DR and vice versa, a new two-step sequential framework is proposed, in which the unit commitment (UC) problem is solved to forecast the expected locational marginal prices (LMPs), and subsequently the DR program is applied to optimize the total cost of providing energy for the distribution network customers. This total cost includes the cost of purchased power from the market and distributed generation (DG) units, the incentive cost paid to the customers, and the compensation cost of power interruptions. To obtain the compensation cost, the reliability evaluation of the distribution network is embedded into the framework using some innovative constraints. Furthermore, to consider the unexpected behaviors of the other market participants, the LMP prices are modeled as uncertainty parameters using the robust optimization technique, which is more practical compared to the conventional stochastic approach. The simulation results demonstrate the significant benefits of the presented framework for the strategic performance of ESPs.

  7. A Compliant Bistable Mechanism Design Incorporating Elastica Buckling Beam Theory and Pseudo-Rigid-Body Model

    DEFF Research Database (Denmark)

    Sönmez, Ümit; Tutum, Cem Celal

    2008-01-01

    In this work, a new compliant bistable mechanism design is introduced. The combined use of the pseudo-rigid-body model (PRBM) and the Elastica buckling theory is presented for the first time to analyze the new design. This mechanism consists of large-deflecting straight beams, buckling beams... and the buckling Elastica solution for an original compliant mechanism kinematic analysis. New compliant mechanism designs are presented to highlight where such combined kinematic analysis is required.

  8. Terrestrial Feedbacks Incorporated in Global Vegetation Models through Observed Trait-Environment Responses

    Science.gov (United States)

    Bodegom, P. V.

    2015-12-01

    Most global vegetation models used to evaluate climate change impacts rely on plant functional types to describe vegetation responses to environmental stresses. In a traditional set-up in which vegetation characteristics are considered constant within a vegetation type, the possibility to implement and infer feedback mechanisms is limited, as feedback mechanisms will likely involve a changing expression of community trait values. Based on community assembly concepts, we implemented functional trait-environment relationships into a global dynamic vegetation model to quantitatively assess this feature. For the current climate, a different global vegetation distribution was calculated with and without the inclusion of trait variation, emphasizing the importance of feedbacks, in interaction with competitive processes, for the prevailing global patterns. These trait-environment responses do not, however, necessarily imply adaptive responses of vegetation to changing conditions and may locally lead to a faster turnover in vegetation upon climate change. Indeed, when running climate projections, simulations with trait variation did not yield a more stable or resilient vegetation than those without. Through the different feedback expressions, global and regional carbon and water fluxes were, however, strongly altered. At a global scale, model projections suggest increased productivity and hence an increased carbon sink in the decades to come when trait variation is included. However, by the end of the century, a reduced carbon sink is projected. This effect is due to a downregulation of photosynthesis rates, particularly in the tropical regions, even when accounting for CO2-fertilization effects. Altogether, the various global model simulations suggest the critical importance of including vegetation functional responses to changing environmental conditions to grasp terrestrial feedback mechanisms at global scales in the light of climate change.

  9. Power Supply Interruption Costs: Models and Methods Incorporating Time Dependent Patterns

    International Nuclear Information System (INIS)

    Kjoelle, G.H.

    1996-12-01

    This doctoral thesis develops models and methods for the estimation of annual interruption costs for delivery points, emphasizing the handling of time-dependent patterns and uncertainties in the variables determining the annual costs. It presents an analytical method for calculating annual expected interruption costs for delivery points in radial systems, based on a radial reliability model with time-dependent variables, and a similar method for meshed systems, based on a list of outage events, assuming that these events are found in advance from load flow and contingency analyses. A Monte Carlo simulation model is given which handles both time variations and stochastic variations in the input variables and is based on the same list of outage events. This general procedure for radial and meshed systems provides expectation values and probability distributions for interruption costs at delivery points. There is also a procedure for handling uncertainties in input variables by a fuzzy description, giving annual interruption costs as a fuzzy membership function. The methods are developed for practical applications in radial and meshed systems, based on available data from failure statistics, load registrations and customer surveys. Traditional reliability indices such as annual interruption time and power and energy not supplied are calculated as by-products. The methods are presented as algorithms and/or procedures which are available as prototypes. 97 refs., 114 figs., 62 tabs
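
    A stripped-down version of such a Monte Carlo procedure is sketched below: for each simulated year, outage events at a delivery point are drawn from a failure-rate and repair-time model, combined with a time-varying load and a duration-dependent cost function, and the annual cost distribution is accumulated. All distributions, the load curve and the cost function are invented placeholders rather than the thesis's data.

        import numpy as np

        rng = np.random.default_rng(42)

        failure_rate = 1.2                 # expected interruptions per year at the delivery point
        mean_repair_h = 2.5                # mean outage duration (hours)

        def cost_per_kwh(duration_h):
            """Customer cost per kWh not supplied, growing with outage duration (invented)."""
            return 8.0 + 4.0 * duration_h

        hours = np.arange(8760)
        load_kw = 500.0 + 300.0 * np.sin(2 * np.pi * hours / 24)     # simple daily load pattern

        n_years = 10_000
        annual_cost = np.zeros(n_years)
        for yr in range(n_years):
            n_outages = rng.poisson(failure_rate)
            for _ in range(n_outages):
                start = rng.integers(0, 8760)                        # time-dependent load at failure
                duration = rng.exponential(mean_repair_h)
                energy_not_supplied = load_kw[start] * duration      # kWh, crude constant-load estimate
                annual_cost[yr] += energy_not_supplied * cost_per_kwh(duration)

        print(f"expected annual cost ~ {annual_cost.mean():.0f}, "
              f"95th percentile ~ {np.percentile(annual_cost, 95):.0f}")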

  10. Incorporating Neighborhood Choice in a Model of Neighborhood Effects on Income.

    Science.gov (United States)

    van Ham, Maarten; Boschman, Sanne; Vogel, Matt

    2018-05-09

    Studies of neighborhood effects often attempt to identify causal effects of neighborhood characteristics on individual outcomes, such as income, education, employment, and health. However, selection looms large in this line of research, and it has been argued that estimates of neighborhood effects are biased because people nonrandomly select into neighborhoods based on their preferences, income, and the availability of alternative housing. We propose a two-step framework to disentangle selection processes in the relationship between neighborhood deprivation and earnings. We model neighborhood selection using a conditional logit model, from which we derive correction terms. Driven by the recognition that most households prefer certain types of neighborhoods rather than specific areas, we employ a principal components analysis to reduce these terms to eight correction components. We use these to adjust parameter estimates from a model of subsequent neighborhood effects on individual income for the unequal probability that a household chooses to live in a particular type of neighborhood. We apply this technique to administrative data from the Netherlands. After we adjust for the differential sorting of households into certain types of neighborhoods, the effect of neighborhood income on individual income diminishes but remains significant. These results further emphasize that researchers need to be attuned to the role of selection bias when assessing the role of neighborhood effects on individual outcomes. Perhaps more importantly, the persistent effect of neighborhood deprivation on subsequent earnings suggests that neighborhood effects reflect more than the shared characteristics of neighborhood residents: place of residence partially determines economic well-being.
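
    The two-step logic can be sketched on synthetic data as below. The choice probabilities, the log-probability correction terms and the toy income equation are stand-ins chosen for illustration; the paper derives its correction terms from an estimated conditional logit rather than from known utilities.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n, k = 5000, 12                                   # households, neighbourhood types

# --- Step 1: choice model (stand-in for the estimated conditional logit) ----
x_hh = rng.normal(size=(n, 2))                    # household covariates
beta = rng.normal(size=(2, k))
utility = x_hh @ beta + rng.gumbel(size=(n, k))
choice = utility.argmax(axis=1)                   # observed neighbourhood type

# Predicted choice probabilities from a softmax of the systematic utility;
# the correction terms here are simply the predicted log-probabilities.
v = x_hh @ beta
p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
corrections = np.log(p)                           # n x k correction terms

# --- Reduce the k correction terms to a few components ----------------------
comps = PCA(n_components=8).fit_transform(corrections)

# --- Step 2: income model with selection-correction components --------------
deprivation = rng.normal(size=n) + 0.3 * (choice / k)    # toy deprivation score
income = 30 - 2.0 * deprivation + comps @ rng.normal(scale=0.5, size=8) \
         + rng.normal(scale=5, size=n)

X = np.column_stack([np.ones(n), deprivation, comps])
coef, *_ = np.linalg.lstsq(X, income, rcond=None)
print("adjusted deprivation effect:", round(coef[1], 2))
```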

  11. Incorporating institutions and collective action into a sociohydrological model of flood resilience

    Science.gov (United States)

    Yu, David J.; Sangwan, Nikhil; Sung, Kyungmin; Chen, Xi; Merwade, Venkatesh

    2017-02-01

    Stylized sociohydrological models have mainly used social memory aspects, such as community awareness or sensitivity, to connect hydrologic change and social response. However, social memory alone does not satisfactorily capture the details of how human behavior is translated into collective action for water resources governance, nor is it the only social mechanism by which the two-way feedbacks of sociohydrology can be operationalized. This study contributes toward bridging this gap by developing a sociohydrological model of flood resilience that includes two additional components: (1) institutions for collective action, and (2) connections to an external economic system. Motivated by the case of community-managed flood protection systems (polders) in coastal Bangladesh, we use the model to understand critical general features that affect the long-term resilience of human-flood systems. Our findings suggest that occasional adversity can enhance long-term resilience. Allowing some hydrological variability to enter the polder can increase its adaptive capacity for resilience through the preservation of a social norm of collective action. Further, there are potential trade-offs associated with optimization of flood resistance through structural measures. By reducing sensitivity to floods, the system may become more fragile under the double impact of floods and economic change.

  12. Incorporation of β-glucans in meat emulsions through an optimal mixture modeling systems.

    Science.gov (United States)

    Vasquez Mejia, Sandra M; de Francisco, Alicia; Manique Barreto, Pedro L; Damian, César; Zibetti, Andre Wüst; Mahecha, Hector Suárez; Bohrer, Benjamin M

    2018-05-22

    The effects of β-glucans (βG) in beef emulsions with carrageenan and starch were evaluated using an optimal mixture modeling system. The best mathematical models to describe the cooking loss, color, and textural profile analysis (TPA) were selected and optimized. Cubic models described the cooking loss, color, and TPA parameters best, with the exception of springiness. Emulsions with greater levels of βG and starch had less cooking loss (54 and <62) and greater hardness, cohesiveness and springiness values. Subsequently, during the optimization phase, the use of carrageenan was eliminated. The optimized emulsion contained 3.13 ± 0.11% βG, which could cover the recommended daily intake of βG. However, the hardness of the optimized emulsion was greater (60,224 ± 1025 N) than expected. The optimized emulsion had a homogeneous structure and normal thermal behavior by DSC, and allowed for the manufacture of products with high amounts of βG and desired functional attributes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Absorbed dose evaluation based on a computational voxel model incorporating distinct cerebral structures

    Energy Technology Data Exchange (ETDEWEB)

    Brandao, Samia de Freitas; Trindade, Bruno; Campos, Tarcisio P.R. [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil)]. E-mail: samiabrandao@gmail.com; bmtrindade@yahoo.com; campos@nuclear.ufmg.br

    2007-07-01

    Brain tumors are quite difficult to treat due to the collateral radiation damage produced in patients. Despite improvements in the therapeutic protocols for this kind of tumor, involving surgery and radiotherapy, the failure rate is still extremely high. This occurs because tumors often cannot be totally removed by surgery, since complete resection may produce some type of deficit in cerebral function. Radiotherapy is applied after surgery, and both are palliative treatments. During radiotherapy the brain does not absorb the radiation dose in a homogeneous way, because of the varying density and chemical composition of the tissues involved. To better evaluate the harmful effects caused by radiotherapy, an elaborate cerebral voxel model was developed for use in computational simulation of irradiation protocols for brain tumors. This paper presents some structures and functions of the central nervous system and a detailed cerebral voxel model, created in the SISCODES program, considering meninges, cortex, gray matter, white matter, corpus callosum, limbic system, ventricles, hypophysis, cerebellum, brain stem and spinal cord. The irradiation protocol simulation was run in the MCNP5 code. The model was irradiated with a photon beam whose spectrum simulates a 6 MV linear accelerator. The dosimetric results were exported to SISCODES, which generated the isodose curves for the protocol. The percentage isodose curves in the brain are presented in this paper. (author)

  14. Power Supply Interruption Costs: Models and Methods Incorporating Time Dependent Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Kjoelle, G.H.

    1996-12-01

    This doctoral thesis develops models and methods for estimating annual interruption costs for delivery points, emphasizing the handling of time dependent patterns and uncertainties in the variables determining the annual costs. It presents an analytical method for calculating annual expected interruption costs for delivery points in radial systems, based on a radial reliability model with time dependent variables, and a similar method for meshed systems, based on a list of outage events that are assumed to be found in advance from load flow and contingency analyses. A Monte Carlo simulation model is given which handles both time variations and stochastic variations in the input variables and is based on the same list of outage events. This general procedure for radial and meshed systems provides expectation values and probability distributions for interruption costs at delivery points. There is also a procedure for handling uncertainties in input variables by a fuzzy description, giving annual interruption costs as a fuzzy membership function. The methods are developed for practical applications in radial and meshed systems, based on available data from failure statistics, load registrations and customer surveys. Traditional reliability indices, such as annual interruption time and power and energy not supplied, are calculated as by-products. The methods are presented as algorithms and/or procedures which are available as prototypes. 97 refs., 114 figs., 62 tabs.

  15. Biotransformation model of neutral and weakly polar organic compounds in fish incorporating internal partitioning.

    Science.gov (United States)

    Kuo, Dave T F; Di Toro, Dominic M

    2013-08-01

    A model for whole-body in vivo biotransformation of neutral and weakly polar organic chemicals in fish is presented. It considers internal chemical partitioning and uses Abraham solvation parameters as reactivity descriptors. It assumes that only chemicals freely dissolved in the body fluid may bind with enzymes and subsequently undergo biotransformation reactions. Consequently, the whole-body biotransformation rate of a chemical is retarded by the extent of its distribution in different biological compartments. Using a randomly generated training set (n = 64), the biotransformation model is found to be: log(HL·φfish) = 2.2 (±0.3)B - 2.1 (±0.2)V - 0.6 (±0.3) (root mean square error of prediction [RMSE] = 0.71), where HL is the whole-body biotransformation half-life in days, φfish is the freely dissolved fraction in body fluid, and B and V are the chemical's H-bond acceptance capacity and molecular volume. Abraham-type linear free energy equations were also developed for the lipid-water (Klipidw) and protein-water (Kprotw) partition coefficients needed for the computation of φfish from independent determinations. These were found to be 1) log Klipidw = 0.77E - 1.10S - 0.47A - 3.52B + 3.37V + 0.84 (in Lwat/kglipid; n = 248, RMSE = 0.57) and 2) log Kprotw = 0.74E - 0.37S - 0.13A - 1.37B + 1.06V - 0.88 (in Lwat/kgprot; n = 69, RMSE = 0.38), where E, S, and A quantify dispersive/polarization, dipolar, and H-bond-donating interactions, respectively. The biotransformation model performs well in the validation of HL (n = 424, RMSE = 0.71). The predicted rate constants do not exceed the transport limit due to circulatory flow. Furthermore, the model adequately captures variation in biotransformation rate between chemicals with varying log octanol-water partition coefficient, B, and V, and exhibits a high degree of independence from the choice of training chemicals.
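
    The reported regression equations can be turned into a small calculator, as sketched below. The equations for log Klipidw, log Kprotw and log(HL·φfish) are taken from the abstract; the partitioning formula used to obtain φfish and the lipid and protein fractions are assumptions added for illustration only.

```python
def log_klipidw(E, S, A, B, V):
    """Lipid-water partition coefficient (log10, Lwat/kglipid), equation from the abstract."""
    return 0.77 * E - 1.10 * S - 0.47 * A - 3.52 * B + 3.37 * V + 0.84

def log_kprotw(E, S, A, B, V):
    """Protein-water partition coefficient (log10, Lwat/kgprot), equation from the abstract."""
    return 0.74 * E - 0.37 * S - 0.13 * A - 1.37 * B + 1.06 * V - 0.88

def phi_fish(E, S, A, B, V, f_lipid=0.05, f_prot=0.18):
    """Freely dissolved fraction in body fluid.

    The partitioning formula and the tissue fractions are assumptions for
    illustration; the paper computes phi_fish from independent determinations.
    """
    k_lip = 10 ** log_klipidw(E, S, A, B, V)
    k_prot = 10 ** log_kprotw(E, S, A, B, V)
    return 1.0 / (1.0 + f_lipid * k_lip + f_prot * k_prot)

def biotransformation_half_life_days(E, S, A, B, V):
    """Whole-body half-life HL (days) from log(HL * phi_fish) = 2.2B - 2.1V - 0.6."""
    log_hl_phi = 2.2 * B - 2.1 * V - 0.6
    return 10 ** log_hl_phi / phi_fish(E, S, A, B, V)

# Example with made-up Abraham descriptors for a small neutral chemical:
print(round(biotransformation_half_life_days(E=0.8, S=0.9, A=0.0, B=0.4, V=1.2), 2))
```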

  16. Incorporating Water Boiling in the Numerical Modelling of Thermal Remediation by Electrical Resistance Heating

    Science.gov (United States)

    Molnar, I. L.; Krol, M.; Mumford, K. G.

    2017-12-01

    Developing numerical models for subsurface thermal remediation techniques - such as Electrical Resistive Heating (ERH) - that include multiphase processes such as in-situ water boiling, gas production and recovery has remained a significant challenge. These subsurface gas generation and recovery processes are driven by physical phenomena such as discrete and unstable gas (bubble) flow as well as water-gas phase mass transfer rates during bubble flow. Traditional approaches to multiphase flow modeling in soil remain unable to accurately describe these phenomena. However, it has been demonstrated that Macroscopic Invasion Percolation (MIP) can successfully simulate discrete and unstable gas transport1. This has led to the development of a coupled Electro Thermal-MIP Model2 (ET-MIP) capable of simulating multiple key processes in the thermal remediation and gas recovery process, including: electrical heating of soil and groundwater, water flow, geological heterogeneity, heating-induced buoyant flow, water boiling, gas bubble generation and mobilization, contaminant mass transport and removal, and additional mechanisms such as bubble collapse in cooler regions. This study presents the first rigorous validation of a coupled ET-MIP model against two-dimensional water boiling and water/NAPL co-boiling experiments3. Once validated, the model was used to explore the impact of water-boiling and co-boiling events, and the subsequent gas generation and mobilization, on ERH's ability to 1) generate, expand and mobilize gas at boiling and NAPL co-boiling temperatures, and 2) efficiently strip contaminants from soil during both boiling and co-boiling. In addition, the energy losses arising from steam generation during subsurface water boiling were quantified with respect to their impact on the efficacy of thermal remediation. While this study specifically targets ERH, the study's focus on examining the fundamental mechanisms driving thermal remediation (e.g., water boiling) renders

  17. Incorporating single-side sparing in models for predicting parotid dose sparing in head and neck IMRT

    International Nuclear Information System (INIS)

    Yuan, Lulin; Wu, Q. Jackie; Yin, Fang-Fang; Yoo, David; Jiang, Yuliang; Ge, Yaorong

    2014-01-01

    Purpose: Sparing of a single-side parotid gland is a common practice in head-and-neck (HN) intensity modulated radiation therapy (IMRT) planning. It is a special case of the dose-sparing tradeoff between different organs at risk. The authors describe an improved mathematical model for predicting achievable dose sparing in parotid glands in HN IMRT planning that incorporates single-side sparing considerations based on patient anatomy and learning from prior plan data. Methods: Among 68 HN cases analyzed retrospectively, 35 cases had physician-prescribed single-side parotid sparing preferences. The single-side sparing model was trained with the cases that had single-side sparing preferences, while the standard model was trained with the remainder of the cases. A receiver operating characteristics (ROC) analysis was performed to determine the best criterion that separates the two case groups, using the physician's single-side sparing prescription as ground truth. The final predictive model (combined model) takes into account the single-side sparing by switching between the standard and single-side sparing models according to the single-side sparing criterion. The models were tested with 20 additional cases. The significance of the improvement in prediction accuracy by the combined model over the standard model was evaluated using the Wilcoxon rank-sum test. Results: Using the ROC analysis, the best single-side sparing criterion is (1) the predicted median dose of one parotid is higher than 24 Gy; and (2) that of the other is higher than 7 Gy. This criterion gives a true positive rate of 0.82 and a false positive rate of 0.19. For the bilateral sparing cases, the combined and the standard models performed equally well, with the median of the prediction errors for parotid median dose being 0.34 Gy for both models (p = 0.81). For the single-side sparing cases, the standard model overestimates the median dose by 7.8 Gy on average, while the predictions by the combined
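
    A toy version of selecting a two-threshold single-side-sparing criterion is sketched below: synthetic predicted parotid doses and physician labels are generated, and a grid search scores each threshold pair by true and false positive rate. The data, thresholds and Youden-style score are illustrative and do not reproduce the authors' ROC analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 68
# Synthetic predicted median doses (Gy) for the two parotids and a synthetic
# physician single-side-sparing label (illustrative only).
doses = np.sort(rng.uniform(0, 40, size=(n, 2)), axis=1)
lower, higher = doses[:, 0], doses[:, 1]
single_side = (higher > 25) & (lower > 6) | (rng.random(n) < 0.1)

best = None
for t_high in range(10, 35):
    for t_low in range(1, 15):
        pred = (higher > t_high) & (lower > t_low)
        tpr = (pred & single_side).sum() / max(single_side.sum(), 1)
        fpr = (pred & ~single_side).sum() / max((~single_side).sum(), 1)
        score = tpr - fpr                      # Youden-style criterion
        if best is None or score > best[0]:
            best = (score, t_high, t_low, tpr, fpr)

print("thresholds (Gy):", best[1], best[2], "TPR %.2f FPR %.2f" % best[3:5])
```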

  18. A Refined Model for the Structure of Acireductone Dioxygenase from Klebsiella ATCC 8724 Incorporating Residual Dipolar Couplings

    Energy Technology Data Exchange (ETDEWEB)

    Pochapsky, Thomas C., E-mail: pochapsk@brandeis.edu; Pochapsky, Susan S.; Ju Tingting [Brandeis University, Department of Chemistry (United States); Hoefler, Chris [Brandeis University, Department of Biochemistry (United States); Liang Jue [Brandeis University, Department of Chemistry (United States)

    2006-02-15

    Acireductone dioxygenase (ARD) from Klebsiella ATCC 8724 is a metalloenzyme that is capable of catalyzing different reactions with the same substrates (acireductone and O2) depending upon the metal bound in the active site. A model for the solution structure of the paramagnetic Ni2+-containing ARD has been refined using residual dipolar couplings (RDCs) measured in two media. Additional dihedral restraints based on chemical shift (TALOS) were included in the refinement, and backbone structure in the vicinity of the active site was modeled from a crystallographic structure of the mouse homolog of ARD. The incorporation of residual dipolar couplings into the structural refinement alters the relative orientations of several structural features significantly, and improves local secondary structure determination. Comparisons between the solution structures obtained with and without RDCs are made, and structural similarities and differences between mouse and bacterial enzymes are described. Finally, the biological significance of these differences is considered.

  19. Incorporating the user perspective into a proposed model for assessing success of SHS implementations

    Directory of Open Access Journals (Sweden)

    Hans Holtorf

    2015-10-01

    Full Text Available Modern energy can contribute to development in multiple ways, while approximately 20% of the world's population does not yet have access to electricity. Solar Home Systems (SHSs) consist of a PV module, a charge controller and a battery, and supply on the order of 100 Wh/d in Sunbelt countries. The question addressed in this paper is how SHS users approach the success of their systems and how these users' views can be integrated into an existing model of success. Information on the users' approach to their SHSs was obtained by participatory observation, interviews with users, and self-observation undertaken by the lead author while residing under SHS electricity supply conditions. It was found that the success of SHSs from the users' point of view is related to the ability of these systems to reduce the burdens of supplying energy services to homesteads. SHSs can alleviate some energy supply burdens, and they can improve living conditions by enabling communication on multiple levels and by addressing convenience and safety concerns. However, SHSs do not contribute to the energy services which are indispensable for survival, nor to the thermal energy services required and desired in dwellings of Sunbelt countries. The elements of three of the four components of our previously proposed model of success have been verified and found to be appropriate, namely the users' self-set goals, their importance, and SHSs' success factors. The locally appropriate, and scientifically satisfactory, measurement of the level of achievement of self-set goals, the fourth component of our model of success, remains an interesting area for future research.

  20. The Eatwell Guide: Modelling the Health Implications of Incorporating New Sugar and Fibre Guidelines.

    Directory of Open Access Journals (Sweden)

    Linda J Cobiac

    Full Text Available To model the population health impacts of dietary changes associated with the redevelopment of the UK food-based dietary guidelines (the 'Eatwell Guide'). Using multi-state lifetable methods, we modelled the impact of dietary changes on cardiovascular disease, diabetes and cancers over the lifetime of the current UK population. From this model, we determined the change in life expectancy and the disability-adjusted life years (DALYs) that could be averted. Changing the average diet to that recommended in the new Eatwell Guide, without increasing total energy intake, could increase average life expectancy by 5.4 months (95% uncertainty interval: 4.7 to 6.2) for men and 4.0 months (3.4 to 4.6) for women, and avert 17.9 million (17.6 to 18.2) DALYs over the lifetime of the current population. A large proportion of the health benefits are from prevention of type 2 diabetes, with 440,000 (400,000 to 480,000) new cases prevented in men and 340,000 (310,000 to 370,000) new cases prevented in women over the next ten years. Prevention of cardiovascular diseases and colorectal cancer is also large. However, if the diet recommended in the new Eatwell Guide is achieved with an accompanying increase in energy intake (and thus an increase in body mass index), around half of the potential improvements in population health will not be realised. The dietary changes required to meet the recommendations in the Eatwell Guide, which include eating more fruits and vegetables and less red and processed meat and dairy products, are large. However, the potential population health benefits are substantial.

  1. A porcine model of bladder outlet obstruction incorporating radio-telemetered cystometry.

    Science.gov (United States)

    Shaw, Matthew B; Herndon, Claude D; Cain, Mark P; Rink, Richard C; Kaefer, Martin

    2007-07-01

    To present a novel porcine model of bladder outlet obstruction (BOO) with a standardized bladder outlet resistance and real-time ambulatory radio-telemetered cystometry, as BOO is a common condition with many causes in both adults and children, with significant morbidity and occasional mortality, but attempts to model this condition in animal models have faced the fundamental problem of standardising the degree of outlet resistance. BOO was created in nine castrated male pigs by dividing the mid-urethra; outflow was allowed through an implanted bladder drainage catheter containing a resistance valve, allowing urine to flow across the valve only when a set pressure differential was generated across the valve. An implantable radio-telemetered pressure sensor monitored the pressure within the bladder and abdominal cavity, and relayed this information to a remote computer. Four control pigs had an occluded bladder drainage catheter and pressure sensor placed, but were allowed to void normally through the native urethra. Intra-vesical pressure was monitored by telemetry, while the resistance valve setting was increased weekly, beginning with 2 cmH2O and ultimately reaching 10 cmH2O. The pigs were assessed using conventional cystometry under anaesthesia before death, and samples were conserved in formalin for haematoxylin and eosin staining. The pigs had radio-telemetered cystometry for a median of 26 days. All telemetry implants functioned well for the duration of the experiment, but one pig developed a urethral fistula and was excluded from the study. With BOO, the bladder mass index (bladder mass/body mass x 10 000) increased from 9.7 to 20 (P = 0.004), with a significant degree of hypertrophy of the detrusor smooth muscle bundles. Obstructed bladders were significantly less compliant than control bladders (8.3 vs 22.1 mL/cmH2O, P = 0.03). Telemetric cystometry showed no statistically significant difference in mean bladder pressure between obstructed and control pigs

  2. Procurement-distribution model for perishable items with quantity discounts incorporating freight policies under fuzzy environment

    Directory of Open Access Journals (Sweden)

    Makkar Sandhya

    2013-01-01

    Full Text Available A significant issue in supply chain management is how to integrate its different entities. Managing a supply chain is a difficult task because of complex integrations, especially when the products are perishable in nature. Little attention has been paid to ordering specific perishable products jointly in an uncertain environment with multiple sources and multiple destinations. In this article, we propose a supply chain coordination model based on a quantity and freight discount policy for perishable products under uncertain cost and demand information. A case is provided to validate the procedure.

  3. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    Science.gov (United States)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  4. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    International Nuclear Information System (INIS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-01-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10 deg. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  5. Incorporating prior belief in the general path model: A comparison of information sources

    International Nuclear Information System (INIS)

    Coble, Jamie; Hines, Wesley

    2014-01-01

    The general path model (GPM) is one approach for performing degradation-based, or Type III, prognostics. The GPM fits a parametric function to the collected observations of a prognostic parameter and extrapolates the fit to a failure threshold. This approach has been successfully applied to a variety of systems when a sufficient number of prognostic parameter observations are available. However, the parametric fit can suffer significantly when few data are available or the data are very noisy. In these instances, it is beneficial to include additional information to influence the fit to conform to a prior belief about the evolution of system degradation. Bayesian statistical approaches have been proposed to include prior information in the form of distributions of expected model parameters. This requires a number of run-to-failure cases with tracked prognostic parameters; these data may not be readily available for many systems. Reliability information and stressor-based (Type I and Type II, respectively) prognostic estimates can provide the necessary prior belief for the GPM. This article presents the Bayesian updating framework to include prior information in the GPM and compares the efficacy of including different information sources on two data sets.
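
    A minimal sketch of a prior-informed GPM fit is given below: a quadratic degradation path is fitted by a Gaussian-prior (MAP) regression and extrapolated to a failure threshold to yield a remaining-useful-life estimate. The quadratic form, the prior values, the noise level and the threshold are assumptions for illustration, not the article's case-study settings.

```python
import numpy as np

def gpm_rul(t_obs, y_obs, prior_mean, prior_cov, noise_var, threshold, t_max=500.0):
    """MAP fit of a quadratic degradation path y = a + b*t + c*t^2 with a Gaussian
    prior on (a, b, c), then extrapolation to the failure threshold."""
    X = np.column_stack([np.ones_like(t_obs), t_obs, t_obs ** 2])
    prec = X.T @ X / noise_var + np.linalg.inv(prior_cov)
    rhs = X.T @ y_obs / noise_var + np.linalg.inv(prior_cov) @ prior_mean
    theta = np.linalg.solve(prec, rhs)               # posterior (MAP) parameters

    t_grid = np.linspace(t_obs[-1], t_max, 5000)
    path = theta[0] + theta[1] * t_grid + theta[2] * t_grid ** 2
    crossed = np.nonzero(path >= threshold)[0]
    t_fail = t_grid[crossed[0]] if crossed.size else np.inf
    return theta, t_fail - t_obs[-1]                 # parameters, remaining useful life

# Few, noisy observations of a prognostic parameter (synthetic):
rng = np.random.default_rng(3)
t = np.array([0.0, 10.0, 20.0, 30.0])
y = 0.05 + 0.004 * t + 1e-4 * t ** 2 + rng.normal(scale=0.02, size=t.size)

theta, rul = gpm_rul(t, y,
                     prior_mean=np.array([0.05, 0.004, 1e-4]),   # e.g. from Type I/II estimates
                     prior_cov=np.diag([0.01, 1e-5, 1e-8]),
                     noise_var=0.02 ** 2,
                     threshold=0.6)
print("estimated RUL:", round(rul, 1))
```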

  6. Study of an intraurban travel demand model incorporating commuter preference variables

    Science.gov (United States)

    Holligan, P. E.; Coote, M. A.; Rushmer, C. R.; Fanning, M. L.

    1971-01-01

    The model is based on the substantial travel data base for the nine-county San Francisco Bay Area provided by the Metropolitan Transportation Commission. The model is of the abstract type and makes use of commuter attitudes towards modes, together with simple demographic characteristics of zones in a region, to predict interzonal travel by mode for the region. A characterization of the STOL/VTOL mode was extrapolated by means of a subjective comparison of its expected characteristics with those of the modes characterized by the survey. Predictions of STOL demand were made for the Bay Area, and an aircraft network was developed to serve this demand. When this aircraft system is compared to the base-case system, the demand for STOL service has increased fivefold, and the resulting economics show considerable benefit from the increased scale of operations. In the previous study, all systems required subsidy in varying amounts. The new system shows a substantial profit at an average fare of $3.55 per trip.

  7. An expanded Notch-Delta model exhibiting long-range patterning and incorporating MicroRNA regulation.

    Directory of Open Access Journals (Sweden)

    Jerry S Chen

    2014-06-01

    Full Text Available Notch-Delta signaling is a fundamental cell-cell communication mechanism that governs the differentiation of many cell types. Most existing mathematical models of Notch-Delta signaling are based on a feedback loop between Notch and Delta leading to lateral inhibition of neighboring cells. These models result in a checkerboard spatial pattern whereby adjacent cells express opposing levels of Notch and Delta, leading to alternate cell fates. However, a growing body of biological evidence suggests that Notch-Delta signaling produces other patterns that are not checkerboard, and therefore a new model is needed. Here, we present an expanded Notch-Delta model that builds upon previous models, adding a local Notch activity gradient, which affects long-range patterning, and the activity of a regulatory microRNA. This model is motivated by our experiments in the ascidian Ciona intestinalis showing that the peripheral sensory neurons, whose specification is in part regulated by the coordinate activity of Notch-Delta signaling and the microRNA miR-124, exhibit a sparse spatial pattern whereby consecutive neurons may be spaced over a dozen cells apart. We perform rigorous stability and bifurcation analyses, and demonstrate that our model is able to accurately explain and reproduce the neuronal pattern in Ciona. Using Monte Carlo simulations of our model along with miR-124 transgene over-expression assays, we demonstrate that the activity of miR-124 can be incorporated into the Notch decay rate parameter of our model. Finally, we motivate the general applicability of our model to Notch-Delta signaling in other animals by providing evidence that microRNAs regulate Notch-Delta signaling in analogous cell types in other organisms, and by discussing evidence in other organisms of sparse spatial patterns in tissues where Notch-Delta signaling is active.
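
    For readers unfamiliar with the underlying feedback, the sketch below integrates a generic two-cell, Collier-style lateral-inhibition system in which a parameter loosely standing in for miR-124 activity scales the Notch decay rate. The equations and parameter values are illustrative assumptions and are not the expanded model described in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lateral_inhibition(t, y, mir=1.0):
    """Two-cell, Collier-style Notch-Delta lateral inhibition.

    y = [N1, D1, N2, D2]. The parameter 'mir' scales the Notch decay rate as a
    crude stand-in for miR-124 activity (an assumption for illustration)."""
    n1, d1, n2, d2 = y
    f = lambda d: d ** 2 / (0.01 + d ** 2)      # Notch activation by the neighbour's Delta
    g = lambda n: 1.0 / (1.0 + 100.0 * n ** 2)  # Delta repression by the cell's own Notch
    return [f(d2) - mir * n1,
            g(n1) - d1,
            f(d1) - mir * n2,
            g(n2) - d2]

y0 = [0.34, 0.09, 0.33, 0.08]                   # near-homogeneous start, small asymmetry
sol = solve_ivp(lateral_inhibition, (0.0, 200.0), y0, args=(1.2,), rtol=1e-8)
print("steady state [N1, D1, N2, D2]:", np.round(sol.y[:, -1], 3))
```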

  8. The effect of intra-abdominal hypertension incorporating severe acute pancreatitis in a porcine model.

    Directory of Open Access Journals (Sweden)

    Lu Ke

    Full Text Available INTRODUCTION: Abdominal compartment syndrome (ACS and intra abdominal hypertension(IAH are common clinical findings in patients with severe acute pancreatitis(SAP. It is thought that an increased intra abdominal pressure(IAP is associated with poor prognosis in SAP patients. But the detailed effect of IAH/ACS on different organ system is not clear. The aim of this study was to assess the effect of SAP combined with IAH on hemodynamics, systemic oxygenation, and organ damage in a 12 h lasting porcine model. MEASUREMENTS AND METHODS: Following baseline registrations, a total of 30 animals were divided into 5 groups (6 animals in each group: SAP+IAP30 group, SAP+IAP20 group, SAP group, IAP30 group(sham-operated but without SAP and sham-operated group. We used a N(2 pneumoperitoneum to induce different levels of IAH and retrograde intra-ductal infusion of sodium taurocholate to induce SAP. The investigation period was 12 h. Hemodynamic parameters (CO, HR, MAP, CVP, urine output, oxygenation parameters(e.g., S(vO(2, PO(2, PaCO(2, peak inspiratory pressure, as well as serum parameters (e.g., ALT, amylase, lactate, creatinine were recorded. Histological examination of liver, intestine, pancreas, and lung was performed. MAIN RESULTS: Cardiac output significantly decreased in the SAP+IAH animals compared with other groups. Furthermore, AST, creatinine, SUN and lactate showed similar increasing tendency paralleled with profoundly decrease in S(vO(2. The histopathological analyses also revealed higher grade injury of liver, intestine, pancreas and lung in the SAP+IAH groups. However, few differences were found between the two SAP+IAH groups with different levels of IAP. CONCLUSIONS: Our newly developed porcine SAP+IAH model demonstrated that there were remarkable effects on global hemodynamics, oxygenation and organ function in response to sustained IAH of 12 h combined with SAP. Moreover, our model should be helpful to study the mechanisms of IAH

  9. Teaching For Art Criticism: Incorporating Feldman’s Critical Analysis Learning Model In Students’ Studio Practice

    Directory of Open Access Journals (Sweden)

    Maithreyi Subramaniam

    2016-01-01

    Full Text Available This study used the artwork of 30 first-year graphic design students, analyzed critically using Feldman's model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were examined as mean scores and frequencies to determine the students' performance in their critical ability. The Pearson correlation coefficient was used to find the correlation between students' studio practice and art critical ability scores. The findings showed that most students performed slightly better than average in the critical analyses and performed best in analysis among the four dimensions assessed. In the context of the students' studio practice and critical ability, the findings showed some connections between the students' art critical ability and their studio practice.

  10. Etoposide Incorporated into Camel Milk Phospholipids Liposomes Shows Increased Activity against Fibrosarcoma in a Mouse Model

    Directory of Open Access Journals (Sweden)

    Hamzah M. Maswadeh

    2015-01-01

    Full Text Available Phospholipids were isolated from camel milk and identified by using high performance liquid chromatography and gas chromatography-mass spectrometry (GC/MS). The anticancer drug etoposide (ETP) was entrapped in liposomes, prepared from camel milk phospholipids, to determine its activity against fibrosarcoma in a murine model. Fibrosarcoma was induced in mice by injecting benzopyrene (BAP), and tumor-bearing mice were treated with various formulations of etoposide, including etoposide entrapped in camel milk phospholipid liposomes (ETP-Cam-liposomes) and etoposide-loaded DPPC-liposomes (ETP-DPPC-liposomes). The tumor-bearing mice treated with ETP-Cam-liposomes showed slow progression of tumors and increased survival compared to free ETP or ETP-DPPC-liposomes. These results suggest that ETP-Cam-liposomes may prove to be a better drug delivery system for anticancer drugs.

  11. Atomistic modelling study of lanthanide incorporation in the crystal lattice of an apatite

    International Nuclear Information System (INIS)

    Louis-Achille, V.

    1999-01-01

    Studies of natural and synthetic apatites make it possible to propose such crystals as matrices for nuclear waste storage. The neodymium-substituted britholite, Ca9Nd(PO4)5(SiO4)F2, is a model for trivalent actinide storage. Neodymium can be substituted in two types of sites. The aim of this thesis is to compare the chemical nature of these two sites in fluoro-apatite, Ca9(PO4)6F2, and then in britholite, using ab initio atomistic modeling. Two approaches are used: one considers the infinite crystal and the second considers clusters. The calculations of the electronic structure for both were performed using Kohn-Sham density functional theory in the local approximation. For solids, pseudopotentials were used, and the wave functions are expanded in plane waves. For clusters, a frozen core approximation was used, and the wave functions are expanded in a linear combination of Slater-type atomic orbitals. The pseudopotential is semi-relativistic for neodymium, and the Hamiltonian is scalar relativistic for the clusters. The validation of the solid approach is performed using two test cases: YPO4 and ScPO4. Two numerical tools were developed to compute electronic deformation density maps and to calculate partial densities of states. A full optimisation of the lattice parameters with a relaxation of the atomic coordinates leads to correct structural and thermodynamic properties for the fluoro-apatite, compared to experiment. The electronic deformation density maps do not show any significant differences between the two calcium sites, but Mulliken analysis on the solid and on the clusters points out the more ionic behavior of the calcium in site 2. A neodymium-substituted britholite is then studied. The neodymium location only induces local modifications in the crystalline structure and few changes in the formation enthalpy. The electronic study points out an increase in the covalent character of the bonding involving neodymium compared with that involving calcium.

  12. An evaluation of a paediatric radiation oncology teaching programme incorporating a SCORPIO teaching model

    International Nuclear Information System (INIS)

    Ahern, Verity

    2011-01-01

    Full text: Many radiation oncology registrars have no exposure to paediatrics during their training. To address this, the Paediatric Special Interest Group of the Royal Australian and New Zealand College of Radiologists has convened a biennial teaching course since 1997. The 2009 course incorporated the use of a Structured, Clinical, Objective-Referenced, Problem-orientated, Integrated and Organized (SCORPIO) teaching model for small group tutorials. This study evaluates whether the paediatric radiation oncology curriculum can be adapted to the SCORPIO teaching model and evaluates the revised course from the registrars' perspective. Methods: Teaching and learning resources included a pre-course reading list, a lecture series programme and a SCORPIO workshop. Three evaluation instruments were developed: an overall Course Evaluation Survey for all participants, a SCORPIO Workshop Survey for registrars and a Teacher's SCORPIO Workshop Survey. Results: Forty-five radiation oncology registrars, 14 radiation therapists and five paediatric oncology registrars attended. Seventy-three per cent (47/64) of all participants completed the Course Evaluation Survey and 95% (38/40) of registrars completed the SCORPIO Workshop Survey. All teachers completed the Teacher's SCORPIO Survey (10/10). The overall educational experience was rated as good or excellent by 93% (43/47) of respondents. Ratings of satisfaction with lecture sessions were predominantly good or excellent. Registrars gave the SCORPIO workshop high ratings on each of 10 aspects of quality, with 82% allocating an excellent rating overall for the SCORPIO activity. Both registrars and teachers recommended more time for the SCORPIO stations. Conclusions: The 2009 course met the educational needs of the radiation oncology registrars and the SCORPIO workshop was a highly valued educational component.

  13. What does optimization theory actually predict about crown profiles of photosynthetic capacity when models incorporate greater realism?

    Science.gov (United States)

    Buckley, Thomas N; Cescatti, Alessandro; Farquhar, Graham D

    2013-08-01

    Measured profiles of photosynthetic capacity in plant crowns typically do not match those of average irradiance: the ratio of capacity to irradiance decreases as irradiance increases. This differs from optimal profiles inferred from simple models. To determine whether this could be explained by omission of physiological or physical details from such models, we performed a series of thought experiments using a new model that included more realism than previous models. We used ray-tracing to simulate irradiance for 8000 leaves in a horizontally uniform canopy. For a subsample of 500 leaves, we simultaneously optimized both nitrogen allocation (among pools representing carboxylation, electron transport and light capture) and stomatal conductance using a transdermally explicit photosynthesis model. Few model features caused the capacity/irradiance ratio to vary systematically with irradiance. However, when leaf absorptance varied as needed to optimize distribution of light-capture N, the capacity/irradiance ratio increased up through the crown - that is, opposite to the observed pattern. This tendency was counteracted by constraints on stomatal or mesophyll conductance, which caused chloroplastic CO2 concentration to decline systematically with increasing irradiance. Our results suggest that height-related constraints on stomatal conductance can help to reconcile observations with the hypothesis that photosynthetic N is allocated optimally. © 2013 John Wiley & Sons Ltd.

  14. Incorporation of cooling-induced crystallisation into a 2-dimensional axisymmetric conduit heat flow model

    Science.gov (United States)

    Heptinstall, D. A.; Neuberg, J. W.; Bouvet de Maisonneuve, C.; Collinson, A.; Taisne, B.; Morgan, D. J.

    2015-12-01

    Heat flow models can bring new insights into the thermal and rheological evolution of volcanic systems. We shall investigate the thermal processes and timescales in a crystallizing, static magma column, with a heat flow model of Soufriere Hills Volcano (SHV), Montserrat. The latent heat of crystallization is initially computed with MELTS, as a function of pressure and temperature for an andesitic melt (SHV groundmass starting composition). Three fractional crystallization simulations are performed; two with initial pressures of 34MPa (runs 1 & 2) and one of 25MPa (run 3). Decompression rate was varied between 0.1MPa/°C (runs 1 & 3) and 0.2MPa/°C (run 2). Natural and experimental matrix glass compositions are accurately reproduced by all MELTS runs. The cumulative latent heat released for runs 1, 2 and 3 differs by less than 9% (8.69e5 J/kg*K, 9.32e5 J/kg*K, and 9.49e5 J/kg*K respectively). The 2D axisymmetric conductive cooling simulations consider a 30m-diameter conduit that extends from the surface to a depth of 1500m (34MPa). The temporal evolution of temperature is closely tracked at depths of 10m, 750m and 1400m in the center of the conduit, at the conduit walls, and 20m from the walls into the host rock. Following initial cooling by 7-15°C at 10m depth inside the conduit, the magma temperature rebounds through latent heat release by 32-35°C over 85-123 days to a maximum temperature of 1002-1005°C. At 10 m depth, it takes 4.1-9.2 years for the magma column to cool by 108-130°C and crystallize to 75wt%, at which point it cannot be easily remobilized. It takes 11-31.5 years to reach the same crystallinity at 750-1400m depth. We find a wide range in cooling timescales, particularly at depths of 750m or greater, attributed to the initial run pressure and the dominant latent heat producing crystallizing phases (Quartz), where run 1 cools fastest and run 3 cools slowest. Surface cooling by comparison has the strongest influence on the upper tens of meters in all

  15. Incorporation of cooling-induced crystallization into a 2-dimensional axisymmetric conduit heat flow model

    Science.gov (United States)

    Heptinstall, David; Bouvet de Maisonneuve, Caroline; Neuberg, Jurgen; Taisne, Benoit; Collinson, Amy

    2016-04-01

    Heat flow models can bring new insights into the thermal and rheological evolution of volcanic systems. We shall investigate the thermal processes and timescales in a crystallizing, static magma column, with a heat flow model of Soufriere Hills Volcano (SHV), Montserrat. The latent heat of crystallization is initially computed with MELTS, as a function of pressure and temperature for an andesitic melt (SHV groundmass starting composition). Three fractional crystallization simulations are performed; two with initial pressures of 34MPa (runs 1 & 2) and one of 25MPa (run 3). Decompression rate was varied between 0.1MPa/°C (runs 1 & 3) and 0.2MPa/°C (run 2). Natural and experimental matrix glass compositions are accurately reproduced by all MELTS runs. The cumulative latent heat released for runs 1, 2 and 3 differs by less than 9% (8.69E5 J/kg*K, 9.32E5 J/kg*K, and 9.49E5 J/kg*K respectively). The 2D axisymmetric conductive cooling simulations consider a 30m-diameter conduit that extends from the surface to a depth of 1500m (34MPa). The temporal evolution of temperature is closely tracked at depths of 10m, 750m and 1400m in the centre of the conduit, at the conduit walls, and 20m from the walls into the host rock. Following initial cooling by 7-15°C at 10m depth inside the conduit, the magma temperature rebounds through latent heat release by 32-35°C over 85-123 days to a maximum temperature of 1002-1005°C. At 10m depth, it takes 4.1-9.2 years for the magma column to cool by 108-131°C and crystallize to 75wt%, at which point it cannot be easily remobilized. It takes 11-31.5 years to reach the same crystallinity at 750-1400m depth. We find a wide range in cooling timescales, particularly at depths of 750m or greater, attributed to the initial run pressure and the dominant latent heat producing crystallizing phase, Albite-rich Plagioclase Feldspar. Run 1 is shown to cool fastest and run 3 to cool the slowest, with surface emissivity having the strongest cooling
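
    The latent-heat bookkeeping used in such conduction models can be illustrated with a one-dimensional effective-heat-capacity scheme, sketched below. The slab geometry, material properties and the linear crystallinity-temperature relation are simplifying assumptions and do not reproduce the 2D axisymmetric simulations of the records above.

```python
import numpy as np

# 1-D stand-in for conduit cooling: conduction across a slab in which latent heat
# is released over a crystallisation interval through an effective heat capacity.
nx, dx = 200, 0.5                      # cells, metres (15 m of magma, 85 m of rock)
k_rock = 1.5                           # thermal conductivity, W/m/K
rho, cp = 2600.0, 1200.0               # density kg/m3, heat capacity J/kg/K
latent = 9.0e5                         # latent heat, J/kg (order of the MELTS output)
T_liq, T_sol = 1010.0, 850.0           # assumed crystallisation interval, degC

T = np.full(nx, 1005.0)
T[30:] = 300.0                         # host rock beyond 15 m

def effective_cp(temp):
    """Add latent heat wherever the melt is crystallising (linear X(T) assumed)."""
    in_interval = (temp > T_sol) & (temp < T_liq)
    return cp + in_interval * latent / (T_liq - T_sol)

dt = 0.4 * dx ** 2 * rho * cp / k_rock           # stable explicit time step, s
for _ in range(2000):                            # roughly 13 years of cooling
    lap = (np.roll(T, 1) - 2.0 * T + np.roll(T, -1)) / dx ** 2
    lap[0] = lap[-1] = 0.0                       # insulated ends (toy boundary choice)
    T = T + dt * k_rock * lap / (rho * effective_cp(T))

print(f"column-centre temperature after {2000 * dt / 3.15e7:.1f} years: {T[0]:.0f} degC")
```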

  16. Incorporating the CALPHAD sublattice approach of ordering into the phase-field model with finite interface dissipation

    International Nuclear Information System (INIS)

    Zhang, Lijun; Stratmann, Matthias; Du, Yong; Sundman, Bo; Steinbach, Ingo

    2015-01-01

    A new approach to incorporate the sublattice models in the CALPHAD (CALculation of PHAse Diagram) formalism directly into the phase-field formalism is developed. In binary alloys, the sublattice models can be classified into two types (i.e., “Type I” and “Type II”), depending on whether a direct one-to-one relation between the element site fraction in the CALPHAD database and the phase concentration in the phase-field model exists (Type I), or not (Type II). For “Type II” sublattice models, the specific site fractions, corresponding to a given mole fraction, have to be established via internal relaxation between different sublattices. Internal minimization of sublattice occupancy and solute evolution during microstructure transformation leads, in general, to a solution superior to the separate solution of the individual problems. The present coupling technique is validated for Fe–C and Ni–Al alloys. Finally, the model is extended into multicomponent alloys and applied to simulate the nucleation process of VC monocarbide from austenite matrix in a steel containing vanadium

  17. Incorporation of expert variability into breast cancer treatment recommendation in designing clinical protocol guided fuzzy rule system models.

    Science.gov (United States)

    Garibaldi, Jonathan M; Zhou, Shang-Ming; Wang, Xiao-Ying; John, Robert I; Ellis, Ian O

    2012-06-01

    It has often been demonstrated that clinicians exhibit both inter-expert and intra-expert variability when making difficult decisions. In contrast, the vast majority of computerized models that aim to provide automated support for such decisions do not explicitly recognize or replicate this variability. Furthermore, the perfect consistency of computerized models is often presented as a de facto benefit. In this paper, we describe a novel approach to incorporate variability within a fuzzy inference system using non-stationary fuzzy sets in order to replicate human variability. We apply our approach to a decision problem concerning the recommendation of post-operative breast cancer treatment; specifically, whether or not to administer chemotherapy based on assessment of five clinical variables: NPI (the Nottingham Prognostic Index), estrogen receptor status, vascular invasion, age and lymph node status. In doing so, we explore whether such explicit modeling of variability provides any performance advantage over a more conventional fuzzy approach, when tested on a set of 1310 unselected cases collected over a fourteen-year period at the Nottingham University Hospitals NHS Trust, UK. The experimental results show that the standard fuzzy inference system (which does not model variability) achieves overall agreement with clinical practice of around 84.6% (95% CI: 84.1-84.9%), while the non-stationary fuzzy model can significantly increase performance to around 88.1% (95% CI: 88.0-88.2%), suggesting that explicit modeling of variability can benefit fuzzy systems in any application domain. Copyright © 2012 Elsevier Inc. All rights reserved.
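
    A minimal illustration of a non-stationary fuzzy set is sketched below: a triangular membership function whose centre is perturbed on every evaluation, so repeated inferences on the same input vary as human experts do. The variable name, membership parameters and perturbation width are invented for the example and are not the rule base of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def high_npi_membership(npi, sigma=0.15):
    """Non-stationary 'high NPI' set: the peak location is perturbed on every
    call to mimic intra/inter-expert variability (parameters are illustrative)."""
    shift = rng.normal(scale=sigma)
    return tri(npi, 3.4 + shift, 5.4 + shift, 7.4 + shift)

npi = 5.0
repeated = np.array([high_npi_membership(npi) for _ in range(1000)])
print(f"membership of NPI={npi}: mean {repeated.mean():.2f}, sd {repeated.std():.2f}")
```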

  18. Computational cardiology: the bidomain based modified Hill model incorporating viscous effects for cardiac defibrillation

    Science.gov (United States)

    Cansız, Barış; Dal, Hüsnü; Kaliske, Michael

    2017-10-01

    The working mechanisms of cardiac defibrillation are still debated owing to limited experimental facilities, and one-third of patients do not even respond to cardiac resynchronization therapy. With the aim of taking a step towards revealing the mechanisms of the defibrillation phenomenon, we propose a bidomain-based finite element formulation of cardiac electromechanics that takes into account the viscous effects disregarded by many researchers. To do so, the material is treated as an electro-visco-active material and described by the modified Hill model (Cansız et al. in Comput Methods Appl Mech Eng 315:434-466, 2017). On the numerical side, we utilize a staggered solution method, where the elliptic and parabolic parts of the bidomain equations and the mechanical field are solved sequentially. The comparative simulations indicate that the viscoelastic and elastic formulations lead to remarkably different outcomes when an external electric field is applied to the myocardial tissue. Besides, the proposed framework requires significantly less computational time and memory compared to monolithic schemes, without loss of stability for the presented examples.

  19. Incorporating Vibration Test Results for the Advanced Stirling Convertor into the System Dynamic Model

    Science.gov (United States)

    Meer, David W.; Lewandowski, Edward J.

    2010-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin Corporation (LM), and NASA Glenn Research Center (GRC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. As part of the extended operation testing of this power system, the Advanced Stirling Convertors (ASC) at NASA GRC undergo a vibration test sequence intended to simulate the vibration history that an ASC would experience when used in an ASRG for a space mission. During these tests, a data system collects several performance-related parameters from the convertor under test for health monitoring and analysis. Recently, an additional sensor recorded the slip table position during vibration testing to qualification level. The System Dynamic Model (SDM) integrates Stirling cycle thermodynamics, heat flow, mechanical mass-spring-damper systems, and the electrical characteristics of the linear alternator and controller. This paper compares the performance of the ASC when exposed to vibration with that predicted by the SDM for the same vibration.

  20. Incorporating the gut microbiota into models of human and non-human primate ecology and evolution.

    Science.gov (United States)

    Amato, Katherine R

    2016-01-01

    The mammalian gut is home to a diverse community of microbes. Advances in technology over the past two decades have allowed us to examine this community, the gut microbiota, in more detail, revealing a wide range of influences on host nutrition, health, and behavior. These host-gut microbe interactions appear to shape host plasticity and fitness in a variety of contexts, and therefore represent a key factor missing from existing models of human and non-human primate ecology and evolution. However, current studies of the gut microbiota tend to include limited contextual data or are clinical, making it difficult to directly test broad anthropological hypotheses. Here, I review what is known about the animal gut microbiota and provide examples of how gut microbiota research can be integrated into the study of human and non-human primate ecology and evolution with targeted data collection. Specifically, I examine how the gut microbiota may impact primate diet, energetics, disease resistance, and cognition. While gut microbiota research is proliferating rapidly, especially in the context of humans, there remain important gaps in our understanding of host-gut microbe interactions that will require an anthropological perspective to fill. Likewise, gut microbiota research will be an important tool for filling remaining gaps in anthropological research. © 2016 Wiley Periodicals, Inc.

  1. Searching for the true diet of marine predators: incorporating Bayesian priors into stable isotope mixing models.

    Directory of Open Access Journals (Sweden)

    André Chiaradia

    Full Text Available Reconstructing the diet of top marine predators is of great significance in several key areas of applied ecology, requiring accurate estimation of their true diet. However, from conventional stomach content analysis to recent stable isotope and DNA analyses, no one method is bias- or error-free. Here, we evaluated the accuracy of recent methods for estimating the actual proportion of a controlled diet fed to a top-predator seabird, the Little penguin (Eudyptula minor). We combined published DNA data from penguin scats with blood plasma δ15N and δ13C values to reconstruct the diet of individual penguins fed experimentally. Mismatch between the controlled (true ingested) diet and dietary estimates obtained through the separate use of stable isotope and DNA data suggested some degree of difference in prey assimilation (stable isotope) and digestion rates (DNA analysis). In contrast, the posterior isotope mixing model combined with DNA-based Bayesian priors provided the closest match to the true diet. We provide the first evidence suggesting that the combined use of these complementary techniques may provide better estimates of the actual diet of top marine predators - a powerful tool in applied ecology in the search for the true consumed diet.
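
    The idea of informing an isotope mixing model with DNA-derived priors can be sketched with a two-source toy model, as below: scat-DNA proportions set a Beta prior on the diet fraction, and a grid posterior combines it with a Gaussian isotope likelihood. Source signatures, error terms and prior strength are invented for the example and differ from the penguin study.

```python
import numpy as np

# Two prey sources with (d15N, d13C) signatures and a Gaussian measurement model;
# all numbers are illustrative and are not the penguin data.
source = np.array([[12.0, -18.0],    # prey source 1
                   [ 9.0, -20.5]])   # prey source 2
consumer = np.array([11.2, -18.8])   # plasma values (after trophic correction)
sigma = 0.6

# DNA scat data expressed as a Beta prior on the proportion of source 1
# (the prior strength is an assumed tuning choice).
dna_fraction, prior_strength = 0.7, 20.0
a, b = dna_fraction * prior_strength, (1 - dna_fraction) * prior_strength

p = np.linspace(0.001, 0.999, 999)                    # proportion of source 1
pred = np.outer(p, source[0]) + np.outer(1 - p, source[1])
log_lik = -0.5 * np.sum(((consumer - pred) / sigma) ** 2, axis=1)
log_prior = (a - 1) * np.log(p) + (b - 1) * np.log(1 - p)
post = np.exp(log_lik + log_prior - np.max(log_lik + log_prior))
post /= np.trapz(post, p)                             # normalise the grid posterior

mean_p = np.trapz(p * post, p)
print(f"posterior mean proportion of source 1: {mean_p:.2f}")
```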

  2. Modeling the suppression of boron transient enhanced diffusion in silicon by substitutional carbon incorporation

    Science.gov (United States)

    Ngau, Julie L.; Griffin, Peter B.; Plummer, James D.

    2001-08-01

    Recent work has indicated that the suppression of boron transient enhanced diffusion (TED) in carbon-rich Si is caused by nonequilibrium Si point defect concentrations, specifically the undersaturation of Si self-interstitials, that result from the coupled out-diffusion of carbon interstitials via the kick-out and Frank-Turnbull reactions. This study of boron TED reduction in Si1-x-yGexCy during 750 °C inert anneals has revealed that the use of an additional reaction that further reduces the Si self-interstitial concentration is necessary to describe accurately the time evolved diffusion behavior of boron. In this article, we present a comprehensive model which includes {311} defects, boron-interstitial clusters, a carbon kick-out reaction, a carbon Frank-Turnbull reaction, and a carbon interstitial-carbon substitutional (CiCs) pairing reaction that successfully simulates carbon suppression of boron TED at 750 °C for anneal times ranging from 10 s to 60 min.

  3. Modeling the suppression of boron transient enhanced diffusion in silicon by substitutional carbon incorporation

    International Nuclear Information System (INIS)

    Ngau, Julie L.; Griffin, Peter B.; Plummer, James D.

    2001-01-01

    Recent work has indicated that the suppression of boron transient enhanced diffusion (TED) in carbon-rich Si is caused by nonequilibrium Si point defect concentrations, specifically the undersaturation of Si self-interstitials, that result from the coupled out-diffusion of carbon interstitials via the kick-out and Frank-Turnbull reactions. This study of boron TED reduction in Si1-x-yGexCy during 750 °C inert anneals has revealed that the use of an additional reaction that further reduces the Si self-interstitial concentration is necessary to describe accurately the time evolved diffusion behavior of boron. In this article, we present a comprehensive model which includes {311} defects, boron-interstitial clusters, a carbon kick-out reaction, a carbon Frank-Turnbull reaction, and a carbon interstitial-carbon substitutional (CiCs) pairing reaction that successfully simulates carbon suppression of boron TED at 750 °C for anneal times ranging from 10 s to 60 min. Copyright 2001 American Institute of Physics

  4. Calibrating the BOLD signal during a motor task using an extended fusion model incorporating DOT, BOLD and ASL data

    Science.gov (United States)

    Yücel, Meryem A.; Huppert, Theodore J.; Boas, David A.; Gagnon, Louis

    2012-01-01

    Multimodal imaging improves the accuracy of the localization and the quantification of brain activation when measuring different manifestations of the hemodynamic response associated with cerebral activity. In this study, we incorporated cerebral blood flow (CBF) changes measured with arterial spin labeling (ASL), Diffuse Optical Tomography (DOT) and blood oxygen level-dependent (BOLD) recordings to reconstruct changes in oxy- (ΔHbO2) and deoxyhemoglobin (ΔHbR). Using the Grubb relation between relative changes in CBF and cerebral blood volume (CBV), we incorporated the ASL measurement as a prior to the total hemoglobin concentration change (ΔHbT). We applied this ASL fusion model to both synthetic data and experimental multimodal recordings during a 2-sec finger-tapping task. Our results show that the new approach is very powerful in estimating ΔHbO2 and ΔHbR with high spatial and quantitative accuracy. Moreover, our approach allows the computation of baseline total hemoglobin concentration (HbT0) as well as of the BOLD calibration factor M on a single subject basis. We obtained an average HbT0 of 71 μM, an average M value of 0.18 and an average increase of 13 % in cerebral metabolic rate of oxygen (CMRO2), all of which are in agreement with values previously reported in the literature. Our method yields an independent measurement of M, which provides an alternative measurement to validate the hypercapnic calibration of the BOLD signal. PMID:22546318
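    The two relations at the heart of this kind of calibration can be illustrated with a short sketch: the Grubb power law converts the ASL-measured CBF change into a prior on the total hemoglobin change, and a Davis-type calibration equation links the BOLD change, M, CBF and CMRO2. The exponents (alpha, beta) are commonly quoted literature values and all measured quantities are illustrative; the DOT reconstruction itself is not reproduced here.

```python
import numpy as np

# Assumed baseline and measured quantities (illustrative, not from the study).
hbt0 = 71e-6          # baseline total hemoglobin concentration [M], ~71 uM as reported
cbf_ratio = 1.30      # relative CBF change measured with ASL (CBF/CBF0)
bold_change = 0.019   # fractional BOLD signal change
alpha = 0.38          # Grubb exponent relating CBV to CBF (literature value)
beta = 1.5            # field-strength-dependent BOLD exponent (literature value)
M = 0.18              # BOLD calibration factor, e.g. as estimated by the fusion model

# Grubb relation: CBV/CBV0 = (CBF/CBF0)**alpha.  Assuming HbT tracks CBV,
# this gives the ASL-derived prior on the total hemoglobin change used in the fusion model.
cbv_ratio = cbf_ratio ** alpha
delta_hbt = hbt0 * (cbv_ratio - 1.0)
print(f"Prior on dHbT from ASL: {delta_hbt * 1e6:.2f} uM")

# Davis-type calibration model:
#   dBOLD/BOLD0 = M * (1 - (CMRO2/CMRO2_0)**beta * (CBF/CBF0)**(alpha - beta))
# Solving for the relative CMRO2 change given M:
cmro2_ratio = ((1.0 - bold_change / M) / cbf_ratio ** (alpha - beta)) ** (1.0 / beta)
print(f"Relative CMRO2 change: {(cmro2_ratio - 1.0) * 100:.1f} %")
```

With these illustrative inputs the CMRO2 change comes out near 13%, the order of magnitude reported in the abstract.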

  5. Development of Advanced Continuum Models that Incorporate Nanomechanical Deformation into Engineering Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Zimmerman, Jonathan A.; Jones, Reese E.; Templeton, Jeremy Alan; McDowell, David L.; Mayeur, Jason R.; Tucker, Garritt J.; Bammann, Douglas J.; Gao, Huajian

    2008-09-01

    Materials with characteristic structures at nanoscale sizes exhibit significantly different mechanical responses from those predicted by conventional, macroscopic continuum theory. For example, nanocrystalline metals display an inverse Hall-Petch effect whereby the strength of the material decreases with decreasing grain size. The origin of this effect is believed to be a change in deformation mechanisms from dislocation motion across grains and pileup at grain boundaries at microscopic grain sizes to rotation of grains and deformation within grain boundary interface regions for nanostructured materials. These rotational defects are represented by the mathematical concept of disclinations. The ability to capture these effects within continuum theory, thereby connecting nanoscale materials phenomena and macroscale behavior, has eluded the research community. The goal of our project was to develop a consistent theory to model both the evolution of disclinations and their kinetics. Additionally, we sought to develop approaches to extract continuum mechanical information from nanoscale structure to verify any developed continuum theory that includes dislocation and disclination behavior. These approaches yield engineering-scale expressions to quantify elastic and inelastic deformation in all varieties of materials, even those that possess highly directional bonding within their molecular structures such as liquid crystals, covalent ceramics, polymers and biological materials. This level of accuracy is critical for engineering design and the thermo-mechanical analysis performed in micro- and nanosystems. The research proposed here innovates on how these nanoscale deformation mechanisms should be incorporated into a continuum mechanical formulation, and provides the foundation upon which to develop a means for predicting the performance of advanced engineering materials. Acknowledgment: The authors acknowledge helpful discussions with Farid F. Abraham, Youping Chen, Terry J

  6. Incorporating transportation network modeling tools within transportation economic impact studies of disasters

    Directory of Open Access Journals (Sweden)

    Yi Wen

    2014-08-01

    Full Text Available Transportation system disruption due to a disaster results in "ripple effects" throughout the entire transportation system of a metropolitan region. Many researchers have focused on the economic costs of transportation system disruptions in transportation-related industries, specifically within commerce and logistics, in the assessment of the regional economic costs. However, the foundation of an assessment of the regional economic costs of a disaster needs to include the evaluation of consumer surplus in addition to the direct cost for reconstruction of the regional transportation system. The objective of this study is to propose a method to estimate the regional consumer surplus based on indirect economic costs of a disaster on intermodal transportation systems in the context of diverting vehicles and trains. The computational methods used to assess the regional indirect economic costs sustained by the highway and railroad system can utilize readily available state departments of transportation (DOTs) and metropolitan planning organizations (MPOs) traffic models allowing prioritization of regional recovery plans after a disaster and strengthening of infrastructure before a disaster. Hurricane Katrina is one of the most devastating hurricanes in the history of the United States. Due to the significance of Hurricane Katrina, a case study is presented to evaluate consumer surplus in the Gulf Coast Region of Mississippi. Results from the case study indicate the costs of rerouting and congestion delays in the regional highway system and the rent costs of right-of-way in the regional railroad system are major factors of the indirect costs in the consumer surplus.

  7. A quantitative systems pharmacology approach, incorporating a novel liver model, for predicting pharmacokinetic drug-drug interactions.

    Science.gov (United States)

    Cherkaoui-Rbati, Mohammed H; Paine, Stuart W; Littlewood, Peter; Rauch, Cyril

    2017-01-01

    All pharmaceutical companies are required to assess pharmacokinetic drug-drug interactions (DDIs) of new chemical entities (NCEs), and mathematical prediction helps to select the best NCE candidate with regard to adverse effects resulting from a DDI before any costly clinical studies. Most current models assume that the liver is a homogeneous organ where the majority of the metabolism occurs. However, the circulatory system of the liver has a complex hierarchical geometry which distributes xenobiotics throughout the organ. Nevertheless, the lobule (liver unit), located at the end of each branch, is composed of many sinusoids where the blood flow can vary and therefore creates heterogeneity (e.g. drug concentration, enzyme level). A liver model was constructed by describing the geometry of a lobule, where the blood velocity increases toward the central vein, and by modeling the exchange mechanisms between the blood and hepatocytes. Moreover, the three major DDI mechanisms of metabolic enzymes: competitive inhibition, mechanism-based inhibition and induction, were accounted for with an undefined number of drugs and/or enzymes. The liver model was incorporated into a physiologically based pharmacokinetic (PBPK) model and simulations were produced that in turn were compared to ten clinical results. The liver model generated a hierarchy of 5 sinusoidal levels and estimated a blood volume of 283 mL and a cell density of 193 × 10⁶ cells/g in the liver. The overall PBPK model predicted the pharmacokinetics of midazolam and the magnitude of the clinical DDI with perpetrator drug(s) including spatial and temporal enzyme level changes. The model presented herein may reduce costs and the use of laboratory animals and give the opportunity to explore different clinical scenarios, which reduce the risk of adverse events, prior to costly human clinical studies.
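    As a hedged illustration of the building blocks involved, the sketch below uses the classical well-stirred liver equation and a competitive-inhibition term to estimate a DDI as an AUC ratio. Parameter values are hypothetical, and the paper's model goes further by resolving the lobule geometry spatially; this is only the homogeneous-liver baseline that such models refine.

```python
def wellstirred_hepatic_clearance(q_h, fu, clint):
    """Well-stirred liver model: hepatic clearance from blood flow, unbound
    fraction and intrinsic clearance (all in consistent units, e.g. L/h)."""
    return q_h * fu * clint / (q_h + fu * clint)

def inhibited_clint(clint, inhibitor_conc, ki):
    """Competitive (reversible) inhibition of intrinsic clearance."""
    return clint / (1.0 + inhibitor_conc / ki)

# Hypothetical victim-drug parameters (illustrative only).
q_h = 90.0      # hepatic blood flow, L/h
fu = 0.05       # unbound fraction in blood
clint = 500.0   # intrinsic clearance, L/h

cl_baseline = wellstirred_hepatic_clearance(q_h, fu, clint)
cl_ddi = wellstirred_hepatic_clearance(
    q_h, fu, inhibited_clint(clint, inhibitor_conc=1.0, ki=0.5))

# For a drug cleared only by the liver, the AUC ratio is inversely proportional
# to the clearance ratio, a first-order estimate of the DDI magnitude.
print(f"CL without inhibitor: {cl_baseline:.1f} L/h")
print(f"CL with inhibitor:    {cl_ddi:.1f} L/h")
print(f"Predicted AUC ratio:  {cl_baseline / cl_ddi:.2f}")
```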

  8. A quantitative systems pharmacology approach, incorporating a novel liver model, for predicting pharmacokinetic drug-drug interactions.

    Directory of Open Access Journals (Sweden)

    Mohammed H Cherkaoui-Rbati

    Full Text Available All pharmaceutical companies are required to assess pharmacokinetic drug-drug interactions (DDIs) of new chemical entities (NCEs), and mathematical prediction helps to select the best NCE candidate with regard to adverse effects resulting from a DDI before any costly clinical studies. Most current models assume that the liver is a homogeneous organ where the majority of the metabolism occurs. However, the circulatory system of the liver has a complex hierarchical geometry which distributes xenobiotics throughout the organ. Nevertheless, the lobule (liver unit), located at the end of each branch, is composed of many sinusoids where the blood flow can vary and therefore creates heterogeneity (e.g. drug concentration, enzyme level). A liver model was constructed by describing the geometry of a lobule, where the blood velocity increases toward the central vein, and by modeling the exchange mechanisms between the blood and hepatocytes. Moreover, the three major DDI mechanisms of metabolic enzymes: competitive inhibition, mechanism-based inhibition and induction, were accounted for with an undefined number of drugs and/or enzymes. The liver model was incorporated into a physiologically based pharmacokinetic (PBPK) model and simulations were produced that in turn were compared to ten clinical results. The liver model generated a hierarchy of 5 sinusoidal levels and estimated a blood volume of 283 mL and a cell density of 193 × 10⁶ cells/g in the liver. The overall PBPK model predicted the pharmacokinetics of midazolam and the magnitude of the clinical DDI with perpetrator drug(s) including spatial and temporal enzyme level changes. The model presented herein may reduce costs and the use of laboratory animals and give the opportunity to explore different clinical scenarios, which reduce the risk of adverse events, prior to costly human clinical studies.

  9. Incorporating the Impacts of Small Scale Rock Heterogeneity into Models of Flow and Trapping in Target UK CO2 Storage Systems

    Science.gov (United States)

    Jackson, S. J.; Reynolds, C.; Krevor, S. C.

    2017-12-01

    Predictions of the flow behaviour and storage capacity of CO2 in subsurface reservoirs are dependent on accurate modelling of multiphase flow and trapping. A number of studies have shown that small scale rock heterogeneities have a significant impact on CO2 flow propagating to larger scales. The need to simulate flow in heterogeneous reservoir systems has led to the development of numerical upscaling techniques which are widely used in industry. Less well understood, however, is the best approach for incorporating laboratory characterisations of small scale heterogeneities into models. At small scales, heterogeneity in the capillary pressure characteristic function becomes significant. We present a digital rock workflow that combines core flood experiments with numerical simulations to characterise sub-core scale capillary pressure heterogeneities within rock cores from several target UK storage reservoirs - the Bunter, Captain and Ormskirk sandstone formations. Measured intrinsic properties (permeability, capillary pressure, relative permeability) and 3D saturation maps from steady-state core flood experiments were the primary inputs to construct a 3D digital rock model in CMG IMEX. We used vertical end-point scaling to iteratively update the voxel-by-voxel capillary pressure curves from the average MICP curve, with each iteration more closely predicting the experimental saturations and pressure drops. Once characterised, the digital rock cores were used to predict equivalent flow functions, such as relative permeability and residual trapping, across the range of flow conditions estimated to prevail in the CO2 storage reservoirs. In the case of the Captain sandstone, rock cores were characterised across an entire 100m vertical transect of the reservoir. This allowed analysis of the upscaled impact of small scale heterogeneity on flow and trapping. Figure 1 shows the varying degree to which heterogeneity impacted flow depending on the capillary number in the

  10. Incorporating Sentinel-2-like remote sensing products in the hydrometeorological modelling over an agricultural area in south west France

    Science.gov (United States)

    Rivalland, Vincent; Gascoin, Simon; Etchanchu, Jordi; Coustau, Mathieu; Cros, Jérôme; Tallec, Tiphaine

    2016-04-01

    The Sentinel-2 mission will enable monitoring of land cover and vegetation phenology at high resolution (HR) every 5 days. However, current Land Surface Models (LSM) typically use land cover and vegetation parameters derived from previous low to mid resolution satellite missions. Here we studied the effect of introducing Sentinel-2-like data in the simulation of the land surface energy and water fluxes in a region dominated by cropland. Simulations were performed with the ISBA-SURFEX LSM, which is used in the operational hydrometeorological chain of Meteo-France for hydrological forecasts and drought monitoring. By default, SURFEX vegetation land surface parameters and their temporal evolution are from the ECOCLIMAP II European database, mostly derived from MODIS products at 1 km resolution. The model was applied to an experimental area of 30 km by 30 km in south west France. In this area the resolution of ECOCLIMAP is coarser than the typical size of a crop field. This means that several crop types can be mixed in a pixel. In addition ECOCLIMAP provides a climatology of the vegetation phenology and thus does not account for the interannual effects of the climate and land management on the crop growth. In this work, we used a series of 26 Formosat-2 images at 8-m resolution acquired in 2006. From this dataset, we derived a land cover map and a leaf area index (LAI) map at each date, which were substituted for the ECOCLIMAP land cover map and LAI maps. The model output water and energy fluxes were compared to a standard simulation using ECOCLIMAP only and to in situ measurements of soil moisture, latent and sensible heat fluxes. The results show that the introduction of the HR products improved the timing of the evapotranspiration. The impact was most visible on the crops having a growing season in summer (maize, sunflower), because the growth period is more sensitive to the climate.

  11. Towards a Predictive Thermodynamic Model of Oxidation States of Uranium Incorporated in Fe (hydr) oxides

    Energy Technology Data Exchange (ETDEWEB)

    Bagus, Paul S. [Univ. of North Texas, Denton, TX (United States)

    2013-01-01

    -Level Excited States: Consequences For X-Ray Absorption Spectroscopy”, J. Elec. Spectros. and Related Phenom., 200, 174 (2015) describes our first application of these methods. As well as applications to problems and materials of direct interest for our PNNL colleagues, we have pursued applications of fundamental theoretical significance for the analysis and interpretation of XPS and XAS spectra. These studies are important for the development of the fields of core-level spectroscopies as well as to advance our capabilities for applications of interest to our PNNL colleagues. An excellent example is our study of the surface core-level shifts, SCLS, for the surface and bulk atoms of an oxide that provides a new approach to understanding how the surface electronic of oxides differs from that in the bulk of the material. This work has the potential to lead to a new key to understanding the reactivity of oxide surfaces. Our theoretical studies use cluster models with finite numbers of atoms to describe the properties of condensed phases and crystals. This approach has allowed us to focus on the local atomistic, chemical interactions. For these clusters, we obtain orbitals and spinors through the solution of the Hartree-Fock, HF, and the fully relativistic Dirac HF equations. These orbitals are used to form configuration mixing wavefunctions which treat the many-body effects responsible for the open shell angular momentum coupling and for the satellites of the core-level spectra. Our efforts have been in two complementary directions. As well as the applications described above, we have placed major emphasis on the enhancement and extension of our theoretical and computational capabilities so that we can treat complex systems with a greater range of many-body effects. Noteworthy accomplishments in terms of method development and enhancement have included: (1) An improvement in our treatment of the large matrices that must be handled when many-body effects are treated. (2

  12. A New Paradigm For Modeling Fault Zone Inelasticity: A Multiscale Continuum Framework Incorporating Spontaneous Localization and Grain Fragmentation.

    Science.gov (United States)

    Elbanna, A. E.

    2015-12-01

    The brittle portion of the crust contains structural features such as faults, jogs, joints, bends and cataclastic zones that span a wide range of length scales. These features may have a profound effect on earthquake nucleation, propagation and arrest. Incorporating these existing features in modeling, and the ability to spontaneously generate new ones in response to earthquake loading, is crucial for predicting seismicity patterns, the distribution of aftershocks and nucleation sites, earthquake arrest mechanisms, and topological changes in the seismogenic zone structure. Here, we report on our efforts in modeling two important mechanisms contributing to the evolution of fault zone topology: (1) Grain comminution at the submeter scale, and (2) Secondary faulting/plasticity at the scale of a few to hundreds of meters. We use the finite element software Abaqus to model the dynamic rupture. The constitutive response of the fault zone is modeled using the Shear Transformation Zone theory, a non-equilibrium statistical thermodynamic framework for modeling plastic deformation and localization in amorphous materials such as fault gouge. The gouge layer is modeled as a 2D plane strain region with a finite thickness and a heterogeneous distribution of porosity. By coupling the amorphous gouge with the surrounding elastic bulk, the model introduces a set of novel features that go beyond the state of the art. These include: (1) self-consistent rate dependent plasticity with a physically-motivated set of internal variables, (2) non-locality that alleviates mesh dependence of shear band formation, (3) spontaneous evolution of fault roughness and its strike which affects ground motion generation and the local stress fields, and (4) spontaneous evolution of grain size and fault zone fabric.

  13. A bi-level integrated generation-transmission planning model incorporating the impacts of demand response by operation simulation

    International Nuclear Information System (INIS)

    Zhang, Ning; Hu, Zhaoguang; Springer, Cecilia; Li, Yanning; Shen, Bo

    2016-01-01

    Highlights: • We put forward a novel bi-level integrated power system planning model. • Generation expansion planning and transmission expansion planning are combined. • The effects of two sorts of demand response in reducing peak load are considered. • Operation simulation is conducted to reflect the actual effects of demand response. • The interactions between the two levels can guarantee a reasonably optimal result. - Abstract: If all the resources in power supply side, transmission part, and power demand side are considered together, the optimal expansion scheme from the perspective of the whole system can be achieved. In this paper, generation expansion planning and transmission expansion planning are combined into one model. Moreover, the effects of demand response in reducing peak load are taken into account in the planning model, which can cut back the generation expansion capacity and transmission expansion capacity. Existing approaches to considering demand response for planning tend to overestimate the impacts of demand response on peak load reduction. These approaches usually focus on power reduction at the moment of peak load without considering the situations in which load demand at another moment may unexpectedly become the new peak load due to demand response. These situations are analyzed in this paper. Accordingly, a novel approach to incorporating demand response in a planning model is proposed. A modified unit commitment model with demand response is utilized. The planning model is thereby a bi-level model with interactions between generation-transmission expansion planning and operation simulation to reflect the actual effects of demand response and find the reasonably optimal planning result.

  14. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    Full Text Available A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computation savings can be obtained, albeit with some information loss, suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
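    A small sketch of the sampling idea, assuming a power-law distance kernel and arbitrary parameter values: the exact infectious pressure sums over all infectious individuals, while the approximation sums over a random subsample and scales up, as in the simple random sampling scheme described.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic population: coordinates of susceptible and currently infectious individuals.
n_sus, n_inf = 2000, 500
sus_xy = rng.uniform(0, 100, size=(n_sus, 2))
inf_xy = rng.uniform(0, 100, size=(n_inf, 2))

alpha, beta = 0.004, 2.0   # hypothetical susceptibility and spatial-decay parameters

def infection_probs(sus_xy, inf_xy):
    """Exact ILM infectious pressure: P(infection) = 1 - exp(-alpha * sum_j d_ij**-beta)."""
    d = np.linalg.norm(sus_xy[:, None, :] - inf_xy[None, :, :], axis=2)
    pressure = alpha * np.sum(d ** (-beta), axis=1)
    return 1.0 - np.exp(-pressure)

def infection_probs_sampled(sus_xy, inf_xy, sample_size):
    """Approximate pressure from a simple random sample of infectious individuals,
    scaled up to the full number of infectives (the paper also considers
    spatially stratified schemes)."""
    idx = rng.choice(len(inf_xy), size=sample_size, replace=False)
    d = np.linalg.norm(sus_xy[:, None, :] - inf_xy[idx][None, :, :], axis=2)
    pressure = alpha * (len(inf_xy) / sample_size) * np.sum(d ** (-beta), axis=1)
    return 1.0 - np.exp(-pressure)

exact = infection_probs(sus_xy, inf_xy)
approx = infection_probs_sampled(sus_xy, inf_xy, sample_size=50)
print("Mean absolute error of sampled probabilities:", np.abs(exact - approx).mean())
```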

  15. Modelling sediment dynamics due to hillslope-river interactions : incorporating fluvial behaviour in landscape evolution model LAPSUS

    NARCIS (Netherlands)

    Baartman, Jantiene E. M.; van Gorp, Wouter; Temme, Arnaud J. A. M.; Schoorl, Jeroen M.

    Landscape evolution models (LEMs) simulate the three-dimensional development of landscapes over time. Different LEMs have different foci, e.g. erosional behaviour, river dynamics, the fluvial domain, hillslopes or a combination. LEM LAPSUS is a relatively simple cellular model operating on

  16. An improved analytical model of 4H-SiC MESFET incorporating bulk and interface trapping effects

    Science.gov (United States)

    Hema Lata Rao, M.; Narasimha Murty, N. V. L.

    2015-01-01

    An improved analytical model for the current—voltage (I-V) characteristics of the 4H-SiC metal semiconductor field effect transistor (MESFET) on a high purity semi-insulating (HPSI) substrate with trapping and thermal effects is presented. The 4H-SiC MESFET structure includes a stack of HPSI substrates and a uniformly doped channel layer. The trapping effects include both the effect of multiple deep-level traps in the substrate and surface traps between the gate to source/drain. The self-heating effects are also incorporated to obtain the accurate and realistic nature of the analytical model. The importance of the proposed model is emphasised through the inclusion of the recent and exact nature of the traps in the 4H-SiC HPSI substrate responsible for substrate compensation. The analytical model is used to exhibit DC I-V characteristics of the device with and without trapping and thermal effects. From the results, the current degradation is observed due to the surface and substrate trapping effects and the negative conductance introduced by the self-heating effect at a high drain voltage. The calculated results are compared with reported experimental and two-dimensional simulations (Silvaco®-TCAD). The proposed model also illustrates the effectiveness of the gate—source distance scaling effect compared to the gate—drain scaling effect in optimizing 4H-SiC MESFET performance. Results demonstrate that the proposed I-V model of 4H-SiC MESFET is suitable for realizing SiC based monolithic circuits (MMICs) on HPSI substrates.

  17. An improved analytical model of 4H-SiC MESFET incorporating bulk and interface trapping effects

    International Nuclear Information System (INIS)

    Rao, M. Hema Lata; Murty, N. V. L. Narasimha

    2015-01-01

    An improved analytical model for the current—voltage (I–V) characteristics of the 4H-SiC metal semiconductor field effect transistor (MESFET) on a high purity semi-insulating (HPSI) substrate with trapping and thermal effects is presented. The 4H-SiC MESFET structure includes a stack of HPSI substrates and a uniformly doped channel layer. The trapping effects include both the effect of multiple deep-level traps in the substrate and surface traps between the gate to source/drain. The self-heating effects are also incorporated to obtain the accurate and realistic nature of the analytical model. The importance of the proposed model is emphasised through the inclusion of the recent and exact nature of the traps in the 4H-SiC HPSI substrate responsible for substrate compensation. The analytical model is used to exhibit DC I–V characteristics of the device with and without trapping and thermal effects. From the results, the current degradation is observed due to the surface and substrate trapping effects and the negative conductance introduced by the self-heating effect at a high drain voltage. The calculated results are compared with reported experimental and two-dimensional simulations (Silvaco®-TCAD). The proposed model also illustrates the effectiveness of the gate—source distance scaling effect compared to the gate—drain scaling effect in optimizing 4H-SiC MESFET performance. Results demonstrate that the proposed I–V model of 4H-SiC MESFET is suitable for realizing SiC based monolithic circuits (MMICs) on HPSI substrates. (semiconductor devices)

  18. Incorporation of velocity-dependent restitution coefficient and particle surface friction into kinetic theory for modeling granular flow cooling.

    Science.gov (United States)

    Duan, Yifei; Feng, Zhi-Gang

    2017-12-01

    Kinetic theory (KT) has been successfully used to model rapid granular flows in which particle interactions are frictionless and near elastic. However, it fails when particle interactions become frictional and inelastic. For example, the KT is not able to accurately predict the free cooling process of a vibrated granular medium that consists of inelastic frictional particles under microgravity. The main reason that the classical KT fails to model these flows is due to its inability to account for the particle surface friction and its inelastic behavior, which are the two most important factors that need be considered in modeling collisional granular flows. In this study, we have modified the KT model that is able to incorporate these two factors. The inelasticity of a particle is considered by establishing a velocity-dependent expression for the restitution coefficient based on many experimental studies found in the literature, and the particle friction effect is included by using a tangential restitution coefficient that is related to the particle friction coefficient. Theoretical predictions of the free cooling process by the classical KT and the improved KT are compared with the experimental results from a study conducted on an airplane undergoing parabolic flights without the influence of gravity [Y. Grasselli, G. Bossis, and G. Goutallier, Europhys. Lett. 86, 60007 (2009)10.1209/0295-5075/86/60007]. Our results show that both the velocity-dependent restitution coefficient and the particle surface friction are important in predicting the free cooling process of granular flows; the modified KT model that integrates these two factors is able to improve the simulation results and leads to better agreement with the experimental results.
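    The following sketch illustrates the qualitative idea with assumed functional forms: a velocity-dependent normal restitution coefficient (a viscoelastic-sphere-like expression) and a constant tangential restitution standing in for surface friction, inserted into a Haff-style homogeneous cooling law. The coefficients and the per-collision dissipation expression are illustrative, not the modified kinetic-theory model of the paper.

```python
import numpy as np

def restitution(v, a=0.3):
    """Velocity-dependent normal restitution for viscoelastic spheres:
    e(v) ~ 1 - a * v**(1/5), clipped to stay physical (assumed form)."""
    return np.clip(1.0 - a * v ** 0.2, 0.0, 1.0)

def cool_granular_gas(T0=1.0, beta_t=0.4, n_coll_per_time=50.0, dt=1e-3, t_end=2.0):
    """Haff-style homogeneous cooling: each collision dissipates a fraction of the
    granular temperature set by the normal and tangential restitution coefficients
    (the dissipation expression below is an assumed, simplified form)."""
    t, T = 0.0, T0
    history = [(t, T)]
    while t < t_end:
        v_typ = np.sqrt(T)                       # typical impact velocity
        e_n = restitution(v_typ)                 # velocity-dependent normal restitution
        # Energy loss per collision: normal inelasticity plus a tangential (frictional) part.
        diss = (1.0 - e_n ** 2) / 4.0 + (1.0 - beta_t ** 2) / 12.0
        collision_rate = n_coll_per_time * np.sqrt(T)  # Enskog-like T**(1/2) scaling
        T -= dt * collision_rate * diss * T
        t += dt
        history.append((t, T))
    return np.array(history)

hist = cool_granular_gas()
print("Granular temperature after cooling:", hist[-1, 1])
```

Because e(v) rises toward 1 as the typical impact velocity falls, the cooling slows at late times relative to the constant-restitution case, which is the qualitative effect a velocity-dependent restitution coefficient introduces.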

  19. Incorporating a prediction of postgrazing herbage mass into a whole-farm model for pasture-based dairy systems.

    Science.gov (United States)

    Gregorini, P; Galli, J; Romera, A J; Levy, G; Macdonald, K A; Fernandez, H H; Beukes, P C

    2014-07-01

    The DairyNZ whole-farm model (WFM; DairyNZ, Hamilton, New Zealand) consists of a framework that links component models for animal, pastures, crops, and soils. The model was developed to assist with analysis and design of pasture-based farm systems. New (this work) and revised (e.g., cow, pasture, crops) component models can be added to the WFM, keeping the model flexible and up to date. Nevertheless, the WFM does not account for plant-animal relationships determining herbage-depletion dynamics. The user has to preset the maximum allowable level of herbage depletion [i.e., postgrazing herbage mass (residuals)] throughout the year. Because residuals have a direct effect on herbage regrowth, the WFM in its current form does not dynamically simulate the effect of grazing pressure on herbage depletion and consequent effect on herbage regrowth. The management of grazing pressure is a key component of pasture-based dairy systems. Thus, the main objective of the present work was to develop a new version of the WFM able to predict residuals, and thereby simulate related effects of grazing pressure dynamically at the farm scale. This objective was accomplished by incorporating a new component model into the WFM. This model represents plant-animal relationships, for example sward structure and herbage intake rate, and resulting level of herbage depletion. The sensitivity of the new version of the WFM was evaluated and then the new WFM was tested against an experimental data set previously used to evaluate the WFM and to illustrate the adequacy and improvement of the model development. Key outputs variables of the new version pertinent to this work (milk production, herbage dry matter intake, intake rate, harvesting efficiency, and residuals) responded acceptably to a range of input variables. The relative prediction errors for monthly and mean annual residual predictions were 20 and 5%, respectively. Monthly predictions of residuals had a line bias (1.5%), with a proportion

  20. Comparison in the calculation of committed effective dose using the ICRP 30 and ICRP 60 models for a repeated incorporation by inhalation of I-125

    International Nuclear Information System (INIS)

    Carreno P, A.L.; Cortes C, A.; Alonso V, G.; Serrano P, F.

    2005-01-01

    In the present work, a comparison of the committed effective dose calculated using the ICRP 30 models and those of the ICRP 60 for the analysis of internal dose due to repeated incorporation of I-125 is shown. The estimates of incorporated activity are obtained from the data provided for an intercomparison exercise, from which the internal dose is subsequently determined. To estimate the initial activity incorporated through repeated intake, it was assumed that the intake occurred as multiple individual incorporations at the midpoints of the monitoring periods. The results using the ICRP 30 and ICRP 60 models are compared and the causes of the differences are analyzed. (Author)
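    The bookkeeping behind the midpoint-intake assumption can be sketched as follows. The retention function m(t) and the dose coefficient below are invented placeholders, not the ICRP 30 or ICRP 60 values for I-125; the point is only to show how each monitoring-period measurement is corrected for the residue of earlier intakes and converted to an intake assumed to occur at the midpoint of its period.

```python
import math

def retention_fraction(days):
    """Hypothetical whole-body retention fraction m(t) after an acute intake
    (single 80-day effective half-life component, assumed for illustration)."""
    return 0.3 * math.exp(-math.log(2) * days / 80.0)

e_coeff = 1.5e-8   # assumed committed effective dose per unit intake, Sv/Bq

# Measured activities (Bq) at the end of successive 30-day monitoring periods.
measurements = [120.0, 150.0, 90.0]
period = 30.0

intakes = []
for k, m_meas in enumerate(measurements):
    t_mid = period / 2.0          # time from the assumed midpoint intake to the measurement
    # Subtract the residue of earlier estimated intakes before inverting m(t).
    residual = sum(I * retention_fraction((k - j) * period + period / 2.0)
                   for j, I in enumerate(intakes))
    intakes.append(max(m_meas - residual, 0.0) / retention_fraction(t_mid))

committed_dose = e_coeff * sum(intakes)
print("Estimated intakes (Bq):", [round(i, 1) for i in intakes])
print(f"Committed effective dose: {committed_dose * 1e3:.3f} mSv")
```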

  1. A modified Wright-Fisher model that incorporates Ne: A variant of the standard model with increased biological realism and reduced computational complexity.

    Science.gov (United States)

    Zhao, Lei; Gossmann, Toni I; Waxman, David

    2016-03-21

    The Wright-Fisher model is an important model in evolutionary biology and population genetics. It has been applied in numerous analyses of finite populations with discrete generations. It is recognised that real populations can behave, in some key aspects, as though their size is not the census size, N, but rather a smaller size, namely the effective population size, Ne. However, in the Wright-Fisher model, there is no distinction between the effective and census population sizes. Equivalently, we can say that in this model, Ne coincides with N. The Wright-Fisher model therefore lacks an important aspect of biological realism. Here, we present a method that allows Ne to be directly incorporated into the Wright-Fisher model. The modified model involves matrices whose size is determined by Ne. Thus, apart from increased biological realism, the modified model also has reduced computational complexity, particularly so when Ne ≪ N. For complex problems, it may be hard or impossible to numerically analyse the most commonly-used approximation of the Wright-Fisher model that incorporates Ne, namely the diffusion approximation. An alternative approach is simulation. However, the simulations need to be sufficiently detailed that they yield an effective size that is different to the census size. Simulations may also be time consuming and have attendant statistical errors. The method presented in this work may then be the only alternative to simulations, when Ne differs from N. We illustrate the straightforward application of the method to some problems involving allele fixation and the determination of the equilibrium site frequency spectrum. We then apply the method to the problem of fixation when three alleles are segregating in a population. This latter problem is significantly more complex than a two allele problem and, since the diffusion equation cannot be numerically solved, the only other way Ne can be incorporated into the analysis is by simulation. We have
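    A minimal sketch of the central idea, assuming a standard binomial Wright-Fisher transition matrix whose dimension is set by Ne rather than the census size N (the authors' modified construction may differ in detail). The fixation probability is approximated by repeated multiplication with the transition matrix; scipy supplies the binomial probabilities.

```python
import numpy as np
from scipy.stats import binom

def wf_transition_matrix(ne, s=0.0):
    """Wright-Fisher transition matrix whose dimension is set by the effective size Ne.
    Entry [i, j] is the probability of j copies of the allele in the next generation
    given i copies now, with selection coefficient s (minimal sketch)."""
    states = np.arange(ne + 1)
    p = states / ne
    p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))           # selection-adjusted frequency
    return binom.pmf(states[None, :], ne, p_sel[:, None])    # (Ne+1) x (Ne+1)

def fixation_probability(ne, i0, s=0.0, max_gen=20_000):
    """Approximate probability that an allele starting at i0 copies (out of Ne) fixes,
    obtained by iterating the transition matrix until the chain is essentially absorbed."""
    T = wf_transition_matrix(ne, s)
    v = np.zeros(ne + 1)
    v[i0] = 1.0
    for _ in range(max_gen):
        v = v @ T
    return v[-1]

# A census population of N = 1000 behaving like Ne = 100: the matrix is 101 x 101
# rather than 1001 x 1001, which is the computational saving the modified model exploits.
print("P(fix), neutral, 1 copy in Ne=100:", fixation_probability(100, 1))   # ~1/Ne = 0.01
```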

  2. Stochastic modeling of phosphorus transport in the Three Gorges Reservoir by incorporating variability associated with the phosphorus partition coefficient

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lei; Fang, Hongwei; Xu, Xingya; He, Guojian; Zhang, Xuesong; Reible, Danny

    2017-08-01

    Phosphorus (P) fate and transport plays a crucial role in the ecology of rivers and reservoirs in which eutrophication is limited by P. A key uncertainty in models used to help manage P in such systems is the partitioning of P to suspended and bed sediments. By analyzing data from field and laboratory experiments, we stochastically characterize the variability of the partition coefficient (Kd) and derive spatio-temporal solutions for P transport in the Three Gorges Reservoir (TGR). We formulate a set of stochastic partial differential equations (SPDEs) to simulate P transport by randomly sampling Kd from the measured distributions, to obtain statistical descriptions of the P concentration and retention in the TGR. The correspondence between predicted and observed P concentrations and P retention in the TGR, combined with the ability to effectively characterize uncertainty, suggests that a model that incorporates the observed variability can better describe P dynamics and more effectively serve as a tool for P management in the system. This study highlights the importance of considering parametric uncertainty in estimating uncertainty/variability associated with simulated P transport.
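    The propagation step can be sketched in a few lines: Kd is drawn from an assumed lognormal distribution and each draw fixes the dissolved/particulate split through equilibrium partitioning. The distribution parameters, suspended sediment concentration and total P value are illustrative; in the study the sampled Kd values feed the stochastic transport equations rather than a single algebraic partitioning step.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative lognormal variability for the partition coefficient Kd (L/kg);
# in practice the distribution would be characterised from field and laboratory data.
kd_samples = rng.lognormal(mean=np.log(200.0), sigma=0.6, size=10_000)

ssc = 0.5e-3           # suspended sediment concentration, kg/L (assumed)
total_p = 0.12         # total phosphorus concentration, mg/L (assumed)

# Equilibrium partitioning: dissolved fraction = 1 / (1 + Kd * SSC).
dissolved_fraction = 1.0 / (1.0 + kd_samples * ssc)
dissolved_p = total_p * dissolved_fraction

print(f"Dissolved P: median {np.median(dissolved_p):.3f} mg/L, "
      f"5-95% range {np.percentile(dissolved_p, 5):.3f}-{np.percentile(dissolved_p, 95):.3f} mg/L")
```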

  3. Incorporating a vascular term into a reference region model for the analysis of DCE-MRI data: a simulation study

    International Nuclear Information System (INIS)

    Faranesh, A Z; Yankeelov, T E

    2008-01-01

    A vascular term was incorporated into a reference region (RR) model analysis of DCE-MRI data, and its effect on the accuracy of the model in estimating tissue kinetic parameters in a tissue of interest (TOI) was systematically investigated through computer simulations. Errors in the TOI volume transfer constant (Ktrans,TOI) and TOI extravascular extracellular volume (ve,TOI) that result when the fractional plasma volume (vp) was included in (1) neither region, (2) the TOI only, or (3) both regions were investigated. For nominal values of tumor kinetic parameters (ve,TOI = 0.40 and Ktrans,TOI = 0.25 min-1), if the vascular term was included in neither region or in the TOI only, the Ktrans,TOI error was within 20% for vp,TOI values of 0.03 and above, and the ve,TOI error was within 20% for the full range of vp,TOI studied (0.01-0.10). The effects of temporal resolution were shown to be complex, and in some cases errors increased with increasing temporal resolution

  4. Incorporating adolescent females' perceptions of their partners' attitudes toward condoms into a model of female adolescent condom use.

    Science.gov (United States)

    Hogben, Matthew; Liddon, Nicole; Pierce, Antonya; Sawyer, Mary; Papp, John R; Black, Carolyn M; Koumans, Emilia H

    2006-11-01

    The highest rates of sexually transmitted infections in the U.S. occur among adolescent females. One prevention strategy promoted for sexually active adolescents is condom use; therefore, influences on correct and consistent condom use are worth examining. Because interventions and observational research into predicting and increasing condom use have yielded mixed results, we hypothesized that a theoretically driven model incorporating female adolescents' perceptions about partner sentiments along with their own perceptions, intentions, and behaviours would improve condom use predictions. We also measured condom use errors and consistency for a more precise estimate of effective use than is common in the literature. In three structural equation models tested on a sample of 519 female adolescents, we found that intentions were associated with both correct and consistent condom use; that females' expectancy beliefs about condom use were associated with intentions; and that females' expectancy beliefs about partners' sentiments reduced the impact of their expectancy beliefs about condom use. The implications of these relations for condom use correctness and consistency are discussed with respect to informing interventions and future research.

  5. Incorporation of diffusion-weighted magnetic resonance imaging data into a simple mathematical model of tumor growth

    International Nuclear Information System (INIS)

    Atuegwu, N C; Colvin, D C; Loveless, M E; Gore, J C; Yankeelov, T E; Xu, L

    2012-01-01

    We build on previous work to show how serial diffusion-weighted MRI (DW-MRI) data can be used to estimate proliferation rates in a rat model of brain cancer. Thirteen rats were inoculated intracranially with 9L tumor cells; eight rats were treated with the chemotherapeutic drug 1,3-bis(2-chloroethyl)-1-nitrosourea and five rats were untreated controls. All animals underwent DW-MRI immediately before, one day and three days after treatment. Values of the apparent diffusion coefficient (ADC) were calculated from the DW-MRI data and then used to estimate the number of cells in each voxel and also for whole tumor regions of interest. The data from the first two imaging time points were then used to estimate the proliferation rate of each tumor. The proliferation rates were used to predict the number of tumor cells at day three, and this was correlated with the corresponding experimental data. The voxel-by-voxel analysis yielded Pearson's correlation coefficients ranging from −0.06 to 0.65, whereas the region of interest analysis provided Pearson's and concordance correlation coefficients of 0.88 and 0.80, respectively. Additionally, the ratio of positive to negative proliferation values was used to separate the treated and control animals (p <0.05) at an earlier point than the mean ADC values. These results further illustrate how quantitative measurements of tumor state obtained non-invasively by imaging can be incorporated into mathematical models that predict tumor growth. (paper)
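    A sketch of the imaging-to-model chain under stated assumptions: ADC is mapped linearly to cell number between the ADC of free water and a minimum ADC at maximal packing, and a proliferation rate is estimated from two imaging sessions using exponential growth as a stand-in for the study's growth model. The ADC bounds, carrying capacity and voxel values are illustrative.

```python
import numpy as np

# Illustrative constants: ADC of free water, minimum ADC at maximum packing,
# and the voxel carrying capacity (cells/voxel).  Not values from the study.
ADC_WATER = 3.0e-3   # mm^2/s
ADC_MIN = 0.6e-3     # mm^2/s
THETA = 1.0e6        # carrying capacity, cells per voxel

def cells_from_adc(adc):
    """Linear mapping from ADC to cell number: low ADC -> densely packed voxel."""
    return THETA * (ADC_WATER - adc) / (ADC_WATER - ADC_MIN)

def proliferation_rate(adc_t0, adc_t1, dt_days):
    """Proliferation rate estimated from two imaging sessions (exponential growth assumed)."""
    n0, n1 = cells_from_adc(adc_t0), cells_from_adc(adc_t1)
    return np.log(n1 / n0) / dt_days

# Two hypothetical voxel-wise ADC maps acquired one day apart.
adc_day0 = np.array([2.0e-3, 1.5e-3, 1.2e-3])
adc_day1 = np.array([1.8e-3, 1.4e-3, 1.3e-3])
k = proliferation_rate(adc_day0, adc_day1, dt_days=1.0)

# Forward prediction to day 3, analogous to predicting tumor burden for comparison
# with the third imaging session; negative k values flag responding (treated) tissue.
n_day3 = cells_from_adc(adc_day1) * np.exp(k * 2.0)
print("Estimated proliferation rates (1/day):", np.round(k, 3))
print("Predicted cells at day 3:", np.round(n_day3).astype(int))
```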

  6. Incorporating a Generic Model of Subcutaneous Insulin Absorption into the AIDA v4 Diabetes Simulator 3. Early Plasma Insulin Determinations

    Science.gov (United States)

    Lehmann, Eldon D.; Tarín, Cristina; Bondia, Jorge; Teufel, Edgar; Deutsch, Tibor

    2009-01-01

    Introduction AIDA is an interactive educational diabetes simulator that has been available without charge via the Internet for over 12 years. Recent articles have described the incorporation of a novel generic model of insulin absorption into AIDA as a way of enhancing its capabilities. The basic model components to be integrated have been overviewed, with the aim being to provide simulations of regimens utilizing insulin analogues, as well as insulin doses greater than 40 IU (the current upper limit within the latest release of AIDA [v4.3a]). Some preliminary calculated insulin absorption results have also recently been described. Methods This article presents the first simulated plasma insulin profiles from the integration of the generic subcutaneous insulin absorption model, and the currently implemented model in AIDA for insulin disposition. Insulin absorption has been described by the physiologically based model of Tarín and colleagues. A single compartment modeling approach has been used to specify how absorbed insulin is distributed in, and eliminated from, the human body. To enable a numerical solution of the absorption model, a spherical subcutaneous depot for the injected insulin dose has been assumed and spatially discretized into shell compartments with homogeneous concentrations, having as its center the injection site. The number of these compartments will depend on the dose and type of insulin. Insulin inflow arises as the sum of contributions to the different shells. For this report the first bench testing of plasma insulin determinations has been done. Results Simulated plasma insulin profiles are provided for currently available insulin preparations, including a rapidly acting insulin analogue (e.g., lispro/Humalog or aspart/Novolog), a short-acting (regular) insulin preparation (e.g., Actrapid), intermediate-acting insulins (both Semilente and neutral protamine Hagedorn types), and a very long-acting insulin analogue (e.g., glargine/Lantus), as

  7. Evaluating the Capacity of Global CO2 Flux and Atmospheric Transport Models to Incorporate New Satellite Observations

    Science.gov (United States)

    Kawa, S. R.; Collatz, G. J.; Erickson, D. J.; Denning, A. S.; Wofsy, S. C.; Andrews, A. E.

    2007-01-01

    As we enter the new era of satellite remote sensing for CO2 and other carbon cycle-related quantities, advanced modeling and analysis capabilities are required to fully capitalize on the new observations. Model estimates of CO2 surface flux and atmospheric transport are required for initial constraints on inverse analyses, to connect atmospheric observations to the location of surface sources and sinks, and ultimately for future projections of carbon-climate interactions. For application to current, planned, and future remotely sensed CO2 data, it is desirable that these models are accurate and unbiased at time scales from less than daily to multi-annual and at spatial scales from several kilometers or finer to global. Here we focus on simulated CO2 fluxes from terrestrial vegetation and atmospheric transport mutually constrained by analyzed meteorological fields from the Goddard Modeling and Assimilation Office for the period 1998 through 2006. Use of assimilated meteorological data enables direct model comparison to observations across a wide range of scales of variability. The biospheric fluxes are produced by the CASA model at 1x1 degrees on a monthly mean basis, modulated hourly with analyzed temperature and sunlight. Both physiological and biomass burning fluxes are derived using satellite observations of vegetation, burned area (as in GFED-2), and analyzed meteorology. For the purposes of comparison to CO2 data, fossil fuel and ocean fluxes are also included in the transport simulations. In this presentation we evaluate the model's ability to simulate CO2 flux and mixing ratio variability in comparison to in situ observations at sites in Northern mid latitudes and the continental tropics. The influence of key process representations is inferred. We find that the model can resolve much of the hourly to synoptic variability in the observations, although there are limits imposed by vertical resolution of boundary layer processes. The seasonal cycle and its

  8. Model-Based Evaluation of Higher Doses of Rifampin Using a Semimechanistic Model Incorporating Autoinduction and Saturation of Hepatic Extraction.

    Science.gov (United States)

    Chirehwa, Maxwell T; Rustomjee, Roxana; Mthiyane, Thuli; Onyebujoh, Philip; Smith, Peter; McIlleron, Helen; Denti, Paolo

    2016-01-01

    Rifampin is a key sterilizing drug in the treatment of tuberculosis (TB). It induces its own metabolism, but neither the onset nor the extent of autoinduction has been adequately described. Currently, the World Health Organization recommends a rifampin dose of 8 to 12 mg/kg of body weight, which is believed to be suboptimal, and higher doses may potentially improve treatment outcomes. However, a nonlinear increase in exposure may be observed because of saturation of hepatic extraction and hence this should be taken into consideration when a dose increase is implemented. Intensive pharmacokinetic (PK) data from 61 HIV-TB-coinfected patients in South Africa were collected at four visits, on days 1, 8, 15, and 29, after initiation of treatment. Data were analyzed by population nonlinear mixed-effects modeling. Rifampin PKs were best described by using a transit compartment absorption and a well-stirred liver model with saturation of hepatic extraction, including a first-pass effect. Autoinduction was characterized by using an exponential-maturation model: hepatic clearance almost doubled from the baseline to steady state, with a half-life of around 4.5 days. The model predicts that increases in the dose of rifampin result in more-than-linear drug exposure increases as measured by the 24-h area under the concentration-time curve. Simulations with doses of up to 35 mg/kg produced results closely in line with those of clinical trials. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
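    To make the two mechanisms concrete, the sketch below combines an exponential-maturation induction of the maximum metabolic capacity (half-life of about 4.5 days, as reported) with Michaelis-Menten elimination as a simple stand-in for saturable hepatic extraction, in a one-compartment model integrated by Euler steps. All parameter values are invented for illustration and the structure is deliberately simpler than the paper's transit-absorption, well-stirred liver model; it only reproduces the qualitative behaviour of more-than-linear AUC increases with dose and lower exposure after autoinduction.

```python
import numpy as np

# Illustrative parameters (not the fitted population estimates).
ka = 1.0                  # 1/h, absorption rate
vd = 50.0                 # L, volume of distribution
vmax0 = 400.0             # mg/h, baseline maximum hepatic metabolic rate
km = 5.0                  # mg/L, Michaelis constant -> saturable elimination
ind_max = 1.0             # autoinduction roughly doubles capacity at steady state
t_half_ind = 4.5 * 24.0   # h, half-life of the induction (maturation) process

def induction_factor(t_hours):
    """Exponential-maturation autoinduction: enzyme activity rises from baseline
    toward steady state with a ~4.5-day half-life."""
    k_ind = np.log(2) / t_half_ind
    return 1.0 + ind_max * (1.0 - np.exp(-k_ind * t_hours))

def daily_auc(dose_mg, day, dt=0.05):
    """AUC over one dosing day (once-daily dosing), Euler integration of a
    one-compartment model with Michaelis-Menten (saturable) elimination."""
    vmax = vmax0 * induction_factor(day * 24.0)
    gut, amount, auc = dose_mg, 0.0, 0.0
    for _ in np.arange(0.0, 24.0, dt):
        conc = amount / vd
        gut += -ka * gut * dt
        amount += (ka * gut - vmax * conc / (km + conc)) * dt
        auc += conc * dt
    return auc

for dose in (600, 1200, 2100):   # ~10, 20, 35 mg/kg for a 60 kg patient
    print(f"dose {dose} mg: AUC day 1 = {daily_auc(dose, 0):7.1f}, "
          f"AUC day 28 = {daily_auc(dose, 28):7.1f} mg*h/L")
```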

  9. Recent Progresses in Incorporating Human Land-Water Management into Global Land Surface Models Toward Their Integration into Earth System Models

    Science.gov (United States)

    Pokhrel, Yadu N.; Hanasaki, Naota; Wada, Yoshihide; Kim, Hyungjun

    2016-01-01

    The global water cycle has been profoundly affected by human land-water management. As the changes in the water cycle on land can affect the functioning of a wide range of biophysical and biogeochemical processes of the Earth system, it is essential to represent human land-water management in Earth system models (ESMs). During the recent past, noteworthy progress has been made in large-scale modeling of human impacts on the water cycle but sufficient advancements have not yet been made in integrating the newly developed schemes into ESMs. This study reviews the progresses made in incorporating human factors in large-scale hydrological models and their integration into ESMs. The study focuses primarily on the recent advancements and existing challenges in incorporating human impacts in global land surface models (LSMs) as a way forward to the development of ESMs with humans as integral components, but a brief review of global hydrological models (GHMs) is also provided. The study begins with the general overview of human impacts on the water cycle. Then, the algorithms currently employed to represent irrigation, reservoir operation, and groundwater pumping are discussed. Next, methodological deficiencies in current modeling approaches and existing challenges are identified. Furthermore, light is shed on the sources of uncertainties associated with model parameterizations, grid resolution, and datasets used for forcing and validation. Finally, representing human land-water management in LSMs is highlighted as an important research direction toward developing integrated models using ESM frameworks for the holistic study of human-water interactions within the Earth system.

  10. Simulating land-use changes by incorporating spatial autocorrelation and self-organization in CLUE-S modeling: a case study in Zengcheng District, Guangzhou, China

    Science.gov (United States)

    Mei, Zhixiong; Wu, Hao; Li, Shiyun

    2018-06-01

    The Conversion of Land Use and its Effects at Small regional extent (CLUE-S), which is a widely used model for land-use simulation, utilizes logistic regression to estimate the relationships between land use and its drivers, and thus, predict land-use change probabilities. However, logistic regression disregards possible spatial autocorrelation and self-organization in land-use data. Autologistic regression can depict spatial autocorrelation but cannot address self-organization, while logistic regression considering only self-organization (NE-logistic regression) fails to capture spatial autocorrelation. Therefore, this study developed a regression (NE-autologistic regression) method, which incorporated both spatial autocorrelation and self-organization, to improve CLUE-S. The Zengcheng District of Guangzhou, China was selected as the study area. The land-use data of 2001, 2005, and 2009, as well as 10 typical driving factors, were used to validate the proposed regression method and the improved CLUE-S model. Then, three future land-use scenarios in 2020: the natural growth scenario, ecological protection scenario, and economic development scenario, were simulated using the improved model. Validation results showed that NE-autologistic regression performed better than logistic regression, autologistic regression, and NE-logistic regression in predicting land-use change probabilities. The spatial allocation accuracy and kappa values of NE-autologistic-CLUE-S were higher than those of logistic-CLUE-S, autologistic-CLUE-S, and NE-logistic-CLUE-S for the simulations of two periods, 2001-2009 and 2005-2009, which proved that the improved CLUE-S model achieved the best simulation and was thereby effective to a certain extent. The scenario simulation results indicated that under all three scenarios, traffic land and residential/industrial land would increase, whereas arable land and unused land would decrease during 2009-2020. Apparent differences also existed in the
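    The autocovariate idea can be sketched with synthetic data: the logistic suitability term gains an extra regressor equal to the share of the 8-cell neighbourhood already occupied by the target land use. The coefficients and the driving factor are arbitrary, and the self-organization (NE) term of the proposed NE-autologistic method is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 50x50 landscape: a binary land-use map and one driving factor (e.g. slope).
size = 50
land_use = (rng.random((size, size)) < 0.3).astype(float)
driver = rng.normal(size=(size, size))

def autocovariate(grid):
    """Share of the 8-cell neighbourhood occupied by the target land use,
    i.e. the spatial-autocorrelation term added to the logistic model."""
    padded = np.pad(grid, 1, mode="edge")
    neigh = sum(padded[1 + dy:1 + dy + size, 1 + dx:1 + dx + size]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return neigh / 8.0

def change_probability(driver, autocov, b0=-1.0, b1=0.8, b_auto=2.5):
    """Autologistic form: logit(p) = b0 + b1*driver + b_auto*autocovariate.
    Coefficients would normally be estimated from historical land-use maps."""
    z = b0 + b1 * driver + b_auto * autocov
    return 1.0 / (1.0 + np.exp(-z))

p = change_probability(driver, autocovariate(land_use))
print("Mean predicted land-use probability:", p.mean().round(3))
print("Correlation of probability with neighbourhood density:",
      np.corrcoef(p.ravel(), autocovariate(land_use).ravel())[0, 1].round(3))
```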

  11. Incorporating soil variability in continental soil water modelling: a trade-off between data availability and model complexity

    Science.gov (United States)

    Peeters, L.; Crosbie, R. S.; Doble, R.; van Dijk, A. I. J. M.

    2012-04-01

    Developing a continental land surface model implies finding a balance between the complexity in representing the system processes and the availability of reliable data to drive, parameterise and calibrate the model. While a high level of process understanding at plot or catchment scales may warrant a complex model, such data is not available at the continental scale. This data sparsity is especially an issue for the Australian Water Resources Assessment system, AWRA-L, a land-surface model designed to estimate the components of the water balance for the Australian continent. This study focuses on the conceptualization and parametrization of the soil drainage process in AWRA-L. Traditionally soil drainage is simulated with Richards' equation, which is highly non-linear. As general analytic solutions are not available, this equation is usually solved numerically. In AWRA-L however, we introduce a simpler function based on simulation experiments that solve Richards' equation. In the simplified function the soil drainage rate, the ratio of drainage (D) over storage (S), decreases exponentially with relative water content. This function is controlled by three parameters, the soil water storage at field capacity (SFC), the drainage fraction at field capacity (KFC) and a drainage function exponent (β): D/S = KFC exp[-β (1 - S/SFC)]. To obtain spatially variable estimates of these three parameters, the Atlas of Australian Soils is used, which lists soil hydraulic properties for each soil profile type. For each soil profile type in the Atlas, 10 days of draining an initially fully saturated, freely draining soil is simulated using HYDRUS-1D. With field capacity defined as the volume of water in the soil after 1 day, the remaining parameters can be obtained by fitting the AWRA-L soil drainage function to the HYDRUS-1D results. This model conceptualisation fully exploits the data available in the Atlas of Australian Soils, without the need to solve the non
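    A small sketch of the per-profile parameter fitting, using a synthetic stand-in for the HYDRUS-1D drainage series: scipy's curve_fit estimates SFC, KFC and β from a storage versus drainage-ratio record. All numbers are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def drainage_ratio(s, s_fc, k_fc, beta):
    """AWRA-L simplified drainage function: D/S = KFC * exp(-beta * (1 - S/SFC))."""
    return k_fc * np.exp(-beta * (1.0 - s / s_fc))

# Synthetic stand-in for a HYDRUS-1D drainage experiment: storages (mm) and the
# corresponding drainage-to-storage ratios over a 10-day drawdown, with a little noise.
storage = np.linspace(150.0, 60.0, 10)             # soil water storage, mm
true_params = (150.0, 0.30, 4.0)                    # SFC, KFC, beta (assumed)
observed_ratio = drainage_ratio(storage, *true_params)
observed_ratio *= 1.0 + 0.02 * np.random.default_rng(3).normal(size=storage.size)

# Fit the three parameters, as done per soil profile type of the Atlas of Australian Soils.
popt, _ = curve_fit(drainage_ratio, storage, observed_ratio, p0=(140.0, 0.2, 3.0))
s_fc, k_fc, beta = popt
print(f"Fitted SFC = {s_fc:.1f} mm, KFC = {k_fc:.3f}, beta = {beta:.2f}")
print("D/S at half of field capacity:", drainage_ratio(0.5 * s_fc, *popt).round(4))
```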

  12. An integrated stochastic multi-regional long-term energy planning model incorporating autonomous power systems and demand response

    International Nuclear Information System (INIS)

    Koltsaklis, Nikolaos E.; Liu, Pei; Georgiadis, Michael C.

    2015-01-01

    The power sector faces a rapid transformation worldwide from a dominant fossil-fueled towards a low carbon electricity generation mix. Renewable energy technologies (RES) are steadily becoming a greater part of the global energy mix, in particular in regions that have put in place policies and measures to promote their utilization. This paper presents an optimization-based approach to address the generation expansion planning (GEP) problem of a large-scale, central power system in a highly uncertain and volatile electricity industry environment. A multi-regional, multi-period mixed-integer linear programming (MILP) model is presented, combining optimization techniques with a Monte Carlo (MCA) method and demand response concepts. The optimization goal concerns the minimization of the total discounted cost by determining optimal power capacity additions per time interval and region, and the power generation mix per technology and time period. The model is evaluated on the Greek power system (GPS), taking also into consideration the scheduled interconnection of the mainland power system with those of selected autonomous islands (Cyclades and Crete), and aims at providing full insight into the composition of the long-term energy roadmap at a national level. - Highlights: • A spatial, multi-period, long-term generation expansion planning model is presented. • A Monte-Carlo method along with a demand response mechanism are incorporated. • Autonomous power systems interconnection is considered. • Electricity and CO2 emission trade are taken into account. • Lignite, natural gas and wind power comprise the dominant power technologies

  13. Introducing a model incorporating early integration of specialist palliative care: A qualitative research study of staff's perspectives.

    Science.gov (United States)

    Michael, Natasha; O'Callaghan, Clare; Brooker, Joanne E; Walker, Helen; Hiscock, Richard; Phillips, David

    2016-03-01

    Palliative care has evolved to encompass early integration, with evaluation of patient and organisational outcomes. However, little is known of staff's experiences and adaptations when change occurs within palliative care services. To explore staff experiences of a transition from a service predominantly focused on end-of-life care to a specialist service encompassing early integration. Qualitative research incorporating interviews, focus groups and anonymous semi-structured questionnaires. Data were analysed using a comparative approach. Service activity data were also aggregated. A total of 32 medical, nursing, allied health and administrative staff serving a 22-bed palliative care unit and community palliative service, within a large health service. Patients cared for within the new model were significantly more likely to be discharged home (7.9% increase, p = 0.003) and less likely to die in the inpatient unit (10.4% decrease, p management was considered valuable, nurses particularly found additional skill expectations challenging, and perceived patients' acute care needs as detracting from emotional and end-of-life care demands. Staff views varied on whether they regarded the new model's faster-paced work-life as consistent with fundamental palliative care principles. Less certainty about care goals, needing to prioritise care tasks, reduced shared support rituals and other losses could intensify stress, leading staff to develop personalised coping strategies. Services introducing and researching innovative models of palliative care need to ensure adequate preparation, maintenance of holistic care principles in faster work-paced contexts and assist staff dealing with demands associated with caring for patients at different stages of illness trajectories. © The Author(s) 2015.

  14. A new heat flux model for the Antarctic Peninsula incorporating spatially variable upper crustal radiogenic heat production

    Science.gov (United States)

    Burton-Johnson, A.; Halpin, J.; Whittaker, J. M.; Graham, F. S.; Watson, S. J.

    2017-12-01

    We present recently published findings (Burton-Johnson et al., 2017) on the variability of Antarctic sub-glacial heat flux and the impact from upper crustal geology. Our new method reveals that the upper crust contributes up to 70% of the Antarctic Peninsula's subglacial heat flux, and that heat flux values are more variable at smaller spatial resolutions than geophysical methods can resolve. Results indicate a higher heat flux on the east and south of the Peninsula (mean 81 mWm-2) where silicic rocks predominate, than on the west and north (mean 67 mWm-2) where volcanic arc and quartzose sediments are dominant. Whilst the data supports the contribution of HPE-enriched granitic rocks to high heat flux values, sedimentary rocks can be of comparative importance dependent on their provenance and petrography. Models of subglacial heat flux must utilize a heterogeneous upper crust with variable radioactive heat production if they are to accurately predict basal conditions of the ice sheet. Our new methodology and dataset facilitate improved numerical model simulations of ice sheet dynamics. The most significant challenge faced remains accurate determination of crustal structure, particularly the depths of the HPE-enriched sedimentary basins and the sub-glacial geology away from exposed outcrops. Continuing research (particularly detailed geophysical interpretation) will better constrain these unknowns and the effect of upper crustal geology on the Antarctic ice sheet. Burton-Johnson, A., Halpin, J.A., Whittaker, J.M., Graham, F.S., and Watson, S.J., 2017, A new heat flux model for the Antarctic Peninsula incorporating spatially variable upper crustal radiogenic heat production: Geophysical Research Letters, v. 44, doi: 10.1002/2017GL073596.

  15. Hydrological simulation in a basin of typical tropical climate and soil using the SWAT model part I: Calibration and validation tests

    Directory of Open Access Journals (Sweden)

    Donizete dos R. Pereira

    2016-09-01

    New hydrological insights: The SWAT model was found suitable for simulating the Pomba River sub-basin at the sites where rainfall representation was reasonable to good. The model can be used in the simulation of maximum, average and minimum annual daily streamflow based on the paired t-test, contributing to the water resources management of the region, although the model still needs to be improved, mainly in the representativeness of rainfall, to give better estimates of extreme values.

  16. Enhancing the performance of model-based elastography by incorporating additional a priori information in the modulus image reconstruction process

    International Nuclear Information System (INIS)

    Doyley, Marvin M; Srinivasan, Seshadri; Dimidenko, Eugene; Soni, Nirmal; Ophir, Jonathan

    2006-01-01

    Model-based elastography is fraught with problems owing to the ill-posed nature of the inverse elasticity problem. To overcome this limitation, we have recently developed a novel inversion scheme that incorporates a priori information concerning the mechanical properties of the underlying tissue structures, and the variance incurred during displacement estimation in the modulus image reconstruction process. The information was procured by employing standard strain imaging methodology, and introduced in the reconstruction process through the generalized Tikhonov approach. In this paper, we report the results of experiments conducted on gelatin phantoms to evaluate the performance of modulus elastograms computed with the generalized Tikhonov (GTK) estimation criterion relative to those computed by employing the un-weighted least-squares estimation criterion, the weighted least-squares estimation criterion and the standard Tikhonov method (i.e., the generalized Tikhonov method with no modulus prior). The results indicate that modulus elastograms computed with the generalized Tikhonov approach had superior elastographic contrast discrimination and contrast recovery. In addition, image reconstruction was more resilient to structural decorrelation noise when additional constraints were imposed on the reconstruction process through the GTK method
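
    As an illustration of the generalized Tikhonov (GTK) estimation criterion described above, the sketch below solves the weighted, prior-penalized least-squares problem in closed form; the forward operator, weights, prior and regularization parameter are synthetic placeholders, not the authors' elastographic quantities.

```python
# x_hat = argmin ||A x - b||^2_W + lam * ||L (x - x0)||^2, with the closed-form
# normal equations (A^T W A + lam L^T L) x = A^T W b + lam L^T L x0.
import numpy as np

def generalized_tikhonov(A, b, W, L, x0, lam):
    """Weighted least squares with a prior estimate x0 and regularization operator L."""
    lhs = A.T @ W @ A + lam * (L.T @ L)
    rhs = A.T @ W @ b + lam * (L.T @ L) @ x0
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(n, n))              # stand-in forward operator
x_true = np.ones(n)
b = A @ x_true + 0.01 * rng.normal(size=n)
W = np.eye(n)                            # inverse displacement-variance weights (identity here)
L = np.eye(n)                            # simplest choice; a difference operator is also common
x0 = 0.9 * np.ones(n)                    # a priori modulus estimate (e.g. from strain imaging)
x_hat = generalized_tikhonov(A, b, W, L, x0, lam=0.1)
```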

  17. Total motion generated in the unstable thoracolumbar spine during management of the typical trauma patient: a comparison of methods in a cadaver model.

    Science.gov (United States)

    Prasarn, Mark L; Zhou, Haitao; Dubose, Dewayne; Rossi, Gianluca Del; Conrad, Bryan P; Horodyski, Marybeth; Rechtine, Glenn R

    2012-05-01

    The proper prehospital and inpatient management of patients with unstable spinal injuries is critical for prevention of secondary neurological compromise. The authors sought to analyze the amount of motion generated in the unstable thoracolumbar spine during various maneuvers and transfers that a trauma patient would typically be subjected to prior to definitive fixation. Five fresh cadavers with surgically created unstable L-1 burst fractures were tested. The amount of angular motion between the T-12 and L-2 vertebral segments was measured using a 3D electromagnetic motion analysis device. A complete sequence of maneuvers and transfers was then performed that a patient would be expected to go through from the time of injury until surgical fixation. These maneuvers and transfers included spine board placement and removal, bed transfers, lateral therapy, and turning the patient prone onto the operating table. During each of these, the authors performed what they believed to be the most commonly used versus the best techniques for preventing undesirable motion at the injury level. When placing a spine board there was more motion in all 3 planes with the log-roll technique, and this difference reached statistical significance for axial rotation (p = 0.018) and lateral bending (p = 0.003). Using logrolling for spine board removal resulted in increased motion again, and this was statistically significant for flexion-extension (p = 0.014). During the bed transfer and lateral therapy, the log-roll technique resulted in more motion in all 3 planes (p ≤ 0.05). When turning the cadavers prone for surgery there was statistically more angular motion in each plane for manually turning the patient versus the Jackson table turn (p ≤ 0.01). The total motion was decreased by almost 50% in each plane when using an alternative to the log-roll techniques during the complete sequence (p ≤ 0.007). Although it is unknown how much motion in the unstable spine is necessary to cause

  18. Incorporation of a health economic modelling tool into public health commissioning: Evidence use in a politicised context.

    Science.gov (United States)

    Sanders, Tom; Grove, Amy; Salway, Sarah; Hampshaw, Susan; Goyder, Elizabeth

    2017-08-01

    This paper explores how commissioners working in an English local government authority (LA) viewed a health economic decision tool for planning services in relation to diabetes. We conducted 15 interviews and 2 focus groups between July 2015 and February 2016, with commissioners (including public health managers, data analysts and council members). Two overlapping themes were identified explaining the obstacles and enablers of using such a tool in commissioning: a) evidence cultures, and b) system interdependency. The former highlighted the diverse evidence cultures present in the LA with politicians influenced by the 'soft' social care agendas affecting their local population and treating local opinion as evidence, whilst public health managers prioritised the scientific view of evidence informed by research. System interdependency further complicated the decision making process by recognizing interlinking with departments and other disease groups. To achieve legitimacy within the commissioning arena health economic modelling needs to function effectively in a highly politicised environment where decisions are made not only on the basis of research evidence, but on grounds of 'soft' data, personal opinion and intelligence. In this context decisions become politicised, with multiple opinions seeking a voice. The way that such decisions are negotiated and which ones establish authority is of importance. We analyse the data using Larson's (1990) discursive field concept to show how the tool becomes an object of research push and pull likely to be used instrumentally by stakeholders to advance specific agendas, not a means of informing complex decisions. In conclusion, LA decision making is underpinned by a transactional business ethic which is a further potential 'pull' mechanism for the incorporation of health economic modelling in local commissioning. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  19. Development of whole core thermal-hydraulic analysis program ACT. 4. Incorporation of three-dimensional upper plenum model

    International Nuclear Information System (INIS)

    Ohshima, Hiroyuki

    2003-03-01

    The thermal-hydraulic analysis computer program ACT is under development for the evaluation of detailed flow and temperature fields in the core region of fast breeder reactors under various operating conditions. The purpose of this program development is to contribute not only to clarifying thermal-hydraulic characteristics that cannot be revealed by experiments due to measurement difficulty but also to performing rational safety design and assessment. This report describes the incorporation of a three-dimensional upper plenum model into ACT and its verification study as part of the program development. To treat the influence of three-dimensional thermal-hydraulic behavior in an upper plenum on the in-core temperature field, the multi-dimensional general-purpose thermal-hydraulic analysis program AQUA, which was developed and validated at JNC, was applied as the base of the upper plenum analysis module of ACT. AQUA makes it possible to model the upper plenum configuration, including the immersed heat exchangers of the direct reactor auxiliary cooling system (DRACS). In coupling the core analysis module, which consists of the fuel-assembly and inter-wrapper gap calculation parts, with the upper plenum module, different types of computational mesh systems were joined using the staggered quarter-assembly mesh scheme. A coupling algorithm among the core, upper plenum and heat transport system modules, which preserves mass, momentum and energy conservation, was developed and optimized in consideration of parallel computing. For program verification, ACT was applied to the analysis of a sodium experiment (PLANDTL-DHX) performed at JNC, which simulated natural circulation decay heat removal under DRACS operating conditions. From the calculation results, the validity of the improved program was confirmed. (author)

  20. Mathematical modeling of (13)C label incorporation of the TCA cycle: the concept of composite precursor function.

    Science.gov (United States)

    Uffmann, Kai; Gruetter, Rolf

    2007-11-15

    A novel approach for the mathematical modeling of (13)C label incorporation into amino acids via the TCA cycle that eliminates the explicit calculation of the labeling of the TCA cycle intermediates is described, resulting in one differential equation per measurable time course of labeled amino acid. The equations demonstrate that both glutamate C4 and C3 labeling depend in a predictable manner on both transmitochondrial exchange rate, V(X), and TCA cycle rate, V(TCA). For example, glutamate C4 labeling alone does not provide any information on either V(X) or V(TCA) but rather a composite "flux". Interestingly, glutamate C3 simultaneously receives label not only from pyruvate C3 but also from glutamate C4, described by composite precursor functions that depend in a probabilistic way on the ratio of V(X) to V(TCA): An initial rate of labeling of glutamate C3 (or C2) being close to zero is indicative of a high V(X)/V(TCA). The derived analytical solution of these equations shows that, when the labeling of the precursor pool pyruvate reaches steady state quickly compared with the turnover rate of the measured amino acids, instantaneous labeling can be assumed for pyruvate. The derived analytical solution has acceptable errors compared with experimental uncertainty, thus obviating precise knowledge on the labeling kinetics of the precursor. In conclusion, a substantial reformulation of the modeling of label flow via the TCA cycle turnover into the amino acids is presented in the current study. This approach allows one to determine metabolic rates by fitting explicit mathematical functions to measured time courses.
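
    A hedged numerical sketch of the central idea, namely that glutamate C4 labelling is governed by a single composite flux V_gt = V_X*V_TCA/(V_X + V_TCA) once the pyruvate precursor is at isotopic steady state; the pool size, fluxes and enrichment below are illustrative values, not fitted parameters from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

V_x, V_tca = 1.0, 0.7          # umol/g/min: transmitochondrial exchange and TCA cycle rate
glu_pool = 10.0                # umol/g: total glutamate pool size (assumed)
pyr_c3 = 0.4                   # assumed steady-state fractional enrichment of pyruvate C3

V_gt = V_x * V_tca / (V_x + V_tca)   # composite flux governing glutamate C4 labelling

def dglu_c4(t, y):
    # first-order approach of glutamate C4 enrichment toward the precursor enrichment
    return V_gt * (pyr_c3 - y) / glu_pool

sol = solve_ivp(dglu_c4, (0.0, 120.0), [0.0], t_eval=np.linspace(0.0, 120.0, 61))
print(sol.y[0, -1])            # C4 enrichment after 120 min
```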

  1. Total motion generated in the unstable cervical spine during management of the typical trauma patient: a comparison of methods in a cadaver model.

    Science.gov (United States)

    Prasarn, Mark L; Horodyski, MaryBeth; Dubose, Dewayne; Small, John; Del Rossi, Gianluca; Zhou, Haitao; Conrad, Bryan P; Rechtine, Glenn R

    2012-05-15

    Biomechanical cadaveric study. We sought to analyze the amount of motion generated in the unstable cervical spine during various maneuvers and transfers that a trauma patient would typically be subjected to prior to definitive fixation, using 2 different protocols. From the time of injury until the spine is adequately stabilized in the operating room, every step in management of the spine-injured patient can result in secondary injury to the spinal cord. The amount of angular motion between C5 and C6, after a surgically created unstable injury, was measured using an electromagnetic motion analysis device (Polhemus Inc., Colchester, VT). A total sequence of maneuvers and transfers was then performed that a patient would be expected to go through from the time of injury until surgical fixation. This included spine board placement and removal, bed transfers, lateral therapy, and turning the patient prone onto the operating table. During each of these, we performed what has been shown to be the best and commonly used (log-roll) techniques. During bed transfers and the turn prone for surgery, there was statistically more angular motion in each plane for traditional transfer with the spine board and manually turning the patient prone as commonly done (P patient from the field to stabilization in the operating room using the best compared with the most commonly used techniques. As previously reported, using log-roll techniques consistently results in unwanted motion at the injured spinal segment.

  2. A three-scale model for ionic solute transport in swelling clays incorporating ion-ion correlation effects

    Science.gov (United States)

    Le, Tien Dung; Moyne, Christian; Murad, Marcio A.

    2015-01-01

    A new three-scale model is proposed to describe the movement of ionic species of different valences in swelling clays characterized by three separate length scales (nano, micro, and macro) and two levels of porosity (nano- and micropores). At the finest (nano) scale the medium is treated as charged clay particles saturated by an aqueous electrolyte solution containing monovalent and divalent ions forming the electrical double layer. A new constitutive law is constructed for the disjoining pressure based on the numerical resolution of a non-local problem at the nanoscale which, in contrast to the Poisson-Boltzmann theory for point-charge ions, is capable of capturing the short-range interactions between the ions due to their finite size. At the intermediate scale (microscale), the two-phase homogenized particle/electrolyte solution system is represented by swollen clay clusters (or aggregates) with the nanoscale disjoining pressure incorporated in a modified form of Terzaghi's effective stress principle. At the macroscale, the electro-chemical-mechanical couplings within clay clusters are homogenized with the ion transport in the bulk fluid lying in the micropores. The resultant macroscopic picture is governed by a three-scale model wherein ion transport takes place in the bulk solution, strongly coupled with the mechanics of the clay clusters, which play the role of sources/sinks of mass to the bulk fluid associated with ion adsorption/desorption in the electrical double layer at the nanoscale. Within the context of the quasi-steady version of the multiscale model, wherein the electrolyte solution in the nanopores is assumed to be at instantaneous thermodynamic equilibrium with the bulk fluid in the micropores, we numerically build up the ion-adsorption isotherms along with the constitutive law for the retardation coefficients of monovalent and divalent ions. In addition, the constitutive law for the macroscopic swelling pressure is reconstructed numerically showing patterns of

  3. Modeling of oxygen incorporation in Th, ThC, and ThN by density functional theory calculations

    Science.gov (United States)

    Pérez Daroca, D.; Llois, A. M.; Mosca, H. O.

    2017-12-01

    Oxygen incorporation in nuclear fuel materials is an important issue deserving investigation due to its influence on thermophysical and structural properties. Although there has been renewed interest in thorium and thorium compounds in recent years, little research has been done on this topic. In this work, we study, by means of density functional theory calculations, the incorporation of oxygen in Th, ThC, and ThN. We analyze the electronic structure, finding a characteristic peak attributable to oxygen incorporation. We also calculate incorporation and solution energies and obtain migration energies of oxygen through different paths, finding that migration through vacancy sites is more energetically favorable than through interstitial ones.

  4. Testing typicality in multiverse cosmology

    Science.gov (United States)

    Azhar, Feraz

    2015-05-01

    In extracting predictions from theories that describe a multiverse, we face the difficulty that we must assess probability distributions over possible observations prescribed not just by an underlying theory, but by a theory together with a conditionalization scheme that allows for (anthropic) selection effects. This means we usually need to compare distributions that are consistent with a broad range of possible observations with actual experimental data. One controversial means of making this comparison is by invoking the "principle of mediocrity": that is, the principle that we are typical of the reference class implicit in the conjunction of the theory and the conditionalization scheme. In this paper, we quantitatively assess the principle of mediocrity in a range of cosmological settings, employing "xerographic distributions" to impose a variety of assumptions regarding typicality. We find that for a fixed theory, the assumption that we are typical gives rise to higher likelihoods for our observations. If, however, one allows both the underlying theory and the assumption of typicality to vary, then the assumption of typicality does not always provide the highest likelihoods. Interpreted from a Bayesian perspective, these results support the claim that when one has the freedom to consider different combinations of theories and xerographic distributions (or different "frameworks"), one should favor the framework that has the highest posterior probability; and then from this framework one can infer, in particular, how typical we are. In this way, the invocation of the principle of mediocrity is more questionable than has been recently claimed.

  5. Mathematical modelling of oil spill fate and transport in the marine environment incorporating biodegradation kinetics of oil droplets

    Science.gov (United States)

    Spanoudaki, Katerina

    2016-04-01

    Oil biodegradation by native bacteria is one of the most important natural processes that can attenuate the environmental impacts of marine oil spills. However, very few numerical models of oil spill fate and transport include biodegradation kinetics of spilled oil. Furthermore, in models where biodegradation is included amongst the oil transformation processes simulated, it is mostly represented as a first-order decay process, neglecting the effect of several important parameters that can limit the biodegradation rate, such as oil composition and the oil droplet-water interface. To this end, the open source numerical model MEDSLIK-II, which simulates oil spill fate and transport in the marine environment, has been modified to include biodegradation kinetics of oil droplets dispersed in the water column. MEDSLIK-II predicts the transport and weathering of oil spills following a Lagrangian approach for the solution of the advection-diffusion equation. Transport is governed by the 3D sea currents and wave field provided by ocean circulation models. In addition to advective and diffusive displacements, the model simulates several physical and chemical processes that transform the oil (evaporation, emulsification, dispersion in the water column, adhesion to the coast). The fate algorithms employed in MEDSLIK-II consider the oil as a uniform substance whose properties change as the slick weathers, an approach that can lead to reduced accuracy, especially in the estimation of oil evaporation and biodegradation. Therefore, MEDSLIK-II has been modified by adopting the "pseudo-component" approach for simulating weathering processes. Spilled oil is modelled as a relatively small number of discrete, non-interacting components (pseudo-components). Chemicals in the oil mixture are grouped by physical-chemical properties and the resulting pseudo-component behaves as if it were a single substance with characteristics typical of the chemical group. The fate (evaporation, dispersion
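
    The pseudo-component idea can be illustrated with a very small weathering sketch in which each component group evaporates with its own first-order rate, so the slick composition shifts toward the heavy ends over time; the groupings and rate constants here are invented for illustration and are not MEDSLIK-II parameters.

```python
# Each pseudo-component loses mass with its own first-order evaporation rate.
import numpy as np

dt, t_end = 600.0, 48 * 3600.0                        # 10-min steps over 48 h
k_evap = np.array([5e-5, 1e-5, 1e-6, 0.0])            # 1/s, light -> heavy pseudo-components
mass = np.array([0.25, 0.35, 0.25, 0.15]) * 1000.0    # kg per pseudo-component

t = 0.0
while t < t_end:
    mass *= np.exp(-k_evap * dt)                      # first-order loss per pseudo-component
    t += dt

print(mass, mass.sum())                               # remaining mass per group and in total
```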

  6. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    Energy Technology Data Exchange (ETDEWEB)

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C. [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 265 04 (Greece); Loudos, George [Department of Biomedical Engineering, Technological Educational Institute of Athens, Ag. Spyridonos Street, Egaleo GR 122 10, Athens (Greece); Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris [Medical Information Processing Laboratory (LaTIM), National Institute of Health and Medical Research (INSERM), 29609 Brest (France)

    2013-11-15

    Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating tumor heterogeneous activity distributions. The reconstructed patient images include noise from the acquisition process, imaging system's performance restrictions and have limited spatial resolution. For those reasons, the measured intensity cannot be simply introduced in GATE simulations, to reproduce clinical data. Investigation of the heterogeneity distribution within tumors applying partial volume correction (PVC) algorithms was assessed. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties.Methods: PET/CT data of seven oncology patients were used in order to create a realistic tumor database investigating the heterogeneity activity distribution of the simulated tumors. The anthropomorphic models (NURBS based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Monte Carlo Geant4 application for tomography emission (GATE) in three different levels for each case: (a) using homogeneous activity within the tumor, (b) using heterogeneous activity distribution in every voxel within the tumor as it was extracted from the PET image, and (c) using heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural feature derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines

  7. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    International Nuclear Information System (INIS)

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C.; Loudos, George; Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris

    2013-01-01

    Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating tumor heterogeneous activity distributions. The reconstructed patient images include noise from the acquisition process, imaging system's performance restrictions and have limited spatial resolution. For those reasons, the measured intensity cannot be simply introduced in GATE simulations, to reproduce clinical data. Investigation of the heterogeneity distribution within tumors applying partial volume correction (PVC) algorithms was assessed. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties.Methods: PET/CT data of seven oncology patients were used in order to create a realistic tumor database investigating the heterogeneity activity distribution of the simulated tumors. The anthropomorphic models (NURBS based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Monte Carlo Geant4 application for tomography emission (GATE) in three different levels for each case: (a) using homogeneous activity within the tumor, (b) using heterogeneous activity distribution in every voxel within the tumor as it was extracted from the PET image, and (c) using heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural feature derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines with

  8. Analysis of PWR control rod ejection accident with the coupled code system SKETCH-INS/TRACE by incorporating pin power reconstruction model

    International Nuclear Information System (INIS)

    Nakajima, T.; Sakai, T.

    2010-01-01

    The pin power reconstruction model was incorporated into the 3-D nodal kinetics code SKETCH-INS in order to produce accurate calculations of three-dimensional pin power distributions throughout the reactor core. In order to verify the employed pin power reconstruction model, the PWR MOX/UO2 core transient benchmark problem was analyzed with the coupled code system SKETCH-INS/TRACE by incorporating the model, and the influence of the pin power reconstruction model was studied. SKETCH-INS pin power distributions for 3 benchmark problems were compared with the PARCS solutions, which were provided by the host organisation of the benchmark. SKETCH-INS results were in good agreement with the PARCS results. The capability of the employed pin power reconstruction model was confirmed through the analysis of the benchmark problems. A PWR control rod ejection benchmark problem was then analyzed with the coupled code system SKETCH-INS/TRACE by incorporating the pin power reconstruction model. The influence of the pin power reconstruction model was studied by comparison with the result of the conventional node-averaged flux model. The results indicate that the pin power reconstruction model has a significant effect on the pin powers during the transient and hence on the fuel enthalpy.

  9. Simulation of changes in heavy metal contamination in farmland soils of a typical manufacturing center through logistic-based cellular automata modeling.

    Science.gov (United States)

    Qiu, Menglong; Wang, Qi; Li, Fangbai; Chen, Junjian; Yang, Guoyi; Liu, Liming

    2016-01-01

    A customized logistic-based cellular automata (CA) model was developed to simulate changes in heavy metal contamination (HMC) in farmland soils of Dongguan, a manufacturing center in Southern China, and to discover the relationship between HMC and related explanatory variables (continuous and categorical). The model was calibrated through the simulation and validation of HMC in 2012. Thereafter, the model was implemented for the scenario simulation of development alternatives for HMC in 2022. The HMC in 2002 and 2012 was determined through soil tests and cokriging. Continuous variables were divided into two groups by odds ratios. Positive variables (odds ratios >1) included the Nemerow synthetic pollution index in 2002, linear drainage density, distance from the city center, distance from the railway, slope, and secondary industrial output per unit of land. Negative variables (odds ratios <1) included elevation, distance from the road, distance from the key polluting enterprises, distance from the town center, soil pH, and distance from bodies of water. Categorical variables, including soil type, parent material type, organic content grade, and land use type, also significantly influenced HMC according to Wald statistics. The relative operating characteristic and kappa coefficients were 0.91 and 0.64, respectively, which proved the validity and accuracy of the model. The scenario simulation shows that the government should not only implement stricter environmental regulation but also strengthen the remediation of the current polluted area to effectively mitigate HMC.
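
    The transition rule of such a logistic-based CA can be sketched as follows: each cell's probability of becoming contaminated is a logistic function of its (standardised) explanatory variables, and the state is then updated stochastically. The coefficients and drivers below are invented, and a full implementation would also include the categorical variables and neighbourhood effects described above.

```python
import numpy as np

rng = np.random.default_rng(42)
n_cells = 10_000
# hypothetical standardised drivers: prior pollution index, drainage density, slope, ...
X = rng.normal(size=(n_cells, 4))
beta = np.array([1.2, 0.6, -0.4, 0.3])       # assumed fitted logistic coefficients
intercept = -2.0

p = 1.0 / (1.0 + np.exp(-(intercept + X @ beta)))   # transition probability per cell
state = rng.random(n_cells) < p                      # stochastic CA update (contaminated?)
print(state.mean())                                  # simulated contaminated fraction
```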

  10. Extended Fitts' model of pointing time in eye-gaze input system - Incorporating effects of target shape and movement direction into modeling.

    Science.gov (United States)

    Murata, Atsuo; Fukunaga, Daichi

    2018-04-01

    This study attempted to investigate the effects of the target shape and the movement direction on the pointing time using an eye-gaze input system and extend Fitts' model so that these factors are incorporated into the model and the predictive power of Fitts' model is enhanced. The target shape, the target size, the movement distance, and the direction of target presentation were set as within-subject experimental variables. The target shapes included a circle and rectangles with aspect ratios of 1:1, 1:2, 1:3, and 1:4. The movement direction included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. On the basis of the data for identifying the effects of the target shape and the movement direction on the pointing time, an attempt was made to develop a generalized and extended Fitts' model that took into account the movement direction and the target shape. As a result, the generalized and extended model was found to fit the experimental data better and to be more effective for predicting the pointing time for a variety of human-computer interaction (HCI) tasks using an eye-gaze input system. Copyright © 2017. Published by Elsevier Ltd.
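
    One plausible form of such an extended Fitts' model adds shape- and direction-dependent terms to the classic index of difficulty; the functional form and coefficients below are placeholders for illustration, not the regression results reported by the authors.

```python
import math

def pointing_time(distance, width, aspect_ratio, direction_deg,
                  a=0.4, b=0.25, c=0.05, d=0.03):
    ID = math.log2(distance / width + 1.0)            # Shannon formulation of Fitts' ID
    shape_term = c * (aspect_ratio - 1.0)             # penalty for elongated targets
    direction_term = d * abs(math.sin(math.radians(direction_deg)))  # direction effect
    return a + b * ID + shape_term + direction_term   # predicted pointing time (s)

print(pointing_time(distance=300, width=40, aspect_ratio=2.0, direction_deg=45))
```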

  11. Utilising monitoring and modelling of estuarine environments to investigate catchment conditions responsible for stratification events in a typically well-mixed urbanised estuary

    Science.gov (United States)

    Lee, Serena B.; Birch, Gavin F.

    2012-10-01

    Estuarine health is affected by contamination from stormwater, particularly in highly-urbanised environments. For systems where catchment monitoring is insufficient, novel techniques must be employed to determine the impact of urban runoff on receiving water bodies. In the present work, estuarine monitoring and modelling were successfully employed to determine stormwater runoff volumes and establish an appropriate rainfall/runoff relationship capable of replicating fresh-water discharge due to the full range of precipitation conditions in the Sydney Estuary, Australia. Using estuary response to determine relationships between catchment rainfall and runoff is a widely applicable method and may be of assistance in the study of waterways where monitoring fluvial discharges is not practical or is beyond the capacity of management authorities. For the Sydney Estuary, the SCS-CN method replicated rainfall/runoff and was applied in numerical modelling experiments investigating the hydrodynamic characteristics affecting stratification and estuary recovery following high precipitation. Numerical modelling showed stratification in the Sydney Estuary was dominated by fresh-water discharge. Spring tides and up-estuary winds contributed to mixing and neap tides and down-estuary winds enhanced stratification.
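
    For reference, the SCS-CN rainfall/runoff relationship applied in the modelling experiments has the standard form below (depths in mm), with the curve number CN acting as the calibrated catchment parameter.

```python
def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = ia_ratio * s                 # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(p_mm=50.0, cn=85.0))   # direct runoff depth for a 50 mm event
```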

  12. Exploring Modeling Options and Conversion of Average Response to Appropriate Vibration Envelopes for a Typical Cylindrical Vehicle Panel with Rib-stiffened Design

    Science.gov (United States)

    Harrison, Phil; LaVerde, Bruce; Teague, David

    2009-01-01

    Although Statistical Energy Analysis (SEA) techniques are more widely used in the aerospace industry today, opportunities to anchor the response predictions using measured data from a flight-like launch vehicle structure are still quite valuable. Response and excitation data from a ground acoustic test at the Marshall Space Flight Center permitted the authors to compare and evaluate several modeling techniques available in the SEA module of the commercial code VA One. This paper provides an example of vibration response estimates developed using different modeling approaches to both approximate and bound the response of a flight-like vehicle panel. Since both vibration response and acoustic levels near the panel were available from the ground test, the evaluation provided an opportunity to learn how well the different modeling options can match band-averaged spectra developed from the test data. Additional work was performed to understand the spatial averaging of the measurements across the panel. Finally, an evaluation/comparison was performed of two approaches for converting the statistical average response results output from an SEA analysis into a more useful envelope of response spectra appropriate for specifying design and test vibration levels for a new vehicle.

  13. Application of fuzzy sets and cognitive maps to incorporate social science scenarios in integrated assessment models: A case study of urbanization in Ujung Pandang, Indonesia.

    NARCIS (Netherlands)

    de Kok, Jean-Luc; Titus, Milan; Wind, H.G.

    2000-01-01

    Decision-support systems in the field of integrated water management could benefit considerably from social science knowledge, as many environmental changes are human-induced. Unfortunately, the adequate incorporation of qualitative social science concepts in a quantitative modeling framework is not straightforward.

  14. Quantum Chemical Examination of the Sequential Halogen Incorporation Scheme for the Modeling of Speciation of I/Br/Cl-Containing Trihalomethanes.

    Science.gov (United States)

    Zhang, Chenyang; Li, Maodong; Han, Xuze; Yan, Mingquan

    2018-02-20

    The recently developed three-step ternary halogenation model interprets the incorporation of chlorine, bromine, and iodine ions into natural organic matter (NOM) and the formation of iodine-, bromine-, and chlorine-containing trihalomethanes (THMs) based on the competition of iodine, bromine, and chlorine species at each node of the halogenation sequence. This competition is accounted for using the dimensionless ratios (denoted as γ) of kinetic rates of reactions of the initial attack sites or halogenated intermediates with chlorine, bromine, and iodine ions. However, correlations between the model predictions made and mechanistic aspects of the incorporation of halogen species need to be ascertained in more detail. In this study, quantum chemistry calculations were first used to probe the formation mechanism of 10 species of Cl-/Br-/I- THMs. The HOMO energy (E(HOMO)) of each mono-, di-, or trihalomethane was calculated with the B3LYP method in Gaussian 09 software. Linear correlations were found to exist between the logarithms of the experimentally determined kinetic preference coefficients γ reported in prior research and the differences in E(HOMO) values between brominated/iodinated and chlorinated halomethanes. One notable exception from this trend was that observed for the incorporation of iodine into mono- and di-iodinated intermediates. These observations confirm the three-step halogen incorporation sequence and the factor γ in the statistical model. The combined use of quantum chemistry calculations and the ternary sequential halogenation model provides a new insight into the microscopic nature of NOM-halogen interactions and the trends seen in the behavior of γ factors incorporated in the THM speciation models.

  15. An etiologic prediction model incorporating biomarkers to predict the bladder cancer risk associated with occupational exposure to aromatic amines: a pilot study

    OpenAIRE

    Mastrangelo, Giuseppe; Carta, Angela; Arici, Cecilia; Pavanello, Sofia; Porru, Stefano

    2017-01-01

    Background: No etiological prediction model incorporating biomarkers is available to predict bladder cancer risk associated with occupational exposure to aromatic amines. Methods: Cases were 199 bladder cancer patients. Clinical, laboratory and genetic data were predictors in logistic regression models (full and short) in which the dependent variable was 1 for 15 patients with aromatic amines related bladder cancer and 0 otherwise. The receiver operating characteristics approach was adopted; th...

  16. Modeling the relationship between landscape characteristics and water quality in a typical highly intensive agricultural small watershed, Dongting lake basin, south central China.

    Science.gov (United States)

    Li, Hongqing; Liu, Liming; Ji, Xiang

    2015-03-01

    Understanding the relationship between landscape characteristics and water quality is critically important for estimating pollution potential and reducing pollution risk. Therefore, this study examines the relationship between landscape characteristics and water quality at both spatial and temporal scales. The study took place in the Jinjing River watershed in 2010; seven landscape types and four water quality pollutions were chosen as analysis parameters. Three different buffer areas along the river were drawn to analyze the relationship as a function of spatial scale. The results of a Pearson's correlation coefficient analysis suggest that "source" landscape, namely, tea gardens, residential areas, and paddy lands, have positive effects on water quality parameters, while forests exhibit a negative influence on water quality parameters because they represent a "sink" landscape and the sub-watershed level is identified as a suitable scale. Using the principal component analysis, tea gardens, residential areas, paddy lands, and forests were identified as the main landscape index. A stepwise multiple regression analysis was employed to model the relationship between landscape characteristics and water quality for each season. The results demonstrate that both landscape composition and configuration affect water quality. In summer and winter, the landscape metrics explained approximately 80.7 % of the variance in the water quality variables, which was higher than that for spring and fall (60.3 %). This study can help environmental managers to understand the relationships between landscapes and water quality and provide landscape ecological approaches for water quality control and land use management.

  17. A family of spatial interaction models incorporating information flows and choice set constraints applied to U.S. interstate labor flows.

    Science.gov (United States)

    Smith, T R; Slater, P B

    1981-01-01

    "A new family of migration models belonging to the elimination by aspects family is examined, with the spatial interaction model shown to be a special case. The models have simple forms; they incorporate information flow processes and choice set constraints; they are free of problems raised by the Luce Choice Axiom; and are capable of generating intransitive flows. Preliminary calibrations using the Continuous Work History Sample [time] series data indicate that the model fits the migration data well, while providing estimates of interstate job message flows. The preliminary calculations also indicate that care is needed in assuming that destination [attraction] are independent of origins." excerpt

  18. Incorporation of Damage and Failure into an Orthotropic Elasto-Plastic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    Science.gov (United States)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Rajan, Subramaniam; Blankenhorn, Gunther

    2016-01-01

    A material model which incorporates several key capabilities which have been identified by the aerospace community as lacking in the composite impact models currently available in LS-DYNA(Registered Trademark) is under development. In particular, the material model, which is being implemented as MAT 213 into a tailored version of LS-DYNA being jointly developed by the FAA and NASA, incorporates both plasticity and damage within the material model, utilizes experimentally based tabulated input to define the evolution of plasticity and damage as opposed to specifying discrete input parameters (such as modulus and strength), and is able to analyze the response of composites composed with a variety of fiber architectures. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. The capability to account for the rate and temperature dependent deformation response of composites has also been incorporated into the material model. For the damage model, a strain equivalent formulation is utilized to allow for the uncoupling of the deformation and damage analyses. In the damage model, a diagonal damage tensor is defined to account for the directionally dependent variation of damage. However, in composites it has been found that loading in one direction can lead to damage in multiple coordinate directions. To account for this phenomena, the terms in the damage matrix are semi-coupled such that the damage in a particular coordinate direction is a function of the stresses and plastic strains in all of the coordinate directions. The onset of material failure, and thus element deletion, is being developed to be a function of the stresses and plastic strains in the various coordinate directions. Systematic procedures are being developed to generate the required input parameters based on the results of

  19. Typical IAEA inspection procedures for model plant

    International Nuclear Information System (INIS)

    Theis, W.

    1984-01-01

    This session briefly refers to the legal basis for IAEA inspections and to their objectives. It describes in detail the planning and performance of IAEA inspections, including the examination of records, the comparison of facility records with State reports, flow and inventory verifications, the design of statistical sampling plans, and Agency's independent verification measurements. In addition, the session addresses the principles of Material Balance and MUF evaluation, as well as the content and format of summary statements and related problems

  20. A flexible statistical model for alignment of label-free proteomics data--incorporating ion mobility and product ion information.

    Science.gov (United States)

    Benjamin, Ashlee M; Thompson, J Will; Soderblom, Erik J; Geromanos, Scott J; Henao, Ricardo; Kraus, Virginia B; Moseley, M Arthur; Lucas, Joseph E

    2013-12-16

    The goal of many proteomics experiments is to determine the abundance of proteins in biological samples, and the variation thereof in various physiological conditions. High-throughput quantitative proteomics, specifically label-free LC-MS/MS, allows rapid measurement of thousands of proteins, enabling large-scale studies of various biological systems. Prior to analyzing these information-rich datasets, raw data must undergo several computational processing steps. We present a method to address one of the essential steps in proteomics data processing--the matching of peptide measurements across samples. We describe a novel method for label-free proteomics data alignment with the ability to incorporate previously unused aspects of the data, particularly ion mobility drift times and product ion information. We compare the results of our alignment method to PEPPeR and OpenMS, and compare alignment accuracy achieved by different versions of our method utilizing various data characteristics. Our method results in increased match recall rates and similar or improved mismatch rates compared to PEPPeR and OpenMS feature-based alignment. We also show that the inclusion of drift time and product ion information results in higher recall rates and more confident matches, without increases in error rates. Based on the results presented here, we argue that the incorporation of ion mobility drift time and product ion information are worthy pursuits. Alignment methods should be flexible enough to utilize all available data, particularly with recent advancements in experimental separation methods.

  1. A call for the need to incorporate enterprise risk management as part of the overall business model innovation process

    DEFF Research Database (Denmark)

    Taran, Yariv; Boer, Harry; Lindgren, Peter

    2009-01-01

    Relative to, for example, the radical product innovation process, little is known about business model innovation, let alone the process of managing the risks involved in that process. Using the emerging Enterprise Risk Management (ERM) literature, an approach is proposed through which risk management can be embedded in the business model innovation process. The integrated risk management/business model innovation process model has been tested through an action research study in a Danish company. The results are promising and warrant continuation of the development of that model.

  2. Modified lingguizhugan decoction incorporated with dietary restriction and exercise ameliorates hyperglycemia, hyperlipidemia and hypertension in a rat model of the metabolic syndrome.

    Science.gov (United States)

    Yao, Limei; Wei, Jingjing; Shi, Si; Guo, Kunbin; Wang, Xiangyu; Wang, Qi; Chen, Dingsheng; Li, Weirong

    2017-02-28

    Modified Lingguizhugan Decoction (MLD) is derived from the famous Chinese medicine Linggui Zhugan Decoction. MLD is used for the treatment of the metabolic syndrome in the clinical setting. Our study focuses on comprehensive treatment with MLD incorporated with dietary restriction and exercise in a rat model of the metabolic syndrome (MS). Rats were divided into five groups: a control group (Cont), a high-fat diet group (HFD), a high-fat diet with dietary restriction group (HFD-DR), an exercise with dietary restriction group (HFD-DR-Ex) and an MLD with dietary restriction and exercise group (HFD-DR-Ex-MLD). Treatments were conducted for 1 week after feeding a high-fat diet for 12 weeks. The effects of the treatments on high-fat diet-induced obesity, hyperglycemia, hyperlipidemia, hypertension, hepatic injury and insulin resistance in MS rats were examined. In addition, tumor necrosis factor-α (TNF-α), leptin and protein kinase B (PKB) in rat serum and liver were also examined by enzyme-linked immunosorbent assay (ELISA). After a week's intervention with dietary restriction, dietary restriction plus exercise, or MLD, compared with HFD rats, the relative weight of liver and fat and the levels of triglyceride, total cholesterol, low-density lipoprotein, free fatty acid, aspartate aminotransferase, glutamic-pyruvic transaminase, alkaline phosphatase and insulin were significantly decreased (p exercise treatment exhibit effects in alleviating high-fat diet-induced obesity, hyperglycemia, hyperlipidemia, hypertension, hepatic injury and insulin resistance, which are possibly due to the down-regulation of TNF-α, leptin and PKB.

  3. Incorporating Born solvation energy into the three-dimensional Poisson-Nernst-Planck model to study ion selectivity in KcsA K+ channels

    Science.gov (United States)

    Liu, Xuejiao; Lu, Benzhuo

    2017-12-01

    Potassium channels are much more permeable to potassium than to sodium ions, although potassium ions are larger and both carry the same positive charge. This puzzle cannot be solved with the traditional Poisson-Nernst-Planck (PNP) theory of electrodiffusion because the PNP model treats all ions as point charges, does not incorporate ion size information, and therefore cannot discriminate potassium from sodium ions. The PNP model can qualitatively capture some macroscopic properties of certain channel systems such as current-voltage characteristics, conductance rectification, and inverse membrane potential. However, the traditional PNP model is a continuum mean-field model and neglects or underestimates discrete ion effects, in particular the ion solvation or self-energy (which can be described by the Born model). It is known that the dehydration effect (closely related to ion size) is crucial to selective permeation in potassium channels. Therefore, we incorporated the Born solvation energy into the PNP model to account for ion hydration and dehydration effects when ions pass through inhomogeneous dielectric channel environments. A variational approach was adopted to derive a Born-energy-modified PNP (BPNP) model. The model was applied to study a cylindrical nanopore and a realistic KcsA channel, and three-dimensional finite element simulations were performed. The BPNP model can distinguish different ion species by ion radius and predicts selectivity for K+ over Na+ in KcsA channels. Furthermore, ion current rectification in the KcsA channel was observed with both the PNP and BPNP models. The I-V curve of the BPNP model for the KcsA channel indicated an inward rectifier effect for K+ (rectification ratio of ~3/2) but an outward rectifier effect for Na+ (rectification ratio of ~1/6).
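
    The Born self-energy term that the BPNP model adds can be illustrated with a back-of-the-envelope calculation of the dielectric transfer penalty for K+ and Na+; the ionic radii and the channel dielectric constant below are illustrative assumptions, not values from the paper.

```python
# Born self-energy of an ion of charge z*e and radius r in a medium of relative
# permittivity eps: U = (z*e)^2 / (8*pi*eps0*eps*r). The transfer penalty is the
# difference between the low-dielectric channel region and bulk water.
import math

E_CHARGE = 1.602176634e-19      # C
EPS0 = 8.8541878128e-12         # F/m
KT = 1.380649e-23 * 298.15      # J at room temperature

def born_energy_joules(z: int, radius_m: float, eps_rel: float) -> float:
    return (z * E_CHARGE) ** 2 / (8.0 * math.pi * EPS0 * eps_rel * radius_m)

def transfer_penalty_kT(z, radius_m, eps_water=80.0, eps_channel=30.0):
    return (born_energy_joules(z, radius_m, eps_channel)
            - born_energy_joules(z, radius_m, eps_water)) / KT

print(transfer_penalty_kT(1, 1.38e-10))   # K+ (Pauling radius ~1.38 A)
print(transfer_penalty_kT(1, 1.02e-10))   # Na+ (~1.02 A): larger dehydration penalty
```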

  4. Incorporation of the equilibrium temperature approach in a Soil and Water Assessment Tool hydroclimatological stream temperature model

    Science.gov (United States)

    Du, Xinzhong; Shrestha, Narayan Kumar; Ficklin, Darren L.; Wang, Junye

    2018-04-01

    Stream temperature is an important indicator for biodiversity and sustainability in aquatic ecosystems. The stream temperature model currently in the Soil and Water Assessment Tool (SWAT) only considers the impact of air temperature on stream temperature, while the hydroclimatological stream temperature model developed within the SWAT model considers hydrology and the impact of air temperature in simulating the water-air heat transfer process. In this study, we modified the hydroclimatological model by including the equilibrium temperature approach to model heat transfer processes at the water-air interface, which reflects the influences of air temperature, solar radiation, wind speed and streamflow conditions on the heat transfer process. The thermal capacity of the streamflow is modeled by the variation of the stream water depth. An advantage of this equilibrium temperature model is the simple parameterization, with only two parameters added to model the heat transfer processes. The equilibrium temperature model proposed in this study is applied and tested in the Athabasca River basin (ARB) in Alberta, Canada. The model is calibrated and validated at five stations throughout different parts of the ARB, where close to monthly samplings of stream temperatures are available. The results indicate that the equilibrium temperature model proposed in this study provided better and more consistent performances for the different regions of the ARB with the values of the Nash-Sutcliffe Efficiency coefficient (NSE) greater than those of the original SWAT model and the hydroclimatological model. To test the model performance for different hydrological and environmental conditions, the equilibrium temperature model was also applied to the North Fork Tolt River Watershed in Washington, United States. The results indicate a reasonable simulation of stream temperature using the model proposed in this study, with minimum relative error values compared to the other two models
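
    A minimal sketch of the equilibrium temperature idea, assuming the net surface heat flux can be linearised as K*(Te - Tw) and that the water column's thermal inertia scales with mean depth; the exchange coefficient, equilibrium temperature and depth below are hypothetical, not calibrated SWAT parameters.

```python
RHO_W, CP_W = 1000.0, 4186.0        # kg/m^3, J/(kg K)

def step_stream_temp(t_water, t_equilibrium, k_exchange, depth, dt):
    """Advance stream temperature by one time step (explicit Euler).

    k_exchange : bulk heat exchange coefficient, W/(m^2 K)
    depth      : mean stream depth, m (sets thermal capacity per unit area)
    dt         : time step, s
    """
    flux = k_exchange * (t_equilibrium - t_water)        # W/m^2 toward equilibrium
    return t_water + flux * dt / (RHO_W * CP_W * depth)

tw = 12.0
for _ in range(24):                                      # one day of hourly steps
    tw = step_stream_temp(tw, t_equilibrium=18.0, k_exchange=30.0, depth=0.8, dt=3600.0)
print(tw)
```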

  5. An etiologic prediction model incorporating biomarkers to predict the bladder cancer risk associated with occupational exposure to aromatic amines: a pilot study.

    Science.gov (United States)

    Mastrangelo, Giuseppe; Carta, Angela; Arici, Cecilia; Pavanello, Sofia; Porru, Stefano

    2017-01-01

    No etiological prediction model incorporating biomarkers is available to predict bladder cancer risk associated with occupational exposure to aromatic amines. Cases were 199 bladder cancer patients. Clinical, laboratory and genetic data were predictors in logistic regression models (full and short) in which the dependent variable was 1 for 15 patients with aromatic amines related bladder cancer and 0 otherwise. The receiver operating characteristics approach was adopted; the area under the curve was used to evaluate discriminatory ability of models. Area under the curve was 0.93 for the full model (including age, smoking and coffee habits, DNA adducts, 12 genotypes) and 0.86 for the short model (including smoking, DNA adducts, 3 genotypes). Using the "best cut-off" of predicted probability of a positive outcome, percentage of cases correctly classified was 92% (full model) against 75% (short model). Cancers classified as "positive outcome" are those to be referred for evaluation by an occupational physician for etiological diagnosis; these patients were 28 (full model) or 60 (short model). Using 3 genotypes instead of 12 can double the number of patients with suspect of aromatic amine related cancer, thus increasing costs of etiologic appraisal. Integrating clinical, laboratory and genetic factors, we developed the first etiologic prediction model for aromatic amine related bladder cancer. Discriminatory ability was excellent, particularly for the full model, allowing individualized predictions. Validation of our model in external populations is essential for practical use in the clinical setting.
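
    A generic sketch of the workflow described above (not the authors' code or data): a logistic model for a rare positive outcome, evaluated by the area under the ROC curve, with the "best cut-off" chosen by Youden's index on synthetic stand-ins for the clinical, laboratory and genetic predictors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
n, n_pos = 199, 15
X = rng.normal(size=(n, 5))                       # e.g. smoking, DNA adducts, genotypes
y = np.zeros(n, dtype=int)
y[:n_pos] = 1
X[y == 1] += 0.8                                  # make the positives partially separable

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, p))

# "best cut-off": the threshold maximising sensitivity + specificity (Youden's J)
fpr, tpr, thr = roc_curve(y, p)
best = thr[np.argmax(tpr - fpr)]
print("classified positive:", int((p >= best).sum()))
```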

  6. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  7. Tritium in some typical ecosystems

    International Nuclear Information System (INIS)

    1981-01-01

    The environmental significance of 3 H releases prompted an IAEA-sponsored coordinated research programme on various aspects. Data were collected to help health physicists, radioecologists, radiobiologists and environmentalists to predict the behaviour of 3 H in the major terrestrial ecosystems of the world. A common methodology was used to carry out a variety of projects in widely varying biomes, from tropical to arctic regions: in Belgium, on terrestrial food chains, with deposition of tritiated water (HTO) on crops and pasture, and incorporation of 3 H into proteins, nucleic acids, etc.; in Finland, plots of pasture and forest were labelled by HTO, and plant uptake were studied; in France, 3 H-content in water, in relation to different parts of vines, orange and olive trees in a Mediterranean climate; in the Federal Republic of Germany, contamination due to 3 H-releases; in India, mean 3 H-residence time in some tropical trees; in Mexico, 3 H-persistence as free-water 3 H and tissue-bound 3 H in crops; in the Netherlands, 3 H-metabolism in ruminants; in the Philippines, residence time in soil and in various commonly edible crops, and excretion time; in Thailand, half residence time in soil and local vegetation; in the USA, the effects of HTO vapour and liquid exposure in a wide range of climatic conditions, including organic fixation and concentration factors. An extensive bibliography is attached, and also annexes of laboratories and project titles; plant species, exposure and residence times; comparable lists for animals studied; scientific and common names of the species, and a glossary

  8. Modelling of Spring Constant and Pull-down Voltage of Non-uniform RF MEMS Cantilever Incorporating Stress Gradient

    Directory of Open Access Journals (Sweden)

    Shimul Chandra SAHA

    2008-11-01

    Full Text Available We have presented a model for spring constant and pull-down voltage of a non-uniform radio frequency microelectromechanical systems (RF MEMS) cantilever that works on electrostatic actuation. The residual stress gradient in the beam material that may arise during the fabrication process is also considered in the model. Using basic force deflection calculation of the suspended beam, a stand-alone model for the spring constant and pull-down voltage of the non-uniform cantilever is developed. To compare the model, simulation is performed using standard Finite Element Method (FEM) analysis tools from CoventorWare. The model matches very well with the FEM simulation results. The model will offer an efficient means of design, analysis, and optimization of RF MEMS cantilever switches.
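
    For orientation, the classical lumped-parameter result for a uniform electrostatically actuated cantilever is sketched below; it omits the non-uniform geometry and residual stress gradient that the record's model addresses, and all dimensions and material values are assumed.

    ```python
    # Lumped parallel-plate estimate of spring constant and pull-down voltage for a
    # UNIFORM cantilever; the non-uniform beam and stress gradient are not modelled.
    import math

    eps0 = 8.854e-12                  # vacuum permittivity (F/m)
    E = 70e9                          # Young's modulus of the beam material (Pa), assumed
    L, w, t = 300e-6, 100e-6, 2e-6    # beam length, width, thickness (m), illustrative
    g0 = 3e-6                         # initial gap to the pull-down electrode (m), assumed
    A = w * 100e-6                    # electrode overlap area (m^2), assumed

    k = E * w * t**3 / (4 * L**3)                        # tip stiffness 3*E*I/L^3 for an end load
    V_pd = math.sqrt(8 * k * g0**3 / (27 * eps0 * A))    # classical pull-in voltage

    print(f"spring constant k = {k:.3f} N/m")
    print(f"pull-down voltage = {V_pd:.2f} V")
    ```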

  9. Results from a new mathematical model of gastrointestinal transit that incorporates age and gender-dependent physiological parameters

    International Nuclear Information System (INIS)

    Stubbs, J.B.

    1992-01-01

    Recently published data on the effects of age- and gender-dependent GI physiology and motility have been used to develop a new mathematical model describing the transit and absorption of substances through the GI tract. This mathematical description of GI tract kinetics utilises more physiologically accurate transit processes than the ICRP Report 30 GI model. The model uses a combination of zero- and first-order kinetics to describe motility. Some of the physiological parameters that the new model uses are gender, age, phase of the menstrual cycle, meal composition and gastric phase (solid versus liquid). A computer algorithm based on this model has been derived and results for young males are compared to those of the ICRP 30 model. Comparisons of gastrointestinal residence times for 99mTc- and 111In-labelled compounds, as a function of gender and age, are also presented. (author)

  10. Results from a new mathematical model of gastrointestinal transit that incorporates age and gender-dependent physiological parameters

    Energy Technology Data Exchange (ETDEWEB)

    Stubbs, J B [Oak Ridge Associated Universities, Inc., TN (United States). Medical and Health Science Div.

    1992-01-01

    Recently published data on the effects of age- and gender-dependent GI physiology and motility have been used to develop a new mathematical model describing the transit and absorption of substances through the GI tract. This mathematical description of GI tract kinetics utilises more physiologically accurate transit processes than the ICRP Report 30 GI model. The model uses a combination of zero- and first-order kinetics to describe motility. Some of the physiological parameters that the new model uses are gender, age, phase of the menstrual cycle, meal composition and gastric phase (solid versus liquid). A computer algorithm based on this model has been derived and results for young males are compared to those of the ICRP 30 model. Comparisons of gastrointestinal residence times for 99mTc- and 111In-labelled compounds, as a function of gender and age, are also presented. (author).
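
    A minimal numerical sketch of the kind of mixed zero-/first-order transit kinetics described in the two records above is given below; the compartment structure and rate constants are illustrative assumptions, not the published model parameters.

    ```python
    # Toy GI transit chain: zero-order gastric emptying of the solid phase followed by
    # first-order small-intestine transit. Rates and time step are assumed values.
    import numpy as np

    dt, t_end = 0.01, 24.0                        # hours
    t = np.arange(0.0, t_end, dt)
    stomach, small_int, colon = 1.0, 0.0, 0.0     # activity fractions

    k_solid = 0.5      # zero-order gastric emptying of the solid phase (fraction/h), assumed
    k_si = 0.7         # first-order small-intestine transit (1/h), assumed
    trace = []
    for _ in t:
        empty = min(k_solid * dt, stomach)        # zero-order emptying, cannot go negative
        si_out = k_si * small_int * dt            # first-order transfer to the colon
        stomach -= empty
        small_int += empty - si_out
        colon += si_out
        trace.append((stomach, small_int, colon))

    print("fraction remaining in the stomach after 2 h:", round(trace[int(2 / dt)][0], 3))
    ```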

  11. The Effect of Climate Change on Wetlands and Waterfowl in Western Canada: Incorporating Cropping Decisions into a Bioeconomic Model

    NARCIS (Netherlands)

    Withey, P.; Kooten, van G.C.

    2013-01-01

    We extend an earlier bioeconomic model of optimal duck harvest and wetland retention in the Prairie Pothole Region of Western Canada to include cropping decisions. Instead of a single state equation, the model has two state equations representing the population dynamics of ducks and the amount of

  12. A ternary phase-field model incorporating commercial CALPHAD software and its application to precipitation in superalloys

    International Nuclear Information System (INIS)

    Wen, Y.H.; Lill, J.V.; Chen, S.L.; Simmons, J.P.

    2010-01-01

    A ternary phase-field model was developed that is linked directly to commercial CALPHAD software to provide quantitative thermodynamic driving forces. A recently available diffusion mobility database for ordered phases is also implemented to give a better description of the diffusion behavior in alloys. Because the targeted application of this model is the study of precipitation in Ni-based superalloys, a Ni-Al-Cr model alloy was constructed. A detailed description of this model is given in the paper. We have considered the misfit effects of the partitioning of the two solute elements. Transformation rules of the dual representation of the γ+γ′ microstructure by CALPHAD and by the phase field are established and the link with commercial CALPHAD software is described. Proof-of-concept tests were performed to evaluate the model and the results demonstrate that the model can qualitatively reproduce observed γ′ precipitation behavior. Uphill diffusion of Al is observed in a few diffusion couples, showing the significant influence of Cr on the chemical potential of Al. Possible applications of this model are discussed.

  13. Incorporation of groundwater losses and well level data in rainfall-runoff models illustrated using the PDM

    Directory of Open Access Journals (Sweden)

    R. J. Moore

    2002-01-01

    Full Text Available Intermittent streamflow is a common occurrence in permeable catchments, especially where there are pumped abstractions to water supply. Many rainfall-runoff models are not formulated so as to represent ephemeral streamflow behaviour or to allow for the possibility of negative recharge arising from groundwater pumping. A groundwater model component is formulated here for use in extending existing rainfall-runoff models to accommodate such ephemeral behaviour. Solutions to the Horton-Izzard equation resulting from the conceptual model of groundwater storage are adapted and the form of nonlinear storage extended to accommodate negative inputs, water storage below which outflow ceases, and losses to external springs and underflows below the gauged catchment outlet. The groundwater model component is demonstrated through using it as an extension of the PDM rainfall-runoff model. It is applied to the River Lavant, a catchment in Southern England on the English Chalk, where it successfully simulates the ephemeral streamflow behaviour and flood response together with well level variations. Keywords: groundwater, rainfall-runoff model, ephemeral stream, well level, spring, abstraction
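
    The extended nonlinear (Horton-Izzard type) storage described above can be illustrated with a simple explicit time-stepping sketch; the functional form shown and all parameter values are assumptions for illustration, not the PDM extension itself.

    ```python
    # Sketch of a nonlinear groundwater storage allowing negative recharge (pumped
    # abstraction), a threshold below which outflow ceases, and a constant loss to
    # underflows/springs. All parameter values are illustrative assumptions.
    import numpy as np

    def nonlinear_store(recharge, k=0.05, b=1.5, s_thresh=5.0, loss=0.02, s0=10.0, dt=1.0):
        """recharge: array of net inputs (may be negative when pumping exceeds recharge)."""
        s, q_out = s0, []
        for u in recharge:
            q = k * max(s - s_thresh, 0.0) ** b     # outflow ceases below the threshold
            s = max(s + dt * (u - q - loss), 0.0)   # storage cannot go negative
            q_out.append(q)
        return np.array(q_out)

    # Example: a wet period followed by a dry period with net abstraction
    recharge = np.concatenate([np.full(50, 0.3), np.full(50, -0.1)])
    flow = nonlinear_store(recharge)
    print("days with zero streamflow:", int((flow == 0).sum()))   # ephemeral behaviour
    ```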

  14. Stored object knowledge and the production of referring expressions: The case of color typicality

    Directory of Open Access Journals (Sweden)

    Hans eWesterbeek

    2015-07-01

    Full Text Available When speakers describe objects with atypical properties, do they include these properties in their referring expressions, even when that is not strictly required for unique referent identification? Based on previous work, we predict that speakers mention the color of a target object more often when the object is atypically colored, compared to when it is typical. Taking literature from object recognition and visual attention into account, we further hypothesize that this behavior is proportional to the degree to which a color is atypical, and whether color is a highly diagnostic feature in the referred-to object's identity. We investigate these expectations in two language production experiments, in which participants referred to target objects in visual contexts. In Experiment 1, we find a strong effect of color typicality: less typical colors for target objects predict higher proportions of referring expressions that include color. In Experiment 2 we manipulated objects with more complex shapes, for which color is less diagnostic, and we find that the color typicality effect is moderated by color diagnosticity: it is strongest for high-color-diagnostic objects (i.e., objects with a simple shape). These results suggest that the production of atypical color attributes results from a contrast with stored knowledge, an effect which is stronger when color is more central to object identification. Our findings offer evidence for models of reference production that incorporate general object knowledge, in order to be able to capture these effects of typicality on determining the content of referring expressions.

  15. Viscoplastic equations incorporated into a finite element model to predict deformation behavior of irradiated reduced activation ferritic/martensitic steel

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yuanyuan, E-mail: 630wyy@163.com [Key Laboratory of Materials Modification by Laser, Ion and Electron Beams (Ministry of Education), Dalian University of Technology, Dalian 116024 (China); Zhao, Jijun, E-mail: zhaojj@dlut.edu.cn [Key Laboratory of Materials Modification by Laser, Ion and Electron Beams (Ministry of Education), Dalian University of Technology, Dalian 116024 (China); Zhang, Chi [Key Laboratory of Advanced Materials of Ministry of Education, School of Materials Science and Engineering, Tsinghua University, Beijing 100084 (China)

    2017-05-15

    Highlights: • The initial internal variable in the Anand model is modified by considering both temperature and irradiation dose. • The tensile stress-strain response is examined and analyzed at different temperatures and irradiation doses. • Yield strengths are predicted as functions of strain rate, temperature and irradiation dose. - Abstract: The viscoplastic equations with a modified initial internal variable are implemented into a finite element code to investigate the stress-strain response and irradiation hardening of the material at elevated temperatures and different irradiation doses. We applied this model to Mod 9Cr-1Mo steel. The predicted results are validated against experimentally measured data. Furthermore, they show good agreement with previous data from a constitutive crystal plasticity model that accounts for dislocations and interstitial loops. Three previous hardening models for predicting the yield strength of the material are discussed and compared with our simulation results.

  16. A statistical model for measurement error that incorporates variation over time in the target measure, with application to nutritional epidemiology.

    Science.gov (United States)

    Freedman, Laurence S; Midthune, Douglas; Dodd, Kevin W; Carroll, Raymond J; Kipnis, Victor

    2015-11-30

    Most statistical methods that adjust analyses for measurement error assume that the target exposure T is a fixed quantity for each individual. However, in many applications, the value of T for an individual varies with time. We develop a model that accounts for such variation, describing the model within the framework of a meta-analysis of validation studies of dietary self-report instruments, where the reference instruments are biomarkers. We demonstrate that in this application, the estimates of the attenuation factor and correlation with true intake, key parameters quantifying the accuracy of the self-report instrument, are sometimes substantially modified under the time-varying exposure model compared with estimates obtained under a traditional fixed-exposure model. We conclude that accounting for the time element in measurement error problems is potentially important. Copyright © 2015 John Wiley & Sons, Ltd.
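
    The key quantity discussed in this record, the attenuation factor, can be illustrated with a toy simulation contrasting a time-varying target exposure with its long-term average; the variance components and model structure below are assumptions, not the authors' meta-analytic model.

    ```python
    # Toy simulation of the attenuation factor lambda = cov(T, Q) / var(Q) when the
    # target exposure T varies over time. All variances and the linear error model
    # for the self-report Q are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    mu_T = rng.normal(0.0, 1.0, n)                        # long-term average true intake
    T_day = mu_T + rng.normal(0.0, 0.7, n)                # intake on the days the instrument covers
    Q = 0.4 + 0.6 * T_day + rng.normal(0.0, 0.8, n)       # self-report with error

    def attenuation(target, instrument):
        c = np.cov(target, instrument)
        return c[0, 1] / c[1, 1]

    print("lambda vs time-varying target:", round(attenuation(T_day, Q), 3))
    print("lambda vs long-term average  :", round(attenuation(mu_T, Q), 3))
    ```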

  17. The Brain’s sense of walking: a study on the intertwine between locomotor imagery and internal locomotor models in healthy adults, typically developing children and children with cerebral palsy

    Directory of Open Access Journals (Sweden)

    Marco eIosa

    2014-10-01

    Full Text Available Motor imagery and internal motor models have been investigated in depth in the literature. It is well known that motor imagery develops during adolescence and is limited in people affected by cerebral palsy. However, the roles of motor imagery and internal models in locomotion, as well as their intertwining, have received little attention. In this study we compared the performances of healthy adults (n=8, 28.1±5.1 years old), children with typical development (n=8, 8.1±3.8 years old) and children with cerebral palsy (n=12, 7.5±2.9 years old), measured by an optoelectronic system and a trunk-mounted wireless inertial magnetic unit, during three different tasks. Subjects were asked to reach a target located 2 or 3 m in front of them by simulating walking while stepping in place, by actually walking blindfolded, or by walking normally with open eyes. Adults performed a not significantly different number of steps (p=0.761) and spent a not significantly different amount of time (p=0.156) across tasks. Children with typical development showed task-dependent differences both in number of steps (p=0.046) and movement time (p=0.002). However, their performances in simulated and blindfolded walking were strongly correlated (R=0.871 for steps, R=0.673 for time). Further, their error in blindfolded walking was on average only -2.2% of the distance. Children with cerebral palsy also showed significant differences in number of steps (p=0.022) and time (p<0.001), but neither their number of steps nor their movement time recorded during simulated walking was correlated with those of blindfolded and normal walking. Adults used a single strategy across the different tasks. Children with typical development appeared to rely less on their motor predictions, using a task-dependent strategy that probably relies more on sensory feedback. Children with cerebral palsy showed less efficient performances, especially in simulated walking, suggesting an altered locomotor imagery.

  18. Incorporating Anthropogenic Influences into Fire Probability Models: Effects of Human Activity and Climate Change on Fire Activity in California.

    Science.gov (United States)

    Mann, Michael L; Batllori, Enric; Moritz, Max A; Waller, Eric K; Berck, Peter; Flint, Alan L; Flint, Lorraine E; Dolfi, Emmalee

    2016-01-01

    The costly interactions between humans and wildfires throughout California demonstrate the need to understand the relationships between them, especially in the face of a changing climate and expanding human communities. Although a number of statistical and process-based wildfire models exist for California, there is enormous uncertainty about the location and number of future fires, with previously published estimates of increases ranging from nine to fifty-three percent by the end of the century. Our goal is to assess the role of climate and anthropogenic influences on the state's fire regimes from 1975 to 2050. We develop an empirical model that integrates estimates of biophysical indicators relevant to plant communities and anthropogenic influences at each forecast time step. Historically, we find that anthropogenic influences account for up to fifty percent of explanatory power in the model. We also find that the total area burned is likely to increase, with burned area expected to increase by 2.2 and 5.0 percent by 2050 under climatic bookends (PCM and GFDL climate models, respectively). Our two climate models show considerable agreement, but due to potential shifts in rainfall patterns, substantial uncertainty remains for the semiarid inland deserts and coastal areas of the south. Given the strength of human-related variables in some regions, however, it is clear that comprehensive projections of future fire activity should include both anthropogenic and biophysical influences. Previous findings of substantially increased numbers of fires and burned area for California may be tied to omitted variable bias from the exclusion of human influences. The omission of anthropogenic variables in our model would overstate the importance of climatic ones by at least 24%. As such, the failure to include anthropogenic effects in many models likely overstates the response of wildfire to climatic change.

  19. Incorporating Anthropogenic Influences into Fire Probability Models: Effects of Human Activity and Climate Change on Fire Activity in California.

    Directory of Open Access Journals (Sweden)

    Michael L Mann

    Full Text Available The costly interactions between humans and wildfires throughout California demonstrate the need to understand the relationships between them, especially in the face of a changing climate and expanding human communities. Although a number of statistical and process-based wildfire models exist for California, there is enormous uncertainty about the location and number of future fires, with previously published estimates of increases ranging from nine to fifty-three percent by the end of the century. Our goal is to assess the role of climate and anthropogenic influences on the state's fire regimes from 1975 to 2050. We develop an empirical model that integrates estimates of biophysical indicators relevant to plant communities and anthropogenic influences at each forecast time step. Historically, we find that anthropogenic influences account for up to fifty percent of explanatory power in the model. We also find that the total area burned is likely to increase, with burned area expected to increase by 2.2 and 5.0 percent by 2050 under climatic bookends (PCM and GFDL climate models, respectively). Our two climate models show considerable agreement, but due to potential shifts in rainfall patterns, substantial uncertainty remains for the semiarid inland deserts and coastal areas of the south. Given the strength of human-related variables in some regions, however, it is clear that comprehensive projections of future fire activity should include both anthropogenic and biophysical influences. Previous findings of substantially increased numbers of fires and burned area for California may be tied to omitted variable bias from the exclusion of human influences. The omission of anthropogenic variables in our model would overstate the importance of climatic ones by at least 24%. As such, the failure to include anthropogenic effects in many models likely overstates the response of wildfire to climatic change.

  20. A two-dimensional continuum model of biofilm growth incorporating fluid flow and shear stress based detachment

    KAUST Repository

    Duddu, Ravindra

    2009-05-01

    We present a two-dimensional biofilm growth model in a continuum framework using an Eulerian description. A computational technique based on the eXtended Finite Element Method (XFEM) and the level set method is used to simulate the growth of the biofilm. The model considers fluid flow around the biofilm surface, the advection-diffusion and reaction of substrate, variable biomass volume fraction and erosion due to the interfacial shear stress at the biofilm-fluid interface. The key assumptions of the model and the governing equations of transport, biofilm kinetics and biofilm mechanics are presented. Our 2D biofilm growth results are in good agreement with those obtained by Picioreanu et al. (Biotechnol Bioeng 69(5):504-515, 2000). Detachment due to erosion is modeled using two continuous speed functions based on: (a) interfacial shear stress and (b) biofilm height. A relation between the two detachment models in the case of a 1D biofilm is established and simulated biofilm results with detachment in 2D are presented. The stress in the biofilm due to fluid flow is evaluated and higher stresses are observed close to the substratum where the biofilm is attached. © 2008 Wiley Periodicals, Inc.
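
    A one-dimensional toy comparison of the two detachment speed functions mentioned in this record (shear-stress based versus height based) is sketched below; the growth law, the assumed dependence of wall shear stress on thickness and all constants are illustrative only.

    ```python
    # 1D toy comparison of two detachment speed functions: erosion proportional to
    # interfacial shear stress vs. proportional to (squared) biofilm height.
    # Growth law, shear-stress dependence on thickness and all constants are assumed.
    mu_max = 0.05                 # specific growth rate (1/h), assumed
    k_tau, k_h = 0.05, 0.5        # detachment coefficients, assumed

    def tau_wall(h):
        """Crude assumption: a thicker biofilm narrows the channel and raises wall shear."""
        return 0.1 / (1.0 - min(h, 0.8)) ** 2

    dt, steps = 0.1, 5000
    for label, detach in [("shear-stress based", lambda h: k_tau * tau_wall(h)),
                          ("height based", lambda h: k_h * h ** 2)]:
        h = 0.2                                                # initial dimensionless thickness
        for _ in range(steps):
            h = max(h + dt * (mu_max * h - detach(h)), 0.0)    # net growth minus erosion
        print(f"{label:18s}: thickness after {steps * dt:.0f} h = {h:.3f}")
    ```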

  1. Incorporation of the Time-Varying Postprandial Increase in Splanchnic Blood Flow into a PBPK Model to Predict the Effect of Food on the Pharmacokinetics of Orally Administered High-Extraction Drugs.

    Science.gov (United States)

    Rose, Rachel H; Turner, David B; Neuhoff, Sibylle; Jamei, Masoud

    2017-07-01

    Following a meal, a transient increase in splanchnic blood flow occurs that can result in increased exposure to orally administered high-extraction drugs. Typically, physiologically based pharmacokinetic (PBPK) models have incorporated this increase in blood flow as a time-invariant fed/fasted ratio, but this approach is unable to explain the extent of increased drug exposure. A model for the time-varying increase in splanchnic blood flow following a moderate- to high-calorie meal (TV-QSplanch) was developed to describe the observed data for healthy individuals. This was integrated within a PBPK model and used to predict the contribution of increased splanchnic blood flow to the observed food effect for two orally administered high-extraction drugs, propranolol and ibrutinib. The model predicted geometric mean fed/fasted AUC and Cmax ratios of 1.24 and 1.29 for propranolol, which were within the range of published values (within 1.0-1.8-fold of values from eight clinical studies). For ibrutinib, the predicted geometric mean fed/fasted AUC and Cmax ratios were 2.0 and 1.84, respectively, which was within 1.1-fold of the reported fed/fasted AUC ratio but underestimated the reported Cmax ratio by up to 1.9-fold. For both drugs, the interindividual variability in fed/fasted AUC and Cmax ratios was underpredicted. This suggests that the postprandial change in splanchnic blood flow is a major mechanism of the food effect for propranolol and ibrutinib but is insufficient to fully explain the observations. The proposed model is anticipated to improve the prediction of food effect for high-extraction drugs, but should be considered with other mechanisms.
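
    The mechanism described in this record can be illustrated with a simplistic well-stirred liver sketch in which splanchnic/hepatic blood flow rises transiently after a meal; the flow profile, the absorption weighting and all parameter values are assumptions, not the published TV-QSplanch model.

    ```python
    # Simplistic illustration of why a transient postprandial rise in splanchnic blood
    # flow increases exposure to a high-extraction oral drug (well-stirred liver model).
    # The flow profile, absorption weighting and all parameter values are assumed.
    import numpy as np

    fu, CLint = 0.1, 2000.0                  # fraction unbound, intrinsic clearance (L/h), assumed
    Q_fasted = 90.0                          # fasted hepatic blood flow (L/h), assumed

    def hepatic_availability(Q):
        """F_h = Q / (Q + fu*CLint) under the well-stirred model."""
        return Q / (Q + fu * CLint)

    t = np.linspace(0.0, 8.0, 400)                               # hours after dosing
    Q_fed = Q_fasted + 40.0 * np.exp(-((t - 0.75) / 0.5) ** 2)   # transient meal response (assumed)

    ka = 1.5                                 # first-order absorption rate (1/h), assumed
    w = ka * np.exp(-ka * t)
    w /= w.sum()                             # weight F_h by when absorption occurs
    F_fasted = hepatic_availability(Q_fasted)
    F_fed = (hepatic_availability(Q_fed) * w).sum()
    print(f"F_h fasted ~ {F_fasted:.3f}, fed ~ {F_fed:.3f}, fed/fasted ratio ~ {F_fed / F_fasted:.2f}")
    ```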

  2. Incorporating interspecific competition into species-distribution mapping by upward scaling of small-scale model projections to the landscape.

    Directory of Open Access Journals (Sweden)

    Mark Baah-Acheamfour

    Full Text Available There are a number of overarching questions and debates in the scientific community concerning the importance of biotic interactions in species distribution models at large spatial scales. In this paper, we present a framework for revising the potential distribution of tree species native to the Western Ecoregion of Nova Scotia, Canada, by integrating the long-term effects of interspecific competition into an existing abiotic-factor-based definition of potential species distribution (PSD). The PSD model is developed by combining spatially explicit data of individualistic species' response to normalized incident photosynthetically active radiation, soil water content, and growing degree days. A revised PSD model adds biomass output simulated over a 100-year timeframe with a robust forest gap model and scaled up to the landscape using a forestland classification technique. To demonstrate the method, we applied the calculation to the natural range of 16 target tree species as found in 1,240 provincial forest-inventory plots. The revised PSD model, with the long-term effects of interspecific competition accounted for, predicted that eastern hemlock (Tsuga canadensis), American beech (Fagus grandifolia), white birch (Betula papyrifera), red oak (Quercus rubra), sugar maple (Acer saccharum), and trembling aspen (Populus tremuloides) would experience a significant decline in their original distribution compared with balsam fir (Abies balsamea), black spruce (Picea mariana), red spruce (Picea rubens), red maple (Acer rubrum L.), and yellow birch (Betula alleghaniensis). True model accuracy improved from 64.2% with original PSD evaluations to 81.7% with revised PSD. Kappa statistics slightly increased from 0.26 (fair) to 0.41 (moderate) for original and revised PSDs, respectively.

  3. Incorporation of a physically based melt pond scheme into the sea ice component of a climate model

    OpenAIRE

    Flocco, Daniela; Feltham, Danny; Turner, Adrian K.

    2010-01-01

    The extent and thickness of the Arctic sea ice cover has decreased dramatically in the past few decades with minima in sea ice extent in September 2005 and 2007. These minima have not been predicted in the IPCC AR4 report, suggesting that the sea ice component of climate models should more realistically represent the processes controlling the sea ice mass balance. One of the processes poorly represented in sea ice models is the formation and evolution of melt ponds. Melt ponds accumulate on t...

  4. Incorporating cold-air pooling into downscaled climate models increases potential refugia for snow-dependent species within the Sierra Nevada Ecoregion, CA.

    Directory of Open Access Journals (Sweden)

    Jennifer A Curtis

    Full Text Available We present a unique water-balance approach for modeling snowpack under historic, current and future climates throughout the Sierra Nevada Ecoregion. Our methodology uses a finer scale (270 m) than previous regional studies and incorporates cold-air pooling, an atmospheric process that sustains cooler temperatures in topographic depressions, thereby mitigating snowmelt. Our results are intended to support management and conservation of snow-dependent species, which requires characterization of suitable habitat under current and future climates. We use the wolverine (Gulo gulo) as an example species and investigate potential habitat based on the depth and extent of spring snowpack within four National Park units with proposed wolverine reintroduction programs. Our estimates of change in spring snowpack conditions under current and future climates are consistent with recent studies that generally predict declining snowpack. However, model development at a finer scale and incorporation of cold-air pooling increased the persistence of April 1st snowpack. More specifically, incorporation of cold-air pooling into future climate projections increased April 1st snowpack by 6.5% when spatially averaged over the study region, and the trajectory of declining April 1st snowpack reverses at mid-elevations where snowpack losses are mitigated by topographic shading and cold-air pooling. Under future climates with sustained or increased precipitation, our results indicate a high likelihood for the persistence of late spring snowpack at elevations above approximately 2,800 m and identify potential climate refugia sites for snow-dependent species at mid-elevations, where significant topographic shading and cold-air pooling potential exist.

  5. Incorporating water-release and lateral protein interactions in modeling equilibrium adsorption for ion-exchange chromatography.

    Science.gov (United States)

    Thrash, Marvin E; Pinto, Neville G

    2006-09-08

    The equilibrium adsorption of two albumin proteins on a commercial ion exchanger has been studied using a colloidal model. The model accounts for electrostatic and van der Waals forces between proteins and the ion exchanger surface, the energy of interaction between adsorbed proteins, and the contribution of entropy from water-release accompanying protein adsorption. Protein-surface interactions were calculated using methods previously reported in the literature. Lateral interactions between adsorbed proteins were experimentally measured with microcalorimetry. Water-release was estimated by applying the preferential interaction approach to chromatographic retention data. The adsorption of ovalbumin and bovine serum albumin on an anion exchanger at solution pH>pI of protein was measured. The experimental isotherms have been modeled from the linear region to saturation, and the influence of three modulating alkali chlorides on capacity has been evaluated. The heat of adsorption is endothermic for all cases studied, despite the fact that the net charge on the protein is opposite that of the adsorbing surface. Strong repulsive forces between adsorbed proteins underlie the endothermic heat of adsorption, and these forces intensify with protein loading. It was found that the driving force for adsorption is the entropy increase due to the release of water from the protein and adsorbent surfaces. It is shown that the colloidal model predicts protein adsorption capacity in both the linear and non-linear isotherm regions, and can account for the effects of modulating salt.

  6. A two-dimensional continuum model of biofilm growth incorporating fluid flow and shear stress based detachment

    KAUST Repository

    Duddu, Ravindra; Chopp, David L.; Moran, Brian

    2009-01-01

    of the biofilm. The model considers fluid flow around the biofilm surface, the advection-diffusion and reaction of substrate, variable biomass volume fraction and erosion due to the interfacial shear stress at the biofilm-fluid interface. The key assumptions

  7. Incorporating Modeling and Simulations in Undergraduate Biophysical Chemistry Course to Promote Understanding of Structure-Dynamics-Function Relationships in Proteins

    Science.gov (United States)

    Hati, Sanchita; Bhattacharyya, Sudeep

    2016-01-01

    A project-based biophysical chemistry laboratory course, which is offered to the biochemistry and molecular biology majors in their senior year, is described. In this course, the classroom study of the structure-function of biomolecules is integrated with the discovery-guided laboratory study of these molecules using computer modeling and…

  8. Analytical modelling of Halbach linear generator incorporating pole shifting and piece-wise spring for ocean wave energy harvesting

    Science.gov (United States)

    Tan, Yimin; Lin, Kejian; Zu, Jean W.

    2018-05-01

    Halbach permanent magnet (PM) arrays have attracted tremendous research attention in the development of electromagnetic generators because of their unique properties. This paper proposes a generalized analytical model for linear generators. The slotted stator pole-shifting and the implementation of a Halbach array have been combined for the first time. Initially, the magnetization components of the Halbach array are determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution is derived employing specially treated boundary conditions. FEM analysis has been conducted to verify the analytical model. A slotted linear PM generator with a Halbach PM array has been constructed to validate the model and further improved using piece-wise springs to trigger full-range reciprocating motion. A dynamic model has been developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool for the development and optimization of Halbach PM generators. The experimental results indicate that piece-wise springs can be employed to improve generator performance under low excitation frequency.
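
    The first step of the analytical model, Fourier decomposition of the Halbach magnetization, can be sketched numerically for an idealized four-segment-per-wavelength array; the geometry and normalization below are assumptions for illustration, not the paper's derivation.

    ```python
    # Sketch: Fourier decomposition of the magnetization of a segmented linear Halbach
    # array with 4 segments per wavelength. Pitch and remanence are illustrative values.
    import numpy as np

    tau = 0.02                      # wavelength / pole pitch (m), assumed
    M0 = 1.0                        # remanent magnetization magnitude (normalized)
    N = 4096
    x = np.linspace(0.0, tau, N, endpoint=False)

    # Piecewise magnetization directions over one wavelength: up, +x, down, -x
    seg = (x // (tau / 4)).astype(int)
    My = M0 * np.select([seg == 0, seg == 1, seg == 2, seg == 3], [1, 0, -1, 0])
    Mx = M0 * np.select([seg == 0, seg == 1, seg == 2, seg == 3], [0, 1, 0, -1])

    def harmonics(f, n_max=7):
        F = np.fft.rfft(f) / (N / 2)              # amplitude of each spatial harmonic
        return {n: round(abs(F[n]), 3) for n in range(1, n_max + 1)}

    print("My harmonic amplitudes:", harmonics(My))
    print("Mx harmonic amplitudes:", harmonics(Mx))
    ```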

  9. Incorporating time and income constraints in dynamic agent-based models of activity generation and time use : Approach and illustration

    NARCIS (Netherlands)

    Arentze, Theo; Ettema, D.F.; Timmermans, Harry

    Existing theories and models in economics and transportation treat households’ decisions regarding allocation of time and income to activities as a resource-allocation optimization problem. This stands in contrast with the dynamic nature of day-by-day activity-travel choices. Therefore, in the

  10. Incorporating insights from Time Series Analysis in groundwater modelling for the urban area of the city of Amsterdam

    NARCIS (Netherlands)

    Graafstra, P.; Smits, F.J.C.; Janse, T.

    2017-01-01

    As the public water authority of the city of Amsterdam and surrounding areas, Waternet makes use of both steady-state and transient groundwater models for a variety of purposes involving urban groundwater management. For instance when determining the effect of planned measures on the occurrence of

  11. Analytical modelling of Halbach linear generator incorporating pole shifting and piece-wise spring for ocean wave energy harvesting

    Directory of Open Access Journals (Sweden)

    Yimin Tan

    2018-05-01

    Full Text Available Halbach permanent magnet (PM) arrays have attracted tremendous research attention in the development of electromagnetic generators because of their unique properties. This paper proposes a generalized analytical model for linear generators. The slotted stator pole-shifting and the implementation of a Halbach array have been combined for the first time. Initially, the magnetization components of the Halbach array are determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution is derived employing specially treated boundary conditions. FEM analysis has been conducted to verify the analytical model. A slotted linear PM generator with a Halbach PM array has been constructed to validate the model and further improved using piece-wise springs to trigger full-range reciprocating motion. A dynamic model has been developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool for the development and optimization of Halbach PM generators. The experimental results indicate that piece-wise springs can be employed to improve generator performance under low excitation frequency.

  12. Incorporating field wind data into FIRETEC simulations of the International Crown Fire Modeling Experiment (ICFME): preliminary lessons learned

    Science.gov (United States)

    Rodman Linn; Kerry Anderson; Judith Winterkamp; Alyssa Broos; Michael Wotton; Jean-Luc Dupuy; Francois Pimont; Carleton Edminster

    2012-01-01

    Field experiments are one way to develop or validate wildland fire-behavior models. It is important to consider the implications of assumptions relating to the locality of measurements with respect to the fire, the temporal frequency of the measured data, and the changes to local winds that might be caused by the experimental configuration. Twenty FIRETEC simulations...

  13. Incorporating Cancer Stem Cells in Radiation Therapy Treatment Response Modeling and the Implication in Glioblastoma Multiforme Treatment Resistance

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Victoria Y.; Nguyen, Dan; Pajonk, Frank; Kupelian, Patrick; Kaprealian, Tania; Selch, Michael; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu

    2015-03-15

    Purpose: To perform a preliminary exploration with a simplistic mathematical cancer stem cell (CSC) interaction model to determine whether the tumor-intrinsic heterogeneity and dynamic equilibrium between CSCs and differentiated cancer cells (DCCs) can better explain radiation therapy treatment response with a dual-compartment linear-quadratic (DLQ) model. Methods and Materials: The radiosensitivity parameters of CSCs and DCCs for cancer cell lines including glioblastoma multiforme (GBM), non–small cell lung cancer, melanoma, osteosarcoma, and prostate, cervical, and breast cancer were determined by performing robust least-square fitting using the DLQ model on published clonogenic survival data. Fitting performance was compared with the single-compartment LQ (SLQ) and universal survival curve models. The fitting results were then used in an ordinary differential equation describing the kinetics of DCCs and CSCs in response to 2- to 14.3-Gy fractionated treatments. The total dose to achieve tumor control and the fraction size that achieved the least normal biological equivalent dose were calculated. Results: Smaller cell survival fitting errors were observed using DLQ, with the exception of melanoma, which had a low α/β = 0.16 in SLQ. Ordinary differential equation simulation indicated lower normal tissue biological equivalent dose to achieve the same tumor control with a hypofractionated approach for 4 cell lines for the DLQ model, in contrast to SLQ, which favored 2 Gy per fraction for all cells except melanoma. The DLQ model indicated greater tumor radioresistance than SLQ, but the radioresistance was overcome by hypofractionation, other than the GBM cells, which responded poorly to all fractionations. Conclusion: The distinct radiosensitivity and dynamics between CSCs and DCCs in radiation therapy response could perhaps be one possible explanation for the heterogeneous intertumor response to hypofractionation and in some cases superior outcome from

  14. Incorporating Cancer Stem Cells in Radiation Therapy Treatment Response Modeling and the Implication in Glioblastoma Multiforme Treatment Resistance

    International Nuclear Information System (INIS)

    Yu, Victoria Y.; Nguyen, Dan; Pajonk, Frank; Kupelian, Patrick; Kaprealian, Tania; Selch, Michael; Low, Daniel A.; Sheng, Ke

    2015-01-01

    Purpose: To perform a preliminary exploration with a simplistic mathematical cancer stem cell (CSC) interaction model to determine whether the tumor-intrinsic heterogeneity and dynamic equilibrium between CSCs and differentiated cancer cells (DCCs) can better explain radiation therapy treatment response with a dual-compartment linear-quadratic (DLQ) model. Methods and Materials: The radiosensitivity parameters of CSCs and DCCs for cancer cell lines including glioblastoma multiforme (GBM), non–small cell lung cancer, melanoma, osteosarcoma, and prostate, cervical, and breast cancer were determined by performing robust least-square fitting using the DLQ model on published clonogenic survival data. Fitting performance was compared with the single-compartment LQ (SLQ) and universal survival curve models. The fitting results were then used in an ordinary differential equation describing the kinetics of DCCs and CSCs in response to 2- to 14.3-Gy fractionated treatments. The total dose to achieve tumor control and the fraction size that achieved the least normal biological equivalent dose were calculated. Results: Smaller cell survival fitting errors were observed using DLQ, with the exception of melanoma, which had a low α/β = 0.16 in SLQ. Ordinary differential equation simulation indicated lower normal tissue biological equivalent dose to achieve the same tumor control with a hypofractionated approach for 4 cell lines for the DLQ model, in contrast to SLQ, which favored 2 Gy per fraction for all cells except melanoma. The DLQ model indicated greater tumor radioresistance than SLQ, but the radioresistance was overcome by hypofractionation, other than the GBM cells, which responded poorly to all fractionations. Conclusion: The distinct radiosensitivity and dynamics between CSCs and DCCs in radiation therapy response could perhaps be one possible explanation for the heterogeneous intertumor response to hypofractionation and in some cases superior outcome from
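
    A sketch of fitting the dual-compartment linear-quadratic (DLQ) survival curve described in the two records above is given below, using a robust least-squares loss; the data points, starting values and bounds are synthetic and illustrative, not the published cell-line data.

    ```python
    # Sketch of fitting the dual-compartment linear-quadratic (DLQ) survival model
    #   S(D) = f*exp(-a_s*D - b_s*D^2) + (1 - f)*exp(-a_d*D - b_d*D^2)
    # to clonogenic survival data. Data and starting values are synthetic.
    import numpy as np
    from scipy.optimize import least_squares

    def dlq(params, D):
        f, a_s, b_s, a_d, b_d = params
        return f * np.exp(-a_s * D - b_s * D**2) + (1 - f) * np.exp(-a_d * D - b_d * D**2)

    D = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])           # dose (Gy)
    S = np.array([1.0, 0.55, 0.28, 0.15, 0.085, 0.052, 0.034])    # surviving fraction (synthetic)

    def residuals(p):
        return np.log(dlq(p, D)) - np.log(S)                      # fit in log space

    p0 = [0.05, 0.05, 0.01, 0.35, 0.03]                           # [f, a_s, b_s, a_d, b_d]
    fit = least_squares(residuals, p0, loss="soft_l1",            # robust loss, as in the abstract
                        bounds=([0, 0, 0, 0, 0], [1, 2, 1, 2, 1]))
    print("fitted parameters:", np.round(fit.x, 4))
    ```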

  15. Implications of incorporating N cycling and N limitations on primary production in an individual-based dynamic vegetation model

    Science.gov (United States)

    Smith, B.; Wårlind, D.; Arneth, A.; Hickler, T.; Leadley, P.; Siltberg, J.; Zaehle, S.

    2014-04-01

    The LPJ-GUESS dynamic vegetation model uniquely combines an individual- and patch-based representation of vegetation dynamics with ecosystem biogeochemical cycling from regional to global scales. We present an updated version that includes plant and soil N dynamics, analysing the implications of accounting for C-N interactions on predictions and performance of the model. Stand structural dynamics and allometric scaling of tree growth suggested by global databases of forest stand structure and development were well reproduced by the model in comparison to an earlier multi-model study. Accounting for N cycle dynamics improved the goodness of fit for broadleaved forests. N limitation associated with low N-mineralisation rates reduces productivity of cold-climate and dry-climate ecosystems relative to mesic temperate and tropical ecosystems. In a model experiment emulating free-air CO2 enrichment (FACE) treatment for forests globally, N limitation associated with low N-mineralisation rates of colder soils reduces CO2 enhancement of net primary production (NPP) for boreal forests, while some temperate and tropical forests exhibit increased NPP enhancement. Under a business-as-usual future climate and emissions scenario, ecosystem C storage globally was projected to increase by ca. 10%; additional N requirements to match this increasing ecosystem C were within the high N supply limit estimated on stoichiometric grounds in an earlier study. Our results highlight the importance of accounting for C-N interactions in studies of global terrestrial N cycling, and as a basis for understanding mechanisms on local scales and in different regional contexts.

  16. A new model to predict diffusive self-heating during composting incorporating the reaction engineering approach (REA) framework.

    Science.gov (United States)

    Putranto, Aditya; Chen, Xiao Dong

    2017-05-01

    During composting, self-heating may occur due to the exothermicity of the chemical and biological reactions. An accurate model for predicting the maximum temperature is useful for predicting whether the phenomenon will occur and to what extent it will progress. Elevated temperatures can lead to undesirable situations such as the release of large amounts of toxic gases or, sometimes, even spontaneous combustion. In this paper, we report a new model for predicting the profiles of temperature, oxygen concentration, moisture content and water vapor concentration during composting. The model, which consists of a set of conservation equations for heat and mass transfer as well as a biological heating term, employs the reaction engineering approach (REA) framework to describe the local evaporation/condensation rate quantitatively. Good agreement between the predicted and experimental temperature data during composting of sewage sludge is observed. The modeling indicates that the maximum temperature is achieved after some 46 weeks of composting. Following this period, the temperature decreases in line with a significant decrease in moisture content and a large increase in water vapor concentration, indicating a strong cooling effect due to water evaporation. The spatial profiles indicate that the maximum temperature is located approximately at the middle-bottom of the compost piles. Towards the upper surface of the piles, the moisture content and water vapor concentration decrease due to moisture transfer to the surroundings. The newly proposed model can be used as a reliable simulation tool to explore several geometry configurations and operating conditions for avoiding elevated temperature build-up and self-heating during industrial composting. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Incorporating Experience Curves in Appliance Standards Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Garbesi, Karina; Chan, Peter; Greenblatt, Jeffery; Kantner, Colleen; Lekov, Alex; Meyers, Stephen; Rosenquist, Gregory; Buskirk, Robert Van; Yang, Hung-Chia; Desroches, Louis-Benoit

    2011-10-31

    The technical analyses in support of U.S. energy conservation standards for residential appliances and commercial equipment have typically assumed that manufacturing costs and retail prices remain constant during the projected 30-year analysis period. There is, however, considerable evidence that this assumption does not reflect real market prices. Costs and prices generally fall in relation to cumulative production, a phenomenon known as experience and modeled by a fairly robust empirical experience curve. Using price data from the Bureau of Labor Statistics, and shipment data obtained as part of the standards analysis process, we present U.S. experience curves for room air conditioners, clothes dryers, central air conditioners, furnaces, and refrigerators and freezers. These allow us to develop more representative appliance price projections than the assumption-based approach of constant prices. These experience curves were incorporated into recent energy conservation standards for these products. The impact on the national modeling can be significant, often increasing the net present value of potential standard levels in the analysis. In some cases a previously cost-negative potential standard level demonstrates a benefit when incorporating experience. These results imply that past energy conservation standards analyses may have undervalued the economic benefits of potential standard levels.
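
    An experience curve of the kind described in this record can be fitted by ordinary least squares in log-log space; the price and shipment figures below are illustrative, not the BLS price series or shipment data used in the standards analyses.

    ```python
    # Sketch: fitting an experience curve P = P0 * (X / X0)**(-b) to price vs. cumulative
    # shipments. All data values are illustrative assumptions.
    import numpy as np

    cum_shipments = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])       # millions of units (assumed)
    real_price = np.array([520.0, 470.0, 430.0, 395.0, 362.0, 330.0])  # deflated price (assumed)

    slope, intercept = np.polyfit(np.log(cum_shipments), np.log(real_price), 1)
    b = -slope
    learning_rate = 1 - 2 ** (-b)    # fractional price drop per doubling of cumulative production
    print(f"experience exponent b = {b:.3f}, learning rate = {learning_rate:.1%}")

    # Projected real price after cumulative production reaches 128 million units
    print("projected price:", round(float(np.exp(intercept) * 128.0 ** (-b)), 1))
    ```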

  18. Radiation fields, dosimetry, biokinetics and biophysical models for cancer induction by ionising radiation 1996-1999. Biokinetics and dosimetry of incorporated radionuclides. Final report

    International Nuclear Information System (INIS)

    Roth, P.; Aubineau-Laniece, I.; Bailly-Despiney, I.

    2000-01-01

    The final report 'Biokinetics and Dosimetry of Incorporated Radionuclides' presented here is one of 5 individual reports. The work carried out within this project is structured into four work packages: Work package 1 concentrates on ingested radionuclides, considering doses to the GI tract and radionuclide absorption. A major objective is the development of a new dosimetric model of the GI tract, taking account of the most recent data on gut transit and dose to sensitive cells. Work package 2 seeks to improve and extend biokinetic and dosimetric models for systemic radionuclides. Existing models for adults and children will be extended to other elements, and new models will be developed for the embryo and fetus. Work package 3 aims to improve the assessment of localised dose distributions within tissues at the cellular level for specific examples of Auger emitters and alpha-emitting isotopes, in relation to observed effects. The work will include experimental studies of dose/effect relationships and the development of localisation methods. Work package 4 concerns the development of computer codes for the new dosimetric models, quality assurance of the models and the calculation of dose coefficients. Formal sensitivity analysis will be used to identify critical areas of model development and to investigate the effects of variability and uncertainty in biokinetic parameters. (orig.)

  19. Modelling the role of fires in the terrestrial carbon balance by incorporating SPITFIRE into the global vegetation model ORCHIDEE - Part 1: Simulating historical global burned area and fire regimes

    Science.gov (United States)

    C. Yue; P. Ciais; P. Cadule; K. Thonicke; S. Archibald; B. Poulter; W. M. Hao; S. Hantson; F. Mouillot; P. Friedlingstein; F. Maignan; N. Viovy

    2014-01-01

    Fire is an important global ecological process that influences the distribution of biomes, with consequences for carbon, water, and energy budgets. Therefore it is impossible to appropriately model the history and future of the terrestrial ecosystems and the climate system without including fire. This study incorporates the process-based prognostic fire module SPITFIRE...

  20. A deterministic model for deteriorating items with displayed inventory level dependent demand rate incorporating marketing decisions with transportation cost

    Directory of Open Access Journals (Sweden)

    A. K. Bhunia

    2011-01-01

    Full Text Available This paper deals with an inventory model which considers the impact of marketing strategies, such as pricing and advertising, as well as the displayed inventory level, on the demand rate of the system. In addition, the demand rate during the stock-out period differs from that during the stock-in period by a function of the waiting time up to the beginning of the next cycle. Shortages are allowed and partially backlogged. Here, the deterioration rate is assumed to follow the Weibull distribution. Considering these and other factors, different scenarios of the system are investigated. To obtain the solutions for these cases and to illustrate the model, an example is considered. Finally, sensitivity analyses have been carried out with respect to the different parameters of the system to study the effects of changes in these parameters.
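
    A minimal numerical sketch of the stock-in period dynamics of such a model (displayed-stock-dependent demand with Weibull-distributed deterioration) is shown below; the demand form, marketing multipliers and parameter values are assumptions, not the paper's formulation.

    ```python
    # Stock-in period of an inventory model with displayed-stock-dependent demand and
    # Weibull-distributed deterioration:
    #   dI/dt = -D(I) - theta(t)*I,   theta(t) = alpha*beta*t**(beta - 1)
    # Demand form, marketing multipliers and all parameter values are assumed.
    alpha, beta = 0.02, 1.5               # Weibull deterioration parameters, assumed
    a, b = 5.0, 0.1                       # demand = a + b*I: base demand plus displayed-stock effect
    price_factor, adv_factor = 0.9, 1.2   # crude multiplicative pricing/advertising effects, assumed

    def demand(I):
        return price_factor * adv_factor * (a + b * max(I, 0.0))

    dt, t, I = 0.01, 0.0, 100.0           # time step, time, initial order quantity
    while I > 0.0:                        # integrate the stock-in period until stock-out
        theta = alpha * beta * t ** (beta - 1.0) if t > 0.0 else 0.0
        I -= dt * (demand(I) + theta * I)
        t += dt

    print(f"stock-out occurs at t = {t:.2f} time units")
    ```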

  1. Development of an atmospheric diffusion numerical model for a nuclear facility. Numerical calculation method incorporating building effects

    International Nuclear Information System (INIS)

    Sada, Koichi; Michioka, Takenobu; Ichikawa, Yoichi

    2002-01-01

    Because effluent gas is sometimes released from low positions, viz., near the ground surface and around buildings, the effects of buildings within the site area are not negligible for gas diffusion predictions. For this reason, building effects on gas diffusion are considered in this report under a terrain-following calculation coordinate system. Numerical calculation meshes on the ground surface are treated as the building, with wall-function techniques adapted for the turbulent quantities in flow calculations using a turbulence closure model. Reflection of released particles at building surfaces is taken into account in the diffusion calculation using the Lagrangian particle model. The flow and diffusion results obtained are compared with those of wind tunnel experiments around the building. Features observed in the wind tunnel, viz., the formation of cavity regions behind the building and gas diffusion to the ground surface behind the building, are also reproduced by the numerical calculation. (author)
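
    The reflection treatment mentioned in this record can be illustrated with a minimal Lagrangian random-walk sketch in which particles are reflected at the ground and at a building face; the random-walk formulation, geometry and turbulence values are illustrative assumptions only.

    ```python
    # Minimal Lagrangian particle dispersion sketch with perfect reflection at the ground
    # (z = 0) and at the upwind face of a block building. Geometry and turbulence values
    # are illustrative; this is not the report's full closure-model formulation.
    import numpy as np

    rng = np.random.default_rng(2)
    n_particles, n_steps, dt = 2000, 500, 0.1
    u_mean, sigma_w = 2.0, 0.5                 # mean wind (m/s) and vertical turbulence (m/s)
    x = np.zeros(n_particles)
    z = np.full(n_particles, 1.0)              # low-level release height (m)
    bld_x, bld_h = 30.0, 10.0                  # building face location and roof height (m), assumed

    for _ in range(n_steps):
        x += u_mean * dt                                        # advection by the mean wind
        z += rng.normal(0.0, sigma_w * np.sqrt(dt), n_particles)  # vertical random walk
        z = np.abs(z)                                           # reflect at the ground
        hit = (x >= bld_x) & (z < bld_h)                        # particles hitting the building face
        x[hit] = 2 * bld_x - x[hit]                             # reflect them back upstream

    print(f"fraction of particles below 2 m after {n_steps * dt:.0f} s: {(z < 2.0).mean():.2f}")
    ```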

  2. Incorporating anthropogenic influences into fire probability models: Effects of development and climate change on fire activity in California

    Science.gov (United States)

    Mann, M.; Moritz, M.; Batllori, E.; Waller, E.; Krawchuk, M.; Berck, P.

    2014-12-01

    The costly interactions between humans and natural fire regimes throughout California demonstrate the need to understand the uncertainties surrounding wildfire, especially in the face of a changing climate and expanding human communities. Although a number of statistical and process-based wildfire models exist for California, there is enormous uncertainty about the location and number of future fires. Models estimate an increase in fire occurrence between nine and fifty-three percent by the end of the century. Our goal is to assess the role of uncertainty in climate and anthropogenic influences on the state's fire regime from 2000-2050. We develop an empirical model that integrates novel information about the distribution and characteristics of future plant communities without assuming a particular distribution, and improve on previous efforts by integrating dynamic estimates of population density at each forecast time step. Historically, we find that anthropogenic influences account for up to fifty percent of the total fire count, and that further housing development will incite or suppress additional fires according to their intensity. We also find that the total area burned is likely to increase but at a slower than historical rate. Previous findings of substantially increased numbers of fires may be tied to the assumption of static fuel loadings, and the use of proxy variables not relevant to plant community distributions. We also find considerable agreement between GFDL and PCM model A2 runs, with decreasing fire counts expected only in areas of coastal influence below San Francisco and above Los Angeles. Due to potential shifts in rainfall patterns, substantial uncertainty remains for the semiarid deserts of the inland south. The broad shifts of wildfire between California's climatic regions forecast in this study point to dramatic shifts in the pressures plant and human communities will face by midcentury. The information provided by this study reduces the

  3. Predicting 30-Day Readmissions in an Asian Population: Building a Predictive Model by Incorporating Markers of Hospitalization Severity.

    Directory of Open Access Journals (Sweden)

    Lian Leng Low

    Full Text Available To reduce readmissions, it may be cost-effective to consider risk stratification, targeting intervention programs at patients at high risk of readmission. In this study, we aimed to derive and validate a prediction model including several novel markers of hospitalization severity, and compare the model with the LACE index (Length of stay, Acuity of admission, Charlson comorbidity index, Emergency department visits in past 6 months), an established risk stratification tool. This was a retrospective cohort study of all patients ≥ 21 years of age who were admitted to a tertiary hospital in Singapore from January 1, 2013 through May 31, 2015. Data were extracted from the hospital's electronic health records. The outcome was defined as unplanned readmissions within 30 days of discharge from the index hospitalization. Candidate predictive variables were broadly grouped into five categories: patient demographics, social determinants of health, past healthcare utilization, medical comorbidities, and markers of hospitalization severity. Multivariable logistic regression was used to predict the outcome, and receiver operating characteristic analysis was performed to compare our model with the LACE index. 74,102 cases were enrolled for analysis. Of these, 11,492 patient cases (15.5%) were readmitted within 30 days of discharge. A total of fifteen predictive variables were strongly associated with the risk of 30-day readmissions, including number of emergency department visits in the past 6 months, Charlson Comorbidity Index, and markers of hospitalization severity such as 'requiring inpatient dialysis during index admission' and 'treatment with intravenous furosemide 40 milligrams or more' during index admission. Our predictive model outperformed the LACE index by achieving larger area under the curve values: 0.78 (95% confidence interval [CI]: 0.77-0.79) versus 0.70 (95% CI: 0.69-0.71). Several factors are important for the risk of 30-day readmissions

  4. Genome-wide prediction models that incorporate de novo GWAS are a powerful new tool for tropical rice improvement

    Science.gov (United States)

    Spindel, J E; Begum, H; Akdemir, D; Collard, B; Redoña, E; Jannink, J-L; McCouch, S

    2016-01-01

    To address the multiple challenges to food security posed by global climate change, population growth and rising incomes, plant breeders are developing new crop varieties that can enhance both agricultural productivity and environmental sustainability. Current breeding practices, however, are unable to keep pace with demand. Genomic selection (GS) is a new technique that helps accelerate the rate of genetic gain in breeding by using whole-genome data to predict the breeding value of offspring. Here, we describe a new GS model that combines RR-BLUP with markers fit as fixed effects selected from the results of a genome-wide-association study (GWAS) on the RR-BLUP training data. We term this model GS + de novo GWAS. In a breeding population of tropical rice, GS + de novo GWAS outperformed six other models for a variety of traits and in multiple environments. On the basis of these results, we propose an extended, two-part breeding design that can be used to efficiently integrate novel variation into elite breeding populations, thus expanding genetic diversity and enhancing the potential for sustainable productivity gains. PMID:26860200
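
    A rough analogue of the GS + de novo GWAS idea is sketched below: a quick single-marker scan on the training set selects markers that enter unpenalized (fixed effects), while the remaining markers are shrunk ridge-style as an RR-BLUP stand-in; the simulated markers and the two-stage fitting shortcut are assumptions, not the authors' implementation.

    ```python
    # Rough analogue of "GS + de novo GWAS": GWAS-selected markers as fixed effects,
    # remaining markers shrunk by ridge regression (an RR-BLUP stand-in). Simulated data.
    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge

    rng = np.random.default_rng(3)
    n, m = 300, 1000
    X = rng.integers(0, 3, (n, m)).astype(float)                  # marker genotypes 0/1/2
    beta = np.zeros(m); beta[[10, 200, 500]] = [1.5, -1.2, 1.0]   # a few large-effect loci
    y = X @ beta + rng.normal(0.0, 2.0, n)                        # phenotype

    train, test = slice(0, 200), slice(200, None)

    # 1) de novo GWAS on the training data: marginal correlation as a quick test statistic
    scores = np.abs(np.corrcoef(X[train].T, y[train])[:-1, -1])
    fixed = np.argsort(scores)[-3:]                               # top markers become fixed effects
    rest = np.setdiff1d(np.arange(m), fixed)

    # 2) fixed effects by OLS, then ridge (RR-BLUP-like) on the residual
    ols = LinearRegression().fit(X[train][:, fixed], y[train])
    resid = y[train] - ols.predict(X[train][:, fixed])
    ridge = Ridge(alpha=100.0).fit(X[train][:, rest], resid)

    pred = ols.predict(X[test][:, fixed]) + ridge.predict(X[test][:, rest])
    print("predictive correlation:", round(np.corrcoef(pred, y[test])[0, 1], 3))
    ```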

  5. Incorporating higher order WINKLER springs with 3-D finite element model of a reactor building for seismic SSI analysis

    International Nuclear Information System (INIS)

    Ermutlu, H.E.

    1993-01-01

    In order to fulfill the seismic safety requirements, in the framework of seismic requalification activities for NPP Muehleberg, Switzerland, detailed seismic analyses were performed on the reactor building, the results of which have been presented previously. The primary objective of the present investigation is to assess the seismic safety of the reinforced concrete structures of the reactor building. Achieving this objective requires rather detailed 3-D finite element modelling of the outer shell structures, the drywell, the reactor pools, the floor decks and, finally, the basemat. This is already a complicated task, which enforces the need for simplifications in modelling the reactor internals and the foundation soil. Accordingly, all internal parts are modelled by vertical sticks and the soil-structure interaction (SSI) effects are represented by sets of translational and higher-order rotational WINKLER springs, thus avoiding a complicated finite element SSI analysis. The availability of results from recent investigations carried out on the reactor building using diverse finite element SSI analysis methods allows the WINKLER springs to be calibrated, ensuring that the overall SSI behaviour of the reactor building is maintained

  6. Incorporating Ecosystem Experiments and Observations into Process Models of Forest Carbon and Water Cycles: Challenges and Solutions

    Science.gov (United States)

    Ward, E. J.; Thomas, R. Q.; Sun, G.; McNulty, S. G.; Domec, J. C.; Noormets, A.; King, J. S.

    2015-12-01

    Numerous studies, both experimental and observational, have been conducted over the past two decades in an attempt to understand how water and carbon cycling in terrestrial ecosystems may respond to changes in climatic conditions. These studies have produced a wealth of detailed data on key processes driving these cycles. In parallel, sophisticated models of these processes have been formulated to answer a variety of questions relevant to natural resource management. Recent advances in data assimilation techniques offer exciting new possibilities to combine this wealth of ecosystem data with process models of ecosystem function to improve prediction and quantify associated uncertainty. Using forests of the southeastern United States as our focus, we will specify how fine-scale physiological data (e.g. half-hourly sap flux) can be scaled up with quantified error for use in models of stand growth and hydrology. This approach represents an opportunity to leverage current and past research from experiments including throughfall displacement × fertilization (PINEMAP), irrigation × fertilization (SETRES), elevated CO2 (Duke and ORNL FACE) and a variety of observational studies in both conifer and hardwood forests throughout the region, using a common platform for data assimilation and prediction. As part of this discussion, we will address variation in dominant species, stand structure, site age, management practices, soils and climate that represent both challenges to the development of a common analytical approach and opportunities to address questions of interest to policy makers and natural resource managers.
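
    The abstract does not name a specific assimilation scheme, but an ensemble Kalman-style update is one common choice. The toy sketch below assimilates a single synthetic observation (a stand-in for, e.g., scaled-up sap-flux-derived stand water use) into an ensemble of model states; all numbers are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    n_ens = 100
    state = rng.normal(loc=5.0, scale=1.0, size=n_ens)   # prior ensemble (model forecast)
    obs, obs_err = 6.2, 0.5                               # observation and its std. error

    # Kalman gain from the ensemble variance: K = P / (P + R) for a scalar state
    P = np.var(state, ddof=1)
    K = P / (P + obs_err**2)

    # Perturbed-observation update of each ensemble member
    perturbed_obs = obs + rng.normal(0.0, obs_err, n_ens)
    analysis = state + K * (perturbed_obs - state)

    print(f"prior mean {state.mean():.2f} -> analysis mean {analysis.mean():.2f}")
    print(f"prior var  {P:.2f} -> analysis var  {np.var(analysis, ddof=1):.2f}")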

  7. Effect of Load Model Using Ranking Identification Technique for Multi Type DG Incorporating Embedded Meta EP-Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Abdul Rahim Siti Rafidah

    2018-01-01

    Full Text Available This paper presents the effect of the load model prior to distributed generation (DG) planning in a distribution system. To achieve optimal allocation and placement of DG, a ranking identification technique is proposed to study DG planning using the pre-developed Embedded Meta Evolutionary Programming–Firefly Algorithm. The aim of this study is to analyze the effect of different types of DG in order to reduce the total losses while considering the load factor. To demonstrate the effectiveness of the proposed technique, the IEEE 33-bus test system was used as the test case. The proposed technique was used to determine the DG size and a suitable location for DG planning. The results support the DG optimization process for the benefit of power system operators and planners in the utility. Power system planners can choose a suitable size and location from the results obtained in this study, within the company's budget. The modeling of voltage-dependent loads is presented, and the results show that voltage-dependent load models have a significant effect on the total losses of a distribution system for different DG types.
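
    For context (the exact load model used in the paper is not reproduced here), a standard exponential voltage-dependent load model is sketched below; the exponents are typical textbook values for a residential load, not values taken from the study.

    def voltage_dependent_load(p0_kw, q0_kvar, v_pu, np_exp, nq_exp, v0_pu=1.0):
        """Active/reactive demand at voltage v_pu for a load rated (p0, q0) at v0_pu."""
        p = p0_kw * (v_pu / v0_pu) ** np_exp
        q = q0_kvar * (v_pu / v0_pu) ** nq_exp
        return p, q

    # Example: a 100 kW / 60 kvar residential-type load served at 0.95 pu voltage
    p, q = voltage_dependent_load(100.0, 60.0, v_pu=0.95, np_exp=0.92, nq_exp=4.04)
    print(f"P = {p:.1f} kW, Q = {q:.1f} kvar at 0.95 pu")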

  8. 3D statistical shape models incorporating 3D random forest regression voting for robust CT liver segmentation

    Science.gov (United States)

    Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.

    2015-03-01

    During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the according reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions on landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.
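
    The sketch below illustrates the regression-voting idea in a simplified form (it is not the published pipeline): a multi-output random forest learns the 3D displacement from a sample point to a landmark, and the predicted positions are accumulated as votes whose consensus gives the landmark estimate. The synthetic intensity field and finite-difference descriptor stand in for the randomized 3D Haar-like features of the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)
    landmark = np.array([32.0, 32.0, 32.0])        # "true" landmark in a 64^3 volume

    def intensity(p):
        # Synthetic image: a smooth blob centred on the landmark.
        return np.exp(-np.sum((p - landmark) ** 2) / (2.0 * 15.0 ** 2))

    def local_feature(p):
        # Appearance descriptor: centre intensity plus finite differences along
        # each axis (a crude stand-in for randomized 3D Haar-like features).
        eps = 2.0
        feats = [intensity(p)]
        for axis in range(3):
            offset = np.zeros(3)
            offset[axis] = eps
            feats.append(intensity(p + offset) - intensity(p - offset))
        return np.array(feats)

    # Training set: random sample points with their displacements to the landmark
    train_pts = rng.uniform(12.0, 52.0, size=(2000, 3))
    X_train = np.array([local_feature(p) for p in train_pts])
    y_train = landmark - train_pts                 # 3D displacement targets

    forest = RandomForestRegressor(n_estimators=50, random_state=0)
    forest.fit(X_train, y_train)

    # Voting: each test point casts a vote at its predicted landmark position
    test_pts = rng.uniform(12.0, 52.0, size=(500, 3))
    votes = test_pts + forest.predict(np.array([local_feature(p) for p in test_pts]))
    print("estimated landmark:", np.round(votes.mean(axis=0), 1))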

  9. Modelling of the interaction between chemical and mechanical behaviour of ion exchange resins incorporated into a cement-based matrix

    Directory of Open Access Journals (Sweden)

    Le Bescop P.

    2013-07-01

    Full Text Available In this paper, we present a predictive model, based on experimental data, to determine the macroscopic mechanical behavior of a material made up of ion exchange resins solidified into a CEM III cement paste. Some observations have shown that in some cases a significant macroscopic expansion of this composite material may be expected, due to internal pressures generated in the resin. To build the model, we chose to break the problem down into studies at two scales. The first deals with the mechanical behavior of the different heterogeneities of the composite, i.e. the resin and the cement paste. The second upscales the information from the heterogeneities to the Representative Elementary Volume (REV) of the composite. The effects of the heterogeneities are taken into account in the REV by applying a homogenization method derived from the Eshelby theory, combined with an interaction coefficient drawn from poroelasticity theory. At the first scale, a formulation based on the second law of thermodynamics is developed to estimate the microscopic swelling of the resin. The model response is illustrated on a simple example showing the impact of the calculated internal pressure on the macroscopic strain.
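
    For readers unfamiliar with Eshelby-based homogenization, the sketch below shows one standard scheme of this family, a Mori-Tanaka estimate for spherical inclusions. It is not the scheme calibrated in the paper, and the moduli and volume fraction are placeholders rather than the cement-paste/resin values of the study.

    def mori_tanaka_spherical(K_m, G_m, K_i, G_i, f):
        """Effective bulk/shear moduli of a matrix (K_m, G_m) containing a volume
        fraction f of spherical inclusions (K_i, G_i), Mori-Tanaka estimate."""
        zeta = G_m * (9.0 * K_m + 8.0 * G_m) / (6.0 * (K_m + 2.0 * G_m))
        K_eff = K_m + f * (K_i - K_m) / (1.0 + (1.0 - f) * (K_i - K_m) / (K_m + 4.0 * G_m / 3.0))
        G_eff = G_m + f * (G_i - G_m) / (1.0 + (1.0 - f) * (G_i - G_m) / (G_m + zeta))
        return K_eff, G_eff

    # Hypothetical stiff cementitious matrix with compliant resin-like inclusions (GPa)
    K_eff, G_eff = mori_tanaka_spherical(K_m=15.0, G_m=9.0, K_i=2.0, G_i=0.5, f=0.3)
    print(f"K_eff = {K_eff:.2f} GPa, G_eff = {G_eff:.2f} GPa")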

  10. Incorporating positive body image into the treatment of eating disorders: A model for attunement and mindful self-care.

    Science.gov (United States)

    Cook-Cottone, Catherine P

    2015-06-01

    This article provides a model for understanding the role positive body image can play in the treatment of eating disorders and methods for guiding patients away from symptoms and toward flourishing. The Attuned Representational Model of Self (Cook-Cottone, 2006) and a conceptual model detailing flourishing in the context of body image and eating behavior (Cook-Cottone et al., 2013) are discussed. The flourishing inherent in positive body image comes hand-in-hand with two critical ways of being: (a) having healthy, embodied awareness of the internal and external aspects of self (i.e., attunement) and (b) engaging in mindful self-care. Attunement and mindful self-care are thus considered as potential targets of actionable therapeutic work in the cultivation of positive body image among those with disordered eating. For context, best practices in eating disorder treatment are also reviewed. Limitations in current research are detailed and directions for future research are explicated. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Mathematical modelling and optimization of a large-scale combined cooling, heat, and power system that incorporates unit changeover and time-of-use electricity price

    International Nuclear Information System (INIS)

    Zhu, Qiannan; Luo, Xianglong; Zhang, Bingjian; Chen, Ying

    2017-01-01

    Highlights: • We propose a novel superstructure for the design and optimization of LSCCHP. • A multi-objective multi-period MINLP model is formulated. • The unit start-up cost and time-of-use electricity prices are taken into account. • A unit size discretization strategy is proposed to linearize the original MINLP model. • A case study is elaborated to demonstrate the effectiveness of the proposed method. - Abstract: Building energy systems, particularly large public ones, are major energy consumers and pollutant emission contributors. In this study, a superstructure of a large-scale combined cooling, heat, and power system is constructed. The off-design unit, economic cost, and CO_2 emission models are also formulated. Moreover, a multi-objective mixed integer nonlinear programming model is formulated for the simultaneous system synthesis, technology selection, unit sizing, and operation optimization of the large-scale combined cooling, heat, and power system. Time-of-use electricity price and unit changeover cost are incorporated into the problem model. The economic objective is to minimize the total annual cost, which comprises the operation and investment costs of the large-scale combined cooling, heat, and power system. The environmental objective is to minimize the annual global CO_2 emission of the large-scale combined cooling, heat, and power system. The augmented ε–constraint method is applied to obtain the Pareto frontier of the design configuration, reflecting the set of solutions that represent optimal trade-offs between the economic and environmental objectives. A sensitivity analysis is conducted to reflect the impact of the natural gas price on the combined cooling, heat, and power system. The synthesis and design of a combined cooling, heat, and power system for an airport in China are studied to test the proposed synthesis and design methodology. The Pareto curve of the multi-objective optimization shows that the total annual cost varies from 102.53 to 94.59 M
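
    To illustrate the augmented ε-constraint idea on a problem small enough to read (the paper's model is a far larger multi-period MINLP), the sketch below traces a cost-versus-CO2 Pareto front for a toy two-technology dispatch problem; all coefficients are hypothetical.

    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([3.0, 5.0])       # cost per unit of energy for options 1 and 2
    co2 = np.array([8.0, 2.0])        # emissions per unit of energy
    demand = 100.0
    delta = 1e-3                       # small weight on the slack (the "augmented" term)

    pareto = []
    for eps in np.linspace(200.0, 800.0, 7):        # emission caps to sweep
        # Variables: [x1, x2, slack]; minimize cost - delta*slack
        c = np.concatenate([cost, [-delta]])
        A_ub = [[-1.0, -1.0, 0.0]]                  # x1 + x2 >= demand
        b_ub = [-demand]
        A_eq = [[co2[0], co2[1], 1.0]]              # emissions + slack = eps
        b_eq = [eps]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * 3, method="highs")
        x = res.x[:2]
        pareto.append((cost @ x, co2 @ x))

    for total_cost, total_co2 in pareto:
        print(f"cost = {total_cost:7.1f}, CO2 = {total_co2:7.1f}")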

  12. Novel analytical model for optimizing the pull-in voltage in a flexured MEMS switch incorporating beam perforation effect

    Science.gov (United States)

    Guha, K.; Laskar, N. M.; Gogoi, H. J.; Borah, A. K.; Baishnab, K. L.; Baishya, S.

    2017-11-01

    This paper presents a new method for the design, modelling and optimization of a uniform serpentine meander based MEMS shunt capacitive switch with perforation on the upper beam. The new approach is proposed to improve the pull-in voltage performance of a MEMS switch. First, a new analytical model of the pull-in voltage is proposed using the modified Meijs-Fokkema capacitance model, simultaneously accounting for the nonlinear electrostatic force, the fringing field effect due to beam thickness, and the etched holes on the beam; the model is then validated against the simulated results of the benchmark full 3D FEM solver CoventorWare over a wide range of structural parameter variations. It shows good agreement with the simulated results. Secondly, an optimization method is presented to determine the optimum configuration of the switch for achieving minimum pull-in voltage, with the proposed analytical model as the objective function. Several high-performance evolutionary optimization algorithms have been utilized to obtain the optimum dimensions with low computational cost and complexity. Comparing the applied algorithms with each other, the Dragonfly Algorithm is found to be the most suitable in terms of minimum pull-in voltage and higher convergence speed. Optimized values are validated against the simulated results of CoventorWare, which show very satisfactory agreement with a small deviation of 0.223 V. In addition, the paper proposes, for the first time, a novel algorithmic approach for the uniform arrangement of square holes in a given beam area of an RF MEMS switch for perforation. The algorithm dynamically accommodates all the square holes within a given beam area such that the maximum space is utilized. This automated arrangement of perforation holes further improves the computational efficiency and design accuracy of the complex perforated MEMS switch design.
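
    As background (not the paper's refined model), the textbook parallel-plate pull-in voltage V_PI = sqrt(8*k*g0^3 / (27*eps0*A)) is the baseline that such analytical models extend with fringing-field and perforation corrections. The spring constant, gap and beam area below are placeholder values, and the hole-fraction correction is a deliberately crude illustration.

    import math

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def pull_in_voltage(k_spring, gap, area, hole_fraction=0.0):
        """Parallel-plate pull-in voltage; a perforated beam is crudely represented
        here by reducing the effective electrode area by the hole fraction."""
        a_eff = area * (1.0 - hole_fraction)
        return math.sqrt(8.0 * k_spring * gap**3 / (27.0 * EPS0 * a_eff))

    v_solid = pull_in_voltage(k_spring=5.0, gap=3e-6, area=1e-7)
    v_perf  = pull_in_voltage(k_spring=5.0, gap=3e-6, area=1e-7, hole_fraction=0.2)
    print(f"solid beam:      V_PI = {v_solid:.2f} V")
    print(f"20% perforated:  V_PI = {v_perf:.2f} V")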

  13. Where is the competitive advantage going?: a management model that incorporates people as a key element of the business strategy

    Directory of Open Access Journals (Sweden)

    Emilio García Vega

    2015-09-01

    Full Text Available Competitive advantage is a concept that has evolved rapidly over the last few years. Some scholars and executives argue that people are a fundamental element of its construction. In this vein, business management has leaned towards human resources management – the management of “talent” – as the key element of organizational success. Along the way, ideas, paradigms and conceptions have changed in interesting ways. This paper examines these new conceptions in light of the challenge of managing organizations, and proposes a management model based on the importance of people in building and sustaining competitive advantage.

  14. A hybrid health service accreditation program model incorporating mandated standards and continuous improvement: interview study of multiple stakeholders in Australian health care.

    Science.gov (United States)

    Greenfield, David; Hinchcliff, Reece; Hogden, Anne; Mumford, Virginia; Debono, Deborah; Pawsey, Marjorie; Westbrook, Johanna; Braithwaite, Jeffrey

    2016-07-01

    The study aim was to investigate the understandings and concerns of stakeholders regarding the evolution of health service accreditation programs in Australia. Stakeholder representatives from programs in the primary, acute and aged care sectors participated in semi-structured interviews. Across 2011-12 there were 47 group and individual interviews involving 258 participants. Interviews lasted, on average, 1 h, and were digitally recorded and transcribed. Transcriptions were analysed using textual referencing software. Four significant issues were considered to have directed the evolution of accreditation programs: altering underlying program philosophies; shifting of program content focus and details; different surveying expectations and experiences; and the influence of external contextual factors upon accreditation programs. Three accreditation program models were noted by participants: regulatory compliance; continuous quality improvement; and a hybrid model incorporating elements of these two. Respondents noted the compatibility or incommensurability of the first two models. Participation in a program was reportedly experienced as ranging on a survey continuum from "malicious compliance" to "performance audits" to "quality improvement journeys". Wider contextual factors, in particular, political and community expectations, and associated media reporting, were considered significant influences on the operation and evolution of programs. A hybrid accreditation model was noted to have evolved. The hybrid model promotes minimum standards and continuous quality improvement, through examining the structure and processes of organisations and the outcomes of care. The hybrid model appears to be directing organisational and professional attention to enhance their safety cultures. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Predictive Treatment Management: Incorporating a Predictive Tumor Response Model Into Robust Prospective Treatment Planning for Non-Small Cell Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Pengpeng, E-mail: zhangp@mskcc.org [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Yorke, Ellen; Hu, Yu-Chi; Mageras, Gig [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Rimner, Andreas [Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Deasy, Joseph O. [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States)

    2014-02-01

    Purpose: We hypothesized that a treatment planning technique that incorporates predicted lung tumor regression into optimization, predictive treatment planning (PTP), could allow dose escalation to the residual tumor while maintaining coverage of the initial target without increasing dose to surrounding organs at risk (OARs). Methods and Materials: We created a model to estimate the geometric presence of residual tumors after radiation therapy using planning computed tomography (CT) and weekly cone beam CT scans of 5 lung cancer patients. For planning purposes, we modeled the dynamic process of tumor shrinkage by morphing the original planning target volume (PTV_orig) in 3 equispaced steps to the predicted residue (PTV_pred). Patients were treated with a uniform prescription dose to PTV_orig. By contrast, PTP optimization started with the same prescription dose to PTV_orig but linearly increased the dose at each step, until reaching the highest dose achievable to PTV_pred consistent with OAR limits. This method is compared with midcourse adaptive replanning. Results: Initial parenchymal gross tumor volume (GTV) ranged from 3.6 to 186.5 cm^3. On average, the primary GTV and PTV decreased by 39% and 27%, respectively, at the end of treatment. The PTP approach gave PTV_orig at least the prescription dose, and it increased the mean dose of the true residual tumor by an average of 6.0 Gy above the adaptive approach. Conclusions: PTP, incorporating a tumor regression model from the start, represents a new approach to increase tumor dose without increasing toxicities, and reduce clinical workload compared with the adaptive approach, although model verification using per-patient midcourse imaging would be prudent.
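
    The sketch below is a minimal illustration of the stepwise scheme described above: the target is morphed from PTV_orig toward PTV_pred in three equispaced steps while the step prescription rises linearly toward the highest dose the OAR limits allow. The volumes and dose levels are placeholders, not patient data or the study's constraints.

    ptv_orig_cc, ptv_pred_cc = 180.0, 110.0   # hypothetical initial/predicted volumes
    d_prescription, d_max_oar = 60.0, 74.0    # Gy: baseline dose and OAR-limited ceiling
    n_steps = 3

    for i in range(n_steps + 1):
        frac = i / n_steps                                       # 0 .. 1 across the steps
        volume = ptv_orig_cc + frac * (ptv_pred_cc - ptv_orig_cc)
        dose = d_prescription + frac * (d_max_oar - d_prescription)
        print(f"step {i}: target volume ~{volume:5.1f} cc, planned dose {dose:4.1f} Gy")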

  16. Seasonal PCB bioaccumulation in an arctic marine ecosystem: a model analysis incorporating lipid dynamics, food-web productivity and migration.