WorldWideScience

Sample records for model include normal

  1. The Benefits of Including Clinical Factors in Rectal Normal Tissue Complication Probability Modeling After Radiotherapy for Prostate Cancer

    International Nuclear Information System (INIS)

    Defraene, Gilles; Van den Bergh, Laura; Al-Mamgani, Abrahim; Haustermans, Karin; Heemsbergen, Wilma; Van den Heuvel, Frank; Lebesque, Joos V.

    2012-01-01

    Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman [LKB] and Relative Seriality [RS]) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011–0.013) clinical factor was “previous abdominal surgery.” As the second significant (p = 0.012–0.016) factor, “cardiac history” was included in all three rectal bleeding fits, whereas including “diabetes” was significant (p = 0.039–0.048) in fecal incontinence modeling, but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003–0.006) from the inclusion of the baseline toxicity score. For all models, the rectal bleeding fits had the highest AUC (0.77), compared with 0.63 for high stool frequency and 0.68 for fecal incontinence. The LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas the anorectal wall dose best described the other two end points. Conclusions
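
    A minimal sketch of how an LKB-type NTCP fit with an added clinical factor can be expressed. The gEUD dose-volume reduction and the probit dose-response are the standard LKB forms; treating the clinical factor as a dose-modifying factor on D50, and all parameter and DVH values below, are illustrative assumptions rather than the fit actually used in the trial analysis.

        import numpy as np
        from scipy.stats import norm

        def geud(doses, volumes, n):
            """Generalized equivalent uniform dose for a differential DVH."""
            w = volumes / volumes.sum()
            return (w * doses ** (1.0 / n)).sum() ** n

        def lkb_ntcp(doses, volumes, d50, m, n, clinical_factor=0, dose_mod=0.9):
            """LKB NTCP; a binary clinical factor (e.g. previous abdominal surgery)
            is assumed here to lower the effective D50 (illustrative choice)."""
            eud = geud(doses, volumes, n)
            d50_eff = d50 * (dose_mod if clinical_factor else 1.0)
            return norm.cdf((eud - d50_eff) / (m * d50_eff))

        # Toy anorectal-wall DVH and placeholder parameters
        doses = np.linspace(1.0, 78.0, 78)
        volumes = np.exp(-((doses - 50.0) / 20.0) ** 2)
        print(lkb_ntcp(doses, volumes, d50=80.0, m=0.15, n=0.10, clinical_factor=1))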

  2. Relativistic calculation of nuclear magnetic shielding tensor using the regular approximation to the normalized elimination of the small component. III. Introduction of gauge-including atomic orbitals and a finite-size nuclear model.

    Science.gov (United States)

    Hamaya, S; Maeda, H; Funaki, M; Fukui, H

    2008-12-14

    The relativistic calculation of nuclear magnetic shielding tensors in hydrogen halides is performed using the second-order regular approximation to the normalized elimination of the small component (SORA-NESC) method with the inclusion of the perturbation terms from the metric operator. This computational scheme is denoted as SORA-Met. The SORA-Met calculation yields anisotropies, Δσ = σ(parallel) − σ(perpendicular), that are too small for the halogen nuclei in hydrogen halides. In the NESC theory, the small component of the spinor is coupled to the large component via the operator σ·πU/2c, in which π = p + A, U is a nonunitary transformation operator, and c ≈ 137.036 a.u. is the velocity of light. The operator U depends on the vector potential A (i.e., the magnetic perturbations in the system) at leading order c^(-2), and the magnetic perturbation terms of U contribute to the Hamiltonian and metric operators of the system at leading order c^(-4). It is shown that the small Δσ for halogen nuclei found in our previous studies is related to the neglect of the U^(0,1) perturbation operator of U, which is independent of the external magnetic field and of first order with respect to the nuclear magnetic dipole moment. Introduction of gauge-including atomic orbitals and a finite-size nuclear model is also discussed.

  3. Normalization of Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.

    2011-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
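
    As an illustration of the kind of normalization involved, one common convention (assumed here; the paper defines its own general parameters) scales the associated Legendre function P_lm so that it stays of order unity in recursions at high degree and order:

        from math import factorial, sqrt

        def alf_normalization(l, m):
            """Full normalization factor for the associated Legendre function P_lm
            (geodesy convention, an assumption here; conventions differ between authors)."""
            delta_m0 = 1 if m == 0 else 0
            return sqrt((2 - delta_m0) * (2 * l + 1) * factorial(l - m) / factorial(l + m))

        # Normalized ALFs stay of order unity, whereas unnormalized ones overflow
        # rapidly with increasing degree l and order m.
        print(alf_normalization(60, 30))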

  4. Predictive Models for Normal Fetal Cardiac Structures.

    Science.gov (United States)

    Krishnan, Anita; Pike, Jodi I; McCarter, Robert; Fulgium, Amanda L; Wilson, Emmanuel; Donofrio, Mary T; Sable, Craig A

    2016-12-01

    Clinicians rely on age- and size-specific measures of cardiac structures to diagnose cardiac disease. No universally accepted normative data exist for fetal cardiac structures, and most fetal cardiac centers do not use the same standards. The aim of this study was to derive predictive models for Z scores for 13 commonly evaluated fetal cardiac structures using a large heterogeneous population of fetuses without structural cardiac defects. The study used archived normal fetal echocardiograms in representative fetuses aged 12 to 39 weeks. Thirteen cardiac dimensions were remeasured by a blinded echocardiographer from digitally stored clips. Studies with inadequate imaging views were excluded. Regression models were developed to relate each dimension to estimated gestational age (EGA) by dates, biparietal diameter, femur length, and estimated fetal weight by the Hadlock formula. Dimension outcomes were transformed (e.g., using the logarithm or square root) as necessary to meet the normality assumption. Higher order terms, quadratic or cubic, were added as needed to improve model fit. Information criteria and adjusted R^2 values were used to guide final model selection. Each Z-score equation is based on measurements derived from 296 to 414 unique fetuses. EGA yielded the best predictive model for the majority of dimensions; adjusted R^2 values ranged from 0.72 to 0.893. However, each of the other highly correlated (r > 0.94) biometric parameters was an acceptable surrogate for EGA. In most cases, the best fitting model included squared and cubic terms to introduce curvilinearity. For each dimension, models based on EGA provided the best fit for determining normal measurements of fetal cardiac structures. Nevertheless, other biometric parameters, including femur length, biparietal diameter, and estimated fetal weight provided results that were nearly as good. Comprehensive Z-score results are available on the basis of highly predictive models derived from gestational
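
    A hedged sketch of the general approach described: a polynomial regression of a cardiac dimension on gestational age, with the Z score taken relative to the predicted mean and the residual spread. The data values, the choice of a cubic fit, and the constant residual standard deviation are illustrative assumptions, not the study's actual equations.

        import numpy as np

        # Hypothetical training data: estimated gestational age (weeks) and a
        # measured cardiac dimension (mm) for fetuses without structural defects.
        ega = np.array([14, 18, 22, 26, 30, 34, 38], dtype=float)
        dim = np.array([2.1, 3.4, 4.9, 6.3, 7.8, 9.0, 10.1])

        # Cubic model for the mean, since the abstract notes higher-order terms
        # were often needed to capture curvilinearity.
        coef = np.polyfit(ega, dim, deg=3)
        resid_sd = np.std(dim - np.polyval(coef, ega), ddof=coef.size)

        def z_score(measured, ega_weeks):
            """Z score of a new measurement relative to the fitted normal model."""
            return (measured - np.polyval(coef, ega_weeks)) / resid_sd

        print(z_score(5.6, 24.0))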

  5. A Skew-Normal Mixture Regression Model

    Science.gov (United States)

    Liu, Min; Lin, Tsung-I

    2014-01-01

    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  6. Cylindrical shell under impact load including transverse shear and normal stress

    International Nuclear Information System (INIS)

    Shakeri, M.; Eslami, M.R.; Ghassaa, M.; Ohadi, A.R.

    1993-01-01

    The general governing equations of shells of revolution under shock loads are reduced to equations describing the elastic behavior of a cylindrical shell under axisymmetric impact load. The effects of lateral normal stress, transverse shear, and rotary inertia are included, and the equations are solved by the Galerkin finite element method. The results are compared with the previous works of the authors. (author)

  7. SEEPAGE MODEL FOR PA INCLUDING DRIFT COLLAPSE

    International Nuclear Information System (INIS)

    C. Tsang

    2004-01-01

    The purpose of this report is to document the predictions and analyses performed using the seepage model for performance assessment (SMPA) for both the Topopah Spring middle nonlithophysal (Tptpmn) and lower lithophysal (Tptpll) lithostratigraphic units at Yucca Mountain, Nevada. Look-up tables of seepage flow rates into a drift (and their uncertainty) are generated by performing numerical simulations with the seepage model for many combinations of the three most important seepage-relevant parameters: the fracture permeability, the capillary-strength parameter 1/a, and the percolation flux. The percolation flux values chosen take into account flow focusing effects, which are evaluated based on a flow-focusing model. Moreover, multiple realizations of the underlying stochastic permeability field are conducted. Selected sensitivity studies are performed, including the effects of an alternative drift geometry representing a partially collapsed drift from an independent drift-degradation analysis (BSC 2004 [DIRS 166107]). The intended purpose of the seepage model is to provide results of drift-scale seepage rates under a series of parameters and scenarios in support of the Total System Performance Assessment for License Application (TSPA-LA). The SMPA is intended for the evaluation of drift-scale seepage rates under the full range of parameter values for three parameters found to be key (fracture permeability, the van Genuchten 1/a parameter, and percolation flux) and drift degradation shape scenarios in support of the TSPA-LA during the period of compliance for postclosure performance [Technical Work Plan for: Performance Assessment Unsaturated Zone (BSC 2002 [DIRS 160819], Section I-4-2-1)]. The flow-focusing model in the Topopah Spring welded (TSw) unit is intended to provide an estimate of flow focusing factors (FFFs) that (1) bridge the gap between the mountain-scale and drift-scale models, and (2) account for variability in local percolation flux due to

  8. Grand unified models including extra Z bosons

    International Nuclear Information System (INIS)

    Li Tiezhong

    1989-01-01

    The grand unified theories (GUTs) based on simple Lie groups that include extra Z bosons are discussed. Under the authors' hypothesis, only the SU(5+m), SO(6+4n) and E6 groups are allowed. A general discussion of SU(5+m) is given, and then SU(6) and SU(7) are considered. In SU(6) the 15+6*+6* fermion representations are used, which differ from others in fermion content, Yukawa coupling and breaking scales. A concept of clans of particles, which are not families, is suggested. These clans consist of extra Z bosons and the corresponding fermions at that scale. All of the fermions in the clans are down quarks, except for the standard-model clan, which consists of Z bosons and 15 fermions; therefore, the spectrum of the hadrons composed of these down quarks differs from that of the hadrons known at present

  9. Normal Forms for Reduced Stochastic Climate Models

    Science.gov (United States)

    Franzke, C.; Majda, A.; Crommelin, D.

    2009-04-01

    The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It will be shown that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. This normal form should prove useful for developing systematic regression fitting strategies for stochastic models of climate data. The validity of the one- and two-dimensional normal forms will be presented. Also the analytical PDF form for one-dimensional reduced models will be derived. This PDF can exhibit power-law decay only over a limited range and its ultimate decay is determined by the cubic damping. This cubic damping produces a Gaussian tail.
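
    A minimal sketch of the kind of one-dimensional normal form described, with cubic damping and correlated additive and multiplicative (CAM) noise, integrated by Euler-Maruyama. The drift form dx = (F + a x − b x^3) dt + (σ_A + σ_M x) dW and all coefficient values are illustrative assumptions, not fitted to any climate data.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative coefficients only
        F, a, b = 0.0, 0.2, 1.0         # forcing, linear term, cubic damping
        sig_add, sig_mult = 0.3, 0.4    # additive and multiplicative (CAM) noise amplitudes

        dt, n_steps = 1e-3, 50_000
        x = np.empty(n_steps)
        x[0] = 0.0
        for k in range(n_steps - 1):
            dw = rng.normal(0.0, np.sqrt(dt))
            drift = F + a * x[k] - b * x[k] ** 3
            noise = sig_add + sig_mult * x[k]
            x[k + 1] = x[k] + drift * dt + noise * dw

        # The cubic damping controls the far tails of the resulting PDF, while the
        # CAM noise can produce power-law-like behaviour over a limited range.
        print(x.mean(), x.std())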

  10. An Integrated Biochemistry Laboratory, Including Molecular Modeling

    Science.gov (United States)

    Wolfson, Adele J.; Hall, Mona L.; Branham, Thomas R.

    1996-11-01

    ) experience with methods of protein purification; (iii) incorporation of appropriate controls into experiments; (iv) use of basic statistics in data analysis; (v) writing papers and grant proposals in accepted scientific style; (vi) peer review; (vii) oral presentation of results and proposals; and (viii) introduction to molecular modeling. Figure 1 illustrates the modular nature of the lab curriculum. Elements from each of the exercises can be separated and treated as stand-alone exercises, or combined into short or long projects. We have been able to offer the opportunity to use sophisticated molecular modeling in the final module through funding from an NSF-ILI grant. However, many of the benefits of the research proposal can be achieved with other computer programs, or even by literature survey alone.

    Figure 1. Design of project-based biochemistry laboratory. Modules (projects, or portions of projects) are indicated as boxes. Each of these can be treated independently, or used as part of a larger project. Solid lines indicate some suggested paths from one module to the next.

    The skills and knowledge required for protein purification and design are developed in three units: (i) an introduction to critical assays needed to monitor degree of purification, including an evaluation of assay parameters; (ii) partial purification by ion-exchange techniques; and (iii) preparation of a grant proposal on protein design by mutagenesis. Brief descriptions of each of these units follow, with experimental details of each project at the end of this paper. Assays for Lysozyme Activity and Protein Concentration (4 weeks): The assays mastered during the first unit are a necessary tool for determining the purity of the enzyme during the second unit on purification by ion exchange. These assays allow an introduction to the concept of specific activity (units of enzyme activity per milligram of total protein) as a measure of purity. In this first sequence, students learn a turbidimetric assay

  11. Seepage Model for PA Including Drift Collapse

    International Nuclear Information System (INIS)

    Li, G.; Tsang, C.

    2000-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the predictions and analysis performed using the Seepage Model for Performance Assessment (PA) and the Disturbed Drift Seepage Submodel for both the Topopah Spring middle nonlithophysal and lower lithophysal lithostratigraphic units at Yucca Mountain. These results will be used by PA to develop the probability distribution of water seepage into waste-emplacement drifts at Yucca Mountain, Nevada, as part of the evaluation of the long term performance of the potential repository. This AMR is in accordance with the ''Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report'' (CRWMS M&O 2000 [153447]). This purpose is accomplished by performing numerical simulations with stochastic representations of hydrological properties, using the Seepage Model for PA, and evaluating the effects of an alternative drift geometry representing a partially collapsed drift using the Disturbed Drift Seepage Submodel. Seepage of water into waste-emplacement drifts is considered one of the principal factors having the greatest impact on the long-term safety of the repository system (CRWMS M&O 2000 [153225], Table 4-1). This AMR supports the analysis and simulation that are used by PA to develop the probability distribution of water seepage into drifts, and is therefore a model of primary (Level 1) importance (AP-3.15Q, ''Managing Technical Product Inputs''). The intended purpose of the Seepage Model for PA is to support: (1) PA; (2) Abstraction of Drift-Scale Seepage; and (3) Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR). Seepage into drifts is evaluated by applying numerical models with stochastic representations of hydrological properties and performing flow simulations with multiple realizations of the permeability field around the drift. The Seepage Model for PA uses the distribution of permeabilities derived from air injection testing in niches and in the cross drift to

  12. Seepage Model for PA Including Drift Collapse

    Energy Technology Data Exchange (ETDEWEB)

    G. Li; C. Tsang

    2000-12-20

    The purpose of this Analysis/Model Report (AMR) is to document the predictions and analysis performed using the Seepage Model for Performance Assessment (PA) and the Disturbed Drift Seepage Submodel for both the Topopah Spring middle nonlithophysal and lower lithophysal lithostratigraphic units at Yucca Mountain. These results will be used by PA to develop the probability distribution of water seepage into waste-emplacement drifts at Yucca Mountain, Nevada, as part of the evaluation of the long term performance of the potential repository. This AMR is in accordance with the ''Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report'' (CRWMS M&O 2000 [153447]). This purpose is accomplished by performing numerical simulations with stochastic representations of hydrological properties, using the Seepage Model for PA, and evaluating the effects of an alternative drift geometry representing a partially collapsed drift using the Disturbed Drift Seepage Submodel. Seepage of water into waste-emplacement drifts is considered one of the principal factors having the greatest impact on the long-term safety of the repository system (CRWMS M&O 2000 [153225], Table 4-1). This AMR supports the analysis and simulation that are used by PA to develop the probability distribution of water seepage into drifts, and is therefore a model of primary (Level 1) importance (AP-3.15Q, ''Managing Technical Product Inputs''). The intended purpose of the Seepage Model for PA is to support: (1) PA; (2) Abstraction of Drift-Scale Seepage; and (3) Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR). Seepage into drifts is evaluated by applying numerical models with stochastic representations of hydrological properties and performing flow simulations with multiple realizations of the permeability field around the drift. The Seepage Model for PA uses the distribution of permeabilities derived from air injection testing in

  13. Enhanced battery model including temperature effects

    NARCIS (Netherlands)

    Rosca, B.; Wilkins, S.

    2013-01-01

    Within electric and hybrid vehicles, batteries are used to provide/buffer the energy required for driving. However, battery performance varies throughout the temperature range specific to automotive applications, and as such, models that describe this behaviour are required. This paper presents a

  14. Modeling heart rate variability including the effect of sleep stages

    Science.gov (United States)

    Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

    2016-02-01

    We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking into account the sleep architecture is crucial for modeling the human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the initial starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard HRV linear analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that, in comparison with real data, the HRV signals obtained from our model have very similar properties, in particular including the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow modeling of heart rate variability in sleep disorders. This possibility is briefly discussed.

  15. Modeling and simulation of normal and hemiparetic gait

    Science.gov (United States)

    Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni

    2015-09-01

    Gait is the collective term for the two types of bipedal locomotion, walking and running. This paper is focused on walking. The analysis of human gait is of interest to many different disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking by using the Lagrange method. Dissipative forces, modeled through a Rayleigh dissipation function to account for the effect on the tissues during gait, are included. Depending on the value of the factor present in the Rayleigh dissipation function, both normal and pathological gait can be simulated. First we apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data of an adult person are used in the simulation; anthropometric data for children can also be used, provided existing anthropometric tables are consulted. Validation of these models includes simulations of passive dynamic gait on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few of them have focused on abnormal, especially hemiparetic, gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.
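
    For reference, the Euler-Lagrange equations with a Rayleigh dissipation function take the standard form (generic notation, not the authors' specific segment model):

        \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = Q_i - \frac{\partial R}{\partial \dot{q}_i}, \qquad R = \tfrac{1}{2}\sum_i c_i \dot{q}_i^{2},

    where L = T − V is the Lagrangian of the linked segments, Q_i are the generalized joint forces, and the dissipation coefficients c_i are the factors whose values switch the simulation between normal and hemiparetic gait.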

  16. Model-based normalization for iterative 3D PET image

    International Nuclear Information System (INIS)

    Bai, B.; Li, Q.; Asma, E.; Leahy, R.M.; Holdsworth, C.H.; Chatziioannou, A.; Tai, Y.C.

    2002-01-01

    We describe a method for normalization in 3D PET for use with maximum a posteriori (MAP) or other iterative model-based image reconstruction methods. This approach is an extension of previous factored normalization methods in which we include separate factors for detector sensitivity, geometric response, block effects and deadtime. Since our MAP reconstruction approach already models some of the geometric factors in the forward projection, the normalization factors must be modified to account only for effects not already included in the model. We describe a maximum likelihood approach to joint estimation of the count-rate independent normalization factors, which we apply to data from a uniform cylindrical source. We then compute block-wise and block-profile deadtime correction factors using singles and coincidence data, respectively, from a multiframe cylindrical source. We have applied this method for reconstruction of data from the Concorde microPET P4 scanner. Quantitative evaluation of this method using well-counter measurements of activity in a multicompartment phantom compares favourably with normalization based directly on cylindrical source measurements. (author)
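
    A schematic of the factored-normalization idea for a single line of response (LOR), in which the overall coefficient is a product of separately estimated factors. The factor names follow the abstract, but the code structure and numbers are illustrative assumptions only.

        def lor_normalization(eps_i, eps_j, geom_ij, block_ij, deadtime_ij):
            """Normalization coefficient for the line of response between detectors
            i and j, built as a product of separately estimated factors (detector
            sensitivity, geometric response, block effects, deadtime)."""
            return eps_i * eps_j * geom_ij * block_ij * deadtime_ij

        def normalized_counts(prompts, norm_coeff):
            """Counts corrected by the normalization coefficient; any geometric
            factor already modeled in the MAP forward projector must be left out
            of norm_coeff to avoid double-counting, as the abstract notes."""
            return prompts / norm_coeff

        print(normalized_counts(1250.0, lor_normalization(0.97, 1.02, 0.95, 1.01, 0.98)))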

  17. Normalization.

    Science.gov (United States)

    Cuevas, Eduardo J.

    1997-01-01

    Discusses cornerstone of Montessori theory, normalization, which asserts that if a child is placed in an optimum prepared environment where inner impulses match external opportunities, the undeviated self emerges, a being totally in harmony with its surroundings. Makes distinctions regarding normalization, normalized, and normality, indicating how…

  18. Olkiluoto surface hydrological modelling: Update 2012 including salt transport modelling

    International Nuclear Information System (INIS)

    Karvonen, T.

    2013-11-01

    Posiva Oy is responsible for implementing a final disposal program for spent nuclear fuel of its owners Teollisuuden Voima Oyj and Fortum Power and Heat Oy. The spent nuclear fuel is planned to be disposed of at a depth of about 400-450 meters in the crystalline bedrock at the Olkiluoto site. Leakages located at or close to the spent fuel repository may give rise to the upconing of deep highly saline groundwater, and this is a concern with regard to the performance of the tunnel backfill material after the closure of the tunnels. Therefore a salt transport sub-model was added to the Olkiluoto surface hydrological model (SHYD). The other improvements include an update of the particle tracking algorithm and the possibility to estimate the influence of open drillholes in a case where overpressure in inflatable packers decreases, causing a hydraulic short-circuit between hydrogeological zones HZ19 and HZ20 along the drillhole. Four new hydrogeological zones, HZ056, HZ146, BFZ100 and HZ039, were added to the model. In addition, zones HZ20A and HZ20B intersect with each other in the new structure model, which influences salinity upconing caused by leakages in shafts. The aim of the modelling of the long-term influence of ONKALO, the shafts and the repository tunnels is to provide computational results that can be used to suggest limits for allowed leakages. The model input data included all the existing leakages into ONKALO (35-38 l/min) and shafts in the present day conditions. The influence of shafts was computed using eight different values for total shaft leakage: 5, 11, 20, 30, 40, 50, 60 and 70 l/min. The selection of the leakage criteria for shafts was influenced by the fact that upconing of saline water increases TDS-values close to the repository areas although HZ20B does not intersect any deposition tunnels. The total limit for all leakages was suggested to be 120 l/min. The limit for HZ20 zones was proposed to be 40 l/min: about 5 l/min the present day leakages to access tunnel, 25 l/min from

  19. Dynamic hysteresis modeling including skin effect using diffusion equation model

    Energy Technology Data Exchange (ETDEWEB)

    Hamada, Souad, E-mail: souadhamada@yahoo.fr [LSP-IE: Research Laboratory, Electrical Engineering Department, University of Batna, 05000 Batna (Algeria); Louai, Fatima Zohra, E-mail: fz_louai@yahoo.com [LSP-IE: Research Laboratory, Electrical Engineering Department, University of Batna, 05000 Batna (Algeria); Nait-Said, Nasreddine, E-mail: n_naitsaid@yahoo.com [LSP-IE: Research Laboratory, Electrical Engineering Department, University of Batna, 05000 Batna (Algeria); Benabou, Abdelkader, E-mail: Abdelkader.Benabou@univ-lille1.fr [L2EP, Université de Lille1, 59655 Villeneuve d’Ascq (France)

    2016-07-15

    An improved dynamic hysteresis model is proposed for the prediction of the hysteresis loop of electrical steel up to medium frequencies, taking into account the skin effect. In previous works, the analytical solution of the diffusion equation for low frequency (DELF) was coupled with the inverse static Jiles-Atherton (JA) model in order to represent the hysteresis behavior of a lamination. In the present paper, this approach is improved to ensure the reproducibility of measured hysteresis loops at medium frequency. The results of the simulation are compared with the experimental ones. Selected results for the frequencies 50 Hz, 100 Hz, 200 Hz and 400 Hz are presented and discussed.

  20. Stochastic modelling of two-phase flows including phase change

    International Nuclear Information System (INIS)

    Hurisse, O.; Minier, J.P.

    2011-01-01

    Stochastic modelling has already been developed and applied for single-phase flows and incompressible two-phase flows. In this article, we propose an extension of this modelling approach to two-phase flows including phase change (e.g. for steam-water flows). Two aspects are emphasised: a stochastic model accounting for phase transition and a modelling constraint which arises from volume conservation. To illustrate the whole approach, some remarks are eventually proposed for two-fluid models. (authors)

  1. Modeling Electric Double-Layers Including Chemical Reaction Effects

    DEFF Research Database (Denmark)

    Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.

    2014-01-01

    A physicochemical and numerical model for the transient formation of an electric double-layer between an electrolyte and a chemically-active flat surface is presented, based on a finite element integration of the nonlinear Nernst-Planck-Poisson model including chemical reactions. The model works...

  2. Normal compliance contact models with finite interpenetration

    Czech Academy of Sciences Publication Activity Database

    Eck, Ch.; Jarušek, Jiří; Stará, J.

    2013-01-01

    Roč. 208, č. 1 (2013), s. 25-57 ISSN 0003-9527 R&D Projects: GA AV ČR IAA100750802; GA ČR(CZ) GAP201/12/0671 Institutional support: RVO:67985840 Keywords : compliance models * approximation Subject RIV: BA - General Mathematics Impact factor: 2.022, year: 2013 http://link.springer.com/article/10.1007%2Fs00205-012-0602-8#

  3. Bayesian Option Pricing Using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars Peter

    While stochastic volatility models improve on the option pricing error when compared to the Black-Scholes-Merton model, mispricings remain. This paper uses mixed normal heteroskedasticity models to price options. Our model allows for significant negative skewness and time varying higher order mom...... to a benchmark model in terms of dollar losses and the ability to explain the smirk in implied volatilities....

  4. ON SKEW-NORMAL MODEL FOR ECONOMICALLY ACTIVE POPULATION

    Directory of Open Access Journals (Sweden)

    OLOSUNDE AKINLOLU A

    2011-04-01

    The literature on skew-symmetric distributions has grown rapidly in recent years, but at the moment there is no publication on their application to the description of economically active population data with this type of probability model. In this paper, we provide an extension of the skew-normal distribution, which also belongs to the skewed class of normal distributions but has an additional shape parameter δ. Some properties of this distribution are presented and, finally, we consider fitting it to economically active population data. The model exhibited better behaviour when compared to the normal and skew-normal distributions.

  5. Parton recombination model including resonance production. RL-78-040

    International Nuclear Information System (INIS)

    Roberts, R.G.; Hwa, R.C.; Matsuda, S.

    1978-05-01

    Possible effects of resonance production on the meson inclusive distribution in the fragmentation region are investigated in the framework of the parton recombination model. From a detailed study of the data on vector-meson production, a reliable ratio of the vector-to-pseudoscalar rates is determined. Then the influence of the decay of the vector mesons on the pseudoscalar spectrum is examined, and the effect is found to be no more than 25% for x > 0.5. The normalizations of the non-strange antiquark distributions are still higher than those in a quiescent proton. The agreement between the calculated results and the data remains very good. 36 references

  6. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  7. Modified Normal Demand Distributions in (R,S)-Inventory Models

    NARCIS (Netherlands)

    Strijbosch, L.W.G.; Moors, J.J.A.

    2003-01-01

    To model demand, the normal distribution is by far the most popular; the disadvantage that it takes negative values is taken for granted. This paper proposes two modifications of the normal distribution, both taking non-negative values only. Safety factors and order-up-to-levels for the familiar (R,

  8. Progressive IRP Models for Power Resources Including EPP

    Directory of Open Access Journals (Sweden)

    Yiping Zhu

    2017-01-01

    In view of optimizing regional power supply and demand, the paper develops planning and scheduling of supply-side and demand-side resources, including energy efficiency power plants (EPP), to achieve benefit, cost, and environmental targets. In order to highlight the characteristics of different supply and demand resources under economic, environmental, and carbon constraints, three planning models with progressive constraints are constructed. Results of the three models on the same example show that their best solutions differ. The planning model including EPP has obvious advantages when pollutant and carbon emission constraints are considered, which confirms the low cost and low emissions of EPP. The construction of progressive IRP models for power resources considering EPP has a certain reference value for guiding the planning and layout of EPP within other power resources and achieving cost and environmental objectives.

  9. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and has sufficient ability to fit software failure data. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. A numerical experiment is devoted to investigating the fitting ability of the SRGMs with normal distribution through 16 types of failure time data collected in real software projects
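
    One common way to write such a model (an assumption about the exact form, since the abstract does not state it) is a nonhomogeneous Poisson process whose mean value function is the expected total number of faults times a normal CDF of the failure time:

        import numpy as np
        from scipy.stats import norm

        def mean_value_function(t, omega, mu, sigma):
            """Expected cumulative number of detected faults by time t for an
            NHPP-type SRGM with a normal failure-time distribution (illustrative form)."""
            return omega * norm.cdf((t - mu) / sigma)

        # Illustrative parameters: omega = expected total faults, mu/sigma = normal parameters
        t = np.linspace(0, 100, 5)
        print(mean_value_function(t, omega=120.0, mu=40.0, sigma=15.0))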

  10. A hydrodynamic model for granular material flows including segregation effects

    Directory of Open Access Journals (Sweden)

    Gilberg Dominik

    2017-01-01

    The simulation of granular flows including segregation effects in large industrial processes using particle methods is accurate, but very time-consuming. To overcome the long computation times a macroscopic model is a natural choice. Therefore, we couple a mixture theory based segregation model to a hydrodynamic model of Navier-Stokes-type, describing the flow behavior of the granular material. The granular flow model is a hybrid model derived from kinetic theory and a soil mechanical approach to cover the regime of fast dilute flow, as well as slow dense flow, where the density of the granular material is close to the maximum packing density. Originally, the segregation model has been formulated by Thornton and Gray for idealized avalanches. It is modified and adapted to be in the preferred form for the coupling. In the final coupled model the segregation process depends on the local state of the granular system. On the other hand, the granular system changes as differently mixed regions of the granular material differ i.e. in the packing density. For the modeling process the focus lies on dry granular material flows of two particle types differing only in size but can be easily extended to arbitrary granular mixtures of different particle size and density. To solve the coupled system a finite volume approach is used. To test the model the rotational mixing of small and large particles in a tumbler is simulated.

  11. A hydrodynamic model for granular material flows including segregation effects

    Science.gov (United States)

    Gilberg, Dominik; Klar, Axel; Steiner, Konrad

    2017-06-01

    The simulation of granular flows including segregation effects in large industrial processes using particle methods is accurate, but very time-consuming. To overcome the long computation times a macroscopic model is a natural choice. Therefore, we couple a mixture theory based segregation model to a hydrodynamic model of Navier-Stokes-type, describing the flow behavior of the granular material. The granular flow model is a hybrid model derived from kinetic theory and a soil mechanical approach to cover the regime of fast dilute flow, as well as slow dense flow, where the density of the granular material is close to the maximum packing density. Originally, the segregation model has been formulated by Thornton and Gray for idealized avalanches. It is modified and adapted to be in the preferred form for the coupling. In the final coupled model the segregation process depends on the local state of the granular system. On the other hand, the granular system changes as differently mixed regions of the granular material differ i.e. in the packing density. For the modeling process the focus lies on dry granular material flows of two particle types differing only in size but can be easily extended to arbitrary granular mixtures of different particle size and density. To solve the coupled system a finite volume approach is used. To test the model the rotational mixing of small and large particles in a tumbler is simulated.

  12. Simple suggestions for including vertical physics in oil spill models

    International Nuclear Information System (INIS)

    D'Asaro, Eric; University of Washington, Seatle, WA

    2001-01-01

    Current models of oil spills include no vertical physics. They neglect the effect of vertical water motions on the transport and concentration of floating oil. Some simple ways to introduce vertical physics are suggested here. The major suggestion is to routinely measure the density stratification of the upper ocean during oil spills in order to develop a database on the effect of stratification. (Author)

  13. Local stem cell depletion model for normal tissue damage

    International Nuclear Information System (INIS)

    Yaes, R.J.; Keland, A.

    1987-01-01

    The hypothesis that radiation causes normal tissue damage by completely depleting local regions of tissue of viable stem cells leads to a simple mathematical model for such damage. In organs like skin and spinal cord where destruction of a small volume of tissue leads to a clinically apparent complication, the complication probability is expressed as a function of dose, volume and stem cell number by a simple triple negative exponential function analogous to the double exponential function of Munro and Gilbert for tumor control. The steep dose response curves for radiation myelitis that are obtained with our model are compared with the experimental data for radiation myelitis in laboratory rats. The model can be generalized to include other types of organs, high LET radiation, fractionated courses of radiation, and cases where an organ with a heterogeneous stem cell population receives an inhomogeneous dose of radiation. In principle it would thus be possible to determine the probability of tumor control and of damage to any organ within the radiation field if the dose distribution in three dimensional space within a patient is known
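
    A hedged reconstruction of how such a triple negative exponential can arise from Poisson statistics (the grouping of the organ into M independent subvolumes of N stem cells each, and the single-cell survival form e^{-D/D_0}, are illustrative assumptions; the paper's exact parametrization may differ). If a single stem cell survives dose D with probability e^{-D/D_0}, a subvolume of N cells is completely depleted with probability approximately exp(-N e^{-D/D_0}), and a complication occurs if at least one of the M subvolumes in the irradiated volume is depleted:

        P_{\mathrm{compl}}(D) \;\approx\; 1 - \exp\!\left[-M \exp\!\left(-N\, e^{-D/D_0}\right)\right],

    with M proportional to the irradiated volume, giving the steep dose-response curves and the volume dependence described in the abstract.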

  14. Modeling pore corrosion in normally open gold-plated copper connectors.

    Energy Technology Data Exchange (ETDEWEB)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien; Enos, David George; Serna, Lysle M.; Sorensen, Neil Robert

    2008-09-01

    The goal of this study is to model the electrical response of gold-plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  15. Normal Type-2 Fuzzy Geometric Curve Modeling: A Literature Review

    Science.gov (United States)

    Adesah, R. S.; Zakaria, R.

    2017-09-01

    Since 2001, Type-2 Fuzzy Set Theory (T2FST) has been widely used for defining uncertain data points, rather than the traditional (type-1) fuzzy set theory. Recently, T2FST has been used in many fields due to its ability to handle complex uncertainty data. In this paper, a review of normal type-2 fuzzy geometric curve modeling methods and techniques is presented. In particular, there have been recent applications of Normal Type-2 Fuzzy Set Theory (NT2FST) in geometric modeling, where it has helped improve results over type-1 fuzzy sets. In this paper, a concise and representative review of the processes in normal type-2 fuzzy geometric curve modeling, such as the fuzzification, is presented.

  16. Exclusive queueing model including the choice of service windows

    Science.gov (United States)

    Tanaka, Masahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro

    2018-01-01

    In a queueing system involving multiple service windows, choice behavior is a significant concern. This paper incorporates the choice of service windows into a queueing model with a floor represented by discrete cells. We contrived a logit-based choice algorithm for agents considering the numbers of agents and the distances to all service windows. Simulations were conducted with various parameters of agent choice preference for these two elements and for different floor configurations, including the floor length and the number of service windows. We investigated the model from the viewpoint of transit times and entrance block rates. The influences of the parameters on these factors were surveyed in detail and we determined that there are optimum floor lengths that minimize the transit times. In addition, we observed that the transit times were determined almost entirely by the entrance block rates. The results of the presented model are relevant to understanding queueing systems including the choice of service windows and can be employed to optimize facility design and floor management.
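
    A small sketch of a logit-based window choice of the kind described, where each agent weighs the queue length at, and distance to, every window. The utility form and the preference parameters beta_n and beta_d are illustrative assumptions, not the paper's calibrated values.

        import numpy as np

        def choice_probabilities(queue_lengths, distances, beta_n, beta_d):
            """Multinomial-logit probabilities of choosing each service window."""
            utility = -beta_n * np.asarray(queue_lengths, float) - beta_d * np.asarray(distances, float)
            expu = np.exp(utility - utility.max())   # subtract max for numerical stability
            return expu / expu.sum()

        # Example: three windows with different congestion and distance
        print(choice_probabilities([4, 1, 2], [3.0, 8.0, 5.0], beta_n=0.8, beta_d=0.3))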

  17. Modeling normal-tissue complication probabilities: where are we now?

    International Nuclear Information System (INIS)

    Bentzen, Soeren M.

    1995-01-01

    Radiation-induced normal-tissue morbidity is the ultimate limitation to the obtainable loco-regional tumor control in radiation oncology, and throughout the 100-year history of this specialty an improved understanding of the radiobiology of normal tissues has been a high priority. To this end mathematical modeling of normal-tissue complication probabilities (NTCP) has been an important tool. This lecture will present a status of four current research areas: 1) The use of the linear-quadratic (LQ) model for the description of dose fractionation and dose-rate effects. The value of the LQ model, at least in the range of dose per fraction from 1-5 Gy, is relatively well-established from clinical studies. Yet, as soon as we turn to prediction, the lack of α/β estimates for many clinical endpoints and the relatively poor precision of the available estimates pose serious limitations. 2) The volume effect for NTCP. Despite considerable modeling efforts, the current models seem to be much too crude when compared with our biological understanding of the clinical volume effect. 3) NTCP models of proliferative organization. Progress in cell kinetics and in the understanding of homeostatic feedback mechanisms in normal tissues has renewed the interest in tissue structure models of the Wheldon-Michalowski-Kirk type. This may in turn improve our ability to predict latent times of normal-tissue reactions and re-irradiation tolerance after long time intervals. 4) Patient-to-patient variability in the response to radiotherapy. The evidence is growing that a number of both extrinsic and intrinsic factors affect the NTCP of individual patients. This will shortly be reviewed. As our biological knowledge is rapidly growing in all of these areas, the current generation of models may have to be considerably refined.
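
    For the first research area, the linear-quadratic description of fractionation effects is usually summarized by the standard isoeffect relations (quoted here for reference; this is the generic LQ form, not a result derived in the lecture):

        E = n\,(\alpha d + \beta d^{2}), \qquad \mathrm{BED} = n d \left(1 + \frac{d}{\alpha/\beta}\right),

    where n is the number of fractions, d is the dose per fraction, and the α/β ratio is the endpoint-specific parameter whose scarcity and imprecision for many clinical endpoints is the limitation noted above.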

  18. Computer modeling the boron compound factor in normal brain tissue

    International Nuclear Information System (INIS)

    Gavin, P.R.; Huiskamp, R.; Wheeler, F.J.; Griebenow, M.L.

    1993-01-01

    The macroscopic distribution of borocaptate sodium (Na2B12H11SH or BSH) in normal tissues has been determined and can be accurately predicted from the blood concentration. The compound para-borono-phenylalanine (p-BPA) has also been studied in dogs and normal tissue distribution has been determined. The total physical dose required to reach a biological isoeffect appears to increase directly as the proportion of boron capture dose increases. This effect, together with knowledge of the macrodistribution, led to estimates of the influence of the microdistribution of the BSH compound. This paper reports a computer model that was used to predict the compound factor for BSH and p-BPA and, hence, the equivalent radiation in normal tissues. The compound factor would need to be calculated for other compounds with different distributions. This information is needed to design appropriate normal tissue tolerance studies for different organ systems and/or different boron compounds

  19. Modelling of tension stiffening for normal and high strength concrete

    DEFF Research Database (Denmark)

    Christiansen, Morten Bo; Nielsen, Mogens Peter

    1998-01-01

    form the model is extended to apply to biaxial stress fields as well. To determine the biaxial stress field, the theorem of minimum complementary elastic energy is used. The theory has been compared with tests on rods, disks, and beams of both normal and high strength concrete, and very good results...

  20. Modelling growth curves of Nigerian indigenous normal feather ...

    African Journals Online (AJOL)

    This study was conducted to predict the growth curve parameters using Bayesian Gompertz and logistic models and also to compare the two growth functions in describing body weight changes across age in Nigerian indigenous normal-feather chickens. Each chick was wing-tagged at day old and body weights were ...
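
    For reference, the two growth functions being compared are commonly written as follows (a generic three-parameter form, assumed here since the abstract is truncated; the parameter values are placeholders, not estimates from the study):

        import numpy as np

        def gompertz(t, A, B, k):
            """Gompertz growth curve: asymptotic weight A, shape B, rate k."""
            return A * np.exp(-B * np.exp(-k * t))

        def logistic(t, A, B, k):
            """Logistic growth curve with the same three-parameter structure."""
            return A / (1.0 + B * np.exp(-k * t))

        age_weeks = np.arange(0, 21)
        print(gompertz(age_weeks, A=1800.0, B=4.0, k=0.18)[-1],
              logistic(age_weeks, A=1800.0, B=25.0, k=0.35)[-1])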

  1. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically...

  2. Pyramidal Normalization Filter: Visual Model With Applications To Image Understanding

    Science.gov (United States)

    Schenker, P. S.; Unangst, D. R.; Knaak, T. F.; Huntley, D. T.; Patterson, W. R.

    1982-12-01

    This paper introduces a new nonlinear filter model which has applications in low-level machine vision. We show that this model, which we designate the normalization filter, is the basis for non-directional, multiple spatial frequency channel resolved detection of image edge structure. We show that the results obtained in this procedure are in close correspondence to the zero-crossing sets of the Marr-Hildreth edge detector [6]. By comparison to their model, ours has the additional feature of constant-contrast thresholding, viz., it is spatially brightness adaptive. We describe a highly efficient and flexible realization of the normalization filter based on Burt's algorithm for pyramidal filtering [18]. We present illustrative experimental results that we have obtained with a computer implementation of this filter design.

  3. Log-Normal Turbulence Dissipation in Global Ocean Models

    Science.gov (United States)

    Pearson, Brodie; Fox-Kemper, Baylor

    2018-03-01

    Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality, robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
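
    A small sketch of the kind of check implied: fit a log-normal to sampled dissipation values and inspect the higher moments of log ε for departures from normality. The synthetic sample and parameter values are purely illustrative assumptions.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Synthetic "dissipation" sample, exactly log-normal by construction
        eps = rng.lognormal(mean=-18.0, sigma=2.0, size=100_000)

        log_eps = np.log(eps)
        mu, sigma = log_eps.mean(), log_eps.std()
        skew = stats.skew(log_eps)        # ~0 for a perfect log-normal
        kurt = stats.kurtosis(log_eps)    # excess kurtosis, ~0 for a perfect log-normal

        # A few high-dissipation values dominate the integrated budget:
        share_top_1pct = np.sort(eps)[-len(eps) // 100:].sum() / eps.sum()
        print(mu, sigma, skew, kurt, share_top_1pct)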

  4. Modeling of the Direct Current Generator Including the Magnetic Saturation and Temperature Effects

    Directory of Open Access Journals (Sweden)

    Alfonso J. Mercado-Samur

    2013-11-01

    In this paper, the inclusion of the temperature effect on the field resistance in the direct current generator model DC1A, which is valid for stability studies, is proposed. First, the linear generator model is presented; then the effects of magnetic saturation and of the change in the resistance value due to the temperature produced by the field current are included. A comparison of experimental results and model simulations is used to validate the model. A direct current generator model that is a better representation of the generator is obtained. Visual comparison between simulations and experimental results shows the success of the proposed model, because it presents the lowest error of the compared models. The accuracy of the proposed model is observed via a Modified Normalized Sum of Squared Errors index equal to 3.8979%.
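
    A minimal sketch of the temperature correction being added to the field circuit (the linear copper-resistivity law and the coefficient value are standard textbook assumptions, not taken from the paper):

        def field_resistance(r_ref, temp, temp_ref=25.0, alpha_cu=0.00393):
            """Field-winding resistance at temperature temp (deg C), assuming the
            usual linear temperature coefficient of copper."""
            return r_ref * (1.0 + alpha_cu * (temp - temp_ref))

        # The field current heats the winding, raising its resistance and hence
        # changing the excitation seen by the DC1A-type generator model.
        print(field_resistance(r_ref=120.0, temp=75.0))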

  5. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
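
    A hedged sketch of the validation pattern described: an L1-penalized logistic model for a binary complication outcome, scored by AUC, with label permutation to obtain a significance level. A single level of cross-validation stands in here for the repeated double cross-validation of the paper; the use of scikit-learn and the synthetic data are assumptions for illustration only.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 20))   # synthetic dose-volume and clinical predictors
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)   # toy labels

        lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

        # Cross-validated predictions give an honest AUC estimate
        proba = cross_val_predict(lasso_logit, X, y, cv=5, method="predict_proba")[:, 1]
        auc = roc_auc_score(y, proba)

        # Permutation test: refit on shuffled labels to see what AUC arises by chance
        null_aucs = []
        for _ in range(100):
            y_perm = rng.permutation(y)
            p = cross_val_predict(lasso_logit, X, y_perm, cv=5, method="predict_proba")[:, 1]
            null_aucs.append(roc_auc_score(y_perm, p))
        p_value = np.mean(np.array(null_aucs) >= auc)
        print(auc, p_value)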

  6. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time var....... Overall, the dollar root mean squared error of the best performing benchmark component model is 39% larger than for the mixture model. When considering the recent financial crisis this difference increases to 69%....... varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities...

  7. Population Synthesis Models for Normal Galaxies with Dusty Disks

    Directory of Open Access Journals (Sweden)

    Kyung-Won Suh

    2003-09-01

    To investigate the SEDs of galaxies considering the dust extinction processes in the galactic disks, we present population synthesis models for normal galaxies with dusty disks. We use PEGASE (Fioc & Rocca-Volmerange 1997) to model them with standard input parameters for stars and new dust parameters. We find that the model results are strongly dependent on the dust parameters as well as other parameters (e.g. star formation history). We compare the model results with the observations and discuss possible explanations. We find that the dust opacity functions derived from studies of asymptotic giant branch stars are useful for modeling a galaxy with a dusty disk.

  8. Including spatial data in nutrient balance modelling on dairy farms

    Science.gov (United States)

    van Leeuwen, Maricke; van Middelaar, Corina; Stoof, Cathelijne; Oenema, Jouke; Stoorvogel, Jetse; de Boer, Imke

    2017-04-01

    The Annual Nutrient Cycle Assessment (ANCA) calculates the nitrogen (N) and phosphorus (P) balance at a dairy farm, while taking into account the subsequent nutrient cycles of the herd, manure, soil and crop components. Since January 2016, Dutch dairy farmers have been required to use ANCA in order to increase understanding of nutrient flows and to minimize nutrient losses to the environment. A nutrient balance calculates the difference between nutrient inputs and outputs. Nutrients enter the farm via purchased feed, fertilizers, deposition and fixation by legumes (nitrogen), and leave the farm via milk, livestock, manure, and roughages. A positive balance indicates the extent to which N and/or P are lost to the environment via gaseous emissions (N), leaching, run-off and accumulation in soil. A negative balance indicates that N and/or P are depleted from soil. ANCA was designed to calculate average nutrient flows at farm level (for the herd, manure, soil and crop components). ANCA was not designed to perform calculations of nutrient flows at the field level, as it uses averaged nutrient inputs and outputs across all fields, and it does not include field-specific soil characteristics. Land management decisions, however, such as the level of N and P application, are typically taken at the field level given the specific crop and soil characteristics. Therefore the information that ANCA provides is likely not sufficient to support farmers' decisions on land management to minimize nutrient losses to the environment. This is particularly a problem when land management and soils vary between fields. For an accurate estimate of nutrient flows in a given farming system that can be used to optimize land management, the spatial scale of nutrient inputs and outputs (and thus the effect of land management and soil variation) could be essential. Our aim was to determine the effect of the spatial scale of nutrient inputs and outputs on modelled nutrient flows and nutrient use efficiencies

  9. Chemotherapy of Rodent Malaria. Evaluation of Drug Action against Normal and Resistant Strains including Exo-Erythrocytic Stages.

    Science.gov (United States)

    1979-10-01

    quinoline-methanols and allied compounds: Quinine and a variety of drugs with comparable structures retain their activity against malaria parasites that... E. E., Warhurst, D. C. and Peters, W. (1975). The chemotherapy of rodent malaria, XXI. Action of quinine and WR 122,455 (a 9-phenanthrene methanol) on... Liverpool School of Tropical Medicine (England), Dept. of...

  10. Single-Phase Bundle Flows Including Macroscopic Turbulence Model

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Jun; Yoon, Han Young [KAERI, Daejeon (Korea, Republic of); Yoon, Seok Jong; Cho, Hyoung Kyu [Seoul National University, Seoul (Korea, Republic of)

    2016-05-15

    To deal with the various thermal-hydraulic phenomena caused by the rapid change of fluid properties when an accident happens, securing mechanistic approaches as much as possible may reduce the uncertainty arising from improper application of experimental models. In this study, the turbulence mixing model, which is well defined in subchannel analysis codes such as VIPRE, COBRA, and MATRA by experiments, is replaced by a macroscopic k-ε turbulence model, which represents the aspect of mathematical derivation. The performance of CUPID with the macroscopic turbulence model is validated against several bundle experiments: the CNEN 4x4 and PNL 7x7 rod bundle tests. In this study, the macroscopic k-ε model has been validated for application to subchannel analysis. It has been implemented in the CUPID code and validated against the CNEN 4x4 and PNL 7x7 rod bundle tests. The results showed that the macroscopic k-ε turbulence model can reproduce the experiments properly.

  11. Global atmospheric model for mercury including oxidation by bromine atoms

    Directory of Open Access Journals (Sweden)

    C. D. Holmes

    2010-12-01

    Full Text Available Global models of atmospheric mercury generally assume that gas-phase OH and ozone are the main oxidants converting Hg0 to HgII and thus driving mercury deposition to ecosystems. However, thermodynamic considerations argue against the importance of these reactions. We demonstrate here the viability of atomic bromine (Br as an alternative Hg0 oxidant. We conduct a global 3-D simulation with the GEOS-Chem model assuming gas-phase Br to be the sole Hg0 oxidant (Hg + Br model and compare to the previous version of the model with OH and ozone as the sole oxidants (Hg + OH/O3 model. We specify global 3-D Br concentration fields based on our best understanding of tropospheric and stratospheric Br chemistry. In both the Hg + Br and Hg + OH/O3 models, we add an aqueous photochemical reduction of HgII in cloud to impose a tropospheric lifetime for mercury of 6.5 months against deposition, as needed to reconcile observed total gaseous mercury (TGM concentrations with current estimates of anthropogenic emissions. This added reduction would not be necessary in the Hg + Br model if we adjusted the Br oxidation kinetics downward within their range of uncertainty. We find that the Hg + Br and Hg + OH/O3 models are equally capable of reproducing the spatial distribution of TGM and its seasonal cycle at northern mid-latitudes. The Hg + Br model shows a steeper decline of TGM concentrations from the tropics to southern mid-latitudes. Only the Hg + Br model can reproduce the springtime depletion and summer rebound of TGM observed at polar sites; the snowpack component of GEOS-Chem suggests that 40% of HgII deposited to snow in the Arctic is transferred to the ocean and land reservoirs, amounting to a net deposition flux to the Arctic of 60 Mg a−1. Summertime events of depleted Hg0 at Antarctic sites due to subsidence are much better simulated by

  12. Thematic report: Macroeconomic models including specifically social and environmental aspects

    OpenAIRE

    Kratena, Kurt

    2015-01-01

    WWWforEurope Deliverable No. 8, 30 pages. A significant reduction of the global environmental consequences of European consumption and production activities is the main objective of the policy simulations carried out in this paper. For this purpose three different modelling approaches have been chosen. Two macroeconomic models following the philosophy of consistent stock-flow accounting for the main institutional sectors (households, firms, banks, central bank and government) are used for...

  13. Neurophysiological model of the normal and abnormal human pupil

    Science.gov (United States)

    Krenz, W.; Robin, M.; Barez, S.; Stark, L.

    1985-01-01

    Anatomical, experimental, and computer simulation studies were used to determine the structure of the neurophysiological model of the pupil size control system. The computer simulation of this model demonstrates the role played by each of the elements in the neurological pathways influencing the size of the pupil. Simulations of the effect of drugs and common abnormalities in the system help to illustrate the workings of the pathways and processes involved. The simulation program allows the user to select pupil condition (normal or an abnormality), specific site along the neurological pathway (retina, hypothalamus, etc.), drug class input (barbiturate, narcotic, etc.), stimulus/response mode, display mode, stimulus type and input waveform, stimulus or background intensity and frequency, the input and output conditions, and the response at the neuroanatomical site. The model can be used as a teaching aid or as a tool for testing hypotheses regarding the system.

  14. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).
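
    The core idea, a mixture model fitted by EM to positions augmented with radial velocity, can be sketched generically as below. This is not the authors' code: it uses scikit-learn's GaussianMixture on a synthetic data set in which two "clusters" overlap on the sky but separate in radial velocity.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(42)

      # Two synthetic clusters that overlap spatially but differ in radial velocity (km/s).
      cluster_a = np.column_stack([rng.normal(0.0, 1.0, 200),    # x position
                                   rng.normal(0.0, 1.0, 200),    # y position
                                   rng.normal(-10.0, 2.0, 200)]) # radial velocity
      cluster_b = np.column_stack([rng.normal(0.5, 1.0, 200),
                                   rng.normal(0.5, 1.0, 200),
                                   rng.normal(+10.0, 2.0, 200)])
      data = np.vstack([cluster_a, cluster_b])

      # EM-based maximum likelihood fit of a two-component mixture in (x, y, v_r).
      gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(data)

      print("component means (x, y, v_r):\n", gmm.means_)
      print("membership probabilities of first 5 stars:\n", gmm.predict_proba(data[:5]))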

  15. Delay Differential Equation Models of Normal and Diseased Electrocardiograms

    Science.gov (United States)

    Lainscsek, Claudia; Sejnowski, Terrence J.

    Time series analysis with nonlinear delay differential equations (DDEs) is a powerful tool since it reveals spectral as well as nonlinear properties of the underlying dynamical system. Here global DDE models are used to analyze electrocardiography recordings (ECGs) in order to capture distinguishing features for different heart conditions such as normal heart beat, congestive heart failure, and atrial fibrillation. To capture distinguishing features of the different data types the number of terms and delays in the model as well as the order of nonlinearity of the DDE model have to be selected. The DDE structure selection is done in a supervised way by selecting the DDE that best separates different data types. We analyzed 24 h of data from 15 young healthy subjects in normal sinus rhythm (NSR), from 15 congestive heart failure (CHF) patients, as well as from 15 subjects suffering from atrial fibrillation (AF), selected from the Physionet database. For the analysis presented here we used 5 min non-overlapping data windows on the raw data without any artifact removal. For classification performance we used the Cohen Kappa coefficient computed directly from the confusion matrix. The overall classification performance of the three groups was around 72-99% on the 5 min windows for the different approaches. For 2 h data windows the classification for all three groups was above 95%.
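
    For reference, Cohen's kappa can be computed directly from a confusion matrix, as the abstract describes, using kappa = (p_o - p_e) / (1 - p_e); the sketch below uses a made-up 3 x 3 matrix for the NSR/CHF/AF classes.

      import numpy as np

      def cohen_kappa(confusion):
          """Cohen's kappa from a square confusion matrix (rows = true, cols = predicted)."""
          confusion = np.asarray(confusion, dtype=float)
          n = confusion.sum()
          p_observed = np.trace(confusion) / n                              # observed agreement
          p_expected = (confusion.sum(0) * confusion.sum(1)).sum() / n**2   # chance agreement
          return (p_observed - p_expected) / (1.0 - p_expected)

      # Hypothetical confusion matrix for the three classes (NSR, CHF, AF).
      cm = [[90,  5,  5],
            [ 6, 85,  9],
            [ 4,  8, 88]]
      print("kappa = %.3f" % cohen_kappa(cm))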

  16. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    Full Text Available The calculation of unsteady air loads is an essential step in any aeroelastic analysis. The subsonic doublet lattice method (DLM) is used extensively for this purpose due to its simplicity and reliability. The body models available with the popular...

  17. Normalization and Implementation of Three Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.; Gottlieb, Robert G.

    2016-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the asphericity of their generating central bodies. The gravitational potential of an aspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities that must be removed to generalize the method and solve for any possible orbit, including polar orbits. Samuel Pines, Bill Lear, and Robert Gottlieb developed three unique algorithms to eliminate these singularities. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear and Gottlieb algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and Associated Legendre Functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
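
    As a rough illustration of what "normalization" means here (not the Pines, Lear, or Gottlieb recursions themselves), the sketch below applies the usual geodesy normalization factor to associated Legendre functions computed by SciPy; direct factorials are only practical for modest degrees, and the singularity-free algorithms discussed in the paper use recursions instead.

      import numpy as np
      from math import factorial
      from scipy.special import lpmn

      def normalized_alf(n_max, x):
          """Fully normalized associated Legendre functions P_nm(x) for n, m <= n_max.

          Uses the geodesy-style factor N_nm = sqrt((2 - delta_0m)(2n+1)(n-m)!/(n+m)!).
          Note: scipy's lpmn includes the Condon-Shortley phase (-1)**m; multiply it out
          if your convention omits it.
          """
          p, _ = lpmn(n_max, n_max, x)          # p[m, n] = unnormalized P_nm(x)
          pbar = np.zeros_like(p)
          for n in range(n_max + 1):
              for m in range(n + 1):
                  delta = 1.0 if m == 0 else 0.0
                  norm = np.sqrt((2.0 - delta) * (2 * n + 1)
                                 * factorial(n - m) / factorial(n + m))
                  pbar[m, n] = norm * p[m, n]
          return pbar

      # Degree-4 normalized values at colatitude 30 degrees (illustrative only).
      print(normalized_alf(4, np.cos(np.radians(30.0)))[:, 4])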

  18. A thermal lens model including the Soret effect

    International Nuclear Information System (INIS)

    Cabrera, Humberto; Sira, Eloy; Rahn, Kareem; Garcia-Sucre, Maximo

    2009-01-01

    In this letter we generalize the thermal lens model to account for the Soret effect in binary liquid mixtures. This formalism permits the precise determination of the Soret coefficient in a steady-state situation. The theory is experimentally verified using measured values in ethanol/water mixtures. The time evolution of the Soret signal has been used to derive mass-diffusion times from which mass-diffusion coefficients were calculated. (Author)

  19. Including lateral interactions into microkinetic models of catalytic reactions

    DEFF Research Database (Denmark)

    Hellman, Anders; Honkala, Johanna Karoliina

    2007-01-01

    In many catalytic reactions lateral interactions between adsorbates are believed to have a strong influence on the reaction rates. We apply a microkinetic model to explore the effect of lateral interactions and how to efficiently take them into account in a simple catalytic reaction. Three different approximations are investigated: site, mean-field, and quasichemical approximations. The obtained results are compared to accurate Monte Carlo numbers. In the end, we apply the approximations to a real catalytic reaction, namely, ammonia synthesis.

  20. A stochastic model of gene expression including splicing events

    OpenAIRE

    Penim, Flávia Alexandra Mendes

    2014-01-01

    Master's thesis, Bioinformatics and Computational Biology, Universidade de Lisboa, Faculdade de Ciências, 2014. Proteins carry out the great majority of the catalytic and structural work within an organism. The RNA templates used in their synthesis determine their identity, and this is dictated by which genes are transcribed. Therefore, gene expression is the fundamental determinant of an organism's nature. The main objective of this thesis was to develop a stochastic computational model a...

  1. Extending PSA models including ageing and asset management - 15291

    International Nuclear Information System (INIS)

    Martorell, S.; Marton, I.; Carlos, S.; Sanchez, A.I.

    2015-01-01

    This paper proposes a new approach to Ageing Probabilistic Safety Assessment (APSA) modelling, which is intended to be used to support risk-informed decisions on the effectiveness of maintenance management programs and technical specification requirements of critical equipment of Nuclear Power Plants (NPP) within the framework of Risk Informed Decision Making according to R.G. 1.174 principles. This approach focuses on the incorporation of not only equipment ageing but also effectiveness of maintenance and efficiency of surveillance testing explicitly into APSA models and data. This methodology is applied to a motor-operated valve of the auxiliary feed water system (AFWS) of a PWR. This simple example of application focuses on a critical piece of safety-related equipment of an NPP in order to evaluate the risk impact of considering different approaches to APSA and the combined effect of equipment ageing and maintenance and testing alternatives along the NPP design life. The risk impact of several alternatives in maintenance strategy is discussed

  2. Validation of normal and frictional contact models of spherical bodies by FEM analysis

    Directory of Open Access Journals (Sweden)

    H Khawaja

    2016-09-01

    Full Text Available Contact forces between two spheres are computed, including the contact pressure (normal) and the frictional stress (tangential), using a finite element method (FEM). A CAD model of a part of a sphere was developed. A mesh was created using the ANSYS® Solid 186 20-noded hexahedral element and analyzed for its sensitivity. ANSYS® Contact 174 and Target 170 8-noded surface elements were used. Contact pressure and frictional stress contours were calculated by varying the displacements. Normal and tangential contact forces were computed by integrating the contact pressure and frictional stress over the contact surface. The values obtained for the normal force were compared with the non-linear spring model as given by Hertz [1]. Similarly, values of the tangential force were compared with the model of Mindlin and Deresiewicz (MD) [2]. The FEM results were found to be in agreement with the models.
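
    The nonlinear spring model referenced as [1] relates the normal force to the overlap delta through F = (4/3) E* sqrt(R_eff) delta^(3/2). The short sketch below evaluates this relation for assumed material and geometry values; the numbers are illustrative and not those of the paper.

      import numpy as np

      def hertz_normal_force(delta, R1, R2, E1, E2, nu1, nu2):
          """Hertzian normal force between two elastic spheres for overlap delta (m)."""
          R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)                      # effective radius
          E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)   # effective modulus
          return (4.0 / 3.0) * E_star * np.sqrt(R_eff) * delta**1.5

      # Illustrative values: two 10 mm steel spheres pressed together by 1 micron.
      F = hertz_normal_force(delta=1e-6, R1=5e-3, R2=5e-3,
                             E1=210e9, E2=210e9, nu1=0.3, nu2=0.3)
      print("normal force = %.1f N" % F)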

  3. Normality of raw data in general linear models: The most widespread myth in statistics

    Science.gov (United States)

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
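
    The residuals-versus-raw-data point can be demonstrated in a few lines. The sketch below is a made-up two-group example (not from the paper): the pooled response is strongly bimodal and fails a normality test, yet the within-group residuals, which are what the general linear model actually assumes to be normal, pass it.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      # Two groups with very different means: the pooled response is bimodal (non-normal),
      # but the residuals around each group mean are normal, as a t test or ANOVA assumes.
      group_a = rng.normal(loc=0.0, scale=1.0, size=200)
      group_b = rng.normal(loc=8.0, scale=1.0, size=200)
      response = np.concatenate([group_a, group_b])
      residuals = np.concatenate([group_a - group_a.mean(), group_b - group_b.mean()])

      print("Shapiro-Wilk p (raw response):", stats.shapiro(response).pvalue)   # tiny -> "non-normal"
      print("Shapiro-Wilk p (residuals):  ", stats.shapiro(residuals).pvalue)   # large -> normal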

  4. Sildenafil normalizes bowel transit in preclinical models of constipation.

    Directory of Open Access Journals (Sweden)

    Sarah K Sharman

    Full Text Available Guanylyl cyclase-C (GC-C) agonists increase cGMP levels in the intestinal epithelium to promote secretion. This process underlies the utility of exogenous GC-C agonists such as linaclotide for the treatment of chronic idiopathic constipation (CIC) and irritable bowel syndrome with constipation (IBS-C). Because GC-C agonists have limited use in pediatric patients, there is a need for alternative cGMP-elevating agents that are effective in the intestine. The present study aimed to determine whether the PDE-5 inhibitor sildenafil has similar effects as linaclotide on preclinical models of constipation. Oral administration of sildenafil caused increased cGMP levels in mouse intestinal epithelium demonstrating that blocking cGMP-breakdown is an alternative approach to increase cGMP in the gut. Both linaclotide and sildenafil reduced proliferation and increased differentiation in colon mucosa, indicating common target pathways. The homeostatic effects of cGMP required gut turnover since maximal effects were observed after 3 days of treatment. Neither linaclotide nor sildenafil treatment affected intestinal transit or water content of fecal pellets in healthy mice. To test the effectiveness of cGMP elevation in a functional motility disorder model, mice were treated with dextran sulfate sodium (DSS) to induce colitis and were allowed to recover for several weeks. The recovered animals exhibited slower transit, but increased fecal water content. An acute dose of sildenafil was able to normalize transit and fecal water content in the DSS-recovery animal model, and also in loperamide-induced constipation. The higher fecal water content in the recovered animals was due to a compromised epithelial barrier, which was normalized by sildenafil treatment. Taken together our results show that sildenafil can have similar effects as linaclotide on the intestine, and may have therapeutic benefit to patients with CIC, IBS-C, and post-infectious IBS.

  5. Image Segmentation Using Disjunctive Normal Bayesian Shape and Appearance Models.

    Science.gov (United States)

    Mesadi, Fitsum; Erdil, Ertunc; Cetin, Mujdat; Tasdizen, Tolga

    2018-01-01

    The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. For instance, most active shape and appearance models require landmark points and assume unimodal shape and appearance distributions, and the level set representation does not support construction of local priors. In this paper, we present novel appearance and shape models for image segmentation based on a differentiable implicit parametric shape representation called a disjunctive normal shape model (DNSM). The DNSM is formed by the disjunction of polytopes, which themselves are formed by the conjunctions of half-spaces. The DNSM's parametric nature allows the use of powerful local prior statistics, and its implicit nature removes the need to use landmarks and easily handles topological changes. In a Bayesian inference framework, we model arbitrary shape and appearance distributions using nonparametric density estimations, at any local scale. The proposed local shape prior results in accurate segmentation even when very few training shapes are available, because the method generates a rich set of shape variations by locally combining training samples. We demonstrate the performance of the framework by applying it to both 2-D and 3-D data sets with emphasis on biomedical image segmentation applications.
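
    The shape representation itself is easy to state: a characteristic function formed as a disjunction (OR) of polytopes, each polytope a conjunction (AND) of half-spaces, with sigmoids providing a differentiable relaxation. The sketch below is our own illustration of that construction with toy parameters, not the authors' segmentation code.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def dnsm(points, weights, biases):
          """Differentiable disjunctive-normal shape indicator.

          weights: (n_polytopes, n_halfspaces, dim); biases: (n_polytopes, n_halfspaces).
          Each polytope is the product (AND) of sigmoid half-space indicators; the shape
          is the complemented product (OR) over polytopes: f = 1 - prod_i(1 - prod_j s_ij).
          """
          z = np.einsum('pd,ihd->pih', points, weights) + biases   # half-space responses
          polytopes = sigmoid(z).prod(axis=2)                      # AND within each polytope
          return 1.0 - np.prod(1.0 - polytopes, axis=1)            # OR across polytopes

      # Toy 2-D example: one polytope approximating the unit square via four half-spaces.
      weights = np.array([[[ 10.0, 0.0], [-10.0, 0.0], [0.0,  10.0], [0.0, -10.0]]])
      biases  = np.array([[  0.0, 10.0, 0.0, 10.0]])
      pts = np.array([[0.5, 0.5], [2.0, 2.0]])
      print(dnsm(pts, weights, biases))   # ~1 inside the square, ~0 outside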

  6. Options and pitfalls of normal tissues complication probability models

    International Nuclear Information System (INIS)

    Dorr, Wolfgang

    2011-01-01

    Full text: Technological improvements in the physical administration of radiotherapy have led to increasing conformation of the treatment volume (TV) with the planning target volume (PTV) and of the irradiated volume (IV) with the TV. In this process of improvement of the physical quality of radiotherapy, the total volumes of organs at risk exposed to significant doses have significantly decreased, resulting in increased inhomogeneities in the dose distributions within these organs. This has resulted in a need to identify and quantify volume effects in different normal tissues. Today, irradiated volume must be considered a 6th 'R' of radiotherapy, in addition to the 5 'Rs' defined by Withers and Steel in the mid/late 1980s. The current status of knowledge of these volume effects has recently been summarized for many organs and tissues by the QUANTEC (Quantitative Analysis of Normal Tissue Effects in the Clinic) initiative [Int. J. Radiat. Oncol. Biol. Phys. 76 (3) Suppl., 2010]. However, the concept of using dose-volume histogram parameters as a basis for dose constraints, even without applying any models for normal tissue complication probabilities (NTCP), is based on (some) assumptions that are not met in clinical routine treatment planning. First, and most important, dose-volume histogram (DVH) parameters are usually derived from a single, 'snap-shot' CT-scan, without considering physiological (urinary bladder, intestine) or radiation induced (edema, patient weight loss) changes during radiotherapy. Also, individual variations, or different institutional strategies of delineating organs at risk, are rarely considered. Moreover, the reduction of the 3-dimensional dose distribution into a 2-dimensional DVH parameter implies that the localization of the dose within an organ is irrelevant; there are ample examples that this assumption is not justified. Routinely used dose constraints also do not take into account that the residual function of an organ may be

  7. Mathematical model of normal tissue injury in telegammatherapy

    International Nuclear Information System (INIS)

    Belov, S.A.; Lyass, F.M.; Mamin, R.G.; Minakova, E.I.; Raevskaya, S.A.

    1983-01-01

    A model of normal tissue injury as a result of exposure to ionizing radiation is based on an assumption that the degree of tissue injury is determined by the degree of destruction of certain critical cells. The dependence of the number of lethal injuries on a single dose is expressed by a trinomial: linear and quadratic parts and a constant, obtained as a result of the processing of experimental data. Quantitative correlations have been obtained for the skin and brain. They have been tested using clinical and experimental material. The results of the testing point to the absence of a time dependence for single to 6-week irradiation courses. A correlation with the irradiation field has been obtained for the skin. A conclusion has been made that the concept of isoefficacy of irradiation courses is conditional. Spatio-temporal fractionation is a promising direction in the development of radiation therapy
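
    In symbols, a minimal reading of the dose dependence described above, with alpha, beta and c as our labels for the fitted linear, quadratic and constant terms, is:

      % lethal-injury yield for a single dose D: linear and quadratic parts plus a constant
      N(D) = \alpha D + \beta D^{2} + c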

  8. Modeling Electronic Skin Response to Normal Distributed Force

    Directory of Open Access Journals (Sweden)

    Lucia Seminara

    2018-02-01

    Full Text Available The reference electronic skin is a sensor array based on PVDF (polyvinylidene fluoride) piezoelectric polymers, coupled to a rigid substrate and covered by an elastomer layer. It is first evaluated how a distributed normal force (Hertzian distribution) is transmitted to an extended PVDF sensor through the elastomer layer. A simplified approach based on Boussinesq's half-space assumption is used to get a qualitative picture, and extensive FEM simulations allow determination of the quantitative response for the actual finite elastomer layer. The ultimate use of the present model is to estimate the electrical sensor output from a measure of a basic mechanical action at the skin surface. However, this requires that the PVDF piezoelectric coefficient be known a priori. This was not the case in the present investigation. However, the numerical model has been used to fit experimental data from a real skin prototype and to estimate the sensor piezoelectric coefficient. It turned out that this value depends on the preload and decreases as a result of PVDF aging and fatigue. This framework contains all the fundamental ingredients of a fully predictive model, suggesting a number of future developments potentially useful for skin design and validation of the fabrication technology.

  9. Locating critical points on multi-dimensional surfaces by genetic algorithm: test cases including normal and perturbed argon clusters

    Science.gov (United States)

    Chaudhury, Pinaki; Bhattacharyya, S. P.

    1999-03-01

    It is demonstrated that a Genetic Algorithm in a floating point realisation can be a viable tool for locating critical points on a multi-dimensional potential energy surface (PES). For small clusters, the standard algorithm works well. For bigger ones, the search for the global minimum becomes more efficient when used in conjunction with coordinate stretching, and partitioning of the strings into a core part and an outer part which are alternately optimized. The method works with equal facility for locating minima, local as well as global, and saddle points (SP) of arbitrary orders. The search for minima requires computation of the gradient vector, but not the Hessian, while that for SPs requires the information of the gradient vector and the Hessian, the latter only at some specific points on the path. The method proposed is tested on (i) a model 2-d PES, (ii) argon clusters (Ar4 to Ar30) in which argon atoms interact via the Lennard-Jones potential, and (iii) ArmX (m = 12) clusters where X may be a neutral atom or a cation. We also explore if the method could also be used to construct what may be called a stochastic representation of the reaction path on a given PES with reference to conformational changes in Arn clusters.

  10. Steady-state analysis of activated sludge processes with a settler model including sludge compression.

    Science.gov (United States)

    Diehl, S; Zambrano, J; Carlsson, B

    2016-01-01

    A reduced model of a completely stirred-tank bioreactor coupled to a settling tank with recycle is analyzed in its steady states. In the reactor, the concentrations of one dominant particulate biomass and one soluble substrate component are modelled. While the biomass decay rate is assumed to be constant, growth kinetics can depend on both substrate and biomass concentrations, and optionally model substrate inhibition. Compressive and hindered settling phenomena are included using the Bürger-Diehl settler model, which consists of a partial differential equation. Steady-state solutions of this partial differential equation are obtained from an ordinary differential equation, making steady-state analysis of the entire plant difficult. A key result showing that the ordinary differential equation can be replaced with an approximate algebraic equation simplifies model analysis. This algebraic equation takes the location of the sludge-blanket during normal operation into account, allowing for the limiting flux capacity caused by compressive settling to easily be included in the steady-state mass balance equations for the entire plant system. This novel approach grants the possibility of more realistic solutions than other previously published reduced models, comprised of yet simpler settler assumptions. The steady-state concentrations, solids residence time, and the wastage flow ratio are functions of the recycle ratio. Solutions are shown for various growth kinetics; with different values of biomass decay rate, influent volumetric flow, and substrate concentration. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Robust modeling of differential gene expression data using normal/independent distributions: a Bayesian approach.

    Directory of Open Access Journals (Sweden)

    Mojtaba Ganjali

    Full Text Available In this paper, the problem of identifying differentially expressed genes under different conditions using gene expression microarray data, in the presence of outliers, is discussed. For this purpose, the robust modeling of gene expression data using some powerful distributions known as normal/independent distributions is considered. These distributions include the Student's t and normal distributions which have been used previously, but also include extensions such as the slash, the contaminated normal and the Laplace distributions. The purpose of this paper is to identify differentially expressed genes by considering these distributional assumptions instead of the normal distribution. A Bayesian approach using the Markov Chain Monte Carlo method is adopted for parameter estimation. Two publicly available gene expression data sets are analyzed using the proposed approach. The use of the robust models for detecting differentially expressed genes is investigated. This investigation shows that the choice of model for differentiating gene expression data is very important. This is due to the small number of replicates for each gene and the existence of outlying data. Comparison of the performance of these models is made using different statistical criteria and the ROC curve. The method is illustrated using some simulation studies. We demonstrate the flexibility of these robust models in identifying differentially expressed genes.

  12. Log-normal censored regression model detecting prognostic factors in gastric cancer: a study of 3018 cases.

    Science.gov (United States)

    Wang, Bin-Bin; Liu, Cai-Gang; Lu, Ping; Latengbaolide, A; Lu, Yang

    2011-06-21

    To investigate the efficiency of the Cox proportional hazard model in detecting prognostic factors for gastric cancer. We used the log-normal regression model to evaluate prognostic factors in gastric cancer and compared it with the Cox model. Three thousand and eighteen gastric cancer patients who received a gastrectomy between 1980 and 2004 were retrospectively evaluated. Clinicopathological factors were included in the log-normal model as well as the Cox model. The Akaike information criterion (AIC) was employed to compare the efficiency of both models. Univariate analysis indicated that age at diagnosis, past history, cancer location, distant metastasis status, surgical curative degree, combined other organ resection, Borrmann type, Lauren's classification, pT stage, total dissected nodes and pN stage were prognostic factors in both log-normal and Cox models. In the final multivariate model, age at diagnosis, past history, surgical curative degree, Borrmann type, Lauren's classification, pT stage, and pN stage were significant prognostic factors in both log-normal and Cox models. However, cancer location, distant metastasis status, and histology types were found to be significant prognostic factors in log-normal results alone. According to AIC, the log-normal model performed better than the Cox proportional hazard model (AIC value: 2534.72 vs 1693.56). It is suggested that the log-normal regression model can be a useful statistical model to evaluate prognostic factors instead of the Cox proportional hazard model.

  13. Modeling the Circle of Willis Using Electrical Analogy Method under both Normal and Pathological Circumstances

    Science.gov (United States)

    Abdi, Mohsen; Karimi, Alireza; Navidbakhsh, Mahdi; Rahmati, Mohammadali; Hassani, Kamran; Razmkon, Ali

    2013-01-01

    Background and objective: The circle of Willis (COW) supports adequate blood supply to the brain. The cardiovascular system, in the current study, is modeled using an equivalent electronic system focusing on the COW. Methods: In our previous study we used 42 compartments to model the whole cardiovascular system. In the current study, nevertheless, we extended our model by using 63 compartments to model the whole cardiovascular system. Each cardiovascular artery is modeled using electrical elements, including resistor, capacitor, and inductor. The MATLAB Simulink software is used to obtain the left and right ventricular pressures as well as the pressure distribution at the efferent arteries of the circle of Willis. Firstly, the normal operation of the system is shown and then the stenosis of cerebral arteries is induced in the circuit and, consequently, the effects are studied. Results: In the normal condition, the difference between the pressure distributions of the right and left efferent arteries (left and right ACA–A2, left and right MCA, left and right PCA–P2) is calculated to indicate the effect of the anatomical difference between the left and right sides of the supplying arteries of the COW. In stenosis cases, the effect of internal carotid artery occlusion on efferent artery pressures is investigated. The modeling results are verified by comparison with the clinical observations reported in the literature. Conclusion: We believe the presented model is a useful tool for representing the normal operation of the cardiovascular system and for studying its pathologies. PMID:25505747
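
    In such an electrical analogy each arterial compartment obeys simple circuit equations (pressure as voltage, flow as current). The sketch below, not the cited 63-compartment model, integrates a single lumped segment with illustrative R, L, C values using SciPy.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Illustrative lumped arterial segment: R = viscous resistance, L = blood inertance,
      # C = vessel compliance, R_out = distal resistance. Values are made up for this sketch.
      R, L, C, R_out = 0.1, 0.005, 1.2, 1.0     # mmHg·s/ml, mmHg·s²/ml, ml/mmHg, mmHg·s/ml

      def p_in(t):                               # crude periodic "ventricular" pressure source
          return 80.0 + 40.0 * np.maximum(0.0, np.sin(2 * np.pi * t / 0.8))

      def rhs(t, y):
          q, p_c = y                             # q: flow through the R-L branch, p_c: node pressure
          dq_dt = (p_in(t) - p_c - R * q) / L
          dpc_dt = (q - p_c / R_out) / C
          return [dq_dt, dpc_dt]

      sol = solve_ivp(rhs, (0.0, 8.0), [0.0, 80.0], max_step=1e-3)
      print("mean compartment pressure over the last beat: %.1f mmHg"
            % sol.y[1][sol.t > 7.2].mean())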

  14. Reference man models based on normal data from human populations

    International Nuclear Information System (INIS)

    Tanaka, Gi-ichiro; Kawamura, Hisao

    2000-01-01

    Quantitative description of the physical and metabolic parameters of the human body is the very basis of internal dosimetry. Compilation of anatomical and other types of data for Asian populations for internal (and external) dosimetry is of great significance because of the potential spread of nuclear energy use in the Asian region and the major contribution of the region to the world population (about 58%). It has been observed that some differences exist for habitat, race, body sizes and pattern of food consumption. In the early stage of revision of the ICRP Reference Man by the Task Group, characteristics of the human body of non-European populations received considerable attention as well as those of the European populations of different sexes and ages. In this context, an IAEA-RCA Co-ordinated Research Program on Compilation of Anatomical, Physiological and Metabolic Characteristics for a Reference Asian Man was endorsed. In later stages of the Reference Man revision, anatomical data for Asians were discussed together with those of European populations, presumably due to ICRP's decision of unanimous use of the Reference Man for radiation protection. Reference man models for adults and 15, 10, 5, 1, 0 year-old males and females of Asian populations were developed for use in internal and external dosimetry. Based on the concept of the ICRP Reference Man (Publication 23), the reference values were derived from the normal organ mass data for Japanese and statistical data on the physique and nutrition of Japanese and Chinese. Also incorporated were variations in physical measurements, as observed in the above mentioned IAEA-RCA Co-ordinated Research Program. The work was partly carried out within the activities of the ICRP Task Group on Reference Man. The weight of the skeleton was adjusted following the revised values in Publication 70. This paper will report basic shared and non-shared characteristics of the Reference Man for Asians and the ICRP Reference Man. (author)

  15. Numerical modelling of pyrolysis in normal and reduced oxygen concentration

    International Nuclear Information System (INIS)

    Kacem, Ahmed

    2016-01-01

    The predictive capability of computational fluid dynamics (CFD) fire models depends on the accuracy with which the source term due to fuel pyrolysis can be determined. The pyrolysis rate is a key parameter controlling fire behavior, which in turn drives the heat feedback from the flame to the fuel surface. In the present study an in-depth pyrolysis model of a semi-transparent solid fuel (here, clear polymethyl methacrylate or PMMA) with spectrally-resolved radiation and a moving gas/solid interface was coupled with the CFD code ISIS of the IRSN which included turbulence, combustion and radiation for the gas phase. A combined genetic algorithm/pyrolysis model was used with Cone Calorimeter data from a pure pyrolysis experiment to estimate a unique set of kinetic parameters for PMMA pyrolysis. In order to validate the coupled model, ambient air flaming experiments were conducted on square slabs of PMMA with side lengths of 10, 20 and 40 cm. From measurements at the center of the slab, it was found that i) for any sample size, the experimental regression rate becomes almost constant with time, and ii) although the radiative and total heat transfers increase significantly with the sample size, the radiative contribution to the total heat flux remains almost constant (∼80%). Coupled model results show a fairly good agreement with the literature and with current measurements of the heat fluxes, gas temperature and regressing surface rate at the center of the slabs. Discrepancies between predicted and measured total pyrolysis rate are observed, which result from the underestimation of the flame heat flux feedback at the edges of the slab, as confirmed by the comparison between predicted and observed topography of burned samples. Predicted flame heights based on a threshold temperature criterion were found to be close to those deduced from the correlation of Heskestad. Finally, in order to predict the pyrolysis of PMMA under reduced ambient oxygen concentration, a two

  16. BALANCED SCORECARDS EVALUATION MODEL THAT INCLUDES ELEMENTS OF ENVIRONMENTAL MANAGEMENT SYSTEM USING AHP MODEL

    Directory of Open Access Journals (Sweden)

    Jelena Jovanović

    2010-03-01

    Full Text Available The research is oriented on improvement of the environmental management system (EMS) using the BSC (Balanced Scorecard) model, which presents a strategic model of measurement and improvement of organisational performance. The research will present an approach to involving objectives and environmental management metrics (proposed by literature review) in a conventional BSC in the "Ad Barska plovidba" organisation. Further we will test creation of an ECO-BSC model based on business activities of non-profit organisations in order to improve the environmental management system in parallel with other systems of management. Using this approach we may obtain 4 models of BSC that include elements of the environmental management system for AD "Barska plovidba". Taking into account that implementation and evaluation need a long period of time in AD "Barska plovidba", the final choice will be based on ISO/IEC 14598 (Information technology - Software product evaluation) and ISO 9126 (Software engineering - Product quality) using the AHP method. Those standards are usually used for evaluation of the quality of software products and computer programs that serve in the organisation as support and factors for development. So, the AHP model will be based on evaluation criteria following the suggestions of the ISO 9126 standard and types of evaluation from two evaluation teams. Members of team 1 will be experts in BSC and environmental management systems who are not employed in the AD "Barska Plovidba" organisation. The members of team 2 will be managers of the AD "Barska Plovidba" organisation (including managers from the environmental department). Merging the results of the two previously created AHP models, one can obtain the most appropriate BSC that includes elements of the environmental management system. The chosen model will at the same time present a suggestion for the approach of including ecological metrics in a conventional BSC model for a firm that has at least one ECO strategic orientation.
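
    The AHP step mentioned above reduces to extracting a priority vector from a pairwise-comparison matrix (commonly the principal eigenvector) and checking its consistency ratio. The sketch below is a generic illustration with a made-up 3 x 3 judgement matrix, not the study's actual comparisons.

      import numpy as np

      def ahp_priorities(pairwise):
          """Priority weights and consistency ratio from an AHP pairwise-comparison matrix."""
          a = np.asarray(pairwise, dtype=float)
          n = a.shape[0]
          eigvals, eigvecs = np.linalg.eig(a)
          k = np.argmax(eigvals.real)                       # principal eigenvalue
          w = np.abs(eigvecs[:, k].real)
          w /= w.sum()                                      # normalized priority vector
          lam_max = eigvals.real[k]
          ci = (lam_max - n) / (n - 1)                      # consistency index
          ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)     # Saaty's random index (partial table)
          return w, ci / ri                                 # (weights, consistency ratio)

      # Illustrative judgement matrix comparing three candidate BSC models.
      judgements = [[1,   3,   5],
                    [1/3, 1,   2],
                    [1/5, 1/2, 1]]
      weights, cr = ahp_priorities(judgements)
      print("priorities:", np.round(weights, 3), " consistency ratio: %.3f" % cr)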

  17. Modeling cardiac β-adrenergic signaling with normalized-Hill differential equations: comparison with a biochemical model

    Directory of Open Access Journals (Sweden)

    Saucerman Jeffrey J

    2010-11-01

    Full Text Available Abstract. Background: New approaches are needed for large-scale predictive modeling of cellular signaling networks. While mass action and enzyme kinetic approaches require extensive biochemical data, current logic-based approaches are used primarily for qualitative predictions and have lacked direct quantitative comparison with biochemical models. Results: We developed a logic-based differential equation modeling approach for cell signaling networks based on normalized Hill activation/inhibition functions controlled by logical AND and OR operators to characterize signaling crosstalk. Using this approach, we modeled the cardiac β1-adrenergic signaling network, including 36 reactions and 25 species. Direct comparison of this model to an extensively characterized and validated biochemical model of the same network revealed that the new model gave reasonably accurate predictions of key network properties, even with default parameters. Normalized Hill functions improved quantitative predictions of global functional relationships compared with prior logic-based approaches. Comprehensive sensitivity analysis revealed the significant role of PKA negative feedback on upstream signaling and the importance of phosphodiesterases as key negative regulators of the network. The model was then extended to incorporate recently identified protein interaction data involving integrin-mediated mechanotransduction. Conclusions: The normalized-Hill differential equation modeling approach allows quantitative prediction of network functional relationships and dynamics, even in systems with limited biochemical data.
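
    To make the modeling idea concrete, here is a small sketch reflecting our reading of the normalized-Hill approach, with made-up parameter values: an activation function constrained so that f(0) = 0, f(EC50) = 0.5 and f(1) = 1, AND/OR crosstalk operators, and the resulting logic-based ODEs for a toy two-node pathway.

      import numpy as np
      from scipy.integrate import solve_ivp

      def normalized_hill(x, ec50=0.5, n=1.4):
          """Hill activation normalized so that f(0)=0, f(EC50)=0.5 and f(1)=1."""
          beta = (ec50**n - 1.0) / (2.0 * ec50**n - 1.0)
          k_n = beta - 1.0                       # K**n
          return beta * x**n / (k_n + x**n)

      def AND(a, b):                             # logic operators combining upstream activations
          return a * b

      def OR(a, b):
          return a + b - a * b

      # Toy pathway: input u activates node y1; y1 AND a constant co-factor activate y2.
      def rhs(t, y, u=0.8, tau=1.0):
          y1, y2 = y
          dy1 = (normalized_hill(u) - y1) / tau
          dy2 = (normalized_hill(AND(y1, 0.9)) - y2) / tau
          return [dy1, dy2]

      sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], max_step=0.05)
      print("steady-state fractional activations:", np.round(sol.y[:, -1], 3))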

  18. Modeling cardiac β-adrenergic signaling with normalized-Hill differential equations: comparison with a biochemical model.

    Science.gov (United States)

    Kraeutler, Matthew J; Soltis, Anthony R; Saucerman, Jeffrey J

    2010-11-18

    New approaches are needed for large-scale predictive modeling of cellular signaling networks. While mass action and enzyme kinetic approaches require extensive biochemical data, current logic-based approaches are used primarily for qualitative predictions and have lacked direct quantitative comparison with biochemical models. We developed a logic-based differential equation modeling approach for cell signaling networks based on normalized Hill activation/inhibition functions controlled by logical AND and OR operators to characterize signaling crosstalk. Using this approach, we modeled the cardiac β1-adrenergic signaling network, including 36 reactions and 25 species. Direct comparison of this model to an extensively characterized and validated biochemical model of the same network revealed that the new model gave reasonably accurate predictions of key network properties, even with default parameters. Normalized Hill functions improved quantitative predictions of global functional relationships compared with prior logic-based approaches. Comprehensive sensitivity analysis revealed the significant role of PKA negative feedback on upstream signaling and the importance of phosphodiesterases as key negative regulators of the network. The model was then extended to incorporate recently identified protein interaction data involving integrin-mediated mechanotransduction. The normalized-Hill differential equation modeling approach allows quantitative prediction of network functional relationships and dynamics, even in systems with limited biochemical data.

  19. Including operational data in QMRA model: development and impact of model inputs.

    Science.gov (United States)

    Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle

    2009-03-01

    A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations > detection limit (DL), and a uniform distribution for concentrations risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).

  20. Statistical modelling of Poisson/log-normal data

    International Nuclear Information System (INIS)

    Miller, G.

    2007-01-01

    In statistical data fitting, self-consistency is checked by examining the closeness of the quantity χ²/NDF to 1, where χ² is the sum of squares of data minus fit divided by standard deviation, and NDF is the number of data minus the number of fit parameters. In order to calculate χ² one needs an expression for the standard deviation. In this note several alternative expressions for the standard deviation of data distributed according to a Poisson/log-normal distribution are proposed and evaluated by Monte Carlo simulation. Two preferred alternatives are identified. The use of replicate data to obtain uncertainty is problematic for a small number of replicates. A method to correct this problem is proposed. The log-normal approximation is good for sufficiently positive data. A modification of the log-normal approximation is proposed, which allows it to be used to test the hypothesis that the true value is zero. (authors)
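
    A minimal Monte Carlo of the kind described, with one simple candidate expression for the standard deviation (our own illustration, not necessarily one of the expressions evaluated in the note): generate Poisson/log-normal data around a known rate, fit a constant, and check how close χ²/NDF stays to 1.

      import numpy as np

      rng = np.random.default_rng(0)
      lam, s, n_points, n_trials = 20.0, 0.3, 50, 2000   # Poisson rate, log-normal width, sizes

      ratios = []
      for _ in range(n_trials):
          scale = np.exp(s * rng.normal(size=n_points))          # log-normal fluctuation of the rate
          x = rng.poisson(lam * scale)                           # Poisson/log-normal data
          m = x.mean()                                           # one-parameter "fit": the mean
          # Candidate standard deviation: Poisson part plus log-normal part of the variance.
          sigma2 = m + m**2 * (np.exp(s**2) - 1.0)
          chi2 = np.sum((x - m) ** 2 / sigma2)
          ratios.append(chi2 / (n_points - 1))                   # NDF = N - number of fit parameters

      print("mean chi2/NDF over trials: %.3f" % np.mean(ratios)) # close to 1 -> self-consistent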

  1. An Elementary Test of the Normal 2PL Model against the Normal 3PL Alternative. Research Report. ETS RR-06-14

    Science.gov (United States)

    Haberman, Shelby J.

    2006-01-01

    A simple score test of the normal two-parameter logistic (2PL) model is presented that examines the potential attraction of the normal three-parameter logistic (3PL) model for use with a particular item. Application is made to data from a test from the Praxis™ series. Results from this example raise the question whether the normal 3PL model should…

  2. The formulation and estimation of a spatial skew-normal generalized ordered-response model.

    Science.gov (United States)

    2016-06-01

    This paper proposes a new spatial generalized ordered response model with skew-normal kernel error terms and an associated estimation method. It contributes to the spatial analysis field by allowing a flexible and parametric skew-normal distribut...

  3. Expanded rock blast modeling capabilities of DMC_BLAST, including buffer blasting

    Energy Technology Data Exchange (ETDEWEB)

    Preece, D.S. [Sandia National Labs., Albuquerque, NM (United States); Tidman, J.P.; Chung, S.H. [ICI Explosives (Canada)

    1996-12-31

    A discrete element computer program named DMC_BLAST (Distinct Motion Code) has been under development since 1987 for modeling rock blasting. This program employs explicit time integration and uses spherical or cylindrical elements that are represented as circles in 2-D. DMC_BLAST calculations compare favorably with data from actual bench blasts. The blast modeling capabilities of DMC_BLAST have been expanded to include independently dipping geologic layers, top surface, bottom surface and pit floor. The pit can also now be defined using coordinates based on the toe of the bench. A method for modeling decked explosives has been developed which allows accurate treatment of the inert materials (stemming) in the explosive column and approximate treatment of different explosives in the same blasthole. A DMC_BLAST user can specify decking through a specific geologic layer with either inert material or a different explosive. Another new feature of DMC_BLAST is specification of an uplift angle which is the angle between the normal to the blasthole and a vector defining the direction of explosive loading on particles adjacent to the blasthole. A buffer (choke) blast capability has been added for situations where previously blasted material is adjacent to the free face of the bench preventing any significant lateral motion during the blast.

  4. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    International Nuclear Information System (INIS)

    Baidillah, Marlin R; Takei, Masahiro

    2017-01-01

    A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions has been developed. The exponential model normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived by using an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system condition. The exponential model normalization was applied to two-dimensional low and high contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for low and high contrast dielectric distributions. (paper)

  5. Complete Loss and Thermal Model of Power Semiconductors Including Device Rating Information

    DEFF Research Database (Denmark)

    Ma, Ke; Bahman, Amir Sajjad; Beczkowski, Szymon

    2015-01-01

    models, only the electrical loadings are focused and treated as design variables, while the device rating is normally pre-defined by experience with limited design flexibility. Consequently, a more complete loss and thermal model is proposed in this paper, which takes into account not only the electrical...

  6. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  7. On application of optimal control to SEIR normalized models: Pros and cons.

    Science.gov (United States)

    de Pinho, Maria do Rosario; Nogueira, Filipa Nunes

    2017-02-01

    In this work we normalize a SEIR model that incorporates exponential natural birth and death, as well as disease-caused death. We use optimal control to control by vaccination the spread of a generic infectious disease described by a normalized model with L1 cost. We discuss the pros and cons of SEIR normalized models when compared with classical models when optimal control with L1 costs are considered. Our discussion highlights the role of the cost. Additionally, we partially validate our numerical solutions for our optimal control problem with normalized models using the Maximum Principle.
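
    As a rough illustration of the kind of system involved (a simplified constant-population SEIR in population fractions with a vaccination control u(t) and an L1-type control cost, with made-up rates; this is not the exact normalized system with disease-induced death analysed in the paper):

      import numpy as np
      from scipy.integrate import solve_ivp, trapezoid

      beta, sigma, gamma, mu = 0.9, 0.25, 0.1, 0.01   # transmission, incubation, recovery, birth/death

      def u(t):
          return 0.05 if t < 50 else 0.0               # simple piecewise-constant vaccination effort

      def seir(t, y):
          s, e, i, r = y
          ds = mu - beta * s * i - mu * s - u(t) * s
          de = beta * s * i - (sigma + mu) * e
          di = sigma * e - (gamma + mu) * i
          dr = gamma * i + u(t) * s - mu * r
          return [ds, de, di, dr]

      sol = solve_ivp(seir, (0.0, 200.0), [0.97, 0.02, 0.01, 0.0], max_step=0.5)
      print("peak infected fraction: %.3f" % sol.y[2].max())
      print("L1-type control cost  : %.3f" % trapezoid([u(t) for t in sol.t], sol.t))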

  8. Modelling whole blood oxygen equilibrium: comparison of nine different models fitted to normal human data.

    Science.gov (United States)

    O'Riordan, J F; Goldstick, T K; Vida, L N; Honig, G R; Ernest, J T

    1985-01-01

    The ability of nine different models, prominent in the literature, to meaningfully characterize the oxygen-hemoglobin equilibrium curve (OHEC) of normal individuals was examined. Previously reported data (N = 33), obtained using the DCA-1 (Radiometer, Copenhagen), and new data (N = 8), obtained using the Hemox-Analyzer (TCS, Southampton, PA), from blood samples of normal, non-smoking volunteers were used and these devices were found to give statistically similar results. The OHECs were digitized and fitted to the models using least-squares techniques developed in this laboratory. The "goodness-of-fit" was determined by the root-mean-squared (RMS) error, the number of parameters, and the parameter redundancy, i.e., correlation between the parameters. The best RMS error did not necessarily indicate the best model. Most literature models consist of ratios of similar-order polynomials. These showed considerable parameter redundancy which made the curve fitting difficult. The best fits gave RMS errors as low as 0.2% saturation. The Hill model gave a good characterization over the saturation range 20%-98% with RMS errors of about 0.6% saturation. On the other hand, good characterizations over the entire range were given by several other models. The relative advantages and disadvantages of each model have been compared as well as the difficulties in fitting several of the models. No single model is best under all circumstances. The best model depends upon the particular circumstances for which it is to be utilized.
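
    The simplest of the candidate models, the Hill model, relates saturation S to oxygen tension P through S = P^n / (P50^n + P^n). A least-squares fit of its two parameters to digitized OHEC points might look like the sketch below; synthetic data stand in for the digitized curves, and the P50 and n values are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def hill(p_o2, p50, n):
          """Hill model of the oxygen-hemoglobin equilibrium curve (saturation as a fraction)."""
          return p_o2**n / (p50**n + p_o2**n)

      # Synthetic stand-in for a digitized OHEC: P50 ~ 26.8 mmHg, n ~ 2.7, plus noise.
      rng = np.random.default_rng(3)
      p = np.linspace(5.0, 120.0, 40)
      s_obs = hill(p, 26.8, 2.7) + rng.normal(0.0, 0.005, p.size)

      (p50_fit, n_fit), _ = curve_fit(hill, p, s_obs, p0=[25.0, 2.0])
      rms = np.sqrt(np.mean((s_obs - hill(p, p50_fit, n_fit)) ** 2))
      print("P50 = %.1f mmHg, n = %.2f, RMS error = %.2f%% saturation"
            % (p50_fit, n_fit, 100 * rms))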

  9. Numerical Modeling of Normal-Mode Oscillations in Planetary Atmospheres: Application to Saturn and Titan

    Science.gov (United States)

    Friedson, Andrew James; Ding, Leon

    2015-11-01

    We have developed a numerical model to calculate the frequencies and eigenfunctions of adiabatic, non-radial normal-mode oscillations in the gas giants and Titan. The model solves the linearized momentum, energy, and continuity equations for the perturbation displacement, pressure, and density fields and solves Poisson’s equation for the perturbation gravitational potential. The response to effects associated with planetary rotation, including the Coriolis force, centrifugal force, and deformation of the equilibrium structure, is calculated numerically. This provides the capability to accurately compute the influence of rotation on the modes, even in the limit where mode frequency approaches the rotation rate, when analytical estimates based on functional perturbation analysis become inaccurate. This aspect of the model makes it ideal for studying the potential role of low-frequency modes for driving spiral density waves in the C ring that possess relatively low pattern speeds (Hedman, M.M and P.D. Nicholson, MNRAS 444, 1369-1388). In addition, the model can be used to explore the effect of internal differential rotation on the eigenfrequencies. We will (1) present examples of applying the model to calculate the properties of normal modes in Saturn and their relationship to observed spiral density waves in the C ring, and (2) discuss how the model is used to examine the response of the superrotating atmosphere of Titan to the gravitational tide exerted by Saturn. This research was supported by a grant from the NASA Planetary Atmosphere Program.

  10. Normal Mode Derived Models of the Physical Properties of Earth's Outer Core

    Science.gov (United States)

    Irving, J. C. E.; Cottaar, S.; Lekic, V.; Wu, W.

    2017-12-01

    Earth's outer core, the largest reservoir of metal in our planet, is comprised of an iron alloy of an uncertain composition. Its dynamical behaviour is responsible for the generation of Earth's magnetic field, with convection driven both by thermal and chemical buoyancy fluxes. Existing models of the seismic velocity and density of the outer core exhibit some variation, and there are only a small number of models which aim to represent the outer core's density. It is therefore important that we develop a better understanding of the physical properties of the outer core. Though most of the outer core is likely to be well mixed, it is possible that the uppermost outer core is stably stratified: it may be enriched in light elements released during the growth of the solid, iron-enriched inner core; by elements dissolved from the mantle into the outer core; or by exsolution of compounds previously dissolved in the liquid metal which will eventually be swept into the mantle. The stratified layer may host MAC or Rossby waves and it could impede communication between the chemically differentiated mantle and outer core, including screening out some of the geodynamo's signal. We use normal mode center frequencies to estimate the physical properties of the outer core in a Bayesian framework. We estimate the mineral physical parameters needed to best produce velocity and density models of the outer core which are consistent with the normal mode observations. We require that our models satisfy realistic physical constraints. We create models of the outer core with and without a distinct uppermost layer and assess the importance of this region. Our normal mode-derived models are compared with observations of body waves which travel through the outer core. In particular, we consider SmKS waves which are especially sensitive to the uppermost outer core and are therefore an important way to understand the robustness of our models.

  11. Latent Partially Ordered Classification Models and Normal Mixtures

    Science.gov (United States)

    Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith

    2013-01-01

    Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…

  12. Modelling of capacitance and threshold voltage for ultrathin normally ...

    Indian Academy of Sciences (India)

    2016-12-02

    Dec 2, 2016 ... The MATLAB-based simulation results of the developed model are compared graphically with the experimental results from the literature presented in §3. Finally the conclusion has been drawn in §4. 2. Model development. The basic expression for sheet charge concentration in the HEMT device is given by ...

  13. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
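
    The practical trick described above is to "explode" each survival record into one row per piece of the piecewise-constant baseline hazard and then to fit a Poisson model with a log-exposure offset. The sketch below shows only that data expansion plus a fixed-effects Poisson fit in Python/statsmodels; the log-normal frailty term (a random intercept per cluster) and the %PCFrailty SAS macro itself are not reproduced, and all data are invented.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Toy survival data: one row per subject (illustrative values only).
        df = pd.DataFrame({"time": [2.3, 5.1, 7.8, 1.2, 8.5, 4.4],
                           "event": [1, 0, 1, 1, 0, 1],
                           "x": [0.2, -1.0, 0.7, 1.5, -0.3, 0.1]})
        cuts = np.array([0.0, 3.0, 6.0, 9.0])  # boundaries of the baseline-hazard pieces

        rows = []
        for _, r in df.iterrows():
            for j in range(len(cuts) - 1):
                start, stop = cuts[j], cuts[j + 1]
                if r["time"] <= start:
                    break
                exposure = min(r["time"], stop) - start            # time at risk in this piece
                died_here = int(r["event"] == 1 and start < r["time"] <= stop)
                rows.append({"piece": j, "x": r["x"], "y": died_here, "exposure": exposure})
        long = pd.DataFrame(rows)

        # Poisson regression with log-exposure offset; piece dummies form the baseline hazard.
        X = pd.get_dummies(long["piece"], prefix="piece").astype(float).join(long["x"])
        fit = sm.GLM(long["y"], X, family=sm.families.Poisson(),
                     offset=np.log(long["exposure"])).fit()
        print(fit.params)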

  14. Modeling consonant perception in normal-hearing listeners

    DEFF Research Database (Denmark)

    Zaar, Johannes; Jørgensen, Søren; Dau, Torsten

    2014-01-01

    perception data: (i) an audibility-based approach, which corresponds to the Articulation Index (AI), and (ii) a modulation-masking based approach, as reflected in the speech-based Envelope Power Spectrum Model (sEPSM). For both models, the internal representations of the same stimuli as used...... impairment affect speech perception, it is advantageous to study the impact of these factors on the perception of the fundamental building blocks of speech. Non-sense syllables consisting of consonants and vowels have thus typically been presented to listeners in masking noise at various signal...... in the experiment were calculated and fed into a template-matching back end. Using the experimental data as a reference, the resulting predictions of the two modeling approaches were compared and their respective suitability for the prediction of consonant perception was evaluated....

  15. Integrating normal and abnormal personality structure: the Five-Factor Model.

    Science.gov (United States)

    Widiger, Thomas A; Costa, Paul T

    2012-12-01

    It is evident that the conceptualization, diagnosis, and classification of personality disorder (PD) is shifting toward a dimensional model. The purpose of this special issue of Journal of Personality is to indicate how the Five-Factor Model (FFM) can provide a useful and meaningful basis for an integration of the description and classification of both normal and abnormal personality functioning. This introductory article discusses its empirical support and the potential advantages of understanding personality disorders, including those included within the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders and likely future PDs from the dimensional perspective of the FFM. © 2012 The Authors. Journal of Personality © 2012, Wiley Periodicals, Inc.

  16. Food addiction spectrum: a theoretical model from normality to eating and overeating disorders.

    Science.gov (United States)

    Piccinni, Armando; Marazziti, Donatella; Vanelli, Federica; Franceschini, Caterina; Baroni, Stefano; Costanzo, Davide; Cremone, Ivan Mirko; Veltri, Antonello; Dell'Osso, Liliana

    2015-01-01

    The authors comment on the recently proposed food addiction spectrum that represents a theoretical model to understand the continuum between several conditions ranging from normality to pathological states, including eating disorders and obesity, as well as why some individuals show a peculiar attachment to food that can become an addiction. Further, they review the possible neurobiological underpinnings of these conditions that include dopaminergic neurotransmission and circuits that have long been implicated in drug addiction. The aim of this article is also to stimulate a debate regarding the possible model of a food (or eating) addiction spectrum that may be helpful in the search for novel therapeutic approaches to different pathological states related to disturbed feeding or overeating.

  17. Modelling of capacitance and threshold voltage for ultrathin normally ...

    Indian Academy of Sciences (India)

    A compact quantitative model based on oxide semiconductor interface density of states (DOS) is proposed for Al 0.25 Ga 0.75 N/GaN metal oxide semiconductor high electron mobility transistor (MOSHEMT). Mathematical expressions for surface potential, sheet charge concentration, gate capacitance and threshold voltage ...

  18. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V. K; Stentoft, Lars

    2015-01-01

    We propose an asymmetric GARCH in mean mixture model and provide a feasible method for option pricing within this general framework by deriving the appropriate risk neutral dynamics. We forecast the out-of-sample prices of a large sample of options on the S&P 500 index from January 2006 to December...

  19. Individual loss reserving with the Multivariate Skew Normal model

    NARCIS (Netherlands)

    Pigeon, M.; Antonio, K.; Denuit, M.

    2011-01-01

    In general insurance, the evaluation of future cash flows and solvency capital has become increasingly important. To assist in this process, the present paper proposes an individual discrete-time loss reserving model describing the occurrence, the reporting delay, the time to the first payment, and

  20. Presenting Thin Media Models Affects Women's Choice of Diet or Normal Snacks

    Science.gov (United States)

    Krahe, Barbara; Krause, Christina

    2010-01-01

    Our study explored the influence of thin- versus normal-size media models and of self-reported restrained eating behavior on women's observed snacking behavior. Fifty female undergraduates saw a set of advertisements for beauty products showing either thin or computer-altered normal-size female models, allegedly as part of a study on effective…

  1. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  2. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  3. Modeling consonant perception in normal-hearing listeners

    DEFF Research Database (Denmark)

    Zaar, Johannes; Jørgensen, Søren; Dau, Torsten

    2014-01-01

    impairment affect speech perception, it is advantageous to study the impact of these factors on the perception of the fundamental building blocks of speech. Non-sense syllables consisting of consonants and vowels have thus typically been presented to listeners in masking noise at various signal...... in the experiment were calculated and fed into a template-matching back end. Using the experimental data as a reference, the resulting predictions of the two modeling approaches were compared and their respective suitability for the prediction of consonant perception was evaluated....

  4. A finite element model of the lower limb during stance phase of gait cycle including the muscle forces.

    Science.gov (United States)

    Diffo Kaze, Arnaud; Maas, Stefan; Arnoux, Pierre-Jean; Wolf, Claude; Pape, Dietrich

    2017-12-07

    Results of finite element (FE) analyses can give insight into musculoskeletal diseases if physiological boundary conditions, which include the muscle forces during specific activities of daily life, are considered in the FE modelling. So far, many simplifications of the boundary conditions are currently made. This study presents an approach for FE modelling of the lower limb for which muscle forces were included. The stance phase of normal gait was simulated. Muscle forces were calculated using a musculoskeletal rigid body (RB) model of the human body, and were subsequently applied to a FE model of the lower limb. It was shown that the inertial forces are negligible during the stance phase of normal gait. The contact surfaces between the parts within the knee were modelled as bonded. Weak springs were attached to the distal tibia for numerical reasons. Hip joint reaction forces from the RB model and those from the FE model were similar in magnitude with relative differences less than 16%. The forces of the weak spring were negligible compared to the applied muscle forces. The maximal strain was 0.23% in the proximal region of the femoral diaphysis and 1.7% in the contact zone between the tibia and the fibula. The presented approach based on FE modelling by including muscle forces from inverse dynamic analysis of musculoskeletal RB model can be used to perform analyses of the lower limb with very realistic boundary conditions. In the present form, this model can be used to better understand the loading, stresses and strains of bones in the knee area and hence to analyse osteotomy fixation devices.

  5. A personalized microRNA microarray normalization method using a logistic regression model.

    Science.gov (United States)

    Wang, Bin; Wang, Xiao-Feng; Howell, Paul; Qian, Xuemin; Huang, Kun; Riker, Adam I; Ju, Jingfang; Xi, Yaguang

    2010-01-15

    MicroRNA (miRNA) is a set of newly discovered non-coding small RNA molecules. Its significant effects have contributed to a number of critical biological events including cell proliferation, apoptosis development, as well as tumorigenesis. High-dimensional genomic discovery platforms (e.g. microarray) have been employed to evaluate the important roles of miRNAs by analyzing their expression profiling. However, because of the small total number of miRNAs and the absence of well-known endogenous controls, the traditional normalization methods for messenger RNA (mRNA) profiling analysis could not offer a suitable solution for miRNA analysis. The need for the establishment of new adaptive methods has come to the forefront. Locked nucleic acid (LNA)-based miRNA array was employed to profile miRNAs using colorectal cancer cell lines under different treatments. The expression pattern of overall miRNA profiling was pre-evaluated by a panel of miRNAs using Taqman-based quantitative real-time polymerase chain reaction (qRT-PCR) miRNA assays. A logistic regression model was built based on qRT-PCR results and then applied to the normalization of miRNA array data. The expression levels of 20 additional miRNAs selected from the normalized list were post-validated. Compared with other popularly used normalization methods, the logistic regression model efficiently calibrates the variance across arrays and improves miRNA microarray discovery accuracy. Datasets and R package are available at http://gauss.usouthal.edu/publ/logit/.

  6. On application of optimal control to SEIR normalized models: Pros and cons

    OpenAIRE

    de Pinho, Maria Rosário; Nogueira, Filipa Nunes

    2017-01-01

    In this work we normalize a SEIR model that incorporates exponential natural birth and death, as well as disease-caused death. We use optimal control to control by vaccination the spread of a generic infectious disease described by a normalized model with L1 cost. We discuss the pros and cons of SEIR normalized models when compared with classical models when optimal control with L1 costs are considered. Our discussion highlights the role of the cost. Additionally, we partially validate our nu...

  7. The Lag Model, a Turbulence Model for Wall Bounded Flows Including Separation

    Science.gov (United States)

    Olsen, Michael E.; Coakley, Thomas J.; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A new class of turbulence model is described for wall-bounded, high Reynolds number flows. A specific turbulence model is demonstrated, with results for favorable and adverse pressure gradient flowfields. Separation predictions are as good as or better than those of either the Spalart-Allmaras or SST models, do not require specification of wall distance, and have similar or reduced computational effort compared with these models.

  8. BioModels: expanding horizons to include more modelling approaches and formats.

    Science.gov (United States)

    Glont, Mihai; Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Malik-Sheriff, Rahuman S; Chelliah, Vijayalakshmi; Le Novère, Nicolas; Hermjakob, Henning

    2018-01-04

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. A hierarchical model of normal and abnormal personality up to seven factors.

    Science.gov (United States)

    Gutiérrez, Fernando; Vall, Gemma; Peri, Josep M; Gárriz, Miguel; Garrido, Juan Miguel

    2014-02-01

    Despite general support for dimensional models of personality disorder, it is currently unclear which, and how many, dimensions a taxonomy of this kind should include. In an attempt to obtain an empirically-based, comprehensive, and usable structure of personality, three instruments - The Temperament and Character Inventory-Revised (TCI-R), the Personality Diagnostic Questionnaire-4+(PDQ-4+), and the Dimensional Assessment of Personality Pathology-Basic Questionnaire (DAPP-BQ) - were administered to 960 outpatients and their scales factor-analyzed following a bass ackwards approach. The resulting hierarchical structure was interpretable and replicable across gender and methods up to seven factors. This structure highlights coincidences among current dimensional models and clarifies their apparent divergences, and thus helps to delineate the unified taxonomy of normal and abnormal personality that the field requires. © 2014.

  10. Numerical modeling of normal turbulent plane jet impingement on solid wall

    Energy Technology Data Exchange (ETDEWEB)

    Guo, C.Y.; Maxwell, W.H.C.

    1984-10-01

    Attention is given to a numerical turbulence model for the impingement of a well developed normal plane jet on a solid wall, by means of which it is possible to express different jet impingement geometries in terms of different boundary conditions. Examples of these jets include those issuing from VTOL aircraft, chemical combustors, etc. The two-equation, turbulent kinetic energy-turbulent dissipation rate model is combined with the continuity equation and the transport equation of vorticity, using an iterative finite difference technique in the computations. Peak levels of turbulent kinetic energy occur not only in the impingement zone, but also in the intermingling zone between the edges of the free jet and the wall jet. 20 references.

  11. Estimator of a non-Gaussian parameter in multiplicative log-normal models

    Science.gov (United States)

    Kiyono, Ken; Struzik, Zbigniew R.; Yamamoto, Yoshiharu

    2007-10-01

    We study non-Gaussian probability density functions (PDF’s) of multiplicative log-normal models in which the multiplication of Gaussian and log-normally distributed random variables is considered. To describe the PDF of the velocity difference between two points in fully developed turbulent flows, the non-Gaussian PDF model was originally introduced by Castaing [Physica D 46, 177 (1990)]. In practical applications, an experimental PDF is approximated with Castaing’s model by tuning a single non-Gaussian parameter, which corresponds to the logarithmic variance of the log-normally distributed variable in the model. In this paper, we propose an estimator of the non-Gaussian parameter based on the qth-order absolute moments. To test the estimator, we introduce two types of stochastic processes within the framework of the multiplicative log-normal model. One is a sequence of independent and identically distributed random variables. The other is a log-normal cascade-type multiplicative process. By analyzing the numerically generated time series, we demonstrate that the estimator can reliably determine the theoretical value of the non-Gaussian parameter. Scale dependence of the non-Gaussian parameter in multiplicative log-normal models is also studied, both analytically and numerically. As an application of the estimator, we demonstrate that non-Gaussian PDF’s observed in the S&P500 index fluctuations are well described by the multiplicative log-normal model.
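
    To make the moment idea concrete, one scale-free estimator follows from the identity E|x|^(2q) / (E|x|^q)^2 = (c_2q / c_q^2) * exp(q^2 * lambda^2) for x = xi * exp(omega), with xi Gaussian and omega ~ N(0, lambda^2), where c_q is the q-th absolute moment of a standard normal. The sketch below implements and numerically checks that estimator; it is one possible construction under these assumptions and is not necessarily the exact estimator derived in the paper.

        import numpy as np
        from scipy.special import gamma

        def abs_moment_std_normal(q):
            """E|Z|^q for a standard normal Z."""
            return 2 ** (q / 2) * gamma((q + 1) / 2) / np.sqrt(np.pi)

        def estimate_lambda2(x, q=1.0):
            """Moment-ratio estimator of lambda^2 for x = xi * exp(omega), omega ~ N(0, lambda^2)."""
            m_q = np.mean(np.abs(x) ** q)
            m_2q = np.mean(np.abs(x) ** (2 * q))
            c_q, c_2q = abs_moment_std_normal(q), abs_moment_std_normal(2 * q)
            return (np.log(m_2q / m_q**2) - np.log(c_2q / c_q**2)) / q**2

        rng = np.random.default_rng(1)
        lam2_true = 0.4
        n = 200_000
        x = rng.normal(size=n) * np.exp(rng.normal(scale=np.sqrt(lam2_true), size=n))
        print(estimate_lambda2(x, q=1.0))  # should be close to lam2_true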

  12. A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

    Directory of Open Access Journals (Sweden)

    Michael Pfaffermayr

    2014-10-01

    The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test on whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.

  13. Comprehensive processing of high-throughput small RNA sequencing data including quality checking, normalization, and differential expression analysis using the UEA sRNA Workbench.

    Science.gov (United States)

    Beckers, Matthew; Mohorianu, Irina; Stocks, Matthew; Applegate, Christopher; Dalmay, Tamas; Moulton, Vincent

    2017-06-01

    Recently, high-throughput sequencing (HTS) has revealed compelling details about the small RNA (sRNA) population in eukaryotes. These 20 to 25 nt noncoding RNAs can influence gene expression by acting as guides for the sequence-specific regulatory mechanism known as RNA silencing. The increase in sequencing depth and number of samples per project enables a better understanding of the role sRNAs play by facilitating the study of expression patterns. However, the intricacy of the biological hypotheses coupled with a lack of appropriate tools often leads to inadequate mining of the available data and thus, an incomplete description of the biological mechanisms involved. To enable a comprehensive study of differential expression in sRNA data sets, we present a new interactive pipeline that guides researchers through the various stages of data preprocessing and analysis. This includes various tools, some of which we specifically developed for sRNA analysis, for quality checking and normalization of sRNA samples as well as tools for the detection of differentially expressed sRNAs and identification of the resulting expression patterns. The pipeline is available within the UEA sRNA Workbench, a user-friendly software package for the processing of sRNA data sets. We demonstrate the use of the pipeline on a H. sapiens data set; additional examples on a B. terrestris data set and on an A. thaliana data set are described in the Supplemental Information. A comparison with existing approaches is also included, which exemplifies some of the issues that need to be addressed for sRNA analysis and how the new pipeline may be used to do this. © 2017 Beckers et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  14. How reassuring is a normal breast ultrasound in assessment of a screen-detected mammographic abnormality? A review of interval cancers after assessment that included ultrasound evaluation

    International Nuclear Information System (INIS)

    Bennett, M.L.; Welman, C.J.; Celliers, L.M.

    2011-01-01

    Aim: To review factors resulting in a false-negative outcome or delayed cancer diagnosis in women recalled for further evaluation, including ultrasound, after an abnormal screening mammogram. Materials and methods: Of 646,692 screening mammograms performed between 1 January 1995 and 31 December 2004, 34,533 women were recalled for further assessment. Nine hundred and sixty-four interval cancers were reported in this period. Forty-six of these women had been recalled for further assessment, which specifically included ultrasound evaluation in the preceding 24 months, and therefore, met the inclusion criteria for this study. Screening mammograms, further mammographic views, ultrasound scans, clinical findings, and histopathology results were retrospectively reviewed by two consultant breast radiologists. Results: The interval cancer developed in the contralateral breast (n = 9), ipsilateral breast, but different site (n = 6), and ipsilateral breast at the same site (n = 31) as the abnormality for which they had recently been recalled. In the latter group, 10 were retrospectively classified as a false-negative outcome, nine had a delay in obtaining a biopsy, and 12 had a delay due to a non-diagnostic initial biopsy. Various factors relating to these outcomes are discussed. Conclusion: Out of 34,533 women who attended for an assessment visit and the 46 women who subsequently developed an interval breast cancer, 15 were true interval cancers, 10 had a false-negative assessment outcome, and 21 had a delay to cancer diagnosis on the basis of a number of factors. When there is discrepancy between the imaging and histopathology results, a repeat biopsy rather than early follow-up would have avoided a delay in some cases. A normal ultrasound examination should not deter the radiologist from proceeding to stereotactic biopsy, if the index mammographic lesion is suspicious of malignancy.

  15. Exploring the experiences of older Chinese adults with comorbidities including diabetes: surmounting these challenges in order to live a normal life

    Science.gov (United States)

    Ho, Hsiu-Yu; Chen, Mei-Hui

    2018-01-01

    Background Many people with diabetes have comorbidities, even multimorbidities, which have a far-reaching impact on the older adults, their family, and society. However, little is known of the experience of older adults living with comorbidities that include diabetes. Aim The aim of this study was to explore the experience of older adults living with comorbidities including diabetes. Methods A qualitative approach was employed. Data were collected from a selected field of 12 patients with diabetes mellitus in a medical center in northern Taiwan. The data were analyzed by Colaizzi’s phenomenological methodology, and four criteria of Lincoln and Guba were used to evaluate the rigor of the study. Results The following 5 themes and 14 subthemes were derived: 1) expecting to heal or reduce the symptoms of the disease (trying to alleviate the distress of symptoms and trusting in health practitioners combining the use of Chinese and Western medicines); 2) comparing complex medical treatments (differences in physician practices and presentation, conditionally adhering to medical treatment, and partnering with medical professionals); 3) inconsistent information (inconsistent health information and inconsistent medical advice); 4) impacting on daily life (activities are limited and hobbies cannot be maintained and psychological distress); and 5) weighing the pros and cons (taking the initiative to deal with issues, limiting activity, adjusting mental outlook and pace of life, developing strategies for individual health regimens, and seeking support). Surmounting these challenges in order to live a normal life was explored. Conclusion This study found that the experience of older adults living with comorbidities including diabetes was similar to that of a single disease, but the extent was greater than a single disease. The biggest difference is that the elderly think that their most serious problem is not diabetes, but rather, the comorbidities causing life limitations

  16. Application of a Brittle Damage Model to Normal Plate-on-Plate Impact

    National Research Council Canada - National Science Library

    Raftenberg, Martin N

    2005-01-01

    A brittle damage model presented by Grinfeld and Wright of the U.S. Army Research Laboratory was implemented in the LS-DYNA finite element code and applied to the simulation of normal plate-on-plate impact...

  17. Currents, HF Radio-derived, SF Bay Outlet, Normal Model, Zonal, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The data is the zonal component of ocean surface currents derived from High Frequency Radio-derived measurements, with missing values filled in by a normal model....

  18. Currents, HF Radio-derived, Ano Nuevo, Normal Model, Zonal, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The data is the zonal component of ocean surface currents derived from High Frequency Radio-derived measurements, with missing values filled in by a normal model....

  19. Currents, HF Radio-derived, Monterey Bay, Normal Model, Zonal, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The data is the zonal component of ocean surface currents derived from High Frequency Radio-derived measurements, with missing values filled in by a normal model....

  20. Modelling Mediterranean agro-ecosystems by including agricultural trees in the LPJmL model

    Science.gov (United States)

    Fader, M.; von Bloh, W.; Shi, S.; Bondeau, A.; Cramer, W.

    2015-11-01

    In the Mediterranean region, climate and land use change are expected to impact on natural and agricultural ecosystems by warming, reduced rainfall, direct degradation of ecosystems and biodiversity loss. Human population growth and socioeconomic changes, notably on the eastern and southern shores, will require increases in food production and put additional pressure on agro-ecosystems and water resources. Coping with these challenges requires informed decisions that, in turn, require assessments by means of a comprehensive agro-ecosystem and hydrological model. This study presents the inclusion of 10 Mediterranean agricultural plants, mainly perennial crops, in an agro-ecosystem model (Lund-Potsdam-Jena managed Land - LPJmL): nut trees, date palms, citrus trees, orchards, olive trees, grapes, cotton, potatoes, vegetables and fodder grasses. The model was successfully tested in three model outputs: agricultural yields, irrigation requirements and soil carbon density. With the development presented in this study, LPJmL is now able to simulate in good detail and mechanistically the functioning of Mediterranean agriculture with a comprehensive representation of ecophysiological processes for all vegetation types (natural and agricultural) and in a consistent framework that produces estimates of carbon, agricultural and hydrological variables for the entire Mediterranean basin. This development paves the way for further model extensions aiming at the representation of alternative agro-ecosystems (e.g. agroforestry), and opens the door for a large number of applications in the Mediterranean region, for example assessments of the consequences of land use transitions, the influence of management practices and climate change impacts.

  1. TWNFI--a transductive neuro-fuzzy inference system with weighted data normalization for personalized modeling.

    Science.gov (United States)

    Song, Qun; Kasabov, Nikola

    2006-12-01

    This paper introduces a novel transductive neuro-fuzzy inference model with weighted data normalization (TWNFI). In transductive systems a local model is developed for every new input vector, based on a certain number of data that are selected from the training data set and the closest to this vector. The weighted data normalization method (WDN) optimizes the data normalization ranges of the input variables for the model. A steepest descent algorithm is used for training the TWNFI models. The TWNFI is compared with some other widely used connectionist systems on two case study problems: Mackey-Glass time series prediction and a real medical decision support problem of estimating the level of renal function of a patient. The TWNFI method not only results in a "personalized" model with a better accuracy of prediction for a single new sample, but also depicts the most significant input variables (features) for the model that may be used for a personalized medicine.
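
    The core transductive idea above, building a fresh local model around every new input vector, can be sketched without the fuzzy-inference and weighted-data-normalization machinery: select the training points closest to the query and fit a small model on just those. The sketch below uses a plain k-nearest-neighbour selection and a local linear least-squares fit; TWNFI itself additionally learns per-variable normalization weights and fuzzy rules by steepest descent, which is not reproduced here.

        import numpy as np

        def transductive_predict(X_train, y_train, x_new, k=20):
            """Fit a local linear model on the k training points closest to x_new."""
            dist = np.linalg.norm(X_train - x_new, axis=1)
            idx = np.argsort(dist)[:k]                        # neighbours of this query only
            Xk = np.hstack([np.ones((k, 1)), X_train[idx]])   # local design matrix with intercept
            w, *_ = np.linalg.lstsq(Xk, y_train[idx], rcond=None)
            return np.hstack([1.0, x_new]) @ w

        rng = np.random.default_rng(0)
        X = rng.uniform(-2.0, 2.0, size=(500, 2))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0.0, 0.05, 500)
        print(transductive_predict(X, y, np.array([0.3, -1.0])))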

  2. Mathematical Model of Thyristor Inverter Including a Series-parallel Resonant Circuit

    Directory of Open Access Journals (Sweden)

    Miroslaw Luft

    2008-01-01

    The article presents a mathematical model of thyristor inverter including a series-parallel resonant circuit with the aid of state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  3. Mathematical model of thyristor inverter including a series-parallel resonant circuit

    OpenAIRE

    Luft, M.; Szychta, E.

    2008-01-01

    The article presents a mathematical model of thyristor inverter including a series-parallel resonant circuit with the aid of state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  4. Mathematical Model of Thyristor Inverter Including a Series-parallel Resonant Circuit

    OpenAIRE

    Miroslaw Luft; Elzbieta Szychta

    2008-01-01

    The article presents a mathematical model of thyristor inverter including a series-parallel resonant circuit with the aid of state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  5. Modeling of Pem Fuel Cell Systems Including Controls and Reforming Effects for Hybrid Automotive Applications

    National Research Council Canada - National Science Library

    Boettner, Daisie

    2001-01-01

    .... This study develops models for a stand-alone Proton Exchange Membrane (PEM) fuel cell stack, a direct-hydrogen fuel cell system including auxiliaries, and a methanol reforming fuel cell system for integration into a vehicle performance simulator...

  6. Simplification and Validation of a Spectral-Tensor Model for Turbulence Including Atmospheric Stability

    Science.gov (United States)

    Chougule, Abhijit; Mann, Jakob; Kelly, Mark; Larsen, Gunner C.

    2018-02-01

    A spectral-tensor model of non-neutral, atmospheric-boundary-layer turbulence is evaluated using Eulerian statistics from single-point measurements of the wind speed and temperature at heights up to 100 m, assuming constant vertical gradients of mean wind speed and temperature. The model has been previously described in terms of the dissipation rate ε, the length scale of energy-containing eddies L, a turbulence anisotropy parameter Γ, the Richardson number Ri, and the normalized rate of destruction of temperature variance η_θ ≡ ε_θ/ε. Here, the latter two parameters are collapsed into a single atmospheric stability parameter z/L using Monin-Obukhov similarity theory, where z is the height above the Earth's surface, and L is the Obukhov length corresponding to Ri, η_θ. Model outputs of the one-dimensional velocity spectra, as well as cospectra of the streamwise and/or vertical velocity components, and/or temperature, and cross-spectra for the spatial separation of all three velocity components and temperature, are compared with measurements. As a function of the four model parameters, spectra and cospectra are reproduced quite well, but horizontal temperature fluxes are slightly underestimated in stable conditions. In moderately unstable stratification, our model reproduces spectra only up to a scale of ~1 km. The model also overestimates coherences for vertical separations, but is less severe in unstable than in stable cases.

  7. Atmosphere-soil-vegetation model including CO2 exchange processes: SOLVEG2

    International Nuclear Information System (INIS)

    Nagai, Haruyasu

    2004-11-01

    A new atmosphere-soil-vegetation model named SOLVEG2 (SOLVEG version 2) was developed to study the heat, water, and CO2 exchanges between the atmosphere and land-surface. The model consists of one-dimensional multilayer sub-models for the atmosphere, soil, and vegetation. It also includes sophisticated processes for solar and long-wave radiation transmission in vegetation canopy and CO2 exchanges among the atmosphere, soil, and vegetation. Although the model usually simulates only vertical variation of variables in the surface-layer atmosphere, soil, and vegetation canopy by using meteorological data as top boundary conditions, it can be used by coupling with a three-dimensional atmosphere model. In this paper, details of SOLVEG2, which includes the function of coupling with the atmosphere model MM5, are described. (author)

  8. Combining prior knowledge with data driven modeling of a batch distillation column including start-up

    NARCIS (Netherlands)

    van Lith, PF; Betlem, BHL; Roffel, B

    2003-01-01

    This paper presents the development of a simple model which describes the product quality and production over time of an experimental batch distillation column, including start-up. The model structure is based on a simple physical framework, which is augmented with fuzzy logic. This provides a way

  9. Enhanced UWB Radio Channel Model for Short-Range Communication Scenarios Including User Dynamics

    DEFF Research Database (Denmark)

    Kovacs, Istvan Zsolt; Nguyen, Tuan Hung; Eggers, Patrick Claus F.

    2005-01-01

    channel model represents an enhancement of the existing IEEE 802.15.3a/4a PAN channel model, where antenna and user-proximity effects are not included. Our investigations showed that significant variations of the received wideband power and time-delay signal clustering are possible due to the human body...

  10. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping contains a set of statistical techniques that detail maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measure the relative risk of the disease is called the Standardized Morbidity Ratio (SMR). It is the ratio of the observed and the expected number of counts in an area, which has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which might solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and log-normal model, which were fitted to data using WinBUGS software. This study starts with a brief review of these models, starting with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates compared to the classical method. The log-normal model can overcome the SMR problem when there is no observed bladder cancer in an area.
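
    For readers unfamiliar with the SMR mentioned above, it is simply the ratio of observed to expected cases per area, with the expected count obtained by applying a reference rate to the area's population. The toy sketch below computes area-level SMRs; the numbers are invented and are not the Libyan bladder-cancer data, and the Bayesian/log-normal smoothing fitted in WinBUGS is not reproduced.

        import pandas as pd

        # Invented example: observed cases and population per district (not real data).
        df = pd.DataFrame({"district": ["A", "B", "C", "D"],
                           "observed": [4, 0, 12, 7],
                           "population": [50_000, 8_000, 120_000, 60_000]})

        reference_rate = df["observed"].sum() / df["population"].sum()
        df["expected"] = reference_rate * df["population"]
        df["SMR"] = df["observed"] / df["expected"]
        print(df)
        # District B gets SMR = 0 only because it is small and happened to record no
        # cases -- exactly the instability of the classical SMR noted in the abstract.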

  11. Mathematical modeling of the hypothalamic–pituitary–adrenal gland (HPA) axis, including hippocampal mechanisms

    DEFF Research Database (Denmark)

    Andersen, Morten; Vinther, Frank; Ottesen, Johnny T.

    2013-01-01

    This paper presents a mathematical model of the HPA axis. The HPA axis consists of the hypothalamus, the pituitary and the adrenal glands in which the three hormones CRH, ACTH and cortisol interact through receptor dynamics. Furthermore, it has been suggested that receptors in the hippocampus have an influence on the axis. A model is presented with three coupled, non-linear differential equations, with the hormones CRH, ACTH and cortisol as variables. The model includes the known features of the HPA axis, and includes the effects from the hippocampus through its impact on CRH in the hypothalamus…

  12. Modification of TOUGH2 to Include the Dusty Gas Model for Gas Diffusion; TOPICAL

    International Nuclear Information System (INIS)

    WEBB, STEPHEN W.

    2001-01-01

    The GEO-SEQ Project is investigating methods for geological sequestration of CO2. This project, which is directed by LBNL and includes a number of other industrial, university, and national laboratory partners, is evaluating computer simulation methods including TOUGH2 for this problem. The TOUGH2 code, which is a widely used code for flow and transport in porous and fractured media, includes simplified methods for gas diffusion based on a direct application of Fick's law. As shown by Webb (1998) and others, the Dusty Gas Model (DGM) is better than Fick's law for modeling gas-phase diffusion in porous media. In order to improve gas-phase diffusion modeling for the GEO-SEQ Project, the EOS7R module in the TOUGH2 code has been modified to include the Dusty Gas Model as documented in this report. In addition, the liquid diffusion model has been changed from a mass-based formulation to a mole-based model. Modifications for separate and coupled diffusion in the gas and liquid phases have also been completed. The results from the DGM are compared to the Fick's law behavior for TCE and PCE diffusion across a capillary fringe. The differences are small due to the relatively high permeability (k = 10⁻¹¹ m²) of the problem and the small mole fraction of the gases. Additional comparisons for lower permeabilities and higher mole fractions may be useful
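
    As a rough illustration of why the choice matters, in the simplest isobaric binary limit the Dusty Gas Model combines molecular and Knudsen diffusion as resistances in series (a Bosanquet-type relation), whereas a direct Fick's-law treatment keeps only the molecular term. The sketch below compares the two effective diffusivities for assumed pore sizes; all values are illustrative and this is not the TOUGH2/EOS7R implementation.

        import numpy as np

        R, T, M = 8.314, 300.0, 0.044          # gas constant, temperature (K), molar mass (kg/mol, ~CO2)
        D_molecular = 1.6e-5                    # free-gas binary diffusivity, m^2/s (illustrative)
        porosity, tortuosity = 0.3, 2.0

        def knudsen_diffusivity(pore_radius):
            """Knudsen diffusivity for a cylindrical pore of the given radius (m)."""
            return (2.0 * pore_radius / 3.0) * np.sqrt(8.0 * R * T / (np.pi * M))

        for r_pore in (1e-6, 1e-8):             # 1 micron versus 10 nm pores
            d_k = knudsen_diffusivity(r_pore)
            d_fick = porosity / tortuosity * D_molecular                      # Fick's law only
            d_dgm = porosity / tortuosity / (1.0 / D_molecular + 1.0 / d_k)   # series combination
            print(f"pore radius {r_pore:.0e} m: Fick {d_fick:.2e}, DGM-like {d_dgm:.2e} m^2/s")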

  13. Blood Vessel Normalization in the Hamster Oral Cancer Model for Experimental Cancer Therapy Studies

    Energy Technology Data Exchange (ETDEWEB)

    Ana J. Molinari; Romina F. Aromando; Maria E. Itoiz; Marcela A. Garabalino; Andrea Monti Hughes; Elisa M. Heber; Emiliano C. C. Pozzi; David W. Nigg; Veronica A. Trivillin; Amanda E. Schwint

    2012-07-01

    Normalization of tumor blood vessels improves drug and oxygen delivery to cancer cells. The aim of this study was to develop a technique to normalize blood vessels in the hamster cheek pouch model of oral cancer. Materials and Methods: Tumor-bearing hamsters were treated with thalidomide and were compared with controls. Results: Twenty eight hours after treatment with thalidomide, the blood vessels of premalignant tissue observable in vivo became narrower and less tortuous than those of controls; Evans Blue Dye extravasation in tumor was significantly reduced (indicating a reduction in aberrant tumor vascular hyperpermeability that compromises blood flow), and tumor blood vessel morphology in histological sections, labeled for Factor VIII, revealed a significant reduction in compressive forces. These findings indicated blood vessel normalization with a window of 48 h. Conclusion: The technique developed herein has rendered the hamster oral cancer model amenable to research, with the potential benefit of vascular normalization in head and neck cancer therapy.

  14. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

    The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG) model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT) domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, an optimal linear interpolation thresholding algorithm (OLI-Shrink) is employed to guarantee a gentler thresholding effect. The results of comparative experiments conducted indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM) index values that are comparable to those of the block-matching 3D transformation (BM3D) method.
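
    The first modelling step above, describing subband coefficients with a normal inverse Gaussian law, can be sketched with SciPy's built-in NIG distribution. The coefficients below are synthetic heavy-tailed stand-ins rather than NSCT subband data, and the Bayesian MAP threshold and OLI-Shrink stage are not reproduced.

        import numpy as np
        from scipy import stats

        # Synthetic heavy-tailed "subband coefficients" (Student-t as a stand-in).
        coeffs = stats.t.rvs(df=3, scale=0.8, size=20_000, random_state=0)

        # Fit the normal inverse Gaussian distribution to the coefficients.
        a, b, loc, scale = stats.norminvgauss.fit(coeffs)
        print(f"NIG fit: a={a:.3f}, b={b:.3f}, loc={loc:.3f}, scale={scale:.3f}")

        # Quick sanity check: compare empirical and fitted tail probabilities.
        x0 = 3.0
        empirical = np.mean(np.abs(coeffs) > x0)
        fitted = stats.norminvgauss.sf(x0, a, b, loc, scale) + stats.norminvgauss.cdf(-x0, a, b, loc, scale)
        print(f"P(|c| > {x0}): empirical {empirical:.4f}, fitted {fitted:.4f}")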

  15. Numerical Acoustic Models Including Viscous and Thermal losses: Review of Existing and New Methods

    DEFF Research Database (Denmark)

    Andersen, Peter Risby; Cutanda Henriquez, Vicente; Aage, Niels

    2017-01-01

    This work presents an updated overview of numerical methods including acoustic viscous and thermal losses. Numerical modelling of viscothermal losses has gradually become more important due to the general trend of making acoustic devices smaller. Not including viscothermal acoustic losses in such numerical computations will therefore lead to inaccurate or even wrong results. Both Finite Element Method (FEM) and Boundary Element Method (BEM) formulations are available that incorporate these loss mechanisms. Including viscothermal losses in FEM computations can be computationally very demanding, due … and BEM method including viscothermal dissipation are compared and investigated…

  16. Description of the new version 4.0 of the tritium model UFOTRI including user guide

    International Nuclear Information System (INIS)

    Raskob, W.

    1993-08-01

    In view of the future operation of fusion reactors, the release of tritium may play a dominant role during normal operation as well as after accidents. Because of its physical and chemical properties, which differ significantly from those of other radionuclides, the model UFOTRI for assessing the radiological consequences of accidental tritium releases has been developed. It describes the behaviour of tritium in the biosphere and calculates the radiological impact on individuals and the population due to direct exposure and via the ingestion pathways. Processes such as the conversion of tritium gas into tritiated water (HTO) in the soil, re-emission after deposition and the conversion of HTO into organically bound tritium are considered. The use of UFOTRI in its probabilistic mode shows the spectrum of the radiological impact together with the associated probability of occurrence. A first model version was established in 1991. As the ongoing work investigating the main processes of tritium behaviour in the environment has produced new results, the model has been improved in several points. The report describes the changes incorporated into the model since 1991. Additionally, it provides the updated user guide for handling the revised UFOTRI version, which will be distributed to interested organizations. (orig.) [de

  17. Direct-phase-variable model of a synchronous reluctance motor including all slot and winding harmonics

    International Nuclear Information System (INIS)

    Obe, Emeka S.; Binder, A.

    2011-01-01

    A detailed model in direct-phase variables of a synchronous reluctance motor operating at mains voltage and frequency is presented. The model includes the stator and rotor slot openings, the actual winding layout and the reluctance rotor geometry. Hence, all mmf and permeance harmonics are taken into account. It is seen that non-negligible harmonics introduced by slots are present in the inductances computed by the winding function procedure. These harmonics are usually ignored in d-q models. The machine performance is simulated in the stator reference frame to depict the difference between this new direct-phase model including all harmonics and the conventional rotor reference frame d-q model. Saturation is included by using a polynomial fitting the variation of d-axis inductance with stator current obtained by finite-element software FEMAG DC (registered) . The detailed phase-variable model can yield torque pulsations comparable to those obtained from finite elements while the d-q model cannot.

  18. Estimating structural equation models with non-normal variables by using transformations

    NARCIS (Netherlands)

    Montfort, van K.; Mooijaart, A.; Meijerink, F.

    2009-01-01

    We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample

  19. Biological mechanisms of normal tissue damage : Importance for the design of NTCP models

    NARCIS (Netherlands)

    Trott, Klaus-Ruediger; Doerr, Wolfgang; Facoetti, Angelica; Hopewell, John; Langendijk, Johannes; van Luijk, Peter; Ottolenghi, Andrea; Smyth, Vere

    2012-01-01

    The normal tissue complication probability (NTCP) models that are currently being proposed for estimation of risk of harm following radiotherapy are mainly based on simplified empirical models, consisting of dose-distribution parameters, possibly combined with clinical or other treatment-related
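
    Since the entry above refers to NTCP models built from dose-distribution parameters, a minimal worked example of the widely used Lyman-Kutcher-Burman form may help: the dose-volume histogram is reduced to a generalized equivalent uniform dose, gEUD = (sum_i v_i * D_i^(1/n))^n, and NTCP = Phi((gEUD - TD50) / (m * TD50)). The parameter values and the toy DVH below are placeholders, not fitted values from any of the studies discussed here.

        import numpy as np
        from scipy.stats import norm

        def lkb_ntcp(doses_gy, volumes, n=0.1, m=0.15, td50=80.0):
            """LKB NTCP from a differential DVH (dose bins and fractional volumes)."""
            v = np.asarray(volumes, dtype=float)
            v = v / v.sum()                                   # normalize volume fractions
            geud = np.sum(v * np.asarray(doses_gy) ** (1.0 / n)) ** n
            return norm.cdf((geud - td50) / (m * td50))

        # Toy DVH: three dose bins with fractional volumes (illustrative only).
        print(f"NTCP = {lkb_ntcp([70.0, 50.0, 20.0], [0.2, 0.3, 0.5]):.3f}")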

  20. Expression patterns of DLK1 and INSL3 identify stages of Leydig cell differentiation during normal development and in testicular pathologies, including testicular cancer and Klinefelter syndrome

    DEFF Research Database (Denmark)

    Lottrup, G; Nielsen, J E; Maroun, L L

    2014-01-01

    STUDY QUESTION: What is the differentiation stage of human testicular interstitial cells, in particular Leydig cells (LC), within micronodules found in patients with infertility, testicular cancer and Klinefelter syndrome? SUMMARY ANSWER: The Leydig- and peritubular-cell populations in testes… MAIN RESULTS AND THE ROLE OF CHANCE: DLK1, INSL3 and COUP-TFII expression changed during normal development and was linked to different stages of LC differentiation: DLK1 was expressed in all fetal LCs, but only in spindle-shaped progenitor cells and in a small subset of polygonal LCs in the normal adult testis; INSL3 was expressed in a subset of fetal LCs, but in the majority of adult LCs; and COUP-TFII was expressed in peritubular and mesenchymal stroma cells at all ages, in fetal LCs early in gestation and in a subset of adult LCs. CYP11A1 was expressed in the majority of LCs regardless of age…

  1. American Option Pricing using GARCH models and the Normal Inverse Gaussian distribution

    DEFF Research Database (Denmark)

    Stentoft, Lars Peter

    In this paper we propose a feasible way to price American options in a model with time varying volatility and conditional skewness and leptokurtosis using GARCH processes and the Normal Inverse Gaussian distribution. We show how the risk neutral dynamics can be obtained in this model, we interpret the effect of the risk neutralization, and we derive approximation procedures which allow for a computationally efficient implementation of the model. When the model is estimated on financial returns data the results indicate that compared to the Gaussian case the extension is important. A study of the model… In particular, improvements are found when considering the smile in implied standard deviations.

  2. Modeling of Temperature-Dependent Noise in Silicon Nanowire FETs including Self-Heating Effects

    OpenAIRE

    Anandan, P.; Malathi, N.; Mohankumar, N.

    2014-01-01

    Silicon nanowires are leading the CMOS era towards the downsizing limit, and their nature effectively suppresses short-channel effects. Accurate modeling of thermal noise in nanowires is crucial for RF applications of nano-CMOS emerging technologies. In this work, a perfect temperature-dependent model for silicon nanowires including the self-heating effects has been derived and its effects on device parameters have been observed. The power spectral density as a function of thermal resi...

  3. Dipole model analysis of highest precision HERA data, including very low Q2's

    International Nuclear Information System (INIS)

    Luszczak, A.; Kowalski, H.

    2016-12-01

    We analyse, within a dipole model, the final, inclusive HERA DIS cross section data in the low-x region, using fully correlated errors. We show that these highest precision data are very well described within the dipole model framework starting from Q² values of 3.5 GeV² to the highest values of Q² = 250 GeV². To analyze the saturation effects we evaluated the data including also the very low Q² region down to 0.35 GeV². The fits including this region show a preference for the saturation ansatz.

  4. Delayed detonation models for normal and subluminous type Ia supernovae: Absolute brightness, light curves, and molecule formation

    Science.gov (United States)

    Hoflich, P.; Khokhlov, A. M.; Wheeler, J. C.

    1995-01-01

    We compute optical and infrared light curves of the pulsating class of delayed detonation models for Type Ia supernovae (SN Ia's) using an elaborate treatment of the Local Thermodynamic Equilibrium (LTE) radiation transport, equation of state and ionization balance, expansion opacity including the cooling by CO, CO(+), and SiO, and a Monte Carlo gamma-ray deposition scheme. The models have an amount of Ni-56 in the range from approximately or equal to 0.1 solar mass up to 0.7 solar mass depending on the density at which the transition from a deflagration to a detonation occurs. Models with a large nickel production give light curves comparable to those of typical Type Ia supernovae. Subluminous supernovae can be explained by models with a low nickel production. Multiband light curves are presented in comparison with the normally bright event SN 1992bc and the subluminous events SN 1991bg and SN 1992bo to establish the principle that the delayed detonation paradigm in Chandrasekhar mass models may give a common explosion mechanism accounting for both normal and subluminous SN Ia's. Secondary IR-maxima are formed in the models of normal SN Ia's as a photospheric effect if the photospheric radius continues to increase well after maximum light. Secondary maxima appear later and stronger in models with moderate expansion velocities and with radioactive material closer to the surface. Model light curves for subluminous SN Ia's tend to show only one 'late' IR-maximum. In some delayed detonation models shell-like envelopes form, which consist of unburned carbon and oxygen. The formation of molecules in these envelopes is addressed. If the model retains a C/O-envelope and is subluminous, strong vibration bands of CO may appear, typically several weeks past maximum light. CO should be very weak or absent in normal SN Ia's.

  5. The No-Core Gamow Shell Model: Including the continuum in the NCSM

    CERN Document Server

    Barrett, B R; Michel, N; Płoszajczak, M

    2015-01-01

    We are witnessing an era of intense experimental efforts that will provide information about the properties of nuclei far from the line of stability, regarding resonant and scattering states as well as (weakly) bound states. This talk describes our formalism for including these necessary ingredients into the No-Core Shell Model by using the Gamow Shell Model approach. Applications of this new approach, known as the No-Core Gamow Shell Model, both to benchmark cases as well as to unstable nuclei will be given.

  6. Calculation of the Normal Cost of a Pension Plan Assuming the Interest Rate Follows the Vasicek Model (Perhitungan Iuran Normal Program Pensiun dengan Asumsi Suku Bunga Mengikuti Model Vasicek)

    Directory of Open Access Journals (Sweden)

    I Nyoman Widana

    2017-12-01

    Full Text Available Labor has a very important role in national development. One way to optimize workers' productivity is to guarantee that they will earn an income after retirement. Therefore the government and the private sector must have a program that can ensure the sustainability of this financial support. One option is a pension plan. The purpose of this study is to calculate the normal cost with the interest rate assumed to follow the Vasicek model and to analyze the normal contributions of the pension program participants. The Vasicek model is used to better reflect actual conditions. The methods used in this research are the Projected Unit Credit method and the Entry Age Normal method. The data source of this research is lecturers of FMIPA Unud. In addition, secondary data are used in the form of Bank Indonesia interest rates for the period January 2006-December 2015. The results indicate that the older a participant is when joining the pension program, the greater the first-year normal cost and the smaller the benefit he or she will receive. Furthermore, the normal cost under a constant interest rate is greater than the normal cost under the Vasicek interest rate. This occurs because the Vasicek model predicts interest rates between 4.8879% and 6.8384%, while the constant rate is only 4.25%. In addition, when the normal cost is taken as proportional to salary, it is found that the older the participant, the greater the proportion of salary required for the normal cost.
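
    To make the comparison in the abstract concrete, the short sketch below simulates the Vasicek short-rate model and compares the Monte Carlo discount factor for a payment 30 years ahead with discounting at a flat 4.25%. The mean-reversion speed, long-run level and volatility used here are hypothetical, not the values fitted to the Bank Indonesia data.

        # Vasicek short-rate model, dr = kappa*(theta - r) dt + sigma dW, simulated by
        # Euler steps; parameter values are hypothetical, not the fitted ones from the paper.
        import numpy as np

        rng = np.random.default_rng(1)
        kappa, theta, sigma = 0.25, 0.06, 0.01     # mean reversion, long-run mean, volatility
        r0, years, steps_per_year, n_paths = 0.0425, 30, 12, 5000
        dt = 1.0 / steps_per_year

        r = np.full(n_paths, r0)
        integral = np.zeros(n_paths)               # accumulates the integral of r over time per path
        for _ in range(years * steps_per_year):
            integral += r * dt
            r = r + kappa * (theta - r) * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_paths)

        print("discount factor, Vasicek (MC):", np.exp(-integral).mean())
        print("discount factor, constant 4.25%:", np.exp(-r0 * years))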

  7. Conceptualizing a Dynamic Fall Risk Model Including Intrinsic Risks and Exposures.

    Science.gov (United States)

    Klenk, Jochen; Becker, Clemens; Palumbo, Pierpaolo; Schwickert, Lars; Rapp, Kilan; Helbostad, Jorunn L; Todd, Chris; Lord, Stephen R; Kerse, Ngaire

    2017-11-01

    Falls are a major cause of injury and disability in older people, leading to serious health and social consequences including fractures, poor quality of life, loss of independence, and institutionalization. To design and provide adequate prevention measures, accurate understanding and identification of a person's individual fall risk is important. However, to date, the performance of fall risk models is weak compared with models estimating, for example, cardiovascular risk. This deficiency may result from 2 factors. First, current models consider risk factors to be stable for each person and not to change over time, an assumption that does not reflect real-life experience. Second, current models do not consider the interplay of individual exposure including type of activity (eg, walking, undertaking transfers) and environmental risks (eg, lighting, floor conditions) in which activity is performed. Therefore, we posit a dynamic fall risk model consisting of intrinsic risk factors that vary over time and exposure (activity in context). eHealth sensor technology (eg, smartphones) begins to enable the continuous measurement of both the above factors. We illustrate our model with examples of real-world falls from the FARSEEING database. This dynamic framework for fall risk adds important aspects that may improve understanding of fall mechanisms, fall risk models, and the development of fall prevention interventions. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  8. Modeling of cylindrical surrounding gate MOSFETs including the fringing field effects

    International Nuclear Information System (INIS)

    Gupta, Santosh K.; Baishya, Srimanta

    2013-01-01

    A physically based analytical model for surface potential and threshold voltage including the fringing gate capacitances in cylindrical surround gate (CSG) MOSFETs has been developed. Based on this a subthreshold drain current model has also been derived. This model first computes the charge induced in the drain/source region due to the fringing capacitances and considers an effective charge distribution in the cylindrically extended source/drain region for the development of a simple and compact model. The fringing gate capacitances taken into account are outer fringe capacitance, inner fringe capacitance, overlap capacitance, and sidewall capacitance. The model has been verified with the data extracted from 3D TCAD simulations of CSG MOSFETs and was found to be working satisfactorily. (semiconductor devices)

  9. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    International Nuclear Information System (INIS)

    Chen, Y W; Zhang, L F; Huang, J P

    2007-01-01

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property
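
    The two quantities being compared, characteristic path length and clustering coefficient, can be reproduced for the original (unmodified) Watts-Strogatz model with a few lines of networkx; the degree-distribution extension developed in the paper is not implemented here.

        # Characteristic path length L and clustering coefficient C of the standard
        # Watts-Strogatz model for several rewiring probabilities (original model only;
        # the degree-distribution extension of the paper is not implemented).
        import networkx as nx

        n, k = 1000, 10                            # nodes and nearest-neighbour links per node
        for p in (0.0, 0.01, 0.1, 1.0):            # rewiring probability
            G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
            L = nx.average_shortest_path_length(G)
            C = nx.average_clustering(G)
            print(f"p={p:<5}  L={L:6.3f}  C={C:.3f}")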

  10. Wind turbine condition monitoring based on SCADA data using normal behavior models

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar; Achiche, Sofiane

    2013-01-01

    This paper proposes a system for wind turbine condition monitoring using Adaptive Neuro-Fuzzy Inference Systems (ANFIS). For this purpose: (1) ANFIS normal behavior models for common Supervisory Control And Data Acquisition (SCADA) data are developed in order to detect abnormal behavior...... of the captured signals and indicate component malfunctions or faults using the prediction error. 33 different standard SCADA signals are used and described, for which 45 normal behavior models are developed. The performance of these models is evaluated in terms of the prediction error standard deviations to show...... the applicability of ANFIS models for monitoring wind turbine SCADA signals. The computational time needed for model training is compared to Neural Network (NN) models showing the strength of ANFIS in training speed. (2) For automation of fault diagnosis Fuzzy Inference Systems (FIS) are used to analyze...
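
    The core idea of a normal behavior model, predicting one SCADA signal from related signals learned during fault-free operation and raising an alarm when the prediction error grows, can be sketched as follows. A generic gradient-boosting regressor stands in for the ANFIS models of the paper, and the synthetic signals and the 3-sigma alarm rule are assumptions for illustration only.

        # Residual-based "normal behavior" monitoring sketch: a regression model learned
        # on healthy data predicts a signal; large prediction errors flag possible faults.
        # Synthetic data, signal names and the 3-sigma rule are illustrative assumptions.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(7)

        # Healthy training data: bearing temperature driven by power output and ambient temperature.
        n = 5000
        power = rng.uniform(0, 2000, n)                    # kW
        ambient = rng.uniform(-5, 30, n)                   # deg C
        bearing = 0.02 * power + 0.8 * ambient + 25 + rng.normal(0, 1.0, n)

        X = np.column_stack([power, ambient])
        model = GradientBoostingRegressor().fit(X, bearing)
        resid_std = np.std(bearing - model.predict(X))

        # New data with a simulated overheating fault in the second half.
        m = 200
        p_new, a_new = rng.uniform(0, 2000, m), rng.uniform(-5, 30, m)
        b_new = 0.02 * p_new + 0.8 * a_new + 25 + rng.normal(0, 1.0, m)
        b_new[m // 2:] += 8.0                              # fault: +8 deg C offset

        error = b_new - model.predict(np.column_stack([p_new, a_new]))
        alarms = np.abs(error) > 3 * resid_std             # simple prediction-error threshold
        print(f"alarms raised on {alarms.sum()} of {m} samples (fault injected in the last {m // 2})")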

  11. Normal and Abnormal Scenario Modeling with GoldSim for Radioactive Waste Disposal System

    International Nuclear Information System (INIS)

    Lee, Youn Myoung; Jeong, Jong Tae

    2010-08-01

    A modeling study and the development of a total system performance assessment (TSPA) template program, by which the safety and performance of a radioactive waste repository with normal and/or abnormal nuclide release cases can be assessed, have been carried out by utilizing a commercial development tool program, GoldSim. Scenarios associated with the various FEPs involved in the performance of the proposed repository, in view of nuclide transport and transfer both in the geosphere and the biosphere, have also been analyzed. Selected normal and abnormal scenarios that could alter the groundwater flow scheme, and thus nuclide transport, are modeled with the template program. To this end, in-depth system models for the normal and abnormal well and earthquake scenarios, described conceptually and practically and ready for implementation in the GoldSim TSPA template program, are introduced with conceptual schemes for each repository system. Illustrative evaluations with the data currently available are also shown.

  12. Including Effects of Water Stress on Dead Organic Matter Decay to a Forest Carbon Model

    Science.gov (United States)

    Kim, H.; Lee, J.; Han, S. H.; Kim, S.; Son, Y.

    2017-12-01

    Decay of dead organic matter is a key process of carbon (C) cycling in forest ecosystems. The change in decay rate depends on temperature sensitivity and moisture conditions. The Forest Biomass and Dead organic matter Carbon (FBDC) model includes a decay sub-model considering temperature sensitivity, yet does not consider moisture conditions as drivers of the decay rate change. This study aimed to improve the FBDC model by adding a water stress function to the decay sub-model. Soil C sequestration under climate change was also simulated with the FBDC model including the water stress function. The water stress functions were determined with data from a decomposition study on Quercus variabilis forests and Pinus densiflora forests of Korea, and adjustment parameters of the functions were determined for both species. The water stress functions were based on the ratio of precipitation to potential evapotranspiration. Including the water stress function increased the explained variances of the decay rate by 19% for the Q. variabilis forests and 7% for the P. densiflora forests, respectively. The increase of the explained variances resulted from the contrast between the narrow temperature range and the wide precipitation range across the decomposition study plots: during the period of the experiment, the mean annual temperature range was less than 3°C, while the annual precipitation ranged from 720 mm to 1466 mm. Application of the water stress functions to the FBDC model constrained the increasing trend of temperature sensitivity under climate change, and thus increased the model-estimated soil C sequestration (Mg C ha-1) by 6.6 for the Q. variabilis forests and by 3.1 for the P. densiflora forests, respectively. The addition of water stress functions increased the reliability of the decay rate estimation and could contribute to reducing the bias in estimating soil C sequestration under varying moisture conditions. Acknowledgement: This study was supported by Korea Forest Service (2017044B10-1719-BB01)
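
    The structure of such a decay sub-model can be sketched as a base rate multiplied by a temperature response and a water stress term that depends on the precipitation-to-PET ratio. The Q10 form, the saturating water stress function and all parameter values below are assumptions for illustration; they are not the functions fitted to the Korean decomposition data.

        # Illustrative decay-rate modifier: base rate times a Q10 temperature response
        # times a saturating water stress function of P/PET. Functional forms and
        # parameters are assumptions, not the fitted FBDC functions.
        def decay_rate(k_base, temp_c, precip_mm, pet_mm, q10=2.0, t_ref=10.0, half_sat=0.6):
            """Adjusted annual decay rate (1/yr)."""
            f_temp = q10 ** ((temp_c - t_ref) / 10.0)      # temperature sensitivity
            wetness = precip_mm / pet_mm                   # ratio of precipitation to PET
            f_water = wetness / (wetness + half_sat)       # water stress term, saturates toward 1
            return k_base * f_temp * f_water

        # Same temperature, contrasting moisture regimes (cf. 720 mm vs 1466 mm precipitation).
        print(decay_rate(0.3, temp_c=12.0, precip_mm=720, pet_mm=900))
        print(decay_rate(0.3, temp_c=12.0, precip_mm=1466, pet_mm=900))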

  13. Longitudinal evaluation of an N-ethyl-N-nitrosourea-created murine model with normal pressure hydrocephalus.

    Directory of Open Access Journals (Sweden)

    Ming-Jen Lee

    Full Text Available BACKGROUND: Normal-pressure hydrocephalus (NPH) is a neurodegenerative disorder that usually occurs late in adult life. Clinically, the cardinal features include gait disturbances, urinary incontinence, and cognitive decline. METHODOLOGY/PRINCIPAL FINDINGS: Herein we report the characterization of a novel mouse model of NPH (designated p23-ST1), created by N-ethyl-N-nitrosourea (ENU)-induced mutagenesis. The ventricular size in the brain was measured by 3-dimensional micro-magnetic resonance imaging (3D-MRI) and was found to be enlarged. Intracranial pressure was measured and was found to fall within a normal range. A histological assessment and tracer flow study revealed that the cerebral spinal fluid (CSF) pathway of p23-ST1 mice was normal without obstruction. Motor functions were assessed using a rotarod apparatus and a CatWalk gait automatic analyzer. Mutant mice showed poor rotarod performance and gait disturbances. Cognitive function was evaluated using auditory fear-conditioned responses with the mutant displaying both short- and long-term memory deficits. With an increase in urination frequency and volume, the mutant showed features of incontinence. Nissl substance staining and cell-type-specific markers were used to examine the brain pathology. These studies revealed concurrent glial activation and neuronal loss in the periventricular regions of mutant animals. In particular, chronically activated microglia were found in septal areas at a relatively young age, implying that microglial activation might contribute to the pathogenesis of NPH. These defects were transmitted in an autosomal dominant mode with reduced penetrance. Using a whole-genome scan employing 287 single-nucleotide polymorphic (SNP) markers and further refinement using six additional SNP markers and four microsatellite markers, the causative mutation was mapped to a 5.3-cM region on chromosome 4. CONCLUSIONS/SIGNIFICANCE: Our results collectively demonstrate that the p23-ST1

  14. Including an ocean carbon cycle model into iLOVECLIM (v1.0)

    NARCIS (Netherlands)

    Bouttes, N.; Roche, D.M.V.A.P.; Mariotti, V.; Bopp, L.

    2015-01-01

    The atmospheric carbon dioxide concentration plays a crucial role in the radiative balance and as such has a strong influence on the evolution of climate. Because of the numerous interactions between climate and the carbon cycle, it is necessary to include a model of the carbon cycle within a

  15. 108: Modeling of the macro-response of tumors and surrounding normal tissues for the fractionated radiation treatment

    International Nuclear Information System (INIS)

    Van de Geijn, J.

    1987-01-01

    An attempt at modeling the combined macro behavior of tumors and normal tissue functionality as a function of the fractionation parameters, including the volume aspect, is presented. For the immediate single-dose effect both the combined single hit-single target, single hit-multi target concept as well as the linear-quadratic model are optional. The inter-fraction and post-treatment response to radiation damage of the involved normal tissue functionality may be delayed, but is otherwise assumed to be governed, at any time, by the density of remaining viable stem cells and by homeostatic control. Tumors are similarly assumed to be susceptible to single-dose damage and their inter-fraction and post-treatment behavior is assumed to be determined by the number of remaining viable cells. An interactive Fortran77 simulation program has been developed, and some provisional results are included. 4 refs.; 4 figs

  16. Glycoprotein CD44 expression in normal, hyperplasic and neoplastic endometrium. An immunohistochemical study including correlations with p53, steroid receptor status and proliferative indices (PCNA, MIB1).

    Science.gov (United States)

    Zagorianakou, N; Ioachim, E; Mitselou, A; Kitsou, E; Zagorianakou, P; Stefanaki, S; Makrydimas, G; Agnantis, N J

    2003-01-01

    We have studied by immunohistochemistry the presence and localization of CD44, estrogen and progesterone receptors, p53 and proliferation-associated indices (PCNA, MIB1) in archival endometrial tissue, in order to determine their diagnostic and prognostic value as well as the possible correlations between them. We examined 186 samples of endometrial tissue (100 endometrial carcinomas of endometrioid type, 40 cases of hyperplasia and 46 of normal endometrium). Patient records were examined for FIGO stage, grade, depth of myometrial invasion, histology, and lympho-vascular space invasion. Strong membranous immunostaining (> 10% of neoplastic cells) was observed in 45% of the carcinomas. A statistically significant difference in the expression of the protein was found in stromal cells compared with epithelial cells. CD44 expression failed to show any statistical correlation with tumor grade or with vessel invasion. The expression of the protein was lower in FIGO Stage II compared with Stage I (p = 0.03). A positive relation of CD44 expression with progesterone receptor status (p = 0.02) was detected. CD44 expression was also positively associated with the proliferative index MIB1 (p = 0.001). CD44 is closely related to the secretory phase of the normal menstrual cycle and its expression is decreased in hyperplasia (simple or complex, with or without atypia) and in cancer cases. These observations suggest that decreased CD44 expression might be functionally involved in the multiple mechanisms of the development and progression of endometrial lesions.

  17. Evaluation of subject contrast and normalized average glandular dose by semi-analytical models

    International Nuclear Information System (INIS)

    Tomal, A.; Poletti, M.E.; Caldas, L.V.E.

    2010-01-01

    In this work, two semi-analytical models are described to evaluate the subject contrast of nodules and the normalized average glandular dose in mammography. Both models were used to study the influence of some parameters, such as breast characteristics (thickness and composition) and incident spectra (kVp and target-filter combination) on the subject contrast of a nodule and on the normalized average glandular dose. From the subject contrast results, detection limits of nodules were also determined. Our results are in good agreement with those reported by other authors, who had used Monte Carlo simulation, showing the robustness of our semi-analytical method.

  18. An imprecise Dirichlet model for Bayesian analysis of failure data including right-censored observations

    International Nuclear Information System (INIS)

    Coolen, F.P.A.

    1997-01-01

    This paper is intended to make researchers in reliability theory aware of a recently introduced Bayesian model with imprecise prior distributions for statistical inference on failure data, that can also be considered as a robust Bayesian model. The model consists of a multinomial distribution with Dirichlet priors, making the approach basically nonparametric. New results for the model are presented, related to right-censored observations, where estimation based on this model is closely related to the product-limit estimator, which is an important statistical method to deal with reliability or survival data including right-censored observations. As for the product-limit estimator, the model considered in this paper aims at not using any information other than that provided by observed data, but our model fits into the robust Bayesian context which has the advantage that all inferences can be based on probabilities or expectations, or bounds for probabilities or expectations. The model uses a finite partition of the time-axis, and as such it is also related to life-tables
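
    The product-limit estimator mentioned above is straightforward to compute for a small right-censored data set; the sketch below implements the classical Kaplan-Meier form (the imprecise Dirichlet model itself, and its probability bounds, are not implemented here). The data values are made up.

        # Classical product-limit (Kaplan-Meier) estimator for right-censored failure data.
        # This is the estimator the abstract relates the imprecise Dirichlet model to;
        # the Bayesian model itself is not implemented. Data values are made up.
        def kaplan_meier(times, observed):
            """Return (time, S(t)) steps; observed[i] is False for a right-censored time."""
            data = sorted(zip(times, observed))
            at_risk, surv, steps, i = len(data), 1.0, [], 0
            while i < len(data):
                t = data[i][0]
                deaths = sum(1 for tt, ev in data if tt == t and ev)
                removed = sum(1 for tt, _ in data if tt == t)
                if deaths:
                    surv *= 1.0 - deaths / at_risk
                    steps.append((t, surv))
                at_risk -= removed
                i += removed
            return steps

        times = [2, 3, 3, 5, 6, 8, 9, 12]
        observed = [True, True, False, True, False, True, True, False]   # False = censored
        for t, s in kaplan_meier(times, observed):
            print(f"S({t}) = {s:.3f}")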

  19. MEMLS3&a: Microwave Emission Model of Layered Snowpacks adapted to include backscattering

    Directory of Open Access Journals (Sweden)

    M. Proksch

    2015-08-01

    Full Text Available The Microwave Emission Model of Layered Snowpacks (MEMLS) was originally developed for microwave emissions of snowpacks in the frequency range 5–100 GHz. It is based on six-flux theory to describe radiative transfer in snow including absorption, multiple volume scattering, radiation trapping due to internal reflection and a combination of coherent and incoherent superposition of reflections between horizontal layer interfaces. Here we introduce MEMLS3&a, an extension of MEMLS, which includes a backscatter model for active microwave remote sensing of snow. The reflectivity is decomposed into diffuse and specular components. Slight undulations of the snow surface are taken into account. The treatment of like- and cross-polarization is accomplished by an empirical splitting parameter q. MEMLS3&a (as well as MEMLS) is set up in a way that snow input parameters can be derived by objective measurement methods which avoid fitting procedures of the scattering efficiency of snow, required by several other models. For the validation of the model we have used a combination of active and passive measurements from the NoSREx (Nordic Snow Radar Experiment) campaign in Sodankylä, Finland. We find a reasonable agreement between the measurements and simulations, subject to uncertainties in hitherto unmeasured input parameters of the backscatter model. The model is written in Matlab and the code is publicly available for download through the following website: http://www.iapmw.unibe.ch/research/projects/snowtools/memls.html.

  20. TS Fuzzy Model-Based Controller Design for a Class of Nonlinear Systems Including Nonsmooth Functions

    DEFF Research Database (Denmark)

    Vafamand, Navid; Asemani, Mohammad Hassan; Khayatiyan, Alireza

    2018-01-01

    This paper proposes a novel robust controller design for a class of nonlinear systems including hard nonlinearity functions. The proposed approach is based on Takagi-Sugeno (TS) fuzzy modeling, nonquadratic Lyapunov function, and nonparallel distributed compensation scheme. In this paper, a novel TS modeling of the nonlinear dynamics with signum functions is proposed. This model can exactly represent the original nonlinear system with hard nonlinearity while the discontinuous signum functions are not approximated. Based on the bounded-input-bounded-output stability scheme and L₁ performance criterion, new robust controller design conditions in terms of linear matrix inequalities are derived. Three practical case studies, electric power steering system, a helicopter model and servo-mechanical system, are presented to demonstrate the importance of such class of nonlinear systems comprising......

  1. A roller chain drive model including contact with guide-bars

    DEFF Research Database (Denmark)

    Pedersen, Sine Leergaard; Hansen, John Michael; Ambrósio, J. A. C.

    2004-01-01

    A model of a roller chain drive is developed and applied to the simulation and analysis of roller chain drives of large marine diesel engines. The model includes the impact with guide-bars that are the motion delimiter components on the chain strands between the sprockets. The main components...... The model of the roller-chain drive now proposed departs from an earlier model where two contact/impact methods are proposed to describe the contact between the rollers of the chain and the teeth of the sprockets. These different formulations are based on unilateral constraints...... as continuous force. In the continuous force method the roller-sprocket contact is represented by forces applied on each seated roller and in the respective sprocket teeth. These forces are functions of the pseudo penetrations between roller and sprocket, impacting velocities and a restitution coefficient. In the continuous force......

  2. Prospects for genetically modified non-human primate models, including the common marmoset.

    Science.gov (United States)

    Sasaki, Erika

    2015-04-01

    Genetically modified mice have contributed much to studies in the life sciences. In some research fields, however, mouse models are insufficient for analyzing the molecular mechanisms of pathology or as disease models. Often, genetically modified non-human primate (NHP) models are desired, as they are more similar to human physiology, morphology, and anatomy. Recent progress in studies of the reproductive biology in NHPs has enabled the introduction of exogenous genes into NHP genomes or the alteration of endogenous NHP genes. This review summarizes recent progress in the production of genetically modified NHPs, including the common marmoset, and future perspectives for realizing genetically modified NHP models for use in life sciences research. Copyright © 2015 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  3. HPC in global geo dynamics: Advances in normal-mode analytical modelling

    International Nuclear Information System (INIS)

    Melini, D.

    2009-01-01

    Analytical models based on normal-mode theory have been successfully employed for decades in the modeling of the global response of the Earth to seismic dislocations, post-glacial rebound and wave propagation. Despite their limited capabilities with respect to fully numerical approaches, they remain a valuable modeling tool, for instance in benchmarking applications or when automated procedures have to be implemented, as in massive inversion problems where a large number of forward models have to be solved. The availability of high-performance computer systems has ignited new applications for analytical modeling, allowing limiting approximations to be removed and extensive simulations to be carried out on large global datasets.

  4. Improving weather predictability by including land-surface model parameter uncertainty

    Science.gov (United States)

    Orth, Rene; Dutra, Emanuel; Pappenberger, Florian

    2016-04-01

    The land surface forms an important component of Earth system models and interacts nonlinearly with other parts such as ocean and atmosphere. To capture the complex and heterogeneous hydrology of the land surface, land surface models include a large number of parameters impacting the coupling to other components of the Earth system model. Focusing on ECMWF's land-surface model HTESSEL we present in this study a comprehensive parameter sensitivity evaluation using multiple observational datasets in Europe. We select 6 poorly constrained effective parameters (surface runoff effective depth, skin conductivity, minimum stomatal resistance, maximum interception, soil moisture stress function shape, total soil depth) and explore the sensitivity of model outputs such as soil moisture, evapotranspiration and runoff to these parameters, using uncoupled simulations and coupled seasonal forecasts. Additionally we investigate the possibility of constructing ensembles from the multiple land surface parameters. In the uncoupled runs we find that minimum stomatal resistance and total soil depth have the most influence on model performance. Forecast skill scores are moreover sensitive to the same parameters as HTESSEL performance in the uncoupled analysis. We demonstrate the robustness of our findings by comparing multiple best performing parameter sets and multiple randomly chosen parameter sets. We find better temperature and precipitation forecast skill with the best-performing parameter perturbations, demonstrating representativeness of model performance across uncoupled (and hence less computationally demanding) and coupled settings. Finally, we construct ensemble forecasts from ensemble members derived with different best-performing parameterizations of HTESSEL. This incorporation of parameter uncertainty in the ensemble generation yields an increase in forecast skill, even beyond the skill of the default system. Orth, R., E. Dutra, and F. Pappenberger, 2016: Improving weather predictability by

  5. Molecular dynamics study of lipid bilayers modeling the plasma membranes of normal murine thymocytes and leukemic GRSL cells.

    Science.gov (United States)

    Andoh, Yoshimichi; Okazaki, Susumu; Ueoka, Ryuichi

    2013-04-01

    Molecular dynamics (MD) calculations for the plasma membranes of normal murine thymocytes and thymus-derived leukemic GRSL cells in water have been performed under physiological isothermal-isobaric conditions (310.15K and 1 atm) to investigate changes in membrane properties induced by canceration. The model membranes used in our calculations for normal and leukemic thymocytes comprised 23 and 25 kinds of lipids, respectively, including phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine, phosphatidylinositol, sphingomyelin, lysophospholipids, and cholesterol. The mole fractions of the lipids adopted here were based on previously published experimental values. Our calculations clearly showed that the membrane area was increased in leukemic cells, and that the isothermal area compressibility of the leukemic plasma membranes was double that of normal cells. The calculated membranes of leukemic cells were thus considerably bulkier and softer in the lateral direction compared with those of normal cells. The tilt angle of the cholesterol and the conformation of the phospholipid fatty acid tails both showed a lower level of order in leukemic cell membranes compared with normal cell membranes. The lateral radial distribution function of the lipids also showed a more disordered structure in leukemic cell membranes than in normal cell membranes. These observations all show that, for the present thymocytes, the lateral structure of the membrane is considerably disordered by canceration. Furthermore, the calculated lateral self-diffusion coefficient of the lipid molecules in leukemic cell membranes was almost double that in normal cell membranes. The calculated rotational and wobbling autocorrelation functions also indicated that the molecular motion of the lipids was enhanced in leukemic cell membranes. Thus, here we have demonstrated that the membranes of thymocyte leukemic cells are more disordered and more fluid than normal cell membranes. Copyright © 2013

  6. Global Reference Atmospheric Models, Including Thermospheres, for Mars, Venus and Earth

    Science.gov (United States)

    Justh, Hilary L.; Justus, C. G.; Keller, Vernon W.

    2006-01-01

    This document is the viewgraph slides of the presentation. Marshall Space Flight Center's Natural Environments Branch has developed Global Reference Atmospheric Models (GRAMs) for Mars, Venus, Earth, and other solar system destinations. Mars-GRAM has been widely used for engineering applications including systems design, performance analysis, and operations planning for aerobraking, entry descent and landing, and aerocapture. Preliminary results are presented, comparing Mars-GRAM with measurements from Mars Reconnaissance Orbiter (MRO) during its aerobraking in Mars thermosphere. Venus-GRAM is based on the Committee on Space Research (COSPAR) Venus International Reference Atmosphere (VIRA), and is suitable for similar engineering applications in the thermosphere or other altitude regions of the atmosphere of Venus. Until recently, the thermosphere in Earth-GRAM has been represented by the Marshall Engineering Thermosphere (MET) model. Earth-GRAM has recently been revised. In addition to including an updated version of MET, it now includes an option to use the Naval Research Laboratory Mass Spectrometer Incoherent Scatter Radar Extended Model (NRLMSISE-00) as an alternate thermospheric model. Some characteristics and results from Venus-GRAM and Earth-GRAM thermospheres are also presented.

  7. A numerical model including PID control of a multizone crystal growth furnace

    Science.gov (United States)

    Panzarella, Charles H.; Kassemi, Mohammad

    1992-01-01

    This paper presents a 2D axisymmetric combined conduction and radiation model of a multizone crystal growth furnace. The model is based on a programmable multizone furnace (PMZF) designed and built at NASA Lewis Research Center for growing high quality semiconductor crystals. A novel feature of this model is a control algorithm which automatically adjusts the power in any number of independently controlled heaters to establish the desired crystal temperatures in the furnace model. The control algorithm eliminates the need for numerous trial and error runs previously required to obtain the same results. The finite element code, FIDAP, used to develop the furnace model, was modified to directly incorporate the control algorithm. This algorithm, which presently uses PID control, and the associated heat transfer model are briefly discussed. Together, they have been used to predict the heater power distributions for a variety of furnace configurations and desired temperature profiles. Examples are included to demonstrate the effectiveness of the PID-controlled model in establishing isothermal, Bridgman, and other complicated temperature profiles in the sample. Finally, an example is given to show how the algorithm can be used to change the desired profile with time according to a prescribed temperature-time evolution.
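
    The control idea, a PID loop that adjusts heater power until a desired temperature is reached, can be illustrated with a single-node lumped thermal model. This toy sketch is not the FIDAP finite element furnace model: the gains, thermal constants and single-zone simplification are all invented for demonstration.

        # Toy PID-controlled heater on a lumped thermal model: C dT/dt = P - (T - T_amb)/R.
        # Not the FIDAP furnace model; gains and thermal constants are invented.
        dt, t_end = 1.0, 600.0                     # s
        C, R, T_amb = 500.0, 0.5, 20.0             # heat capacity (J/K), resistance (K/W), ambient (C)
        setpoint = 300.0                           # desired temperature (C)
        kp, ki, kd = 20.0, 0.05, 5.0               # PID gains, hand-tuned for this toy model

        T, integral, prev_err = T_amb, 0.0, setpoint - T_amb
        for step in range(int(t_end / dt)):
            err = setpoint - T
            integral += err * dt
            derivative = (err - prev_err) / dt
            power = min(max(kp * err + ki * integral + kd * derivative, 0.0), 5000.0)  # heater limits (W)
            prev_err = err
            T += dt * (power - (T - T_amb) / R) / C            # lumped heat balance
            if step % 100 == 0:
                print(f"t={step * dt:5.0f} s  T={T:7.2f} C  heater={power:7.1f} W")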

  8. A New Circuit Model for Spin-Torque Oscillator Including Perpendicular Torque of Magnetic Tunnel Junction

    Directory of Open Access Journals (Sweden)

    Hyein Lim

    2013-01-01

    Full Text Available Spin-torque oscillator (STO) is a promising new technology for the future RF oscillators, which is based on the spin-transfer torque (STT) effect in magnetic multilayered nanostructures. It is expected to provide a larger tunability, smaller size, lower power consumption, and higher level of integration than the semiconductor-based oscillators. In our previous work, a circuit-level model of the giant magnetoresistance (GMR) STO was proposed. In this paper, we present a physics-based circuit-level model of the magnetic tunnel junction (MTJ)-based STO. The MTJ-STO model includes the effect of perpendicular torque that has been ignored in the GMR-STO model. The variations of three major characteristics, generation frequency, mean oscillation power, and generation linewidth of an MTJ-STO with respect to the amount of perpendicular torque, are investigated, and the results are applied to our model. The operation of the model was verified by HSPICE simulation, and the results show an excellent agreement with the experimental data. The results also prove that a full circuit-level simulation with MTJ-STO devices can be made with our proposed model.

  9. Irrigation with potable water versus normal saline in a contaminated musculoskeletal wound model.

    Science.gov (United States)

    Svoboda, Steven J; Owens, Brett D; Gooden, Heather A; Melvin, Michael L; Baer, David G; Wenke, Joseph C

    2008-05-01

    Although the use of potable water for wound irrigation is attractive in an austere environment, its effectiveness has not been tested. We sought to compare the effectiveness of potable water irrigation in reducing bacterial number with that of normal saline irrigation. We used an established caprine model involving the creation of a reproducible complex musculoskeletal wound followed by inoculation with luminescent bacteria that allowed for quantitative analysis with a photon-counting camera system. Six hours after injury and inoculation, wound irrigations were performed using pulsatile lavage. Fourteen goats were randomized into two treatment groups: irrigation with 9 L potable water versus irrigation with 9 L normal saline. Images obtained after irrigation were compared with baseline images to determine the reduction in bacterial luminescence resulting from treatment. The irrigation in both groups reduced the bacterial counts by 71% from the preirrigation levels. Potable water reduced the bacterial load as effectively as normal saline in our model.

  10. Including Finite Surface Span Effects in Empirical Jet-Surface Interaction Noise Models

    Science.gov (United States)

    Brown, Clifford A.

    2016-01-01

    The effect of finite span on the jet-surface interaction noise source and the jet mixing noise shielding and reflection effects is considered using recently acquired experimental data. First, the experimental setup and resulting data are presented with particular attention to the role of surface span on far-field noise. These effects are then included in existing empirical models that have previously assumed that all surfaces are semi-infinite. This extended abstract briefly describes the experimental setup and data leaving the empirical modeling aspects for the final paper.

  11. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J

    2008-01-01

    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cases...... SCS from IMI- udders relative to SCS from IMI+ udders. Further, the genetic correlation between SCS of IMI- and SCS of IMI+ was 0.61, and heritability for liability to putative mastitis was 0.07. Models B2 and C allocated approximately 30% of SCS records to IMI+, but for model B1 this fraction was only 10%...... The correlation between estimated breeding values for liability to putative mastitis based on the model (SCS for model A) and estimated breeding values for liability to clinical mastitis from the national evaluation was greatest for model B1, followed by models A, C, and B2. This may be explained by model B1...

  12. Understanding the implementation of complex interventions in health care: the normalization process model

    Directory of Open Access Journals (Sweden)

    Rogers Anne

    2007-09-01

    Full Text Available Abstract Background The Normalization Process Model is a theoretical model that assists in explaining the processes by which complex interventions become routinely embedded in health care practice. It offers a framework for process evaluation and also for comparative studies of complex interventions. It focuses on the factors that promote or inhibit the routine embedding of complex interventions in health care practice. Methods A formal theory structure is used to define the model, and its internal causal relations and mechanisms. The model is broken down to show that it is consistent and adequate in generating accurate description, systematic explanation, and the production of rational knowledge claims about the workability and integration of complex interventions. Results The model explains the normalization of complex interventions by reference to four factors demonstrated to promote or inhibit the operationalization and embedding of complex interventions (interactional workability, relational integration, skill-set workability, and contextual integration). Conclusion The model is consistent and adequate. Repeated calls for theoretically sound process evaluations in randomized controlled trials of complex interventions, and policy-makers who call for a proper understanding of implementation processes, emphasize the value of conceptual tools like the Normalization Process Model.

  13. Hippocampal and behavioral dysfunctions in a mouse model of environmental stress: normalization by agomelatine

    Science.gov (United States)

    Boulle, F; Massart, R; Stragier, E; Païzanis, E; Zaidan, L; Marday, S; Gabriel, C; Mocaer, E; Mongeau, R; Lanfumey, L

    2014-01-01

    Stress-induced alterations in neuronal plasticity and in hippocampal functions have been suggested to be involved in the development of mood disorders. In this context, we investigated in the hippocampus the activation of intracellular signaling cascades, the expression of epigenetic markers and plasticity-related genes in a mouse model of stress-induced hyperactivity and of mixed affective disorders. We also determined whether the antidepressant drug agomelatine, a MT1/MT2 melatonergic receptor agonist/5-HT2C receptor antagonist, could prevent some neurobiological and behavioral alterations produced by stress. C57BL/6J mice, exposed for 3 weeks to daily unpredictable socio-environmental stressors of mild intensity, were treated during the whole procedure with agomelatine (50 mg kg−1 per day, intraperitoneal). Stressed mice displayed robust increases in emotional arousal, vigilance and motor activity, together with a reward deficit and a reduction in anxiety-like behavior. Neurobiological investigations showed an increased phosphorylation of intracellular signaling proteins, including Atf1, Creb and p38, in the hippocampus of stressed mice. Decreased hippocampal level of the repressive epigenetic marks HDAC2 and H3K9me2, as well as increased level of the permissive mark H3K9/14ac suggested that chronic mild stress was associated with increased gene transcription, and clear-cut evidence was further indicated by changes in neuroplasticity-related genes, including Arc, Bcl2, Bdnf, Gdnf, Igf1 and Neurod1. Together with other findings, the present data suggest that chronic ultra-mild stress can model the hyperactivity or psychomotor agitation, as well as the mixed affective behaviors often observed during the manic state of bipolar disorder patients. Interestingly, agomelatine could normalize both the behavioral and the molecular alterations induced by stress, providing further insights into the mechanism of action of this new generation antidepressant drug. PMID

  14. Valproic acid prevents retinal degeneration in a murine model of normal tension glaucoma.

    Science.gov (United States)

    Kimura, Atsuko; Guo, Xiaoli; Noro, Takahiko; Harada, Chikako; Tanaka, Kohichi; Namekata, Kazuhiko; Harada, Takayuki

    2015-02-19

    Valproic acid (VPA) is widely used for treatment of epilepsy, mood disorders, migraines and neuropathic pain. It exerts its therapeutic benefits through modulation of multiple mechanisms including regulation of gamma-aminobutyric acid and glutamate neurotransmissions, activation of pro-survival protein kinases and inhibition of histone deacetylase. The evidence for neuroprotective properties associated with VPA is emerging. Herein, we investigated the therapeutic potential of VPA in a mouse model of normal tension glaucoma (NTG). Mice with glutamate/aspartate transporter gene deletion (GLAST KO mice) demonstrate progressive retinal ganglion cell (RGC) loss and optic nerve degeneration without elevated intraocular pressure, and exhibit glaucomatous pathology including glutamate neurotoxicity and oxidative stress in the retina. VPA (300mg/kg) or vehicle (PBS) was administered via intraperitoneal injection in GLAST KO mice daily for 2 weeks from the age of 3 weeks, which coincides with the onset of glaucomatous retinal degeneration. Following completion of the treatment period, the vehicle-treated GLAST KO mouse retina showed significant RGC death. Meanwhile, VPA treatment prevented RGC death and thinning of the inner retinal layer in GLAST KO mice. In addition, in vivo electrophysiological analyses demonstrated that visual impairment observed in vehicle-treated GLAST KO mice was ameliorated with VPA treatment, clearly establishing that VPA beneficially affects both histological and functional aspects of the glaucomatous retina. We found that VPA reduces oxidative stress induced in the GLAST KO retina and stimulates the cell survival signalling pathway associated with extracellular-signal-regulated kinases (ERK). This is the first study to report the neuroprotective effects of VPA in an animal model of NTG. Our findings raise intriguing possibilities that the widely prescribed drug VPA may be a novel candidate for treatment of glaucoma. Copyright © 2015 Elsevier

  15. Asymptotic normality of conditional distribution estimation in the single index model

    Directory of Open Access Journals (Sweden)

    Hamdaoui Diaa Eddine

    2017-08-01

    Full Text Available This paper deals with the estimation of conditional distribution function based on the single-index model. The asymptotic normality of the conditional distribution estimator is established. Moreover, as an application, the asymptotic (1 − γ) confidence interval of the conditional distribution function is given for 0 < γ < 1.

  16. Asymptotic normality of conditional distribution estimation in the single index model

    OpenAIRE

    Hamdaoui Diaa Eddine; Bouchentouf Amina Angelika; Rabhi Abbes; Guendouzi Toufik

    2017-01-01

    This paper deals with the estimation of conditional distribution function based on the single-index model. The asymptotic normality of the conditional distribution estimator is established. Moreover, as an application, the asymptotic (1 − γ) confidence interval of the conditional distribution function is given for 0 < γ < 1.

  17. Normalization Regression Estimation With Application to a Nonorthogonal, Nonrecursive Model of School Learning.

    Science.gov (United States)

    Bulcock, J. W.; And Others

    Advantages of normalization regression estimation over ridge regression estimation are demonstrated by reference to Bloom's model of school learning. Theoretical concern centered on the structure of scholastic achievement at grade 10 in Canadian high schools. Data on 886 students were randomly sampled from the Carnegie Human Resources Data Bank.…

  18. Wave propagation simulation in normal and infarcted myocardium: computational and modelling issues

    NARCIS (Netherlands)

    Maglaveras, N.; van Capelle, F. J.; de Bakker, J. M.

    1998-01-01

    Simulation of propagating action potentials (PAP) in normal and abnormal myocardium is used for the understanding of mechanisms responsible for eliciting dangerous arrhythmias. One- and two-dimensional models dealing with PAP properties are reviewed in this paper viewed both from the computational

  19. Modeling of Temperature-Dependent Noise in Silicon Nanowire FETs including Self-Heating Effects

    Directory of Open Access Journals (Sweden)

    P. Anandan

    2014-01-01

    Full Text Available Silicon nanowires are leading the CMOS era towards the downsizing limit, and their nature effectively suppresses short-channel effects. Accurate modeling of thermal noise in nanowires is crucial for RF applications of nano-CMOS emerging technologies. In this work, a perfect temperature-dependent model for silicon nanowires including the self-heating effects has been derived and its effects on device parameters have been observed. The power spectral density as a function of thermal resistance shows significant improvement as the channel length decreases. The effects of thermal noise including self-heating of the device are explored. Moreover, significant reduction in noise with respect to channel thermal resistance, gate length, and biasing is analyzed.

  20. Producing high-accuracy lattice models from protein atomic coordinates including side chains.

    Science.gov (United States)

    Mann, Martin; Saunders, Rhodri; Smith, Cameron; Backofen, Rolf; Deane, Charlotte M

    2012-01-01

    Lattice models are a common abstraction used in the study of protein structure, folding, and refinement. They are advantageous because the discretisation of space can make extensive protein evaluations computationally feasible. Various approaches to the protein chain lattice fitting problem have been suggested but only a single backbone-only tool is available currently. We introduce LatFit, a new tool to produce high-accuracy lattice protein models. It generates both backbone-only and backbone-side-chain models in any user defined lattice. LatFit implements a new distance RMSD-optimisation fitting procedure in addition to the known coordinate RMSD method. We tested LatFit's accuracy and speed using a large nonredundant set of high resolution proteins (SCOP database) on three commonly used lattices: 3D cubic, face-centred cubic, and knight's walk. Fitting speed compared favourably to other methods and both backbone-only and backbone-side-chain models show low deviation from the original data (~1.5 Å RMSD in the FCC lattice). To our knowledge this represents the first comprehensive study of lattice quality for on-lattice protein models including side chains while LatFit is the only available tool for such models.
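
    A much-reduced version of the underlying idea, placing backbone coordinates on a lattice and measuring how far the lattice model deviates from the original structure, is sketched below. Unlike LatFit it ignores chain connectivity and self-avoidance, uses a simple cubic lattice only, and does not optimise the lattice orientation; the coordinates are hypothetical rather than taken from the SCOP set.

        # Toy lattice fitting: snap C-alpha coordinates to the nearest simple-cubic lattice
        # nodes and report the coordinate RMSD. Ignores chain connectivity/self-avoidance
        # and does not optimise orientation, unlike LatFit; coordinates are hypothetical.
        import numpy as np

        def snap_to_cubic_lattice(coords, spacing=3.8):
            """Map each point to the nearest node of a cubic lattice (spacing in Angstrom)."""
            return np.round(np.asarray(coords) / spacing) * spacing

        def rmsd(a, b):
            return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

        ca = np.array([[ 0.0, 0.0, 0.0],
                       [ 3.6, 1.1, 0.4],
                       [ 6.9, 2.5, 1.9],
                       [ 9.8, 4.8, 3.1],
                       [12.2, 7.9, 3.6]])           # would normally be read from a PDB file

        lattice_model = snap_to_cubic_lattice(ca)
        print(lattice_model)
        print("coordinate RMSD:", round(rmsd(ca, lattice_model), 3), "Angstrom")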

  1. A High-Rate, Single-Crystal Model including Phase Transformations, Plastic Slip, and Twinning

    Energy Technology Data Exchange (ETDEWEB)

    Addessio, Francis L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Bronkhorst, Curt Allan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Bolme, Cynthia Anne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Explosive Science and Shock Physics Division; Brown, Donald William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Cerreta, Ellen Kathleen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Lebensohn, Ricardo A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Lookman, Turab [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Mayeur, Jason Rhea [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Morrow, Benjamin M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Rigg, Paulo A. [Washington State Univ., Pullman, WA (United States). Dept. of Physics. Inst. for Shock Physics

    2016-08-09

    An anisotropic, rate-dependent, single-crystal approach for modeling materials under the conditions of high strain rates and pressures is provided. The model includes the effects of large deformations, nonlinear elasticity, phase transformations, and plastic slip and twinning. It is envisioned that the model may be used to examine these coupled effects on the local deformation of materials that are subjected to ballistic impact or explosive loading. The model is formulated using a multiplicative decomposition of the deformation gradient. A plate impact experiment on a multi-crystal sample of titanium was conducted. The particle velocities at the back surface of three crystal orientations relative to the direction of impact were measured. Molecular dynamics simulations were conducted to investigate the details of the high-rate deformation and pursue issues related to the phase transformation for titanium. Simulations using the single-crystal model were conducted and compared to the high-rate experimental data for the impact loaded single crystals. The model was found to capture the features of the experiments.

  2. Modeling and analysis of the affinity filtration process, including broth feeding, washing, and elution steps.

    Science.gov (United States)

    He, L Z; Dong, X Y; Sun, Y

    1998-01-01

    Affinity filtration is a developing protein purification technique that combines the high selectivity of affinity chromatography and the high processing speed of membrane filtration. In this work a lumped kinetic model was developed to describe the whole affinity filtration process, including broth feeding, contaminant washing, and elution steps. Affinity filtration experiments were conducted to evaluate the model using bovine serum albumin as a model protein and a highly substituted Blue Sepharose as an affinity adsorbent. The model with nonadjustable parameters agreed fairly well with the experimental results. Thus, the performance of the affinity filtration in processing a crude broth containing contaminant proteins was analyzed by computer simulations using the lumped model. The simulation results show that there is an optimal protein loading for obtaining the maximum recovery yield of the desired protein with a constant purity at each operating condition. The concentration of a crude broth is beneficial in increasing the recovery yield of the desired protein. Using a constant amount of the affinity adsorbent, the recovery yield can be enhanced by decreasing the solution volume in the stirred tank due to the increase of the adsorbent weight fraction. It was found that the lumped kinetic model was simple and useful in analyzing the whole affinity filtration process.
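
    A generic lumped kinetic description of just the broth feeding (adsorption) step, a well-mixed tank in which protein binds to the adsorbent with Langmuir-type kinetics, is sketched below. The rate constants, volumes and the batch (no-flow) simplification are assumptions; the paper's model additionally covers the washing and elution steps.

        # Lumped Langmuir-type adsorption kinetics for the feeding step only:
        #   dq/dt = ka*c*(qmax - q) - kd*q, with a mass balance over the stirred tank.
        # Parameter values and the batch simplification are illustrative assumptions.
        import numpy as np
        from scipy.integrate import solve_ivp

        qmax, ka, kd = 80.0, 0.05, 0.01            # mg/mL adsorbent, mL/(mg*min), 1/min
        V_liquid, V_adsorbent = 1000.0, 50.0       # mL
        c0 = 2.0                                   # initial protein concentration, mg/mL

        def rhs(t, y):
            c, q = y
            dq = ka * c * (qmax - q) - kd * q                # binding on the adsorbent
            dc = -dq * V_adsorbent / V_liquid                # protein mass balance in the liquid
            return [dc, dq]

        sol = solve_ivp(rhs, (0.0, 120.0), [c0, 0.0], t_eval=np.linspace(0, 120, 7))
        for t, c, q in zip(sol.t, sol.y[0], sol.y[1]):
            print(f"t={t:5.1f} min  c={c:5.3f} mg/mL  q={q:6.2f} mg/mL adsorbent")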

  3. Model for safety reports including descriptive examples; Mall foer saekerhetsrapporter med beskrivande exempel

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-12-01

    Several safety reports will be produced in the process of planning and constructing the system for disposal of high-level radioactive waste in Sweden. The present report gives a model, with detailed examples, of how these reports should be organized and what steps they should include. In the near future safety reports will deal with the encapsulation plant and the repository. Later reports will treat operation of the handling systems and the repository.

  4. Collisional-radiative model including recombination processes for W27+ ion★

    Science.gov (United States)

    Murakami, Izumi; Sasaki, Akira; Kato, Daiji; Koike, Fumihiro

    2017-10-01

    We have constructed a collisional-radiative (CR) model for W27+ ions including 226 configurations with n ≤ 9 and l ≤ 5 for spectroscopic diagnostics. We newly include recombination processes in the model, and this is the first calculation of an extreme ultraviolet spectrum for the recombining plasma component. Calculated spectra in the 40-70 Å range for the ionizing and recombining plasma components show three similar strong lines and one line that is weak in the recombining plasma component at 45-50 Å, and many weak lines at 50-65 Å for both components. Recombination processes do not contribute much to the spectrum at around 60 Å for the W27+ ion. Dielectronic satellite lines are also a minor contribution to the spectrum of the recombining plasma component. The dielectronic recombination (DR) rate coefficient from W28+ to W27+ ions is also calculated with the same atomic data in the CR model. We found that a larger set of energy levels including many autoionizing states gave larger DR rate coefficients, but our rate agrees within a factor of 6 with other works at electron temperatures around 1 keV, at which W27+ and W28+ ions are usually observed in plasmas. Contribution to the Topical Issue "Atomic and Molecular Data and their Applications", edited by Gordon W.F. Drake, Jung-Sik Yoon, Daiji Kato, and Grzegorz Karwasz.

  5. A 3D model of the oculomotor plant including the pulley system

    Science.gov (United States)

    Viegener, A.; Armentano, R. L.

    2007-11-01

    Early models of the oculomotor plant only considered the eye globes and the muscles that move them. Recently, connective tissue structures have been found enveloping the extraocular muscles (EOMs) and firmly anchored to the orbital wall. These structures act as pulleys; they determine the functional origin of the EOMs and, in consequence, their effective pulling direction. A three dimensional model of the oculomotor plant, including pulleys, has been developed and simulations in Simulink were performed during saccadic eye movements. Listing's law was implemented based on the supposition that there exists an eye orientation related signal. The inclusion of the pulleys in the model makes this assumption plausible and simplifies the problem of the plant noncommutativity.

  6. Double-gate junctionless transistor model including short-channel effects

    International Nuclear Information System (INIS)

    Paz, B C; Pavanello, M A; Ávila-Herrera, F; Cerdeira, A

    2015-01-01

    This work presents a physically based model for double-gate junctionless transistors (JLTs), continuous in all operation regimes. To describe short-channel transistors, short-channel effects (SCEs), such as increase of the channel potential due to drain bias, carrier velocity saturation and mobility degradation due to vertical and longitudinal electric fields, are included in a previous model developed for long-channel double-gate JLTs. To validate the model, an analysis is made by using three-dimensional numerical simulations performed in a Sentaurus Device Simulator from Synopsys. Different doping concentrations, channel widths and channel lengths are considered in this work. Besides that, the series resistance influence is numerically included and validated for a wide range of source and drain extensions. In order to check if the SCEs are appropriately described, besides drain current, transconductance and output conductance characteristics, the following parameters are analyzed to demonstrate the good agreement between model and simulation and the SCEs occurrence in this technology: threshold voltage (V TH ), subthreshold slope (S) and drain induced barrier lowering. (paper)

  7. Modeling the distribution of ammonia across Europe including bi-directional surface–atmosphere exchange

    Directory of Open Access Journals (Sweden)

    R. J. Wichink Kruit

    2012-12-01

    Full Text Available A large shortcoming of current chemistry transport models (CTMs) for simulating the fate of ammonia in the atmosphere is the lack of a description of the bi-directional surface–atmosphere exchange. In this paper, results of an update of the surface–atmosphere exchange module DEPAC, i.e. DEPosition of Acidifying Compounds, in the chemistry transport model LOTOS-EUROS are discussed. It is shown that with the new description, which includes bi-directional surface–atmosphere exchange, the modeled ammonia concentrations increase almost everywhere, in particular in agricultural source areas. The reason is that by using a compensation point the ammonia lifetime and transport distance are increased. As a consequence, deposition of ammonia and ammonium decreases in agricultural source areas, while it increases in large nature areas and remote regions, especially in southern Scandinavia. The inclusion of a compensation point for water reduces the dry deposition over sea and allows the observed marine background concentrations at coastal locations to be reproduced more closely. A comparison with measurements shows that the model results better represent the measured ammonia concentrations. The concentrations in nature areas are slightly overestimated, while the concentrations in agricultural source areas are still underestimated. Although the introduction of the compensation point improves the model performance, the modeling of ammonia remains challenging. Important aspects are emission patterns in space and time as well as a proper approach to deal with the high concentration gradients in relation to model resolution. In short, the inclusion of a bi-directional surface–atmosphere exchange is a significant step forward for modeling ammonia.
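
    The compensation point idea can be reduced to a resistance-analogy sketch: the net flux is proportional to the difference between the surface compensation point and the atmospheric concentration, so the surface emits ammonia when the ambient level drops below the compensation point. The resistance values, the single bulk compensation point and the numerical constants in the temperature dependence below are assumptions for illustration and should not be taken as the actual DEPAC parameterization.

        # Resistance-analogy sketch of bi-directional NH3 exchange with a compensation
        # point chi_c: positive flux = emission, negative = deposition. Resistances and
        # the thermodynamic constants are illustrative, not the DEPAC parameterization.
        import math

        def compensation_point(gamma, temp_k):
            """Surface compensation point (ug/m3) from an emission potential gamma = [NH4+]/[H+]."""
            return 2.75e15 / temp_k * math.exp(-1.04e4 / temp_k) * gamma

        def net_flux(chi_atm, gamma, temp_k, r_a=30.0, r_b=10.0, r_c=60.0):
            """Net NH3 flux (ug m-2 s-1) through aerodynamic, boundary-layer and canopy resistances."""
            chi_c = compensation_point(gamma, temp_k)
            return (chi_c - chi_atm) / (r_a + r_b + r_c)

        # Same surface, two ambient concentrations: deposition over a polluted source area,
        # emission once the air above is clean.
        print(net_flux(chi_atm=8.0, gamma=500.0, temp_k=288.0))
        print(net_flux(chi_atm=0.2, gamma=500.0, temp_k=288.0))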

  8. Including policy and management in socio-hydrology models: initial conceptualizations

    Science.gov (United States)

    Hermans, Leon; Korbee, Dorien

    2017-04-01

    Socio-hydrology studies the interactions in coupled human-water systems. So far, the use of dynamic models that capture the direct feedback between societal and hydrological systems has been dominant. What has not yet been included with any particular emphasis is the policy or management layer, which is a central element in, for instance, integrated water resources management (IWRM) or adaptive delta management (ADM). Studying the direct interactions between human-water systems generates knowledge that eventually helps influence these interactions in ways that may ensure better outcomes - for society and for the health and sustainability of water systems. This influence sometimes occurs through spontaneous emergence, uncoordinated by societal agents - private sector, citizens, consumers, water users. However, the term 'management' in IWRM and ADM also implies an additional coordinated attempt through various public actors. This contribution is a call to include the policy and management dimension more prominently in the research focus of the socio-hydrology field, and offers a first set of conceptual variables that should be considered in attempts to include this policy or management layer in socio-hydrology models. This is done by drawing on existing frameworks to study policy processes throughout both planning and implementation phases. These include frameworks such as the advocacy coalition framework, collective learning and policy arrangements, which all emphasize longer-term dynamics and feedbacks between actor coalitions in strategic planning and implementation processes. A case about longer-term dynamics in the management of the Haringvliet in the Netherlands is used to illustrate the paper.

  9. Landau quantized dynamics and spectra for group-VI dichalcogenides, including a model quantum wire

    Science.gov (United States)

    Horing, Norman J. M.

    2017-06-01

    This work is concerned with the derivation of the Green's function for Landau-quantized carriers in the Group-VI dichalcogenides. In the spatially homogeneous case, the Green's function is separated into a Peierls phase factor and a translationally invariant part which is determined in a closed form integral representation involving only elementary functions. The latter is expanded in an eigenfunction series of Laguerre polynomials. These results for the retarded Green's function are presented in both position and momentum representations, and yet another closed form representation is derived in circular coordinates in terms of the Bessel wave function of the second kind (not to be confused with the Bessel function). The case of a quantum wire is also addressed, representing the quantum wire in terms of a model one-dimensional δ(x)-potential profile. This retarded Green's function for propagation directly along the wire is determined exactly in terms of the corresponding Green's function for the system without the δ(x)-potential, and the Landau quantized eigenenergy dispersion relation is examined. The thermodynamic Green's function for the dichalcogenide carriers in a normal magnetic field is formulated here in terms of its spectral weight, and its solution is presented in a momentum/integral representation involving only elementary functions, which is subsequently expanded in Laguerre eigenfunctions and presented in both momentum and position representations.

  10. Landau quantized dynamics and spectra for group-VI dichalcogenides, including a model quantum wire

    Directory of Open Access Journals (Sweden)

    Norman J. M. Horing

    2017-06-01

    Full Text Available This work is concerned with the derivation of the Green’s function for Landau-quantized carriers in the Group-VI dichalcogenides. In the spatially homogeneous case, the Green’s function is separated into a Peierls phase factor and a translationally invariant part which is determined in a closed form integral representation involving only elementary functions. The latter is expanded in an eigenfunction series of Laguerre polynomials. These results for the retarded Green’s function are presented in both position and momentum representations, and yet another closed form representation is derived in circular coordinates in terms of the Bessel wave function of the second kind (not to be confused with the Bessel function). The case of a quantum wire is also addressed, representing the quantum wire in terms of a model one-dimensional δ(x)-potential profile. This retarded Green’s function for propagation directly along the wire is determined exactly in terms of the corresponding Green’s function for the system without the δ(x)-potential, and the Landau quantized eigenenergy dispersion relation is examined. The thermodynamic Green’s function for the dichalcogenide carriers in a normal magnetic field is formulated here in terms of its spectral weight, and its solution is presented in a momentum/integral representation involving only elementary functions, which is subsequently expanded in Laguerre eigenfunctions and presented in both momentum and position representations.

  11. Development and Implementation of Mechanistic Terry Turbine Models in RELAP-7 to Simulate RCIC Normal Operation Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zou, Ling [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Hongbin [Idaho National Lab. (INL), Idaho Falls, ID (United States); O' Brien, James Edward [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC (Reactor Core Isolation Cooling) systems in the Fukushima accidents and extend BWR RCIC and PWR AFW (Auxiliary Feed Water) operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia’s original work [1], have been developed and implemented in the RELAP-7 code to simulate the RCIC system. In 2016, our effort was focused on normal working conditions of the RCIC system. More complex off-design conditions will be pursued in later years when more data are available. In the Sandia model, the turbine stator inlet velocity is provided according to a reduced-order model which was obtained from a large number of CFD (computational fluid dynamics) simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine stator inlet. The models include both an adiabatic expansion process inside the nozzle and a free expansion process outside of the nozzle to ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary input information for the Terry turbine rotor model. The analytical models for the nozzle were validated with experimental data and benchmarked with CFD simulations. The analytical models generally agree well with the experimental data and CFD simulations. The analytical models are suitable for implementation into a reactor system analysis code or severe accident code as part of mechanistic and dynamical models to understand the RCIC behaviors. The newly developed nozzle models and modified turbine rotor model according to Sandia’s original work have been implemented into RELAP-7, along with the original Sandia Terry turbine model. A new pump model has also been developed and implemented to couple with the Terry turbine model. An input
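
    To illustrate the kind of relation the adiabatic nozzle expansion supplies (a textbook ideal-gas approximation with assumed steam-like properties; the actual RELAP-7 models treat steam behaviour and the free under-expanded jet stage more carefully), the exit velocity of an isentropic expansion follows from energy conservation.

      import math

      def isentropic_exit_velocity(t0, p0, p_exit, gamma=1.3, cp=1900.0):
          """Exit velocity [m/s] of an ideal gas expanded isentropically from
          stagnation (t0 [K], p0 [Pa]) to p_exit [Pa]. gamma and cp are rough
          steam-like values; both are assumptions here."""
          t_exit = t0 * (p_exit / p0) ** ((gamma - 1.0) / gamma)
          return math.sqrt(2.0 * cp * (t0 - t_exit))

      # Example: a high-pressure stagnation state expanded to a lower pressure.
      print(isentropic_exit_velocity(t0=560.0, p0=7.0e6, p_exit=1.0e6))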

  12. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times

    NARCIS (Netherlands)

    Molenaar, D.; Bolsinova, M.

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity

  13. Modelling of Water Cooled Fuel Including Design Basis and Severe Accidents. Proceedings of a Technical Meeting

    International Nuclear Information System (INIS)

    2015-11-01

    The demands on nuclear fuel have recently been increasing, and include transient regimes, higher discharge burnup and longer fuel cycles. This has resulted in an increase of loads on fuel and core internals. In order to satisfy these demands while ensuring compliance with safety criteria, new national and international programmes have been launched and advanced modelling codes are being developed. The Fukushima Daiichi accident has particularly demonstrated the need for adequate analysis of all aspects of fuel performance to prevent a failure and also to predict fuel behaviour were an accident to occur. This publication presents the Proceedings of the Technical Meeting on Modelling of Water Cooled Fuel Including Design Basis and Severe Accidents, which was hosted by the Nuclear Power Institute of China (NPIC) in Chengdu, China, following the recommendation made in 2013 at the IAEA Technical Working Group on Fuel Performance and Technology. This recommendation was in agreement with IAEA mid-term initiatives, linked to the post-Fukushima IAEA Nuclear Safety Action Plan, as well as the forthcoming Coordinated Research Project (CRP) on Fuel Modelling in Accident Conditions. At the technical meeting in Chengdu, major areas and physical phenomena, as well as types of code and experiment to be studied and used in the CRP, were discussed. The technical meeting provided a forum for international experts to review the state of the art of code development for modelling fuel performance of nuclear fuel for water cooled reactors with regard to steady state and transient conditions, and for design basis and early phases of severe accidents, including experimental support for code validation. A round table discussion focused on the needs and perspectives on fuel modelling in accident conditions. This meeting was the ninth in a series of IAEA meetings, which reflects Member States’ continuing interest in nuclear fuel issues. The previous meetings were held in 1980 (jointly with

  14. Health Promotion Behavior of Chinese International Students in Korea Including Acculturation Factors: A Structural Equation Model.

    Science.gov (United States)

    Kim, Sun Jung; Yoo, Il Young

    2016-03-01

    The purpose of this study was to explain the health promotion behavior of Chinese international students in Korea using a structural equation model including acculturation factors. A survey using self-administered questionnaires was employed. Data were collected from 272 Chinese students who had resided in Korea for longer than 6 months. The data were analyzed using structural equation modeling. The p value of the final model is .31. The fit parameters of the final model, such as the goodness of fit index, adjusted goodness of fit index, normed fit index, non-normed fit index, and comparative fit index, were greater than .95. The root mean square residual and the root mean square error of approximation also met the criteria. Self-esteem, perceived health status, acculturative stress and acculturation level had direct effects on the health promotion behavior of the participants, and the model explained 30.0% of the variance. The Chinese students in Korea with higher self-esteem, perceived health status, acculturation level, and lower acculturative stress reported higher health promotion behavior. The findings can be applied to develop health promotion strategies for this population. Copyright © 2016. Published by Elsevier B.V.

  15. Characterisation of non-Gaussian fluctuations in multiplicative log-normal models

    Science.gov (United States)

    Kiyono, Ken; Struzik, Zbigniew R.; Yamamoto, Yoshiharu

    2007-07-01

    Within the general framework of multiplicative log-normal models, we propose methods to characterise non-Gaussian and intermittent fluctuations, and study basic characteristics of non-Gaussian stochastic processes displaying slow convergence to a Gaussian with an increasing coarse-grained level of the time series. Here the multiplicative log-normal model stands for a stochastic process described by the multiplication of Gaussian and log-normally distributed variables. In other words, using two Gaussian variables, ξ and ω, the time series {x_i} of this process can be described as x_i = ξ_i exp(ω_i). Depending on the variance of ω, λ², the probability density function (PDF) of x exhibits a non-Gaussian shape. As the non-Gaussianity parameter λ² increases, the non-Gaussian tails become fatter. On the other hand, when λ² → 0, the PDF converges to a Gaussian distribution. For the purpose of estimating the non-Gaussianity parameter λ² from the observed time series, we evaluate a novel method based on analytical expressions of the absolute moments for the multiplicative log-normal models.
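
    A minimal numerical sketch of this idea is given below. It uses one particular moment-ratio estimator, assuming ξ and ω are independent zero-mean Gaussians; the paper evaluates its own estimator based on analytical absolute-moment expressions.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_multiplicative_lognormal(n, lam2, sigma=1.0):
          """x_i = xi_i * exp(omega_i) with xi ~ N(0, sigma^2), omega ~ N(0, lam2)."""
          xi = rng.normal(0.0, sigma, n)
          omega = rng.normal(0.0, np.sqrt(lam2), n)
          return xi * np.exp(omega)

      def estimate_lambda2(x):
          """Moment-ratio estimator lambda^2 = ln((2/pi) <x^2> / <|x|>^2),
          which follows from the absolute moments of the model above."""
          return np.log((2.0 / np.pi) * np.mean(x ** 2) / np.mean(np.abs(x)) ** 2)

      x = simulate_multiplicative_lognormal(200_000, lam2=0.5)
      print(estimate_lambda2(x))   # close to 0.5 for large samples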

  16. A structural model for the in vivo human cornea including collagen-swelling interaction.

    Science.gov (United States)

    Cheng, Xi; Petsche, Steven J; Pinsky, Peter M

    2015-08-06

    A structural model of the in vivo cornea, which accounts for tissue swelling behaviour, for the three-dimensional organization of stromal fibres and for collagen-swelling interaction, is proposed. Modelled as a binary electrolyte gel in thermodynamic equilibrium, the stromal electrostatic free energy is based on the mean-field approximation. To account for active endothelial ionic transport in the in vivo cornea, which modulates osmotic pressure and hydration, stromal mobile ions are shown to satisfy a modified Boltzmann distribution. The elasticity of the stromal collagen network is modelled based on three-dimensional collagen orientation probability distributions for every point in the stroma obtained by synthesizing X-ray diffraction data for azimuthal angle distributions and second harmonic-generated image processing for inclination angle distributions. The model is implemented in a finite-element framework and employed to predict free and confined swelling of stroma in an ionic bath. For the in vivo cornea, the model is used to predict corneal swelling due to increasing intraocular pressure (IOP) and is adapted to model swelling in Fuchs' corneal dystrophy. The biomechanical response of the in vivo cornea to a typical LASIK surgery for myopia is analysed, including tissue fluid pressure and swelling responses. The model provides a new interpretation of the corneal active hydration control (pump-leak) mechanism based on osmotic pressure modulation. The results also illustrate the structural necessity of fibre inclination in stabilizing the corneal refractive surface with respect to changes in tissue hydration and IOP. © 2015 The Author(s).

  17. Application of the Oral Minimal Model to Korean Subjects with Normal Glucose Tolerance and Type 2 Diabetes Mellitus

    OpenAIRE

    Lim, Min Hyuk; Oh, Tae Jung; Choi, Karam; Lee, Jung Chan; Cho, Young Min; Kim, Sungwan

    2016-01-01

    Background The oral minimal model is a simple, useful tool for the assessment of β-cell function and insulin sensitivity across the spectrum of glucose tolerance, including normal glucose tolerance (NGT), prediabetes, and type 2 diabetes mellitus (T2DM) in humans. Methods Plasma glucose, insulin, and C-peptide levels were measured during a 180-minute, 75-g oral glucose tolerance test in 24 Korean subjects with NGT (n=10) and T2DM (n=14). The parameters in the computational model were estimate...

  18. Condition monitoring with wind turbine SCADA data using Neuro-Fuzzy normal behavior models

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar

    2012-01-01

    This paper presents the latest research results of a project that focuses on normal behavior models for condition monitoring of wind turbines and their components, via ordinary Supervisory Control And Data Acquisition (SCADA) data. In this machine learning approach, Adaptive Neuro-Fuzzy Inference System (ANFIS) models are employed to learn the normal behavior in a training phase, where the component condition can be considered healthy. In the application phase the trained models are applied to predict the target signals, e.g. temperatures, pressures, currents, power output, etc. To evaluate the component condition, Fuzzy Inference System (FIS) structures are used. Based on rules that are established from the prediction error behavior during previously experienced faults, together with generic rules, the FIS outputs the component condition (green, yellow and red). Furthermore, a first diagnosis of the root...
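
    A much-simplified sketch of the normal-behavior idea follows; plain least-squares regression stands in for the ANFIS models, and the green/yellow/red thresholds are arbitrary multiples of the training error rather than the paper's FIS rules.

      import numpy as np

      rng = np.random.default_rng(1)

      # Healthy training data: bearing temperature ~ f(power, ambient temperature).
      power = rng.uniform(0.2, 2.0, 500)           # MW
      ambient = rng.uniform(-5.0, 25.0, 500)       # deg C
      bearing = 30.0 + 8.0 * power + 0.5 * ambient + rng.normal(0.0, 0.5, 500)

      X = np.column_stack([np.ones_like(power), power, ambient])
      coef, *_ = np.linalg.lstsq(X, bearing, rcond=None)   # normal-behavior model
      sigma = np.std(bearing - X @ coef)                   # training residual scale

      def condition(power, ambient, measured):
          """Traffic-light condition from the prediction error of the normal model."""
          predicted = coef @ [1.0, power, ambient]
          err = abs(measured - predicted)
          return "green" if err < 3 * sigma else "yellow" if err < 6 * sigma else "red"

      print(condition(1.5, 10.0, 47.0))   # close to the normal-behavior prediction
      print(condition(1.5, 10.0, 60.0))   # overheated component -> red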

  19. Experimental Modeling of VHTR Plenum Flows during Normal Operation and Pressurized Conduction Cooldown

    Energy Technology Data Exchange (ETDEWEB)

    Glenn E McCreery; Keith G Condie

    2006-09-01

    The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Power (NGNP) Project in the U.S., which has the goal of demonstrating the production of emissions-free electricity and hydrogen by 2015. The present document addresses experimental modeling of flow and thermal mixing phenomena of importance during normal or reduced power operation and during a loss of forced reactor cooling (pressurized conduction cooldown) scenario. The objectives of the experiments are to (1) provide benchmark data for assessment and improvement of codes proposed for NGNP designs and safety studies, and (2) obtain a better understanding of related phenomena, behavior and needs. Physical models of the VHTR vessel upper and lower plenums, which use various working fluids to scale phenomena of interest, are described. The models may be used to simulate both natural convection conditions during pressurized conduction cooldown and turbulent lower plenum flow during normal or reduced power operation.

  20. Mathematical model and computer code for coated particles performance at normal operating conditions

    International Nuclear Information System (INIS)

    Golubev, I.; Kadarmetov, I.; Makarov, V.

    2002-01-01

    Computer modeling of the thermo-mechanical behavior of coated particles during operation, at both normal and off-normal conditions, plays a very significant role, particularly at the stage of development of new reactors. In Russia, considerable experience has been accumulated on the fabrication and reactor testing of CP and fuel elements with UO2 kernels. However, this experience cannot be used in full for the development of the new reactor installation GT-MHR. This is due to the very deep burn-up of the fuel based on plutonium oxide (up to 70% FIMA). Therefore the mathematical modeling of CP thermal-mechanical behavior and failure prediction becomes particularly important. The authors clearly understand that the serviceability of fuel with high burn-up is defined not only by thermo-mechanics, but also by structural changes in coating materials, thermodynamics of chemical processes, the 'amoeba effect', CO formation, etc. In the report, the first steps of the development of an integrated code for numerical modeling of coated particle behavior are presented, together with some calculated results concerning the influence of various design parameters on the endurance of fuel coated particles under GT-MHR normal operating conditions. A failure model is developed to predict the failed fraction of TRISO-coated particles. In this model it is assumed that the failure of CP depends not only on the probability of SiC-layer fracture but also on damage to the PyC layers. The coated particle is considered as a uniform design. (author)

  1. A satellite relative motion model including J_2 and J_3 via Vinti's intermediary

    Science.gov (United States)

    Biria, Ashley D.; Russell, Ryan P.

    2018-03-01

    Vinti's potential is revisited for analytical propagation of the main satellite problem, this time in the context of relative motion. A particular version of Vinti's spheroidal method is chosen that is valid for arbitrary elliptical orbits, encapsulating J_2, J_3, and generally a partial J_4 in an orbit propagation theory without recourse to perturbation methods. As a child of Vinti's solution, the proposed relative motion model inherits these properties. Furthermore, the problem is solved in oblate spheroidal elements, leading to large regions of validity for the linearization approximation. After offering several enhancements to Vinti's solution, including boosts in accuracy and removal of some singularities, the proposed model is derived and subsequently reformulated so that Vinti's solution is piecewise differentiable. While the model is valid for the critical inclination and nonsingular in the element space, singularities remain in the linear transformation from Earth-centered inertial coordinates to spheroidal elements when the eccentricity is zero or for nearly equatorial orbits. The new state transition matrix is evaluated against numerical solutions including the J_2 through J_5 terms for a wide range of chief orbits and separation distances. The solution is also compared with side-by-side simulations of the original Gim-Alfriend state transition matrix, which considers the J_2 perturbation. Code for computing the resulting state transition matrix and associated reference frame and coordinate transformations is provided online as supplementary material.

  2. General hypothesis and shell model for the synthesis of semiconductor nanotubes, including carbon nanotubes

    Science.gov (United States)

    Mohammad, S. Noor

    2010-09-01

    Semiconductor nanotubes, including carbon nanotubes, have vast potential for new technology development. The fundamental physics and growth kinetics of these nanotubes are still obscured. Various models developed to elucidate the growth suffer from limited applicability. An in-depth investigation of the fundamentals of nanotube growth has, therefore, been carried out. For this investigation, various features of nanotube growth, and the role of the foreign element catalytic agent (FECA) in this growth, have been considered. Observed growth anomalies have been analyzed. Based on this analysis, a new shell model and a general hypothesis have been proposed for the growth. The essential element of the shell model is the seed generated from segregation during growth. The seed structure has been defined, and the formation of droplet from this seed has been described. A modified definition of the droplet exhibiting adhesive properties has also been presented. Various characteristics of the droplet, required for alignment and organization of atoms into tubular forms, have been discussed. Employing the shell model, plausible scenarios for the formation of carbon nanotubes, and the variation in the characteristics of these carbon nanotubes have been articulated. The experimental evidences, for example, for the formation of shell around a core, dipole characteristics of the seed, and the existence of nanopores in the seed, have been presented. They appear to justify the validity of the proposed model. The diversities of nanotube characteristics, fundamentals underlying the creation of bamboo-shaped carbon nanotubes, and the impurity generation on the surface of carbon nanotubes have been elucidated. The catalytic action of FECA on growth has been quantified. The applicability of the proposed model to the nanotube growth by a variety of mechanisms has been elaborated. These mechanisms include the vapor-liquid-solid mechanism, the oxide-assisted growth mechanism, the self

  3. Earthquake Clustering on Normal Faults: Insight from Rate-and-State Friction Models

    Science.gov (United States)

    Biemiller, J.; Lavier, L. L.; Wallace, L.

    2016-12-01

    Temporal variations in slip rate on normal faults have been recognized in Hawaii and the Basin and Range. The recurrence intervals of these slip transients range from 2 years on the flanks of Kilauea, Hawaii to 10 kyr timescale earthquake clustering on the Wasatch Fault in the eastern Basin and Range. In addition to these longer recurrence transients in the Basin and Range, recent GPS results there also suggest elevated deformation rate events with recurrence intervals of 2-4 years. These observations suggest that some active normal fault systems are dominated by slip behaviors that fall between the end-members of steady aseismic creep and periodic, purely elastic, seismic-cycle deformation. Recent studies propose that 200 year to 50 kyr timescale supercycles may control the magnitude, timing, and frequency of seismic-cycle earthquakes in subduction zones, where aseismic slip transients are known to play an important role in total deformation. Seismic cycle deformation of normal faults may be similarly influenced by its timing within long-period supercycles. We present numerical models (based on rate-and-state friction) of normal faults such as the Wasatch Fault showing that realistic rate-and-state parameter distributions along an extensional fault zone can give rise to earthquake clusters separated by 500 yr - 5 kyr periods of aseismic slip transients on some portions of the fault. The recurrence intervals of events within each earthquake cluster range from 200 to 400 years. Our results support the importance of stress and strain history as controls on a normal fault's present and future slip behavior and on the characteristics of its current seismic cycle. These models suggest that long- to medium-term fault slip history may influence the temporal distribution, recurrence interval, and earthquake magnitudes for a given normal fault segment.
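
    For readers unfamiliar with rate-and-state friction, the sketch below integrates the standard aging-law formulation for an imposed velocity step; the parameter values are generic laboratory-scale numbers, not those used in the Wasatch Fault models.

      import numpy as np

      # Standard rate-and-state friction with the aging law:
      #   mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),   dtheta/dt = 1 - V*theta/Dc
      mu0, a, b, Dc, V0 = 0.6, 0.010, 0.015, 1e-5, 1e-6   # generic lab-scale values

      def friction(V, theta):
          return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

      # Impose a step in sliding velocity and integrate the state variable (Euler).
      dt, n = 1e-3, 60_000
      theta = Dc / V0                      # steady state at V0
      V = V0
      mu_history = []
      for i in range(n):
          if i == n // 2:
              V = 10 * V0                  # velocity step
          theta += dt * (1.0 - V * theta / Dc)
          mu_history.append(friction(V, theta))

      # b > a here, so after the step friction evolves to a lower steady value:
      # velocity-weakening behavior, the ingredient that permits stick-slip events.
      print(mu_history[n // 2], mu_history[-1])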

  4. A new model for including the effect of fly ash on biochemical methane potential.

    Science.gov (United States)

    Gertner, Pablo; Huiliñir, César; Pinto-Villegas, Paula; Castillo, Alejandra; Montalvo, Silvio; Guerrero, Lorna

    2017-10-01

    The modelling of the effect of trace elements on anaerobic digestion, and specifically the effect of fly ash (FA), has been scarcely studied. Thus, the present work was aimed at the development of a new function that allows accumulated methane models to predict the effect of FA on the volume of accumulated methane. For this purpose, five FA concentrations (10, 25, 50, 250 and 500 mg/L) using raw and pre-treated sewage sludge were used to calibrate the new function, while three FA concentrations (40, 150 and 350 mg/L) were used for validation. Three models for accumulated methane volume (the modified Gompertz equation, the logistic function, and the transfer function) were evaluated. The results showed that methane production increased in the presence of FA when the sewage sludge was not pre-treated, while with pre-treated sludge there is inhibition of methane production at FA concentrations higher than 50 mg/L. In the calibration of the proposed function, it fits well with the experimental data under all the conditions, including the inhibition and stimulating zones, with the values of the parameters of the methane production models falling in the range of those reported in the literature. For the validation experiments, the model succeeded in representing the behavior of new experiments in both the stimulating and inhibiting zones, with NRMSE and R² ranging from 0.3577 to 0.03714 and from 0.2209 to 0.9911, respectively. Thus, the proposed model is robust and valid for the studied conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
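
    The modified Gompertz equation named above is straightforward to fit to cumulative methane data; a minimal sketch with synthetic data and scipy's curve_fit follows (the authors' new fly-ash correction function is not reproduced here, because its form is not given in the abstract).

      import numpy as np
      from scipy.optimize import curve_fit

      def modified_gompertz(t, P, Rmax, lam):
          """Cumulative methane volume: P = methane potential (mL),
          Rmax = maximum production rate (mL/d), lam = lag time (d)."""
          return P * np.exp(-np.exp(Rmax * np.e / P * (lam - t) + 1.0))

      # Synthetic BMP curve (days vs. mL CH4) standing in for one FA condition.
      t = np.linspace(0, 30, 31)
      v = modified_gompertz(t, 250.0, 25.0, 2.0) \
          + np.random.default_rng(2).normal(0, 3, t.size)

      popt, _ = curve_fit(modified_gompertz, t, v, p0=[200.0, 20.0, 1.0])
      print(popt)   # recovered P, Rmax and lag time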

  5. A curved multi-component aerosol hygroscopicity model framework: Part 2 – Including organic compounds

    Directory of Open Access Journals (Sweden)

    D. O. Topping

    2005-01-01

    Full Text Available This paper describes the inclusion of organic particulate material within the Aerosol Diameter Dependent Equilibrium Model (ADDEM framework described in the companion paper applied to inorganic aerosol components. The performance of ADDEM is analysed in terms of its capability to reproduce the behaviour of various organic and mixed inorganic/organic systems using recently published bulk data. Within the modelling architecture already described two separate thermodynamic models are coupled in an additive approach and combined with a method for solving the Kohler equation in order to develop a tool for predicting the water content associated with an aerosol of known inorganic/organic composition and dry size. For development of the organic module, the widely used group contribution method UNIFAC is employed to explicitly deal with the non-ideality in solution. The UNIFAC predictions for components of atmospheric importance were improved considerably by using revised interaction parameters derived from electro-dynamic balance studies. Using such parameters, the model was found to adequately describe mixed systems including 5–6 dicarboxylic acids, down to low relative humidity conditions. By comparison with electrodynamic balance data, it was also found that the model was capable of capturing the behaviour of aqueous aerosols containing Suwannee River Fulvic acid, a structure previously used to represent the functionality of complex oxidised macromolecules often found in atmospheric aerosols. The additive approach for modelling mixed inorganic/organic systems worked well for a variety of mixtures. As expected, deviations between model predictions and measurements increase with increasing concentration. Available surface tension models, used in evaluating the Kelvin term, were found to reproduce measured data with varying success. Deviations from experimental data increased with increased organic compound complexity. For components only slightly

  6. A curved multi-component aerosol hygroscopicity model framework: Part 2 Including organic compounds

    Science.gov (United States)

    Topping, D. O.; McFiggans, G. B.; Coe, H.

    2005-05-01

    This paper describes the inclusion of organic particulate material within the Aerosol Diameter Dependent Equilibrium Model (ADDEM) framework described in the companion paper applied to inorganic aerosol components. The performance of ADDEM is analysed in terms of its capability to reproduce the behaviour of various organic and mixed inorganic/organic systems using recently published bulk data. Within the modelling architecture already described two separate thermodynamic models are coupled in an additive approach and combined with a method for solving the Kohler equation in order to develop a tool for predicting the water content associated with an aerosol of known inorganic/organic composition and dry size. For development of the organic module, the widely used group contribution method UNIFAC is employed to explicitly deal with the non-ideality in solution. The UNIFAC predictions for components of atmospheric importance were improved considerably by using revised interaction parameters derived from electro-dynamic balance studies. Using such parameters, the model was found to adequately describe mixed systems including 5-6 dicarboxylic acids, down to low relative humidity conditions. By comparison with electrodynamic balance data, it was also found that the model was capable of capturing the behaviour of aqueous aerosols containing Suwannee River Fulvic acid, a structure previously used to represent the functionality of complex oxidised macromolecules often found in atmospheric aerosols. The additive approach for modelling mixed inorganic/organic systems worked well for a variety of mixtures. As expected, deviations between model predictions and measurements increase with increasing concentration. Available surface tension models, used in evaluating the Kelvin term, were found to reproduce measured data with varying success. Deviations from experimental data increased with increased organic compound complexity. For components only slightly soluble in water

  7. A generalized model for optimal transport of images including dissipation and density modulation

    KAUST Repository

    Maas, Jan

    2015-11-01

    © EDP Sciences, SMAI 2015. In this paper the optimal transport and the metamorphosis perspectives are combined. For a pair of given input images, geodesic paths in the space of images are defined as minimizers of a resulting path energy. To this end, the underlying Riemannian metric measures the rate of transport cost and the rate of viscous dissipation. Furthermore, the model is capable of dealing with strongly varying image contrast and explicitly allows for sources and sinks in the transport equations, which are incorporated in the metric related to the metamorphosis approach by Trouvé and Younes. In the non-viscous case with source term, the existence of geodesic paths is proven in the space of measures. The proposed model is explored on the range from merely optimal transport to strongly dissipative dynamics. For this model a robust and effective variational time discretization of geodesic paths is proposed. This requires minimizing a discrete path energy consisting of a sum of consecutive image matching functionals. These functionals are defined on corresponding pairs of intensity functions and on associated pairwise matching deformations. Existence of time discrete geodesics is demonstrated. Furthermore, a finite element implementation is proposed and applied to instructive test cases and to real images. In the non-viscous case this is compared to the algorithm proposed by Benamou and Brenier including a discretization of the source term. Finally, the model is generalized to define discrete weighted barycentres with applications to textures and objects.

  8. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Stéphane Guichard

    2015-12-01

    Full Text Available This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM). A mathematical model dedicated to PCMs, based on the heat apparent capacity method, was implemented in a multi-zone building simulation code, the aim being to increase the understanding of the thermal behavior of the whole building with PCM technologies. In order to empirically validate the model, the methodology is based on both numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model was identified for optimization. The use of the generic optimization program GenOpt®, coupled to the building simulation code, enabled the determination of an adequate set of parameters. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt® and its coupling with the building simulation code. Finally, once the optimization results are obtained, comparisons of the thermal predictions with measurements are found to be acceptable and are presented.
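
    A bare-bones illustration of the heat apparent capacity method referenced above: a 1-D explicit conduction sketch in which the latent heat appears as a Gaussian peak in the effective heat capacity around the melting temperature. The material values are placeholders, not those of the validated roof model.

      import numpy as np

      # Apparent (effective) heat capacity: sensible capacity plus a Gaussian
      # peak around the melting temperature that carries the latent heat.
      cp_s, latent, t_melt, dT = 2000.0, 180e3, 27.0, 1.5   # J/kg.K, J/kg, degC, degC

      def cp_apparent(T):
          return cp_s + latent / (dT * np.sqrt(2 * np.pi)) \
                 * np.exp(-0.5 * ((T - t_melt) / dT) ** 2)

      # Explicit 1-D conduction through a PCM layer heated on one face.
      k, rho, L, nx = 0.2, 800.0, 0.02, 41                  # W/m.K, kg/m3, m, nodes
      dx = L / (nx - 1)
      T = np.full(nx, 22.0)
      dt = 0.2 * rho * cp_s * dx ** 2 / k                   # conservative time step
      for _ in range(20000):
          T[0] = 35.0                                       # hot boundary
          T[-1] = 22.0                                      # cold face at room temperature
          lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
          T[1:-1] += dt * k * lap / (rho * cp_apparent(T[1:-1]))
      print(T[::10])   # temperature plateaus near t_melt reveal the latent storage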

  9. A multiscale model for glioma spread including cell-tissue interactions and proliferation.

    Science.gov (United States)

    Engwer, Christian; Knappitsch, Markus; Surulescu, Christina

    2016-04-01

    Glioma is a broad class of brain and spinal cord tumors arising from glia cells, which are the main brain cells that can develop into neoplasms. They are highly invasive and lead to irregular tumor margins which are not precisely identifiable by medical imaging, thus rendering a sufficiently precise resection very difficult. The understanding of glioma spread patterns is hence essential for both radiological therapy and surgical treatment. In this paper we propose a multiscale model for glioma growth including interactions of the cells with the underlying tissue network, along with proliferative effects. Our current accounting for two subpopulations of cells, to accommodate proliferation according to the go-or-grow dichotomy, is an extension of the setting in [16]. As in that paper, we assume that cancer cells use neuronal fiber tracts as invasive pathways. Hence, the individual structure of brain tissue seems to be decisive for the tumor spread. Diffusion tensor imaging (DTI) is able to provide such information, thus opening the way for patient-specific modeling of glioma invasion. Starting from a multiscale model involving subcellular (microscopic) and individual (mesoscale) cell dynamics, we perform a parabolic scaling to obtain an approximating reaction-diffusion-transport equation on the macroscale of the tumor cell population. Numerical simulations based on DTI data are carried out in order to assess the performance of our modeling approach.

  10. Habitability of super-Earth planets around other suns: models including Red Giant Branch evolution.

    Science.gov (United States)

    von Bloh, W; Cuntz, M; Schröder, K-P; Bounama, C; Franck, S

    2009-01-01

    The unexpected diversity of exoplanets includes a growing number of super-Earth planets, i.e., exoplanets with masses of up to several Earth masses and a similar chemical and mineralogical composition as Earth. We present a thermal evolution model for a 10 Earth-mass planet orbiting a star like the Sun. Our model is based on the integrated system approach, which describes the photosynthetic biomass production and takes into account a variety of climatological, biogeochemical, and geodynamical processes. This allows us to identify a so-called photosynthesis-sustaining habitable zone (pHZ), as determined by the limits of biological productivity on the planetary surface. Our model considers solar evolution during the main-sequence stage and along the Red Giant Branch as described by the most recent solar model. We obtain a large set of solutions consistent with the principal possibility of life. The highest likelihood of habitability is found for "water worlds." Only mass-rich water worlds are able to realize pHZ-type habitability beyond the stellar main sequence on the Red Giant Branch.

  11. Combined thermal and elastic modeling of the normal and tumorous breast

    Science.gov (United States)

    Jiang, Li; Zhan, Wang; Loew, Murray

    2008-03-01

    The abnormal thermogram has been shown to be a reliable indicator of a high risk of breast cancer, but an open question is how to quantify the complex relationships between the breast thermal behaviors and the underlying physiological/pathological conditions. Previous thermal modeling techniques generally did not utilize the breast geometry determined by the gravity-induced elastic deformations arising from various body postures. In this paper, a 3-D finite-element method is developed for combined modeling of the thermal and elastic properties of the breast, including the mechanical nonlinearity associated with large deformations. The effects of the thermal and elastic properties of the breast tissues are investigated quantitatively. For the normal breast in a standing/sitting up posture, the gravity-induced deformation alone is found to be able to cause an asymmetric temperature distribution even though all the thermal/elastic properties are symmetrical, and this temperature asymmetry increases for softer and more compressible breast tissues. For a tumorous breast, we found that the surface-temperature alterations generally can be recognizable for superficial tumors at depths less than 20 mm. Tumor size plays a less important role than the tumor depth in determining the tumor-induced temperature difference. This result may imply that a higher thermal sensitivity is critical for a breast thermogram system when deeper tumors are present, even if the tumor is relatively large. We expect this new method to provide a stronger foundation for, and greater specificity and precision in, thermographic diagnosis and treatment of breast tumors.
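
    Breast thermal models of this kind commonly build on a bioheat formulation; the sketch below solves a 1-D Pennes bioheat equation with an elevated-metabolism region standing in for a superficial tumor. All property values are generic placeholders, and the elastic coupling of the actual 3-D finite-element model is omitted.

      import numpy as np

      # 1-D Pennes bioheat: rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + q_m
      k, rho, c = 0.42, 1000.0, 3600.0                    # tissue properties
      w_b, rho_b, c_b, T_a = 5e-4, 1060.0, 3800.0, 37.0   # perfusion, blood, arterial T
      q_m = 450.0                                         # basal metabolic heat, W/m3
      h, T_env = 10.0, 22.0                               # convection coeff, room temp

      L, nx = 0.04, 81                                    # 4 cm from chest wall to skin
      x = np.linspace(0, L, nx)
      dx = x[1] - x[0]
      q = np.full(nx, q_m)
      q[(x > 0.015) & (x < 0.025)] = 10 * q_m             # "tumor" at ~2 cm depth

      T = np.full(nx, 37.0)
      dt = 0.2 * rho * c * dx ** 2 / k
      for _ in range(40000):
          T[0] = 37.0                                     # chest-wall temperature
          lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
          perf = w_b * rho_b * c_b * (T_a - T[1:-1])
          T[1:-1] += dt * (k * lap + perf + q[1:-1]) / (rho * c)
          T[-1] = (k / dx * T[-2] + h * T_env) / (k / dx + h)   # convective skin boundary
      print(T[-1])   # skin temperature; compare with q[:] = q_m for the normal case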

  12. Analysis of electronic models for solar cells including energy resolved defect densities

    Energy Technology Data Exchange (ETDEWEB)

    Glitzky, Annegret

    2010-07-01

    We introduce an electronic model for solar cells including energy resolved defect densities. The resulting drift-diffusion model corresponds to a generalized van Roosbroeck system with additional source terms coupled with ODEs containing space and energy as parameters for all defect densities. The system has to be considered in heterostructures and with mixed boundary conditions from device simulation. We give a weak formulation of the problem. If the boundary data and the sources are compatible with thermodynamic equilibrium the free energy along solutions decays monotonously. In other cases it may be increasing, but we estimate its growth. We establish boundedness and uniqueness results and prove the existence of a weak solution. This is done by considering a regularized problem, showing its solvability and the boundedness of its solutions independent of the regularization level. (orig.)

  13. Effect of including decay chains on predictions of equilibrium-type terrestrial food chain models

    International Nuclear Information System (INIS)

    Kirchner, G.

    1990-01-01

    Equilibrium-type food chain models are commonly used for assessing the radiological impact to man from environmental releases of radionuclides. Usually these do not take into account build-up of radioactive decay products during environmental transport. This may be a potential source of underprediction. For estimating consequences of this simplification, the equations of an internationally recognised terrestrial food chain model have been extended to include decay chains of variable length. Example calculations show that for releases from light water reactors as expected both during routine operation and in the case of severe accidents, the build-up of decay products during environmental transport is generally of minor importance. However, a considerable number of radionuclides of potential radiological significance have been identified which show marked contributions of decay products to calculated contamination of human food and resulting radiation dose rates. (author)
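
    The size of this effect can be gauged with the two-member Bateman solution; the sketch below is a generic illustration (not the extended food chain model itself) of daughter in-growth during a transport or holdup delay.

      import numpy as np

      def daughter_activity(t, lam_p, lam_d, A_p0):
          """Bateman solution: daughter activity grown in from an initially pure
          parent of activity A_p0, with decay constants lam_p, lam_d [1/day]."""
          return A_p0 * lam_d / (lam_d - lam_p) * (np.exp(-lam_p * t) - np.exp(-lam_d * t))

      # Example: Te-132 (half-life ~3.2 d) feeding I-132 (~2.3 h) during a
      # 10-day transport/holdup time before consumption.
      lam_te = np.log(2) / 3.2
      lam_i = np.log(2) / (2.3 / 24.0)
      t = np.linspace(0, 10, 11)
      print(daughter_activity(t, lam_te, lam_i, A_p0=1.0))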

  14. Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS

    Science.gov (United States)

    Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.

    2010-12-01

    With 4 million ha currently grown for ethanol in Brazil alone, approximately half of the global bioethanol production in 2005 (Smeets 2008), and a devoted land area expected to expand globally in the years to come, sugar cane is at the heart of the biofuel debate. Indeed, ethanol made from biomass is currently the most widespread option for alternative transportation fuels. It was originally promoted as a carbon-neutral energy resource that could bring energy independence to countries and local opportunities to farmers, until attention was drawn to its environmental and socio-economical drawbacks. It is still not clear to what extent it is a solution for, or a contributor to, climate change. Dynamic Global Vegetation Models can help address these issues and quantify the potential impacts of biofuels on ecosystems at scales ranging from on-site to global. The global agro-ecosystem model ORCHIDEE describes water, carbon and energy exchanges at the soil-atmosphere interface for a limited number of natural and agricultural vegetation types. In order to integrate agricultural management into the simulations and to capture more accurately the specificity of crops' phenology, ORCHIDEE has been coupled with the agronomical model STICS. The resulting crop-oriented vegetation model ORCHIDEE-STICS has been used so far to simulate temperate crops such as wheat, corn and soybean. As a generic ecosystem model, each grid cell can include several vegetation types with their own phenology and management practices, making it suitable for spatial simulations. Here, ORCHIDEE-STICS is extended to include sugar cane as a new agricultural Plant Functional Type, implemented and parametrized using the STICS approach. An on-site calibration and validation is then performed based on biomass and flux chamber measurements at several sites in Australia, and variables such as LAI, dry weight, heat fluxes and respiration are used to evaluate the ability of the model to simulate the specific

  15. The prevalence of polycystic ovary syndrome in a normal population according to the Rotterdam criteria versus revised criteria including anti-Müllerian hormone

    DEFF Research Database (Denmark)

    Lauritsen, M P; Bentzen, J G; Pinborg, A

    2014-01-01

    -anovulation, clinical and/or biochemical hyperandrogenism and polycystic ovaries (AFC ≥ 12 and/or ovarian volume >10 ml). However, with the advances in sonography, the relevance of the AFC threshold in the definition of polycystic ovaries has been challenged, and AMH has been proposed as a marker of polycystic ovaries...... in PCOS. STUDY DESIGN, SIZE, DURATION: From 2008 to 2010, a prospective, cross-sectional study was performed including 863 women aged 20-40 years and employed at Copenhagen University Hospital, Rigshospitalet, Denmark. PARTICIPANTS/MATERIAL, SETTING, METHODS: We studied a subgroup of 447 women with a mean...... > 19 or AMH > 35 pmol/l, the prevalence of PCOS was 6.3 or 8.5%, respectively, and in the age groups health-care workers...

  16. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  17. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
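
    A compact sketch of the recommended LASSO approach, using scikit-learn's L1-penalized logistic regression with repeated cross-validation on synthetic dose/clinical features; the variables and data are illustrative, not the xerostomia dataset.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

      rng = np.random.default_rng(3)
      n = 300
      # Synthetic candidate predictors: e.g. mean dose, partial volumes, age, baseline score.
      X = rng.normal(size=(n, 6))
      logit = 0.9 * X[:, 0] + 0.6 * X[:, 1] - 0.5          # only two truly predictive
      y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

      # L1 penalty performs variable selection while fitting the NTCP-style model.
      model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
      cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
      auc = cross_val_score(model, X, y, scoring="roc_auc", cv=cv)
      print(auc.mean(), auc.std())

      model.fit(X, y)
      print(model.coef_)    # irrelevant predictors shrink to (near) zero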

  18. Web-accessible molecular modeling with Rosetta: The Rosetta Online Server that Includes Everyone (ROSIE).

    Science.gov (United States)

    Moretti, Rocco; Lyskov, Sergey; Das, Rhiju; Meiler, Jens; Gray, Jeffrey J

    2018-01-01

    The Rosetta molecular modeling software package provides a large number of experimentally validated tools for modeling and designing proteins, nucleic acids, and other biopolymers, with new protocols being added continually. While freely available to academic users, external usage is limited by the need for expertise in the Unix command line environment. To make Rosetta protocols available to a wider audience, we previously created a web server called Rosetta Online Server that Includes Everyone (ROSIE), which provides a common environment for hosting web-accessible Rosetta protocols. Here we describe a simplification of the ROSIE protocol specification format, one that permits easier implementation of Rosetta protocols. Whereas the previous format required creating multiple separate files in different locations, the new format allows specification of the protocol in a single file. This new, simplified protocol specification has more than doubled the number of Rosetta protocols available under ROSIE. These new applications include pKa determination, lipid accessibility calculation, ribonucleic acid redesign, protein-protein docking, protein-small molecule docking, symmetric docking, antibody docking, cyclic toxin docking, critical binding peptide determination, and mapping small molecule binding sites. ROSIE is freely available to academic users at http://rosie.rosettacommons.org. © 2017 The Protein Society.

  19. Thomas Kuhn's 'Structure of Scientific Revolutions' applied to exercise science paradigm shifts: example including the Central Governor Model.

    Science.gov (United States)

    Pires, Flávio de Oliveira; de Oliveira Pires, Flávio

    2013-07-01

    According to Thomas Kuhn, the scientific progress of any discipline can be distinguished by a pre-paradigm phase, a normal science phase and a revolution phase. Science advances when a scientific revolution takes place after a silent period of normal science and the scientific community moves ahead to a paradigm shift. I suggest there has been a recent change of course in the direction of exercise science. According to the 'current paradigm', exercise would probably be limited by alterations in either central command or peripheral skeletal muscles, and fatigue would be developed in a task-dependent manner. Instead, the central governor model (CGM) has proposed that all forms of exercise are centrally regulated; the central nervous system would calculate the metabolic cost required to complete a task in order to avoid catastrophic body failure. Some have criticized the CGM and supported the traditional interpretation, but recently the scientific community appears to have begun an intellectual trajectory towards accepting this theory. First, the increased number of citations of articles that have supported the CGM could indicate that the community has changed its focus. Second, relevant journals have devoted special editions to promote the debate on subjects challenged by the CGM. Finally, scientists from different fields have recognized mechanisms included in the CGM to understand the limits of exercise. Given the importance of the scientific community in demarcating a Kuhnian paradigm shift, I suggest that these three aspects could indicate an increased acceptance of a centrally regulated effort model to understand the limits of exercise.

  20. A curved multi-component aerosol hygroscopicity model framework: 2 Including organics

    Science.gov (United States)

    Topping, D. O.; McFiggans, G. B.; Coe, H.

    2004-12-01

    This paper describes the inclusion of organic particulate material within the Aerosol Diameter Dependent Equilibrium Model (ADDEM) framework described in the companion paper applied to inorganic aerosol components. The performance of ADDEM is analysed in terms of its capability to reproduce the behaviour of various organic and mixed inorganic/organic systems using recently published bulk data. Within the modelling architecture already described two separate thermodynamic models are coupled in an additive approach and combined with a method for solving the Köhler equation in order to develop a tool for predicting the water content associated with an aerosol of known inorganic/organic composition and dry size. For development of the organic module, the widely used group contribution method UNIFAC is employed to explicitly deal with the non-ideality in solution. The UNIFAC predictions for components of atmospheric importance were improved considerably by using revised interaction parameters derived from electro-dynamic balance studies. Using such parameters, the model was found to adequately describe mixed systems including 5-6 dicarboxylic acids, down to low relative humidity conditions. The additive approach for modelling mixed inorganic/organic systems worked well for a variety of mixtures. As expected, deviations between predicted and measured data increase with increasing concentration. Available surface tension models, used in evaluating the Kelvin term, were found to reproduce measured data with varying success. Deviations from experimental data increased with increased organic compound complexity. For components only slightly soluble in water, significant deviations from measured surface tension depression behaviour were predicted with both model formalisms tested. A Sensitivity analysis showed that such variation is likely to lead to predicted growth factors within the measurement uncertainty for growth factor taken in the sub-saturated regime. Greater

  1. A Hydrological Concept including Lateral Water Flow Compatible with the Biogeochemical Model ForSAFE

    Directory of Open Access Journals (Sweden)

    Giuliana Zanchi

    2016-03-01

    Full Text Available The study presents a hydrology concept developed to include lateral water flow in the biogeochemical model ForSAFE. The hydrology concept was evaluated against data collected at Svartberget in the Vindeln Research Forest in Northern Sweden. The results show that the new concept allows simulation of a saturated and an unsaturated zone in the soil as well as water flow that reaches the stream comparable to measurements. The most relevant differences compared to streamflow measurements are that the model simulates a higher base flow in winter and lower flow peaks after snowmelt. These differences are mainly caused by the assumptions made to regulate the percolation at the bottom of the simulated soil columns. The capability for simulating lateral flows and a saturated zone in ForSAFE can greatly improve the simulation of chemical exchange in the soil and export of elements from the soil to watercourses. Such a model can help improve the understanding of how environmental changes in the forest landscape will influence chemical loads to surface waters.

  2. Integrated Sachs-Wolfe effect in a quintessence cosmological model: Including anisotropic stress of dark energy

    International Nuclear Information System (INIS)

    Wang, Y. T.; Xu, L. X.; Gui, Y. X.

    2010-01-01

    In this paper, we investigate the integrated Sachs-Wolfe effect in the quintessence cold dark matter model with constant equation of state and constant speed of sound in the dark energy rest frame, including the dark energy perturbation and its anisotropic stress. Compared with the ΛCDM model, we find that the integrated Sachs-Wolfe (ISW) power spectra are affected by the different background evolutions and by the dark energy perturbation. As we change the speed of sound from 1 to 0 in the quintessence cold dark matter model with given state parameters, it is found that the inclusion of dark energy anisotropic stress makes the variation of the magnitude of the ISW source uncertain, due to the anticorrelation between the speed of sound and the ratio of the dark energy density perturbation contrast to the dark matter density perturbation contrast in the ISW source term. Thus, the magnitude of the ISW source term is governed by the competition between the change of the factor (1 + (3/2)x ĉ_s²) and that of δ_de/δ_m as ĉ_s² varies.

  3. Nitroglycerin provocation in normal subjects is not a useful human migraine model?

    DEFF Research Database (Denmark)

    Tvedskov, J F; Iversen, Helle Klingenberg; Olesen, J

    2010-01-01

    Provoking delayed migraine with nitroglycerin in migraine sufferers is a cumbersome model. Patients are difficult to recruit, migraine comes on late and variably, and only 50-80% of patients develop an attack. A model using normal volunteers would be much more useful, but it should be validated...... aspirin 1000 mg, zolmitriptan 5 mg or placebo to normal healthy volunteers. The design was double-blind, placebo-controlled three-way crossover. Our hypothesis was that these drugs would be effective in the treatment of the mild constant headache induced by long-lasting GTN infusion. The headaches did...... experiment suggests that headache caused by direct nitric oxide (NO) action in the continued presence of NO is very resistant to analgesics and to specific acute migraine treatments. This suggests that NO works very deep in the cascade of events associated with vascular headache, whereas tested drugs work...

  4. Recognition of sine wave modeled consonants by normal hearing and hearing-impaired individuals

    Science.gov (United States)

    Balachandran, Rupa

    Sine wave modeling is a parametric tool for representing the speech signal with a limited number of sine waves. It involves replacing the peaks of the speech spectrum with sine waves and discarding the rest of the lower amplitude components during synthesis. It has the potential to be used as a speech enhancement technique for hearing-impaired adults. The present study answers the following basic questions: (1) Are sine wave synthesized speech tokens more intelligible than natural speech tokens? (2) What is the effect of varying the number of sine waves on consonant recognition in quiet? (3) What is the effect of varying the number of sine waves on consonant recognition in noise? (4) How does sine wave modeling affect the transmission of speech features in quiet and in noise? (5) Are there differences in recognition performance between normal hearing and hearing-impaired listeners? VCV syllables representing 20 consonants (/p/, /t/, /k/, /b/, /d/, /g/, /f/, /theta/, /s/, /∫/, /v/, /z/, /t∫/, /dy/, /j/, /w/, /r/, /l/, /m/, /n/) in three vowel contexts (/a/, /i/, /u/) were modeled with 4, 8, 12, and 16 sine waves. A consonant recognition task was performed in quiet, and in background noise (+10 dB and 0 dB SNR). Twenty hearing-impaired listeners and six normal hearing listeners were tested under headphones at their most comfortable listening level. The main findings were: (1) Recognition of unprocessed speech was better than that of sine wave modeled speech. (2) Asymptotic performance was reached with 8 sine waves in quiet for both normal hearing and hearing-impaired listeners. (3) Consonant recognition performance in noise improved with increasing number of sine waves. (4) As the number of sine waves was decreased, place information was lost first, followed by manner, and finally voicing. (5) Hearing-impaired listeners made more errors than normal hearing listeners, but there were no differences in the error patterns made by both groups.
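
    For illustration only, a minimal Python sketch of the core operation described above (keeping the strongest spectral peaks of each frame and resynthesizing them as sine waves) is given below. It is not the study's implementation; the frame length, windowing, amplitude scaling and the toy input signal are assumptions made for this sketch.

```python
import numpy as np

def sinewave_model(x, fs, n_sines=8, frame_len=512):
    """Keep only the n_sines largest FFT peaks of each frame and
    resynthesize the frame as a sum of sine waves."""
    y = np.zeros(len(x))
    window = np.hanning(frame_len)
    for start in range(0, len(x) - frame_len + 1, frame_len):
        frame = x[start:start + frame_len] * window
        spec = np.fft.rfft(frame)
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
        peaks = np.argsort(np.abs(spec))[-n_sines:]     # strongest bins
        t = np.arange(frame_len) / fs
        synth = np.zeros(frame_len)
        for k in peaks:
            amp = 2.0 * np.abs(spec[k]) / np.sum(window)  # approximate sine amplitude
            synth += amp * np.cos(2 * np.pi * freqs[k] * t + np.angle(spec[k]))
        y[start:start + frame_len] = synth
    return y

# Toy usage: a crude two-component "vowel" resynthesized with 4 sine waves.
fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
vowel = 0.6 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 800 * t)
resynth = sinewave_model(vowel, fs, n_sines=4)
print(resynth.shape)
```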

  5. Inflammatory Cytokine Tumor Necrosis Factor α Confers Precancerous Phenotype in an Organoid Model of Normal Human Ovarian Surface Epithelial Cells

    Directory of Open Access Journals (Sweden)

    Joseph Kwong

    2009-06-01

    Full Text Available In this study, we established an in vitro organoid model of normal human ovarian surface epithelial (HOSE) cells. The spheroids of these normal HOSE cells resembled epithelial inclusion cysts in human ovarian cortex, which are the cells of origin of ovarian epithelial tumor. Because there are strong correlations between chronic inflammation and the incidence of ovarian cancer, we used the organoid model to test whether the protumor inflammatory cytokine tumor necrosis factor α would induce a malignant phenotype in normal HOSE cells. Prolonged treatment with tumor necrosis factor α induced phenotypic changes of the HOSE spheroids, which exhibited the characteristics of precancerous lesions of ovarian epithelial tumors, including reinitiation of cell proliferation, structural disorganization, epithelial stratification, loss of epithelial polarity, degradation of basement membrane, cell invasion, and overexpression of ovarian cancer markers. The result of this study provides not only evidence supporting the link between chronic inflammation and ovarian cancer formation but also a relevant and novel in vitro model for studying early events of ovarian cancer.

  6. Modelling and control of a microgrid including photovoltaic and wind generation

    Science.gov (United States)

    Hussain, Mohammed Touseef

    Extensive increase of distributed generation (DG) penetration and the existence of multiple DG units at distribution level have introduced the notion of micro-grid. This thesis develops a detailed non-linear and small-signal dynamic model of a microgrid that includes PV, wind and conventional small scale generation along with their power electronics interfaces and the filters. The models developed evaluate the amount of generation mix from various DGs for satisfactory steady state operation of the microgrid. In order to understand the interaction of the DGs on microgrid system initially two simpler configurations were considered. The first one consists of microalternator, PV and their electronics, and the second system consists of microalternator and wind system each connected to the power system grid. Nonlinear and linear state space model of each microgrid are developed. Small signal analysis showed that the large participation of PV/wind can drive the microgrid to the brink of unstable region without adequate control. Non-linear simulations are carried out to verify the results obtained through small-signal analysis. The role of the extent of generation mix of a composite microgrid consisting of wind, PV and conventional generation was investigated next. The findings of the smaller systems were verified through nonlinear and small signal modeling. A central supervisory capacitor energy storage controller interfaced through a STATCOM was proposed to monitor and enhance the microgrid operation. The potential of various control inputs to provide additional damping to the system has been evaluated through decomposition techniques. The signals identified to have damping contents were employed to design the supervisory control system. The controller gains were tuned through an optimal pole placement technique. Simulation studies demonstrate that the STATCOM voltage phase angle and PV inverter phase angle were the best inputs for enhanced stability boundaries.

  7. Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian

    Science.gov (United States)

    Teneng, Dean

    2013-09-01

    We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models using the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters (by maximum likelihood; a computational problem) for JPY/CHF, but CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible the other way around. We also demonstrate that foreign exchange closing prices can be forecast with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
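
    As a rough illustration of the workflow in this record (maximum-likelihood fitting of an NIG distribution followed by a Kolmogorov-Smirnov check), a minimal Python sketch follows. The record itself used R; the synthetic price series and parameter values below are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder "closing prices": synthetic NIG draws stand in for real FX data.
rng = np.random.default_rng(0)
prices = stats.norminvgauss(a=2.0, b=0.5, loc=80.0, scale=5.0).rvs(1000, random_state=rng)

# Maximum-likelihood estimates of the NIG parameters (a, b, loc, scale).
params = stats.norminvgauss.fit(prices)

# Kolmogorov-Smirnov goodness-of-fit test of the fitted distribution.
ks_stat, p_value = stats.kstest(prices, "norminvgauss", args=params)
print("fitted parameters:", np.round(params, 3))
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```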

  8. Publicly available models to predict normal boiling point of organic compounds

    International Nuclear Information System (INIS)

    Oprisiu, Ioana; Marcou, Gilles; Horvath, Dragos; Brunel, Damien Bernard; Rivollet, Fabien; Varnek, Alexandre

    2013-01-01

    Quantitative structure–property models to predict the normal boiling point (T_b) of organic compounds were developed using non-linear ASNNs (associative neural networks) as well as multiple linear regression – ISIDA-MLR and SQS (stochastic QSAR sampler). Models were built on a diverse set of 2098 organic compounds with T_b varying in the range of 185–491 K. In ISIDA-MLR and ASNN calculations, fragment descriptors were used, whereas fragment, FPTs (fuzzy pharmacophore triplets), and ChemAxon descriptors were employed in SQS models. Prediction quality of the models has been assessed by 5-fold cross-validation. The obtained models were implemented in the on-line ISIDA predictor at http://infochim.u-strasbg.fr/webserv/VSEngine.html
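
    A hedged sketch of the multiple-linear-regression QSPR workflow assessed by 5-fold cross-validation, as described above, is given below. The descriptor matrix and response are synthetic placeholders; the real ISIDA fragment descriptors and the measured T_b values are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Placeholder descriptor matrix: 2098 compounds x 50 fragment-count descriptors.
X = rng.poisson(lam=2.0, size=(2098, 50)).astype(float)
w = rng.normal(size=50)
raw = X @ w
# Scale the synthetic response into the reported T_b range of 185-491 K.
y = 185 + (491 - 185) * (raw - raw.min()) / np.ptp(raw)

# Multiple linear regression assessed by 5-fold cross-validation.
scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```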

  9. Normal linear models with genetically structured residual variance heterogeneity: a case study

    DEFF Research Database (Denmark)

    Sorensen, Daniel; Waagepetersen, Rasmus Plenge

    2003-01-01

    Normal mixed models with different levels of heterogeneity in the residual variance are fitted to pig litter size data. Exploratory analysis and model assessment is based on examination of various posterior predictive distributions. Comparisons based on Bayes factors and related criteria favour...... models with a genetically structured residual variance heterogeneity. There is, moreover, strong evidence of a negative correlation between the additive genetic values affecting litter size and those affecting residual variance. The models are also compared according to the purposes for which they might...... be used, such as prediction of 'future' data, inference about response to selection and ranking candidates for selection. A brief discussion is given of some implications for selection of the genetically structured residual variance model....

  10. Hydrodynamic Cucker-Smale model with normalized communication weights and time delay

    KAUST Repository

    Choi, Young-Pil

    2017-07-17

    We study a hydrodynamic Cucker-Smale-type model with time delay in communication and information processing, in which agents interact with each other through normalized communication weights. The model consists of a pressureless Euler system with time delayed non-local alignment forces. We resort to its Lagrangian formulation and prove the existence of its global in time classical solutions. Moreover, we derive a sufficient condition for the asymptotic flocking behavior of the solutions. Finally, we show the presence of a critical phenomenon for the Eulerian system posed in the spatially one-dimensional setting.
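
    For orientation, the particle-level analogue of such a system, with normalized communication weights and a processing delay τ, is commonly written as below. This is a sketch of the standard form only, not the paper's hydrodynamic (pressureless Euler) formulation; the communication weight ψ is left unspecified, and the exact placement of the delay follows one common convention in which the neighbours' states are delayed.

```latex
\begin{aligned}
\dot{x}_i(t) &= v_i(t),\\
\dot{v}_i(t) &= \sum_{j\neq i}\tilde{\psi}_{ij}(t)\,\bigl(v_j(t-\tau)-v_i(t)\bigr),
\qquad
\tilde{\psi}_{ij}(t)=\frac{\psi\!\bigl(|x_j(t-\tau)-x_i(t)|\bigr)}
{\sum_{k\neq i}\psi\!\bigl(|x_k(t-\tau)-x_i(t)|\bigr)}.
\end{aligned}
```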

  11. Exergoeconomic performance optimization for a steady-flow endoreversible refrigeration model including six typical cycles

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Lingen; Kan, Xuxian; Sun, Fengrui; Wu, Feng [College of Naval Architecture and Power, Naval University of Engineering, Wuhan 430033 (China)

    2013-07-01

    The operation of a universal steady-flow endoreversible refrigeration cycle model consisting of a constant thermal-capacity heating branch, two constant thermal-capacity cooling branches and two adiabatic branches is viewed as a production process with exergy as its output. The finite-time exergoeconomic performance optimization of the refrigeration cycle is investigated by taking the profit rate optimization criterion as the objective. The relations between the profit rate and the temperature ratio of the working fluid, between the COP (coefficient of performance) and the temperature ratio of the working fluid, as well as the optimal relation between the profit rate and the COP of the cycle, are derived. The focus of this paper is to search for the compromise optimization between economics (profit rate) and the utilization factor (COP) for endoreversible refrigeration cycles, by searching for the optimum COP at maximum profit, which is termed the finite-time exergoeconomic performance bound. Moreover, performance analysis and optimization of the model are carried out in order to investigate the effect of the cycle process on the performance of the cycles using a numerical example. The results obtained herein include the performance characteristics of endoreversible Carnot, Diesel, Otto, Atkinson, Dual and Brayton refrigeration cycles.

  12. Intestinal bacterial overgrowth includes potential pathogens in the carbohydrate overload models of equine acute laminitis.

    Science.gov (United States)

    Onishi, Janet C; Park, Joong-Wook; Prado, Julio; Eades, Susan C; Mirza, Mustajab H; Fugaro, Michael N; Häggblom, Max M; Reinemeyer, Craig R

    2012-10-12

    Carbohydrate overload models of equine acute laminitis are used to study the development of lameness. It is hypothesized that a diet-induced shift in cecal bacterial communities contributes to the development of the pro-inflammatory state that progresses to laminar failure. It is proposed that vasoactive amines, protease activators and endotoxin, all bacterially derived bioactive metabolites, play a role in disease development. Questions regarding the oral bioavailability of many of the bacterially derived bioactive metabolites remain. This study evaluates the possibility that a carbohydrate-induced overgrowth of potentially pathogenic cecal bacteria occurs and that bacterial translocation contributes toward the development of the pro-inflammatory state. Two groups of mixed-breed horses were used: those with laminitis induced by cornstarch (n=6) or oligofructan (n=6), and non-laminitic controls (n=8). Cecal fluid and tissue homogenates of extra-intestinal sites including the laminae were used to enumerate Gram-negative and -positive bacteria. Horses that developed Obel grade 2 lameness revealed a significant overgrowth of potentially pathogenic Gram-positive and Gram-negative intestinal bacteria within the cecal fluid. Although colonization of extra-intestinal sites with potentially pathogenic bacteria was not detected, results of this study indicate that cecal/colonic lymphadenopathy and eosinophilia develop in horses progressing to lameness. It is hypothesized that the pro-inflammatory state in carbohydrate overload models of equine acute laminitis is driven by an immune response to the rapid overgrowth of Gram-positive and Gram-negative cecal bacterial communities in the gut. Further equine research is indicated to study the immunological response, involving the lymphatic system, that develops in the model. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. CFD simulations and reduced order modeling of a refrigerator compartment including radiation effects

    International Nuclear Information System (INIS)

    Bayer, Ozgur; Oskay, Ruknettin; Paksoy, Akin; Aradag, Selin

    2013-01-01

    Highlights: ► Free convection in a refrigerator is simulated including radiation effects. ► Heat rates are affected drastically when radiation effects are considered. ► 95% of the flow energy can be represented by using one spatial POD mode. - Abstract: Considering the engineering problem of natural convection in domestic refrigerator applications, this study aims to simulate the fluid flow and temperature distribution in a single commercial refrigerator compartment by using the experimentally determined temperature values as the specified constant wall temperature boundary conditions. The free convection in refrigerator applications is evaluated as a three-dimensional (3D), turbulent, transient and coupled non-linear flow problem. Radiation heat transfer mode is also included in the analysis. According to the results, taking radiation effects into consideration does not change the temperature distribution inside the refrigerator significantly; however the heat rates are affected drastically. The flow inside the compartment is further analyzed with a reduced order modeling method called Proper Orthogonal Decomposition (POD) and the energy contents of several spatial and temporal modes that exist in the flow are examined. The results show that approximately 95% of all the flow energy can be represented by only using one spatial mode

  14. Pharmacokinetic-pharmacodynamic modeling of diclofenac in normal and Freund's complete adjuvant-induced arthritic rats

    Science.gov (United States)

    Zhang, Jing; Li, Pei; Guo, Hai-fang; Liu, Li; Liu, Xiao-dong

    2012-01-01

    Aim: To characterize pharmacokinetic-pharmacodynamic modeling of diclofenac in Freund's complete adjuvant (FCA)-induced arthritic rats using prostaglandin E2 (PGE2) as a biomarker. Methods: The pharmacokinetics of diclofenac was investigated using 20-day-old arthritic rats. PGE2 level in the rats was measured using an enzyme immunoassay. A pharmacokinetic-pharmacodynamic (PK-PD) model was developed to illustrate the relationship between the plasma concentration of diclofenac and the inhibition of PGE2 production. The inhibition of diclofenac on lipopolysaccharide (LPS)-induced PGE2 production in blood cells was investigated in vitro. Results: Similar pharmacokinetic behavior of diclofenac was found both in normal and FCA-induced arthritic rats. Diclofenac significantly decreased the plasma levels of PGE2 in both normal and arthritic rats. The inhibitory effect on PGE2 levels in the plasma was in proportion to the plasma concentration of diclofenac. No delay in the onset of inhibition was observed, suggesting that the effect compartment was located in the central compartment. An inhibitory effect sigmoid Imax model was selected to characterize the relationship between the plasma concentration of diclofenac and the inhibition of PGE2 production in vivo. The Imax model was also used to illustrate the inhibition of diclofenac on LPS-induced PGE2 production in blood cells in vitro. Conclusion: Arthritis induced by FCA does not alter the pharmacokinetic behaviors of diclofenac in rats, but the pharmacodynamics of diclofenac is slightly affected. A PK-PD model characterizing an inhibitory effect sigmoid Imax can be used to fit the relationship between the plasma PGE2 and diclofenac levels in both normal rats and FCA-induced arthritic rats. PMID:22842736
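
    The sigmoid Imax relationship referred to above has the standard form E(C) = E0·(1 − Imax·C^γ/(IC50^γ + C^γ)). A small Python sketch with purely hypothetical parameter values (not the fitted values from this study) is shown below.

```python
import numpy as np

def sigmoid_imax(conc, e0, imax, ic50, gamma):
    """Sigmoid Imax inhibitory-effect model:
    E(C) = E0 * (1 - Imax * C**gamma / (IC50**gamma + C**gamma))."""
    return e0 * (1.0 - imax * conc**gamma / (ic50**gamma + conc**gamma))

# Hypothetical parameters, for illustration only (not the study's estimates).
e0, imax, ic50, gamma = 100.0, 0.9, 0.5, 1.5   # baseline PGE2, max inhibition, IC50, Hill slope

for c in (0.0, 0.1, 0.5, 2.0, 10.0):            # example diclofenac plasma concentrations
    print(f"C = {c:5.1f}  ->  predicted PGE2 = {sigmoid_imax(c, e0, imax, ic50, gamma):6.1f}")
```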

  15. A transition-based joint model for disease named entity recognition and normalization.

    Science.gov (United States)

    Lou, Yinxia; Zhang, Yue; Qian, Tao; Li, Fei; Xiong, Shufeng; Ji, Donghong

    2017-08-01

    Disease named entities play a central role in many areas of biomedical research, and automatic recognition and normalization of such entities have received increasing attention in biomedical research communities. Existing methods typically used pipeline models with two independent phases: (i) a disease named entity recognition (DER) system is used to find the boundaries of mentions in text and (ii) a disease named entity normalization (DEN) system is used to connect the mentions recognized to concepts in a controlled vocabulary. The main problems of such models are: (i) there is error propagation from DER to DEN and (ii) DEN is useful for DER, but pipeline models cannot utilize this. We propose a transition-based model to jointly perform disease named entity recognition and normalization, casting the output construction process into an incremental state transition process, learning sequences of transition actions globally, which correspond to joint structural outputs. Beam search and online structured learning are used, with learning being designed to guide search. Compared with the only existing method for joint DEN and DER, our method allows non-local features to be used, which significantly improves the accuracies. We evaluate our model on two corpora: the BioCreative V Chemical Disease Relation (CDR) corpus and the NCBI disease corpus. Experiments show that our joint framework achieves significantly higher performances compared to competitive pipeline baselines. Our method compares favourably to other state-of-the-art approaches. Data and code are available at https://github.com/louyinxia/jointRN. dhji@whu.edu.cn. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  16. Analysis of TLM Air-vent Model Applicability to EMC Problems for Normal Incident Plane Wave

    Directory of Open Access Journals (Sweden)

    N. J. Nešić

    2016-11-01

    Full Text Available In this paper, the shielding properties of a protective metal enclosure with airflow aperture arrays are numerically analyzed. As the numerical model, the TLM method is employed, either in a conventional form based on a fine mesh to describe the presence of the apertures or enhanced with the compact air-vent model. The main focus of the paper is on examining the limits of applying the compact air-vent model to EMC problem solving. Namely, various values of the distance between neighboring apertures in the TLM air-vent models, as well as of the air-vent thickness, are analyzed. Specifically, the analyses are conducted for a normal incident plane wave, both vertically and horizontally polarized.

  17. γ-Hadron family description by quasi-scaling model at normal nuclear composition of primary cosmic rays

    International Nuclear Information System (INIS)

    Kalmakhelidze, M.; Roinishvili, N.; Svanidze, M.

    2002-01-01

    The nuclear composition of primary cosmic rays was investigated in the energy region 10^15-10^16 eV. The study is based on a comparison of γ-hadron families observed by the Pamir and Pamir-Chacaltaya collaborations with those generated by means of the quasi-scaling model MC0 for different nuclear compositions. It was shown that all characteristics of the observed families (including their intensity) are in very good agreement with the properties of the events simulated with MC0 for a normal composition and are in disagreement for heavy-dominant compositions.

  18. A review of shear strength models for rock joints subjected to constant normal stiffness

    Directory of Open Access Journals (Sweden)

    Sivanathan Thirukumaran

    2016-06-01

    Full Text Available The typical shear behaviour of rough joints has been studied under constant normal load/stress (CNL) boundary conditions, but recent studies have shown that this boundary condition may not replicate true practical situations. Constant normal stiffness (CNS) is more appropriate to describe the stress–strain response of field joints since the CNS boundary condition is more realistic than CNL. The practical implications of CNS are movements of unstable blocks in the roof or walls of an underground excavation, reinforced rock wedges sliding in a rock slope or foundation, and the vertical movement of rock-socketed concrete piles. In this paper, the highlights and limitations of the existing models used to predict the shear strength/behaviour of joints under CNS conditions are discussed in depth.

  19. Improving normal tissue complication probability models: the need to adopt a "data-pooling" culture.

    Science.gov (United States)

    Deasy, Joseph O; Bentzen, Søren M; Jackson, Andrew; Ten Haken, Randall K; Yorke, Ellen D; Constine, Louis S; Sharma, Ashish; Marks, Lawrence B

    2010-03-01

    Clinical studies of the dependence of normal tissue response on dose-volume factors are often confusingly inconsistent, as the QUANTEC reviews demonstrate. A key opportunity to accelerate progress is to begin storing high-quality datasets in repositories. Using available technology, multiple repositories could be conveniently queried, without divulging protected health information, to identify relevant sources of data for further analysis. After obtaining institutional approvals, data could then be pooled, greatly enhancing the capability to construct predictive models that are more widely applicable and better powered to accurately identify key predictive factors (whether dosimetric, image-based, clinical, socioeconomic, or biological). Data pooling has already been carried out effectively in a few normal tissue complication probability studies and should become a common strategy. Copyright 2010 Elsevier Inc. All rights reserved.

  20. A Note on the Equivalence between the Normal and the Lognormal Implied Volatility : A Model Free Approach

    OpenAIRE

    Grunspan, Cyril

    2011-01-01

    First, we show that implied normal volatility is intimately linked with the incomplete Gamma function. Then, we deduce an expansion of implied normal volatility in terms of the time-value of a European call option. Next, we formulate an equivalence between the implied normal volatility and the lognormal implied volatility with any strike and any model. This generalizes a known result for the SABR model. Finally, we address the issue of the "breakeven move" of a delta-hedged portfolio.

  1. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    Science.gov (United States)

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible model based on a gamma-distributed signal and a normally distributed background noise and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate for modelling Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement
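
    A toy sketch of the convolution model underlying this correction (observed intensity = gamma-distributed signal + normally distributed background) is given below. The moment-based recovery of the gamma parameters is shown only to illustrate the model; it is not the estimator implemented in the NormalGamma package, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50000

# Simulate observed bead intensities: gamma signal plus normal background noise.
shape, scale = 1.8, 120.0            # assumed gamma signal parameters
mu_bg, sd_bg = 90.0, 12.0            # background, e.g. estimated from negative controls
signal = rng.gamma(shape, scale, n)
observed = signal + rng.normal(mu_bg, sd_bg, n)

# Method-of-moments recovery of the signal parameters, assuming the background
# mean and variance are known from the negative control features.
mean_s = observed.mean() - mu_bg
var_s = observed.var() - sd_bg**2
shape_hat = mean_s**2 / var_s
scale_hat = var_s / mean_s
print(f"true shape/scale: {shape:.2f}/{scale:.1f}  "
      f"estimated: {shape_hat:.2f}/{scale_hat:.1f}")
```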

  2. Comparing the Effects of Particulate Matter on the Ocular Surfaces of Normal Eyes and a Dry Eye Rat Model.

    Science.gov (United States)

    Han, Ji Yun; Kang, Boram; Eom, Youngsub; Kim, Hyo Myung; Song, Jong Suk

    2017-05-01

    To compare the effect of exposure to particulate matter on the ocular surface of normal and experimental dry eye (EDE) rat models. Titanium dioxide (TiO2) nanoparticles were used as the particulate matter. Rats were divided into 4 groups: normal control group, TiO2 challenge group of the normal model, EDE control group, and TiO2 challenge group of the EDE model. After 24 hours, corneal clarity was compared and tear samples were collected for quantification of lactate dehydrogenase, MUC5AC, and tumor necrosis factor-α concentrations. The periorbital tissues were used to evaluate the inflammatory cell infiltration and detect apoptotic cells. The corneal clarity score was greater in the EDE model than in the normal model. The score increased after TiO2 challenge in each group compared with each control group (normal control vs. TiO2 challenge group, 0.0 ± 0.0 vs. 0.8 ± 0.6, P = 0.024; EDE control vs. TiO2 challenge group, 2.2 ± 0.6 vs. 3.8 ± 0.4, P = 0.026). The tear lactate dehydrogenase level and inflammatory cell infiltration on the ocular surface were higher in the EDE model than in the normal model. These measurements increased significantly in both normal and EDE models after TiO2 challenge. The tumor necrosis factor-α levels and terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling-positive cells were also higher in the EDE model than in the normal model. TiO2 nanoparticle exposure on the ocular surface had a more prominent effect in the EDE model than it did in the normal model. The ocular surface of dry eyes seems to be more vulnerable to fine dust of air pollution than that of normal eyes.

  3. Evolution of the continental upper mantle : numerical modelling of thermo-chemical convection including partial melting

    NARCIS (Netherlands)

    de Smet, J.H.

    1999-01-01

    This thesis elaborates on the evolution of the continental upper mantle based on numerical modelling results. The descriptive and explanatory basis is formed by a numerical thermo-chemical convection model. The model evolution starts in the early Archaean about 4 billion years ago. The model follows

  4. Evolution of the continental upper mantle : numerical modelling of thermo-chemical convection including partial melting

    NARCIS (Netherlands)

    Smet, J.H. de

    1999-01-01

    This thesis elaborates on the evolution of the continental upper mantle based on numerical modelling results. The descriptive and explanatory basis is formed by a numerical thermo-chemical convection model. The model evolution starts in the early Archaean about 4 billion years ago. The model

  5. Evaluation of gap heat transfer model in ELESTRES for CANDU fuel element under normal operating conditions

    International Nuclear Information System (INIS)

    Lee, Kang Moon; Ohn, Myung Ryong; Im, Hong Sik; Choi, Jong Hoh; Hwang, Soon Taek

    1995-01-01

    The gap conductance between the fuel and the sheath depends strongly on the gap width and has a significant influence on the amount of initial stored energy. The modified Ross and Stoute gap conductance model in ELESTRES is based on a simplified thermal deformation model for steady-state fuel temperature calculations. A review of a series of experiments reveals that fuel pellets crack, relocate, and are eccentrically positioned within the sheath rather than remaining solid concentric cylinders. In this paper, two recently proposed gap conductance models (the offset gap model and the relocated gap model) are described and are applied to calculate the fuel-sheath gap conductances under experimental conditions and normal operating conditions in CANDU reactors. The good agreement between the experimentally-inferred and calculated gap conductance values demonstrates that the modified Ross and Stoute model was implemented correctly in ELESTRES. The predictions of the modified Ross and Stoute model provide conservative values for gap heat transfer and fuel surface temperature compared to the offset gap and relocated gap models for a limiting power envelope. 13 figs., 3 tabs., 16 refs. (Author)

  6. A Thermal Evolution Model of the Earth Including the Biosphere, Continental Growth and Mantle Hydration

    Science.gov (United States)

    Höning, D.; Spohn, T.

    2014-12-01

    By harvesting solar energy and converting it to chemical energy, photosynthetic life plays an important role in the energy budget of Earth [2]. This leads to alterations of chemical reservoirs eventually affecting the Earth's interior [4]. It has further been speculated [3] that the formation of continents may be a consequence of the evolution of life. A steady state model [1] suggests that the Earth without its biosphere would evolve to a steady state with a smaller continent coverage and a dryer mantle than is observed today. We present a model including (i) parameterized thermal evolution, (ii) continental growth and destruction, and (iii) mantle water regassing and outgassing. The biosphere enhances the production rate of sediments which eventually are subducted. These sediments are assumed to (i) carry water to depth bound in stable mineral phases and (ii) have the potential to suppress shallow dewatering of the underlying sediments and crust due to their low permeability. We run a Monte Carlo simulation for various initial conditions and treat as successful all those parameter combinations which result in the fraction of continental crust coverage observed for the present-day Earth. Finally, we simulate the evolution of an abiotic Earth using the same set of parameters but a reduced rate of continental weathering and erosion. Our results suggest that the origin and evolution of life could have stabilized the large continental surface area of the Earth and its wet mantle, leading to the relatively low mantle viscosity we observe at present. Without photosynthetic life on our planet, the Earth would be geodynamically less active due to a dryer mantle, and would have a smaller fraction of continental coverage than observed today. References: [1] Höning, D., Hansen-Goos, H., Airo, A., Spohn, T., 2014. Biotic vs. abiotic Earth: A model for mantle hydration and continental coverage. Planetary and Space Science 98, 5-13. [2] Kleidon, A., 2010. Life, hierarchy, and the

  7. Mixture modeling methods for the assessment of normal and abnormal personality, part II: longitudinal models.

    Science.gov (United States)

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.

  8. Bifactor model of WISC-IV: Applicability and measurement invariance in low and normal IQ groups.

    Science.gov (United States)

    Gomez, Rapson; Vance, Alasdair; Watson, Shaun

    2017-07-01

    This study examined the applicability and measurement invariance of the bifactor model of the 10 Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) core subtests in groups of children and adolescents (age range from 6 to 16 years) with low (IQ ≤79; N = 229; % male = 75.9) and normal (IQ ≥80; N = 816; % male = 75.0) IQ scores. Results supported this model in both groups, and there was good support for measurement invariance for this model across these groups. For all participants together, the omega hierarchical and explained common variance (ECV) values were high for the general factor and low to negligible for the specific factors. Together, the findings favor the use of the Full Scale IQ (FSIQ) scores of the WISC-IV, but not the subscale index scores. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
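
    The omega hierarchical and ECV statistics referred to above can be computed directly from standardized bifactor loadings. The sketch below uses hypothetical loadings for 10 subtests (not the WISC-IV estimates) together with the standard bifactor formulas.

```python
import numpy as np

# Hypothetical standardized bifactor loadings for 10 subtests (illustration only):
# one general factor plus two specific factors covering 5 subtests each.
general = np.array([0.70, 0.68, 0.72, 0.65, 0.60, 0.66, 0.71, 0.63, 0.69, 0.64])
specific = [np.array([0.30, 0.25, 0.35, 0.28, 0.22]),   # specific factor 1 (subtests 1-5)
            np.array([0.20, 0.18, 0.26, 0.24, 0.21])]   # specific factor 2 (subtests 6-10)

uniqueness = 1.0 - general**2
uniqueness[:5] -= specific[0]**2
uniqueness[5:] -= specific[1]**2

# Omega hierarchical: share of total-score variance attributable to the general factor.
total_var = general.sum()**2 + sum(s.sum()**2 for s in specific) + uniqueness.sum()
omega_h = general.sum()**2 / total_var

# Explained common variance (ECV) of the general factor.
ecv = (general**2).sum() / ((general**2).sum() + sum((s**2).sum() for s in specific))

print(f"omega hierarchical = {omega_h:.2f}, ECV = {ecv:.2f}")
```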

  9. Track etching model for normal incident heavy ion recording in isotropic dielectric detectors

    International Nuclear Information System (INIS)

    Membrey, F.; Chambaudet, A.; Fromm, M.; Saouli, R.

    1990-01-01

    Heavy ion recording in dielectric isotropic detectors has a wide range of applications in such areas as uranium cartography, neutron activation and fission track dating using the external detector method (EDM). It is important to have a good understanding of etch pit evolution during chemical etching. The conical model, which is very often used, is based on a constant track etching velocity (VT). Numerous experiments have shown, however, that VT varies along the damage trail. In this paper, we propose a computer-generated model which simulates the etching process for normal incident ions. The analytical form of VT must be chosen in order to describe as precisely as possible the relationship between etching time (residual range) and the VT value. The conical model only provides a primary approximation which is generally insufficient, especially when performing cartography. (author)

  10. Numerical modelling of seawater intrusion in Shenzhen (China) using a 3D density-dependent model including tidal effects

    Science.gov (United States)

    Lu, Wei; Yang, Qingchun; Martín, Jordi D.; Juncosa, Ricardo

    2013-04-01

    During the 1990s, groundwater overexploitation resulted in seawater intrusion in the coastal aquifer of Shenzhen city, China. Although water supply facilities have been improved and have alleviated seawater intrusion in recent years, groundwater overexploitation is still of great concern in some local areas. In this work we present a three-dimensional density-dependent numerical model developed with the FEFLOW code, which is aimed at simulating the extent of seawater intrusion while including tidal effects and different groundwater pumping scenarios. Model calibration, using water heads and reported chloride concentrations, has been performed based on the data from 14 boreholes, which were monitored from May 2008 to December 2009. A fairly good fit between the observed and computed values was obtained by a manual trial-and-error method. Model predictions have been carried out 3 years forward with the calibrated model, taking into account high, medium and low tide levels and different groundwater exploitation schemes. The model results show that tide-induced seawater intrusion significantly affects the groundwater levels and concentrations near the estuary of the Dasha river, which implies that an important hydraulic connection exists between this river and groundwater, even considering that some anti-seepage measures were taken in the river bed. Two pumping scenarios were considered in the calibrated model in order to predict the future changes in the water levels and chloride concentration. The numerical results reveal a decreasing tendency of seawater intrusion if groundwater exploitation does not exceed an upper bound of about 1.32 × 10^4 m^3/d. The model results also provide insights for controlling seawater intrusion in such coastal aquifer systems.

  11. Evaluation of factors in development of Vis/NIR spectroscopy models for discriminating PSE, DFD and normal broiler breast meat.

    Science.gov (United States)

    Jiang, Hongzhe; Yoon, Seung-Chul; Zhuang, Hong; Wang, Wei; Yang, Yi

    2017-12-01

    1. The performance of visible and near-infrared (Vis/NIR) spectroscopic models for discriminating true pale, soft and exudative (PSE), normal, and dark, firm and dry (DFD) broiler breast meat was evaluated under different conditions of preprocessing method, spectral range, characteristic wavelength selection and water-holding capacity (WHC) index. 2. Quality attributes of 214 intact chicken fillets (pectoralis major), such as lightness (L*), pH and WHC indicators including drip loss (DL), water gain and expressible fluid, were measured. Fillets were grouped into PSE, normal and DFD categories based on a combination of L*, pH and WHC threshold criteria. Classification models were developed using support vector machine based methods on characteristic wavelengths selected from the unprocessed or 2nd-derivative spectra, respectively, in three spectral subsets of 400-2500, 400-1100 and 1100-2500 nm. 3. Better classification of the three meat groups was obtained based on unprocessed spectra (72-94%) than on 2nd-derivative spectra (55-72%). The classification based on 400-2500 nm (91% average) and 400-1100 nm (89% average) performed better than that on 1100-2500 nm (78% average). In terms of the three different WHC indicators, the combination of L*, pH and DL produced better results than the other two groups, with a recognition accuracy of 94.4% using the 400-2500-nm range. 4. These analytical results suggest that for a better classification of true PSE, normal and DFD broiler breast meat with Vis/NIR spectra, wavelengths from unprocessed spectra should be used, ranges of 400-1000 nm should be included in the data collection, and DL as an indicator of WHC might provide a better prediction model.
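
    A schematic Python sketch of a support-vector-machine classifier assigning spectra to PSE, normal and DFD classes is given below. The spectra are synthetic and scikit-learn's SVC stands in for the support-vector-based methods mentioned in the record; none of the numbers correspond to the study's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_per_class, n_wavelengths = 70, 200          # e.g. selected wavelengths in 400-2500 nm

# Synthetic class-mean spectra for PSE, normal and DFD plus noise (illustration only).
means = rng.normal(size=(3, n_wavelengths))
X = np.vstack([m + 0.5 * rng.normal(size=(n_per_class, n_wavelengths)) for m in means])
y = np.repeat(["PSE", "normal", "DFD"], n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0,
                                                    stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)
print(f"test-set recognition accuracy: {clf.score(X_test, y_test):.2f}")
```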

  12. A robust macroscopic model for normal-shear coupling, asymmetric and anisotropic behaviors of polycrystalline SMAs

    Science.gov (United States)

    Bodaghi, M.; Damanpack, A. R.; Liao, W. H.

    2016-07-01

    The aim of this article is to develop a robust macroscopic bi-axial model to capture self-accommodation, martensitic transformation/orientation/reorientation, normal-shear deformation coupling and asymmetric/anisotropic strain generation in polycrystalline shape memory alloys. By considering the volume fraction of martensite and its preferred direction as scalar and directional internal variables, constitutive relations are derived to describe basic mechanisms of accommodation, transformation and orientation/reorientation of martensite variants. A new definition is introduced for maximum recoverable strain, which allows the model to capture the effects of tension-compression asymmetry and transformation anisotropy. Furthermore, the coupling effects between normal and shear deformation modes are considered by merging inelastic strain components together. By introducing a calibration approach, material and kinetic parameters of the model are recast in terms of common quantities that characterize a uniaxial phase kinetic diagram. The solution algorithm of the model is presented based on an elastic-predictor inelastic-corrector return mapping process. In order to explore and demonstrate capabilities of the proposed model, theoretical predictions are first compared with existing experimental results on uniaxial tension, compression, torsion and combined tension-torsion tests. Afterwards, experimental results of uniaxial tension, compression, pure bending and buckling tests on NiTi rods and tubes are replicated by implementing a finite element method along with the Newton-Raphson and Riks techniques to trace the non-linear equilibrium path. A good qualitative and quantitative correlation is observed between numerical and experimental results, which verifies the accuracy of the model and the solution procedure.

  13. Evaluation of MCF10A as a Reliable Model for Normal Human Mammary Epithelial Cells.

    Directory of Open Access Journals (Sweden)

    Ying Qu

    Full Text Available Breast cancer is the most common cancer in women and a leading cause of cancer-related deaths for women worldwide. Various cell models have been developed to study breast cancer tumorigenesis, metastasis, and drug sensitivity. The MCF10A human mammary epithelial cell line is a widely used in vitro model for studying normal breast cell function and transformation. However, there is limited knowledge about whether MCF10A cells reliably represent normal human mammary cells. MCF10A cells were grown in monolayer, suspension (mammosphere) culture, three-dimensional (3D) "on-top" Matrigel, 3D "cell-embedded" Matrigel, or mixed Matrigel/collagen I gel. Suspension culture was performed with the MammoCult medium and low-attachment culture plates. Cells grown in 3D culture were fixed and subjected to either immunofluorescence staining or embedding and sectioning followed by immunohistochemistry and immunofluorescence staining. Cells or slides were stained for protein markers commonly used to identify mammary progenitor and epithelial cells. MCF10A cells expressed markers representing luminal, basal, and progenitor phenotypes in two-dimensional (2D) culture. When grown in suspension culture, MCF10A cells showed low mammosphere-forming ability. Cells in mammospheres and 3D culture expressed both luminal and basal markers. Surprisingly, the acinar structure formed by MCF10A cells in 3D culture was positive for both basal markers and the milk proteins β-casein and α-lactalbumin. MCF10A cells exhibit a unique differentiated phenotype in 3D culture which may not exist or be rare in normal human breast tissue. Our results raise a question as to whether the commonly used MCF10A cell line is a suitable model for human mammary cell studies.

  14. Evaluation of MCF10A as a Reliable Model for Normal Human Mammary Epithelial Cells.

    Science.gov (United States)

    Qu, Ying; Han, Bingchen; Yu, Yi; Yao, Weiwu; Bose, Shikha; Karlan, Beth Y; Giuliano, Armando E; Cui, Xiaojiang

    2015-01-01

    Breast cancer is the most common cancer in women and a leading cause of cancer-related deaths for women worldwide. Various cell models have been developed to study breast cancer tumorigenesis, metastasis, and drug sensitivity. The MCF10A human mammary epithelial cell line is a widely used in vitro model for studying normal breast cell function and transformation. However, there is limited knowledge about whether MCF10A cells reliably represent normal human mammary cells. MCF10A cells were grown in monolayer, suspension (mammosphere culture), three-dimensional (3D) "on-top" Matrigel, 3D "cell-embedded" Matrigel, or mixed Matrigel/collagen I gel. Suspension culture was performed with the MammoCult medium and low-attachment culture plates. Cells grown in 3D culture were fixed and subjected to either immunofluorescence staining or embedding and sectioning followed by immunohistochemistry and immunofluorescence staining. Cells or slides were stained for protein markers commonly used to identify mammary progenitor and epithelial cells. MCF10A cells expressed markers representing luminal, basal, and progenitor phenotypes in two-dimensional (2D) culture. When grown in suspension culture, MCF10A cells showed low mammosphere-forming ability. Cells in mammospheres and 3D culture expressed both luminal and basal markers. Surprisingly, the acinar structure formed by MCF10A cells in 3D culture was positive for both basal markers and the milk proteins β-casein and α-lactalbumin. MCF10A cells exhibit a unique differentiated phenotype in 3D culture which may not exist or be rare in normal human breast tissue. Our results raise a question as to whether the commonly used MCF10A cell line is a suitable model for human mammary cell studies.

  15. Median estimation of chemical constituents for sampling on two occasions under a log‐normal model

    Science.gov (United States)

    2015-01-01

    Sampling from a finite population on multiple occasions introduces dependencies between the successive samples when overlap is designed. Such sampling designs lead to efficient statistical estimates, while they allow estimating changes over time for the targeted outcomes. This makes them very popular in real-world statistical practice. Sampling with partial replacement can also be very efficient in biological and environmental studies where estimation of toxicant levels and their trends over time is the main interest. Sampling with partial replacement is designed here on two occasions in order to estimate the median concentration of chemical constituents quantified by means of liquid chromatography coupled with tandem mass spectrometry. Such data represent relative peak areas resulting from the chromatographic analysis. They are therefore positive-valued and skewed data, and are commonly fitted very well by the log-normal model. A log-normal model is assumed here for chemical constituents quantified in mainstream cigarette smoke in a real case study. Combining design-based and model-based approaches for statistical inference, we seek the median estimate of chemical constituents by sampling with partial replacement on two time occasions. We also discuss the limitations of extending the proposed approach to other skewed population models. The latter is investigated by means of a Monte Carlo simulation study. PMID:26013679

  16. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    Science.gov (United States)

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
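
    As a point of reference, the comparison baseline mentioned in the record (a regularized Gaussian graphical model fitted to log-transformed counts) can be sketched with scikit-learn's graphical lasso as below; the hierarchical Poisson log-normal model itself is not reproduced here, and the toy counts carry no real network structure.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(1)
n_samples, n_genes = 60, 15

# Toy RNA-seq-like counts (independent Poisson draws; illustration only).
counts = rng.poisson(lam=20.0, size=(n_samples, n_genes))
log_counts = np.log1p(counts)                 # log-transform the counts

model = GraphicalLassoCV().fit(log_counts)    # sparse inverse-covariance estimate
precision = model.precision_
edges = (np.abs(precision) > 1e-6) & ~np.eye(n_genes, dtype=bool)
print("number of inferred edges:", int(edges.sum()) // 2)
```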

  17. Lateral dynamic flight stability of a model hoverfly in normal and inclined stroke-plane hovering

    International Nuclear Information System (INIS)

    Xu, Na; Sun, Mao

    2014-01-01

    Many insects hover with their wings beating in a horizontal plane (‘normal hovering’), while some insects, e.g., hoverflies and dragonflies, hover with inclined stroke-planes. Here, we investigate the lateral dynamic flight stability of a hovering model hoverfly. The aerodynamic derivatives are computed using the method of computational fluid dynamics, and the equations of motion are solved by the techniques of eigenvalue and eigenvector analysis. The following is shown: The flight of the insect is unstable at normal hovering (stroke-plane angle equals 0) and the instability becomes weaker as the stroke-plane angle increases; the flight becomes stable at a relatively large stroke-plane angle (larger than about 24°). As previously shown, the instability at normal hovering is due to a positive roll-moment/side-velocity derivative produced by the ‘changing-LEV-axial-velocity’ effect. When the stroke-plane angle increases, the wings bend toward the back of the body, and the ‘changing-LEV-axial-velocity’ effect decreases; in addition, another effect, called the ‘changing-relative-velocity’ effect (the ‘lateral wind’, which is due to the side motion of the insect, changes the relative velocity of its wings), becomes increasingly stronger. This causes the roll-moment/side-velocity derivative to first decrease and then become negative, resulting in the above change in stability as a function of the stroke-plane angle. (paper)

  18. STOCHASTIC PRICING MODEL FOR THE REAL ESTATE MARKET: FORMATION OF LOG-NORMAL GENERAL POPULATION

    Directory of Open Access Journals (Sweden)

    Oleg V. Rusakov

    2015-01-01

    Full Text Available We construct a stochastic model of real estate pricing. The method of the pricing construction is based on a sequential comparison of the supply prices. We prove that, under standard assumptions imposed upon the comparison coefficients, there exists a unique non-degenerate limit in distribution, and this limit follows the log-normal law. We verify the accordance of the empirical price distributions with the theoretically obtained log-normal distribution using extensive statistical data on real estate prices from Saint Petersburg (Russia). To establish this accordance we apply the efficient and sensitive Kolmogorov-Smirnov goodness-of-fit test. Based on "The Russian Federal Estimation Standard N2", we conclude that the most probable price, i.e. the mode of the distribution, is correctly and uniquely defined under the log-normal approximation. Since the mean of a log-normal distribution exceeds its mode (the most probable value), it follows that prices valued by the mathematical expectation are systematically overstated.
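
    The closing claim rests on standard log-normal facts: for a log-normal(μ, σ²) law, mode = exp(μ − σ²), median = exp(μ) and mean = exp(μ + σ²/2), so mean > median > mode. A short Python illustration with hypothetical parameters, including a Kolmogorov-Smirnov check of simulated prices, follows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
mu, sigma = 15.0, 0.45                        # hypothetical log-price parameters
prices = rng.lognormal(mu, sigma, size=5000)  # simulated supply prices

# Closed-form location measures of a log-normal(mu, sigma^2) distribution.
mode = np.exp(mu - sigma**2)
median = np.exp(mu)
mean = np.exp(mu + sigma**2 / 2)
print(f"mode = {mode:.3e}, median = {median:.3e}, mean = {mean:.3e}")  # mode < median < mean

# Kolmogorov-Smirnov check of the simulated prices against the log-normal law.
ks_stat, p_value = stats.kstest(prices, "lognorm", args=(sigma, 0.0, np.exp(mu)))
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```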

  19. Probabilistic modeling using bivariate normal distributions for identification of flow and displacement intervals in longwall overburden

    Energy Technology Data Exchange (ETDEWEB)

    Karacan, C.O.; Goodman, G.V.R. [NIOSH, Pittsburgh, PA (United States). Off Mine Safety & Health Research

    2011-01-15

    Gob gas ventholes (GGV) are used to control methane emissions in longwall mines by capturing it within the overlying fractured strata before it enters the work environment. In order for GGVs to effectively capture more methane and less mine air, the length of the slotted sections and their proximity to top of the coal bed should be designed based on the potential gas sources and their locations, as well as the displacements in the overburden that will create potential flow paths for the gas. In this paper, an approach to determine the conditional probabilities of depth-displacement, depth-flow percentage, depth-formation and depth-gas content of the formations was developed using bivariate normal distributions. The flow percentage, displacement and formation data as a function of distance from coal bed used in this study were obtained from a series of borehole experiments contracted by the former US Bureau of Mines as part of a research project. Each of these parameters was tested for normality and was modeled using bivariate normal distributions to determine all tail probabilities. In addition, the probability of coal bed gas content as a function of depth was determined using the same techniques. The tail probabilities at various depths were used to calculate conditional probabilities for each of the parameters. The conditional probabilities predicted for various values of the critical parameters can be used with the measurements of flow and methane percentage at gob gas ventholes to optimize their performance.
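
    The depth-conditional probabilities described above follow from the standard conditional distribution of a bivariate normal. The sketch below uses invented means, standard deviations and correlation purely for illustration; it is not the paper's fitted model.

```python
import numpy as np
from scipy import stats

# Hypothetical bivariate-normal fit of (depth above coal bed, flow percentage);
# the means, standard deviations and correlation below are illustrative only.
mu_d, mu_f = 60.0, 40.0        # mean depth (m) and mean flow contribution (%)
sd_d, sd_f = 25.0, 15.0
rho = -0.6                     # assume shallower intervals contribute more flow

def conditional_flow(depth):
    """Conditional distribution of flow percentage given a depth value."""
    mean = mu_f + rho * sd_f / sd_d * (depth - mu_d)
    sd = sd_f * np.sqrt(1.0 - rho**2)
    return stats.norm(mean, sd)

for depth in (30.0, 60.0, 90.0):
    dist = conditional_flow(depth)
    p_tail = dist.sf(50.0)      # P(flow percentage > 50% | depth)
    print(f"depth = {depth:4.0f} m: P(flow > 50%) = {p_tail:.2f}")
```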

  20. On the quasi-steady aerodynamics of normal hovering flight part II: model implementation and evaluation.

    Science.gov (United States)

    Nabawy, Mostafa R A; Crowther, William J

    2014-05-06

    This paper introduces a generic, transparent and compact model for the evaluation of the aerodynamic performance of insect-like flapping wings in hovering flight. The model is generic in that it can be applied to wings of arbitrary morphology and kinematics without the use of experimental data, is transparent in that the aerodynamic components of the model are linked directly to morphology and kinematics via physical relationships and is compact in the sense that it can be efficiently evaluated for use within a design optimization environment. An important aspect of the model is the method by which translational force coefficients for the aerodynamic model are obtained from first principles; however important insights are also provided for the morphological and kinematic treatments that improve the clarity and efficiency of the overall model. A thorough analysis of the leading-edge suction analogy model is provided and comparison of the aerodynamic model with results from application of the leading-edge suction analogy shows good agreement. The full model is evaluated against experimental data for revolving wings and good agreement is obtained for lift and drag up to 90° incidence. Comparison of the model output with data from computational fluid dynamics studies on a range of different insect species also shows good agreement with predicted weight support ratio and specific power. The validated model is used to evaluate the relative impact of different contributors to the induced power factor for the hoverfly and fruitfly. It is shown that the assumption of an ideal induced power factor (k = 1) for a normal hovering hoverfly leads to a 23% overestimation of the generated force owing to flapping.

  1. Wind turbine condition monitoring based on SCADA data using normal behavior models

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar

    2014-01-01

    turbine can be applied to diagnose similar faults at other turbines automatically via the CMS proposed. A further focus in this paper lies in the process of rule optimization and adoption, allowing the expert to implement the gained knowledge in fault analysis. The fault types diagnosed here are: (1...... signal reconstruction (FSRC) adaptive neuro-fuzzy interference system (ANFIS) normal behavior models (NBM) in combination with fuzzy logic (FL) a setup is developed for data mining of this information. A high degree of automation can be achieved. It is shown that FL rules established with a fault at one...

  2. Including Overweight or Obese Students in Physical Education: A Social Ecological Constraint Model

    Science.gov (United States)

    Li, Weidong; Rukavina, Paul

    2012-01-01

    In this review, we propose a social ecological constraint model to study inclusion of overweight or obese students in physical education by integrating key concepts and assumptions from ecological constraint theory in motor development and social ecological models in health promotion and behavior. The social ecological constraint model proposes…

  3. ETM documentation update – including modelling conventions and manual for software tools

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik

    … it summarises the work done during 2013, and it also contains presentations for promotion of fusion as a future element in the electricity generation mix and presentations for the modelling community concerning model development and model documentation – in particular for TIAM collaboration workshops. …

  4. Modeling aeolian transport of soil-bound plutonium: considering infrequent but normal environmental disturbances is critical in estimating future dose

    International Nuclear Information System (INIS)

    Michelotti, Erika A.; Whicker, Jeffrey J.; Eisele, William F.; Breshears, David D.; Kirchner, Thomas B.

    2013-01-01

    Dose assessments typically consider environmental systems as static through time, but environmental disturbances such as drought and fire are normal, albeit infrequent, events that can impact dose-influential attributes of many environmental systems. These phenomena occur over time frames of decades or longer, and are likely to be exacerbated under projected warmer, drier climate. As with other types of dose assessment, the impacts of environmental disturbances are often overlooked when evaluating dose from aeolian transport of radionuclides and other contaminants. Especially lacking are predictions that account for potential changing vegetation cover effects on radionuclide transport over the long time frames required by regulations. A recently developed dynamic wind-transport model that included vegetation succession and environmental disturbance provides more realistic long-term predictability. This study utilized the model to estimate emission rates for aeolian transport, and compare atmospheric dispersion and deposition rates of airborne plutonium-contaminated soil into neighboring areas with and without environmental disturbances. Specifically, the objective of this study was to utilize the model results as input for a widely used dose assessment model (CAP-88). Our case study focused on low levels of residual plutonium found in soils from past operations at Los Alamos National Laboratory (LANL), in Los Alamos, NM, located in the semiarid southwestern USA. Calculations were conducted for different disturbance scenarios based on conditions associated with current climate, and a potential future drier and warmer climate. Known soil and sediment concentrations of plutonium were used to model dispersal and deposition of windblown residual plutonium, as a function of distance and direction. Environmental disturbances that affected vegetation cover included ground fire, crown fire, and drought, with reoccurrence rates for current climate based on site historical

  5. Serverification of Molecular Modeling Applications: The Rosetta Online Server That Includes Everyone (ROSIE)

    Science.gov (United States)

    Conchúir, Shane Ó.; Der, Bryan S.; Drew, Kevin; Kuroda, Daisuke; Xu, Jianqing; Weitzner, Brian D.; Renfrew, P. Douglas; Sripakdeevong, Parin; Borgo, Benjamin; Havranek, James J.; Kuhlman, Brian; Kortemme, Tanja; Bonneau, Richard; Gray, Jeffrey J.; Das, Rhiju

    2013-01-01

    The Rosetta molecular modeling software package provides experimentally tested and rapidly evolving tools for the 3D structure prediction and high-resolution design of proteins, nucleic acids, and a growing number of non-natural polymers. Despite its free availability to academic users and improving documentation, use of Rosetta has largely remained confined to developers and their immediate collaborators due to the code’s difficulty of use, the requirement for large computational resources, and the unavailability of servers for most of the Rosetta applications. Here, we present a unified web framework for Rosetta applications called ROSIE (Rosetta Online Server that Includes Everyone). ROSIE provides (a) a common user interface for Rosetta protocols, (b) a stable application programming interface for developers to add additional protocols, (c) a flexible back-end to allow leveraging of computer cluster resources shared by RosettaCommons member institutions, and (d) centralized administration by the RosettaCommons to ensure continuous maintenance. This paper describes the ROSIE server infrastructure, a step-by-step ‘serverification’ protocol for use by Rosetta developers, and the deployment of the first nine ROSIE applications by six separate developer teams: Docking, RNA de novo, ERRASER, Antibody, Sequence Tolerance, Supercharge, Beta peptide design, NCBB design, and VIP redesign. As illustrated by the number and diversity of these applications, ROSIE offers a general and speedy paradigm for serverification of Rosetta applications that incurs negligible cost to developers and lowers barriers to Rosetta use for the broader biological community. ROSIE is available at http://rosie.rosettacommons.org. PMID:23717507

  6. Comparison of some biased estimation methods (including ordinary subset regression) in the linear model

    Science.gov (United States)

    Sidik, S. M.

    1975-01-01

    Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
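
    The effect of near-singular normal equations discussed above is easy to reproduce: with two almost collinear predictors, ordinary least squares returns wildly inflated coefficients, while a ridge estimator (one of the biased estimators compared) trades a small bias for a large variance reduction. A hedged sketch, not taken from the report:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        x1 = rng.normal(size=n)
        x2 = x1 + 1e-4 * rng.normal(size=n)          # nearly collinear with x1
        X = np.column_stack([np.ones(n), x1, x2])
        y = 1.0 + 2.0 * x1 - 1.0 * x2 + 0.1 * rng.normal(size=n)

        # Ordinary least squares: solve the normal equations (X'X) b = X'y
        beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

        # Ridge: add k*I to X'X, shrinking and stabilising the estimate
        k = 1e-2
        beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

        print("OLS  :", beta_ols)      # typically huge, unstable coefficients
        print("ridge:", beta_ridge)    # biased but far smaller variance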

  7. Normal Tissue Complication Probability Modeling of Radiation-Induced Hypothyroidism After Head-and-Neck Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Bakhshandeh, Mohsen [Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Hashemi, Bijan, E-mail: bhashemi@modares.ac.ir [Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Mahdavi, Seied Rabi Mehdi [Department of Medical Physics, Faculty of Medical Sciences, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Nikoofar, Alireza; Vasheghani, Maryam [Department of Radiation Oncology, Hafte-Tir Hospital, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Kazemnejad, Anoshirvan [Department of Biostatistics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran (Iran, Islamic Republic of)

    2013-02-01

    Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D50 estimated from the models was approximately 44 Gy. Conclusions: The implemented
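
    For orientation, the LEUD model named above (Lyman with the DVH reduced to an EUD) is commonly written as NTCP = Phi((EUD - D50)/(m*D50)) with EUD = (sum_i v_i * d_i^(1/n))^n. A small sketch for a toy thyroid DVH; D50 = 44 Gy follows the abstract, while m and n are placeholder values rather than the study's fitted parameters:

        import numpy as np
        from scipy.stats import norm

        def lkb_ntcp(doses_gy, volumes, d50, m, n):
            """LKB NTCP with the DVH reduced to an equivalent uniform dose."""
            v = np.asarray(volumes, dtype=float)
            v = v / v.sum()                          # fractional volumes
            d = np.asarray(doses_gy, dtype=float)
            eud = (v * d ** (1.0 / n)).sum() ** n
            t = (eud - d50) / (m * d50)
            return norm.cdf(t)

        # Toy differential DVH: half the gland at 30 Gy, half at 50 Gy.
        print(lkb_ntcp([30.0, 50.0], [0.5, 0.5], d50=44.0, m=0.25, n=0.5))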

  8. A finite element propagation model for extracting normal incidence impedance in nonprogressive acoustic wave fields

    Science.gov (United States)

    Watson, Willie R.; Jones, Michael G.; Tanner, Sharon E.; Parrott, Tony L.

    1995-01-01

    A propagation model method for extracting the normal incidence impedance of an acoustic material installed as a finite length segment in a wall of a duct carrying a nonprogressive wave field is presented. The method recasts the determination of the unknown impedance as the minimization of the normalized wall pressure error function. A finite element propagation model is combined with a coarse/fine grid impedance plane search technique to extract the impedance of the material. Results are presented for three different materials for which the impedance is known. For each material, the input data required for the prediction scheme was computed from modal theory and then contaminated by random error. The finite element method reproduces the known impedance of each material almost exactly for random errors typical of those found in many measurement environments. Thus, the method developed here provides a means for determining the impedance of materials in a nonprogressive wave environment such as that usually encountered in a commercial aircraft engine and most laboratory settings.

  9. One Dimension Analytical Model of Normal Ballistic Impact on Ceramic/Metal Gradient Armor

    International Nuclear Information System (INIS)

    Liu Lisheng; Zhang Qingjie; Zhai Pengcheng; Cao Dongfeng

    2008-01-01

    An analytical model of normal ballistic impact on ceramic/metal gradient armor, based on modified Alekseevskii-Tate equations, has been developed. In this model, the process of the gradient armor being impacted by a long rod is divided into four stages: the first is the projectile's mass erosion, comprising the flowing, mushrooming and rigid phases; the second is the formation of the comminuted ceramic conoid; the third is the penetration of the gradient layer; and the last is the penetration of the metal back-up plate. The equations for the third stage have been developed by assuming rigid-plastic behavior of the gradient layer and considering the effect of strain rate on the dynamic yield strength.

  10. A Platoon Dispersion Model Based on a Truncated Normal Distribution of Speed

    Directory of Open Access Journals (Sweden)

    Ming Wei

    2012-01-01

    Full Text Available Understanding platoon dispersion is critical for the coordination of traffic signal control in an urban traffic network. Assuming that platoon speed follows a truncated normal distribution, ranging from minimum speed to maximum speed, this paper develops a piecewise density function that describes platoon dispersion characteristics as the platoon moves from an upstream to a downstream intersection. Based on this density function, the expected number of cars in the platoon that pass the downstream intersection, and the expected number of cars in the platoon that do not pass the downstream point are calculated. To facilitate coordination in a traffic signal control system, dispersion models for the front and the rear of the platoon are also derived. Finally, a numeric computation for the coordination of successive signals is presented to illustrate the validity of the proposed model.
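
    A much simplified version of the idea, ignoring the paper's piecewise density function and its separate front/rear treatment: if platoon speed follows a normal distribution truncated to [v_min, v_max], a vehicle released at the upstream stop line has reached a downstream point at distance L after time t only if its speed exceeds L/t, so the expected number of vehicles past that point is the platoon size times that tail probability. A sketch with hypothetical numbers:

        from scipy.stats import truncnorm

        def expected_passing(n_cars, distance_m, elapsed_s,
                             mean_speed, sd_speed, v_min, v_max):
            """Expected number of platoon vehicles past a downstream point."""
            a = (v_min - mean_speed) / sd_speed    # scipy's standardised bounds
            b = (v_max - mean_speed) / sd_speed
            v_needed = distance_m / elapsed_s
            p_pass = truncnorm.sf(v_needed, a, b, loc=mean_speed, scale=sd_speed)
            return n_cars * p_pass

        # Hypothetical platoon of 20 vehicles, 400 m link, 30 s after release.
        print(expected_passing(20, 400.0, 30.0, mean_speed=14.0, sd_speed=2.5,
                               v_min=8.0, v_max=20.0))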

  11. Transverse Crack Modeling and Validation in Rotor Systems, Including Thermal Effects

    Directory of Open Access Journals (Sweden)

    N. Bachschmid

    2003-01-01

    Full Text Available This article describes a model that allows the simulation of the static behavior of a transverse crack in a horizontal rotor under the action of weight and other possible static loads and the dynamic behavior of cracked rotating shaft. The crack breathes—that is, the mechanism of the crack's opening and closing is ruled by the stress on the cracked section exerted by the external loads. In a rotor, the stresses are time-dependent and have a period equal to the period of rotation; thus, the crack periodically breathes. An original, simplified model allows cracks of various shapes to be modeled and thermal stresses to be taken into account, as they may influence the opening and closing mechanism. The proposed method was validated by using two criteria. First the crack's breathing mechanism, simulated by the model, was compared with the results obtained by a nonlinear, three-dimensional finite element model calculation, and a good agreement in the results was observed. Then the proposed model allowed the development of the equivalent cracked beam. The results of this model were compared with those obtained by the three-dimensional finite element model. Also in this case, there was a good agreement in the results.

  12. A model for firm-specific strategic wisdom : including illustrations and 49 guiding questions

    NARCIS (Netherlands)

    van Straten, Roeland Peter

    2017-01-01

    This PhD thesis provides an answer to the question ‘How may one think strategically?’. It does so by presenting a new prescriptive ‘Model for Firm-Specific Strategic Wisdom’. This Model aims to guide any individual strategist in his or her thinking from a state of firm-specific ‘ignorance’ to a state

  13. Numerical models of single- and double-negative metamaterials including viscous and thermal losses

    DEFF Research Database (Denmark)

    Cutanda Henriquez, Vicente; Sánchez-Dehesa, José

    2017-01-01

    … detailed understanding on how viscous and thermal losses affect the setups at different frequencies. The modeling of a simpler single-negative metamaterial also broadens this overview. Both setups have been modeled with quadratic BEM meshes. Each sample, scaled at two different sizes, has been represented...

  14. Gasification of biomass in a fixed bed downdraft gasifier--a realistic model including tar.

    Science.gov (United States)

    Barman, Niladri Sekhar; Ghosh, Sudip; De, Sudipta

    2012-03-01

    This study presents a model for fixed bed downdraft biomass gasifiers that considers tar as one of the gasification products. A representative tar composition, along with its mole fractions as available in the literature, was used as an input parameter within the model. The study used an equilibrium approach for the applicable gasification reactions and also considered possible deviations from equilibrium to further upgrade the equilibrium model and validate it against a range of reported experimental results. A heat balance was applied to predict the gasification temperature, and the predicted values were compared with results reported in the literature. A comparative study was made with some reference models available in the literature and also with experimental results reported in the literature. Finally, the variation in gasifier performance predicted by this validated model for different air-fuel ratios and moisture contents is also discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. A note on the Fisher information matrix for the skew-generalized-normal model

    OpenAIRE

    Arellano-Valle, Reinaldo B.; Gómez, Héctor W.; Salinas, Hugo S.

    2013-01-01

    In this paper, the exact form of the Fisher information matrix for the skew-generalized normal (SGN) distribution is determined. The existence of singularity problems of this matrix for the skew-normal and normal particular cases is investigated. Special attention is given to the asymptotic properties of the MLEs under the skew-normality hypothesis.

  16. Modeling aeolian transport of soil-bound plutonium: considering infrequent but normal environmental disturbances is critical in estimating future dose.

    Science.gov (United States)

    Michelotti, Erika A; Whicker, Jeffrey J; Eisele, William F; Breshears, David D; Kirchner, Thomas B

    2013-06-01

    Dose assessments typically consider environmental systems as static through time, but environmental disturbances such as drought and fire are normal, albeit infrequent, events that can impact dose-influential attributes of many environmental systems. These phenomena occur over time frames of decades or longer, and are likely to be exacerbated under projected warmer, drier climate. As with other types of dose assessment, the impacts of environmental disturbances are often overlooked when evaluating dose from aeolian transport of radionuclides and other contaminants. Especially lacking are predictions that account for potential changing vegetation cover effects on radionuclide transport over the long time frames required by regulations. A recently developed dynamic wind-transport model that included vegetation succession and environmental disturbance provides more realistic long-term predictability. This study utilized the model to estimate emission rates for aeolian transport, and compare atmospheric dispersion and deposition rates of airborne plutonium-contaminated soil into neighboring areas with and without environmental disturbances. Specifically, the objective of this study was to utilize the model results as input for a widely used dose assessment model (CAP-88). Our case study focused on low levels of residual plutonium found in soils from past operations at Los Alamos National Laboratory (LANL), in Los Alamos, NM, located in the semiarid southwestern USA. Calculations were conducted for different disturbance scenarios based on conditions associated with current climate, and a potential future drier and warmer climate. Known soil and sediment concentrations of plutonium were used to model dispersal and deposition of windblown residual plutonium, as a function of distance and direction. Environmental disturbances that affected vegetation cover included ground fire, crown fire, and drought, with reoccurrence rates for current climate based on site historical

  17. A viscoplastic model including anisotropic damage for the time dependent behaviour of rock

    Science.gov (United States)

    Pellet, F.; Hajdu, A.; Deleruyelle, F.; Besnus, F.

    2005-08-01

    This paper presents a new constitutive model for the time dependent mechanical behaviour of rock which takes into account both viscoplastic behaviour and the evolution of damage with respect to time. The model is built by associating a viscoplastic constitutive law with damage theory. Its main characteristics are that it accounts for viscoplastic volumetric strain (i.e. contractancy and dilatancy) as well as for the anisotropy of damage, the latter being described by a second rank tensor. Using this model, it is possible to predict delayed rupture by determining the time to failure, in creep tests for example. The identification of the model parameters is based on experiments such as creep tests, relaxation tests and quasi-static tests. The physical meaning of these parameters is discussed and comparisons with lab tests are presented. The ability of the model to reproduce the delayed failure observed in tertiary creep is demonstrated, as well as the sensitivity of the mechanical response to the rate of loading. The model could be used to simulate the evolution of the excavated damage zone around underground openings.

  18. Potential transformation of trace species including aircraft exhaust in a cloud environment. The `Chedrom model`

    Energy Technology Data Exchange (ETDEWEB)

    Ozolin, Y.E.; Karol, I.L. [Main Geophysical Observatory, St. Petersburg (Russian Federation); Ramaroson, R. [Office National d`Etudes et de Recherches Aerospatiales (ONERA), 92 - Chatillon (France)

    1997-12-31

    A box model for coupled gaseous and aqueous phases is used for a sensitivity study of the potential transformation of trace gases in a cloud environment. The rate of this transformation decreases with decreasing pH in the droplets, with decreasing photodissociation rates inside the cloud and with increasing droplet size. Model calculations show the potential formation of H2O2 in the aqueous phase and the transformation of gaseous HNO3 into NOx in a cloud. The model is applied to explore the evolution of aircraft exhaust in a plume inside a cloud. (author) 10 refs.

  19. A Two-Dimensional Modeling Procedure to Estimate the Loss Equivalent Resistance Including the Saturation Effect

    Directory of Open Access Journals (Sweden)

    Rosa Ana Salas

    2013-11-01

    Full Text Available We propose a modeling procedure specifically designed for a ferrite inductor excited by a waveform in the time domain. We estimate the loss resistance in the core (a parameter of the electrical model of the inductor) by means of a Finite Element Method in 2D, which leads to significant computational advantages over the 3D model. The methodology is validated for an RM (rectangular modulus) ferrite core working in the linear and the saturation regions. Excellent agreement is found between the experimental data and the computational results.

  20. Distributional Assumptions in Educational Assessments Analysis: Normal Distributions versus Generalized Beta Distribution in Modeling the Phenomenon of Learning

    Science.gov (United States)

    Campos, Jose Alejandro Gonzalez; Moraga, Paulina Saavedra; Del Pozo, Manuel Freire

    2013-01-01

    This paper introduces the generalized beta (GB) model as a new modeling tool in the educational assessment area and, specifically, in evaluation analysis. Unlike the normal model, the GB model allows us to capture some real characteristics of the data, and it is an important tool for understanding the phenomenon of learning. This paper develops a contrast with the…

  1. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    DEFF Research Database (Denmark)

    Jensen, Ninna Reitzel; Schomacker, Kristian Juul

    2015-01-01

    Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death, disability, etc. In our treatment of participating life insurance, we have special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance and unit-linked insurance. By use of a two-account model, we are able to illustrate general concepts without making the model too abstract. To allow for complicated financial markets without dramatically increasing the mathematical complexity, we focus on economic scenarios. We illustrate the use of our…

  2. A Lumped Thermal Model Including Thermal Coupling and Thermal Boundary Conditions for High Power IGBT Modules

    DEFF Research Database (Denmark)

    Bahman, Amir Sajjad; Ma, Ke; Blaabjerg, Frede

    2018-01-01

    Detailed thermal dynamics of high power IGBT modules are important information for the reliability analysis and thermal design of power electronic systems. However, the existing thermal models have their limits to correctly predict this complicated thermal behavior in the IGBTs: the typically used thermal models based on one-dimensional RC lumps have limits to provide temperature distributions inside the device; moreover, some variable factors in real-field applications, like the cooling and heating conditions of the converter, cannot be adapted. On the other hand, the more advanced three-dimensional thermal models based on the Finite Element Method (FEM) need massive computations, which make the long-term thermal dynamics difficult to calculate. In this paper, a new lumped three-dimensional thermal model is proposed, which can be easily characterized from FEM simulations and can acquire the critical…
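
    The paper's model is three-dimensional and characterized from FEM simulations; the basic idea of a lumped RC thermal description, however, can be illustrated with a generic one-dimensional Foster chain (all R and tau values below are hypothetical):

        import numpy as np

        def junction_temperature(power_w, dt, r_th, tau, t_ambient=25.0):
            """Junction temperature of a Foster RC chain driven by a loss profile.

            Each cell i obeys dT_i/dt = (R_i * P - T_i) / tau_i; the junction
            temperature is the ambient plus the sum of the cell temperature rises.
            """
            r_th = np.asarray(r_th, dtype=float)
            tau = np.asarray(tau, dtype=float)
            temps = np.zeros_like(r_th)
            history = []
            for p in power_w:                        # explicit Euler time stepping
                temps += dt * (r_th * p - temps) / tau
                history.append(t_ambient + temps.sum())
            return np.array(history)

        # Hypothetical 3-cell network, 100 W for 2 s then off, 1 ms time steps.
        profile = [100.0] * 2000 + [0.0] * 2000
        tj = junction_temperature(profile, dt=1e-3,
                                  r_th=[0.05, 0.10, 0.15], tau=[0.01, 0.1, 1.0])
        print(tj.max())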

  3. Advanced Modeling of Ramp Operations including Departure Status at Secondary Airports, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project addresses three modeling elements relevant to NASA's IADS research and ATD-2 project, two related to ramp operations at primary airports and one related...

  4. Extending the Scope of the Acculturation/Pidginization Model to Include Cognition.

    Science.gov (United States)

    Schumann, John H.

    1990-01-01

    Examines five cognitive models for second-language acquisition (SLA) and assesses how each might account for the pidginized interlanguage found in the early stages of second-language acquisition. (23 references) (JL)

  5. An integrated computable general equilibrium model including multiple types and uses of water

    OpenAIRE

    Luckmann, Jonas Jens

    2015-01-01

    Water is a scarce resource in many regions of the world and competition for water is an increasing problem. To countervail this trend, policies are needed that regulate the supply of and demand for water. As water is used in many economic activities, water-related management decisions usually have complex implications. Economic simulation models have proven useful for the ex-ante assessment of the consequences of policy changes. Specifically, Computable General Equilibrium (CGE) models are very suitable to ana...

  6. Transverse Crack Modeling and Validation in Rotor Systems Including Thermal Effects

    Directory of Open Access Journals (Sweden)

    N. Bachschmid

    2004-01-01

    Full Text Available In this article, a model is described that allows one to simulate the static behavior of a transversal crack in a horizontal rotor, under the action of the weight and other possible static loads, and the dynamic behavior of the rotating cracked shaft. The crack “breathes,” i.e., the mechanism of opening and closing of the crack is ruled by the stress acting on the cracked section due to the external loads; in a rotor the stress is time-dependent with a period equal to the period of rotation, thus the crack “periodically breathes.” An original simplified model is described that allows cracks of different shape to be modeled and thermal stresses to be taken into account, since they may influence the opening and closing mechanism. The proposed method has been validated using two criteria. Firstly, the crack “breathing” mechanism, simulated with the model, has been compared with the results obtained by a nonlinear 3-D FEM calculation and a good agreement in the results has been observed. Secondly, the proposed model allows the development of the equivalent cracked beam. The results of this model are compared with those obtained by the above-mentioned 3-D FEM. There is a good agreement in the results in this case as well.

  7. Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS: calibration and validation

    Science.gov (United States)

    Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.

    2011-12-01

    Sugarcane is currently the most efficient bioenergy crop with regards to the energy produced per hectare. With approximately half the global bioethanol production in 2005, and a devoted land area expected to expand globally in the years to come, sugar cane is at the heart of the biofuel debate. Dynamic global vegetation models coupled with agronomical models are powerful and novel tools to tackle many of the environmental issues related to biofuels if they are carefully calibrated and validated against field observations. Here we adapt the agro-terrestrial model ORCHIDEE-STICS for sugar cane simulations. Observation data of LAI are used to evaluate the sensitivity of the model to parameters of nitrogen absorption and phenology, which are calibrated in a systematic way for six sites in Australia and La Reunion. We find that the optimal set of parameters is highly dependent on the sites' characteristics and that the model can reproduce satisfactorily the evolution of LAI. This careful calibration of ORCHIDEE-STICS for sugar cane biomass production for different locations and technical itineraries provides a strong basis for further analysis of the impacts of bioenergy-related land use change on carbon cycle budgets. As a next step, a sensitivity analysis is carried out to estimate the uncertainty of the model in biomass and carbon flux simulation due to its parameterization.
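
    The systematic calibration described above amounts to a least-squares fit of model parameters to observed LAI at each site. A generic, hedged sketch in which a toy logistic curve stands in for the full ORCHIDEE-STICS simulation (data and parameter names are placeholders):

        import numpy as np
        from scipy.optimize import least_squares

        def simulated_lai(params, day):
            """Toy stand-in for the crop model: a logistic LAI curve."""
            lai_max, rate, t_half = params
            return lai_max / (1.0 + np.exp(-rate * (day - t_half)))

        day_obs = np.array([30.0, 60.0, 90.0, 120.0, 150.0, 180.0])
        lai_obs = np.array([0.3, 1.1, 2.8, 4.5, 5.2, 5.4])   # hypothetical site data

        def residuals(params):
            return simulated_lai(params, day_obs) - lai_obs

        fit = least_squares(residuals, x0=[5.0, 0.05, 90.0])
        print(fit.x)    # calibrated (lai_max, rate, t_half) for this site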

  8. A Novel Mean-Value Model of the Cardiovascular System Including a Left Ventricular Assist Device.

    Science.gov (United States)

    Ochsner, Gregor; Amacher, Raffael; Schmid Daners, Marianne

    2017-06-01

    Time-varying elastance models (TVEMs) are often used for simulation studies of the cardiovascular system with a left ventricular assist device (LVAD). Because these models are computationally expensive, they cannot be used for long-term simulation studies. In addition, their equilibria are periodic solutions, which prevent the extraction of a linear time-invariant model that could be used e.g. for the design of a physiological controller. In the current paper, we present a new type of model to overcome these problems: the mean-value model (MVM). The MVM captures the behavior of the cardiovascular system by representative mean values that do not change within the cardiac cycle. For this purpose, each time-varying element is manually converted to its mean-value counterpart. We compare the derived MVM to a similar TVEM in two simulation experiments. In both cases, the MVM is able to fully capture the inter-cycle dynamics of the TVEM. We hope that the new MVM will become a useful tool for researchers working on physiological control algorithms. This paper provides a plant model that enables for the first time the use of tools from classical control theory in the field of physiological LVAD control.

  9. The joint structure of normal and pathological personality: further evidence for a dimensional model.

    Science.gov (United States)

    Hengartner, Michael P; Ajdacic-Gross, Vladeta; Rodgers, Stephanie; Müller, Mario; Rössler, Wulf

    2014-04-01

    The literature proposes a joint structure of normal and pathological personality with higher-order factors mainly based on the five-factor model of personality (FFM). The purpose of the present study was to examine the joint structure of the FFM and the DSM-IV personality disorders (PDs) and to discuss this structure with regard to higher-order domains commonly reported in the literature. We applied a canonical correlation analysis, a series of principal component analyses with oblique Promax rotation and a bi-factor analysis with Geomin rotation on 511 subjects of the general population of Zurich, Switzerland, using data from the ZInEP Epidemiology Survey. The 5 FFM traits and the 10 DSM-IV PD dimensions shared 77% of total variance. Component extraction tests pointed towards a two- and three-component solution. The two-component solution comprised a first component with strong positive loadings on neuroticism and all 10 PD dimensions and a second component with strong negative loadings on extraversion and openness and positive loadings on schizoid and avoidant PDs. The three-component solution added a third component with strong positive loadings on conscientiousness and agreeableness and a negative loading on antisocial PD. The bi-factor model provided evidence for 1 general personality dysfunction factor related to neuroticism and 5 group factors, although the interpretability of the latter was limited. Normal and pathological personality domains are not isomorphic or superposable, although they share a substantial proportion of variance. The two and three higher-order domains extracted in the present study correspond well to equivalent factor-solutions reported in the literature. Moreover, these superordinate factors can consistently be integrated within a hierarchical structure of alternative four- and five-factor models. The top of the hierarchy presumably constitutes a general personality dysfunction factor which is closely related to neuroticism. © 2014.

  10. Use of critical pathway models and log-normal frequency distributions for siting nuclear facilities

    International Nuclear Information System (INIS)

    Waite, D.A.; Denham, D.H.

    1975-01-01

    The advantages and disadvantages of potential sites for nuclear facilities are evaluated through the use of environmental pathway and log-normal distribution analysis. Environmental considerations of nuclear facility siting are necessarily geared to the identification of media believed to be significant in terms of dose to man or to be potential centres for long-term accumulation of contaminants. To aid in meeting the scope and purpose of this identification, an exposure pathway diagram must be developed. This type of diagram helps to locate pertinent environmental media, points of expected long-term contaminant accumulation, and points of population/contaminant interface for both radioactive and non-radioactive contaminants. Confirmation of facility siting conclusions drawn from pathway considerations must usually be derived from an investigatory environmental surveillance programme. Battelle's experience with environmental surveillance data interpretation using log-normal techniques indicates that this distribution has much to offer in the planning, execution and analysis phases of such a programme. How these basic principles apply to the actual siting of a nuclear facility is demonstrated for a centrifuge-type uranium enrichment facility as an example. A model facility is examined to the extent of available data in terms of potential contaminants and facility general environmental needs. A critical exposure pathway diagram is developed to the point of prescribing the characteristics of an optimum site for such a facility. Possible necessary deviations from climatic constraints are reviewed and reconciled with conclusions drawn from the exposure pathway analysis. Details of log-normal distribution analysis techniques are presented, with examples of environmental surveillance data to illustrate data manipulation techniques and interpretation procedures as they affect the investigatory environmental surveillance programme. Appropriate consideration is given these

  11. Pancreatic Response to Gold Nanoparticles Includes Decrease of Oxidative Stress and Inflammation In Autistic Diabetic Model

    Directory of Open Access Journals (Sweden)

    Manar E. Selim

    2015-01-01

    Full Text Available Background: Gold nanoparticles (AuNPs) have a wide range of applications in various fields. This study provides an understanding of the modulatory effects of AuNPs on the antioxidant system in male Wistar diabetic rats with autism spectrum disorder (ASD). Normal littermates fed by control mothers were injected with citrate buffer alone and served as normal, untreated controls in this study. Diabetes mellitus (DM) was induced by administering a single intraperitoneal injection of streptozotocin (STZ) (100 mg/kg) to the pups of the diabetic (ND) group, which had been fasted overnight. Autistic pups from mothers that had received a single intraperitoneal injection of 600 mg/kg sodium valproate on day 12.5 after conception were randomly divided into 2 groups (n = 7/group) as follows: a single intraperitoneal injection of STZ (100 mg/kg) was administered to the overnight-fasted autistic pups of the autistic diabetic (AD) group; treatment of the treated autistic diabetic group (group IV, TAD) started on the 5th day after an STZ injection at the same dose as in group II, which was considered the 1st day of treatment, with gold nanoparticles given for 7 days to each rat at a dosage of 2.5 mg/kg body weight. Results: At this dose of AuNPs, the activities of hepatic superoxide dismutase (SOD), glutathione peroxidase (GPx) and catalase were greater in the TAD group than in the control group (P < 0.05) in the liver of autistic diabetic AuNP-supplemented rats, whereas reduced glutathione was markedly higher than in control rats, especially after administration of AuNPs. Moreover, kidney function as well as the fat profile scoring supported the protective potential of that dose of AuNPs. The beta cells revealed euchromatic nuclei with no evidence of separation of the nuclear membrane. Conclusions: Our results showed that AuNPs improved many of the oxidative stress parameters (SOD, GPx and CAT), plasma antioxidant capacity (ORAC) and the lipid profile.

  12. Modeling the Philippines' real gross domestic product: A normal estimation equation for multiple linear regression

    Science.gov (United States)

    Urrutia, Jackie D.; Tampis, Razzcelle L.; Mercado, Joseph; Baygan, Aaron Vito M.; Baccay, Edcon B.

    2016-02-01

    The objective of this research is to formulate a mathematical model for the Philippines' Real Gross Domestic Product (Real GDP). The following factors are considered: Consumers' Spending (x1), Government's Spending (x2), Capital Formation (x3) and Imports (x4) as the Independent Variables that can influence the Real GDP of the Philippines (y). The researchers used a Normal Estimation Equation using Matrices to create the model for Real GDP and used α = 0.01. The researchers analyzed quarterly data from 1990 to 2013. The data were acquired from the National Statistical Coordination Board (NSCB), resulting in a total of 96 observations for each variable. The data underwent a logarithmic transformation, particularly the Dependent Variable (y), to satisfy all the assumptions of the Multiple Linear Regression Analysis. The mathematical model for Real GDP was formulated using Matrices through MATLAB. Based on the results, only three of the Independent Variables are significant to the Dependent Variable, namely Consumers' Spending (x1), Capital Formation (x3) and Imports (x4), and hence can actually predict Real GDP (y). The regression analysis shows that the Independent Variables explain 98.7% (coefficient of determination) of the variation in the Dependent Variable. With a 97.6% result in the paired t-test, the Predicted Values obtained from the model showed no significant difference from the Actual Values of Real GDP. This research will be essential in appraising forthcoming changes and will aid the Government in implementing policies for the development of the economy.
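
    The "Normal Estimation Equation using Matrices" referred to here is the standard least-squares solution beta = (X'X)^(-1) X'y. A sketch with made-up quarterly figures (the actual NSCB series are not reproduced):

        import numpy as np

        # Hypothetical design matrix: intercept, consumers' spending, capital
        # formation and imports; the response is log-transformed as in the paper.
        X = np.array([[1.0, 2.1, 1.3, 0.9],
                      [1.0, 2.3, 1.4, 1.0],
                      [1.0, 2.6, 1.6, 1.1],
                      [1.0, 2.8, 1.7, 1.3],
                      [1.0, 3.0, 1.9, 1.4]])
        y = np.log(np.array([110.0, 118.0, 131.0, 140.0, 152.0]))

        # Normal estimation equation: solve (X'X) beta = X'y
        beta = np.linalg.solve(X.T @ X, X.T @ y)
        y_hat = X @ beta
        r_squared = 1.0 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        print(beta, r_squared)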

  13. Including crystal structure attributes in machine learning models of formation energies via Voronoi tessellations

    Science.gov (United States)

    Ward, Logan; Liu, Ruoqian; Krishna, Amar; Hegde, Vinay I.; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris

    2017-07-01

    While high-throughput density functional theory (DFT) has become a prevalent tool for materials discovery, it is limited by the relatively large computational cost. In this paper, we explore using DFT data from high-throughput calculations to create faster, surrogate models with machine learning (ML) that can be used to guide new searches. Our method works by using decision tree models to map DFT-calculated formation enthalpies to a set of attributes consisting of two distinct types: (i) composition-dependent attributes of elemental properties (as have been used in previous ML models of DFT formation energies), combined with (ii) attributes derived from the Voronoi tessellation of the compound's crystal structure. The ML models created using this method have half the cross-validation error and similar training and evaluation speeds to models created with the Coulomb matrix and partial radial distribution function methods. For a dataset of 435 000 formation energies taken from the Open Quantum Materials Database (OQMD), our model achieves a mean absolute error of 80 meV/atom in cross validation, which is lower than the approximate error between DFT-computed and experimentally measured formation enthalpies and below 15% of the mean absolute deviation of the training set. We also demonstrate that our method can accurately estimate the formation energy of materials outside of the training set and be used to identify materials with especially large formation enthalpies. We propose that our models can be used to accelerate the discovery of new materials by identifying the most promising materials to study with DFT at little additional computational cost.
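
    The attribute set itself (elemental-property statistics plus Voronoi-tessellation attributes) is not reproduced here, but the modelling step, mapping a fixed-length attribute vector to formation enthalpies with a tree ensemble and reporting a cross-validated mean absolute error, can be sketched as follows with placeholder data:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        # Placeholder data: each row is a fixed-length attribute vector for one
        # compound (in the paper: elemental-property statistics plus attributes
        # derived from the Voronoi tessellation of the crystal structure).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 20))
        y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=500)   # stand-in target

        model = RandomForestRegressor(n_estimators=100, random_state=0)
        mae = -cross_val_score(model, X, y, cv=10,
                               scoring="neg_mean_absolute_error").mean()
        print(f"10-fold CV MAE: {mae:.3f} (placeholder units)")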

  14. Including fluid shear viscosity in a structural acoustic finite element model using a scalar fluid representation

    Science.gov (United States)

    Cheng, Lei; Li, Yizeng; Grosh, Karl

    2013-01-01

    An approximate boundary condition is developed in this paper to model fluid shear viscosity at boundaries of coupled fluid-structure system. The effect of shear viscosity is approximated by a correction term to the inviscid boundary condition, written in terms of second order in-plane derivatives of pressure. Both thin and thick viscous boundary layer approximations are formulated; the latter subsumes the former. These approximations are used to develop a variational formation, upon which a viscous finite element method (FEM) model is based, requiring only minor modifications to the boundary integral contributions of an existing inviscid FEM model. Since this FEM formulation has only one degree of freedom for pressure, it holds a great computational advantage over the conventional viscous FEM formulation which requires discretization of the full set of linearized Navier-Stokes equations. The results from thick viscous boundary layer approximation are found to be in good agreement with the prediction from a Navier-Stokes model. When applicable, thin viscous boundary layer approximation also gives accurate results with computational simplicity compared to the thick boundary layer formulation. Direct comparison of simulation results using the boundary layer approximations and a full, linearized Navier-Stokes model are made and used to evaluate the accuracy of the approximate technique. Guidelines are given for the parameter ranges over which the accurate application of the thick and thin boundary approximations can be used for a fluid-structure interaction problem. PMID:23729844

  15. Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes

    Energy Technology Data Exchange (ETDEWEB)

    García-Gen, Santiago [Department of Chemical Engineering, Institute of Technology, University of Santiago de Compostela, 15782 Santiago de Compostela (Spain); Sousbie, Philippe; Rangaraj, Ganesh [INRA, UR50, Laboratoire de Biotechnologie de l’Environnement, Avenue des Etangs, Narbonne F-11100 (France); Lema, Juan M. [Department of Chemical Engineering, Institute of Technology, University of Santiago de Compostela, 15782 Santiago de Compostela (Spain); Rodríguez, Jorge, E-mail: jrodriguez@masdar.ac.ae [Department of Chemical Engineering, Institute of Technology, University of Santiago de Compostela, 15782 Santiago de Compostela (Spain); Institute Centre for Water and Environment (iWater), Masdar Institute of Science and Technology, PO Box 54224 Abu Dhabi (United Arab Emirates); Steyer, Jean-Philippe; Torrijos, Michel [INRA, UR50, Laboratoire de Biotechnologie de l’Environnement, Avenue des Etangs, Narbonne F-11100 (France)

    2015-01-15

    Highlights: • Fractionation of solid wastes into readily and slowly biodegradable fractions. • Kinetic coefficients estimation from mono-digestion batch assays. • Validation of kinetic coefficients with a co-digestion continuous experiment. • Simulation of batch and continuous experiments with an ADM1-based model. - Abstract: A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating individually fruit and vegetable wastes (among other residues) following a new protocol for batch tests. In addition, decoupled disintegration kinetics for readily and slowly biodegradable fractions of solid wastes was considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating simultaneously 5 fruit and vegetable wastes. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at organic loading rate ranging between 2.0 and 4.7 g VS/L d. The model (built in Matlab/Simulink) fit to a large extent the experimental results in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes.
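
    The "decoupled disintegration kinetics for readily and slowly biodegradable fractions" can be illustrated with a toy two-fraction first-order scheme; the rate constants below are hypothetical, not the calibrated values of the paper:

        from scipy.integrate import solve_ivp

        def disintegration(t, x, k_fast, k_slow):
            """Two-fraction first-order disintegration of particulate substrate."""
            x_fast, x_slow, solubles = x
            return [-k_fast * x_fast,
                    -k_slow * x_slow,
                    k_fast * x_fast + k_slow * x_slow]

        # 70 % readily and 30 % slowly biodegradable solids (g VS/L), rates in 1/d.
        sol = solve_ivp(disintegration, (0.0, 30.0), [7.0, 3.0, 0.0],
                        args=(0.8, 0.05), dense_output=True)
        print(sol.sol(10.0))    # remaining fractions and solubilised mass at day 10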

  16. Results of including geometric nonlinearities in an aeroelastic model of an F/A-18

    Science.gov (United States)

    Buttrill, Carey S.

    1989-01-01

    An integrated, nonlinear simulation model suitable for aeroelastic modeling of fixed-wing aircraft has been developed. While the author realizes that the subject of modeling rotating, elastic structures is not closed, it is believed that the equations of motion developed and applied herein are correct to second order and are suitable for use with typical aircraft structures. The equations are not suitable for large elastic deformation. In addition, the modeling framework generalizes both the methods and terminology of non-linear rigid-body airplane simulation and traditional linear aeroelastic modeling. Concerning the importance of angular/elastic inertial coupling in the dynamic analysis of fixed-wing aircraft, the following may be said. The rigorous inclusion of said coupling is not without peril and must be approached with care. In keeping with the same engineering judgment that guided the development of the traditional aeroelastic equations, the effect of non-linear inertial effects for most airplane applications is expected to be small. A parameter does not tell the whole story, however, and modes flagged by the parameter as significant also need to be checked to see if the coupling is not a one-way path, i.e., the inertially affected modes can influence other modes.

  17. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    Science.gov (United States)

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
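
    A standard target-cell-limited ODE model of the kind referred to, with a drug-efficacy factor acting on infection, is sketched below; parameter values are illustrative textbook numbers, not the estimates obtained from the AutoVac data:

        from scipy.integrate import solve_ivp

        def hiv_ode(t, y, lam, d, beta, delta, p, c, eps):
            """Target cells T, infected cells I, free virus V; eps is drug efficacy."""
            T, I, V = y
            dT = lam - d * T - (1.0 - eps) * beta * T * V
            dI = (1.0 - eps) * beta * T * V - delta * I
            dV = p * I - c * V
            return [dT, dI, dV]

        params = (1e4, 0.01, 2e-7, 0.7, 100.0, 13.0)    # lam, d, beta, delta, p, c
        sol = solve_ivp(hiv_ode, (0.0, 60.0), [1e6, 1e4, 1e5],
                        args=params + (0.9,),           # 90 % drug efficacy
                        rtol=1e-6)
        print(sol.y[2, -1])    # free virus after 60 days on therapy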

  18. Modelling of bypass transition including the pseudolaminar part of the boundary layer

    Energy Technology Data Exchange (ETDEWEB)

    Prihoda, J.; Hlava, T. [Ceska Akademie Ved, Prague (Czech Republic). Inst. of Thermomechanics; Kozel, K. [Ceske Vysoke Uceni Technicke, Prague (Czech Republic). Faculty of Mechanical Engineering

    1999-12-01

    The boundary-layer transition in turbomachinery is accelerated by a number of parameters, especially by the free-stream turbulence. This so-called bypass transition is usually modelled by means of one-equation or two-equation turbulence models based on turbulent viscosity. The use of transport equations for turbulent energy and for the dissipation rate in these models is questionable before the onset of the last stage of the transition, i.e. before the formation of turbulent spots. The approximations used for production and turbulent diffusion are the weak points of turbulence models with turbulent viscosity in the pseudolaminar boundary layer, as the Boussinesq assumption on turbulent viscosity is not fulfilled in this part of the boundary layer. In order to obtain a more reliable prediction of the transitional boundary layer, Mayle and Schulz (1997) proposed, for the solution of the pseudolaminar boundary layer, a special 'laminar-kinetic-energy' equation based on the analysis of the laminar boundary layer in flows with velocity fluctuations. The effect of production and turbulent diffusion on the development of turbulent energy in the pseudolaminar boundary layer was tested using a two-layer turbulence model. (orig.)

  19. Modelling of bypass transition including the pseudolaminar part of the boundary layer

    Energy Technology Data Exchange (ETDEWEB)

    Prihoda, J.; Hlava, T. (Ceska Akademie Ved, Prague (Czech Republic). Inst. of Thermomechanics); Kozel, K. (Ceske Vysoke Uceni Technicke, Prague (Czech Republic). Faculty of Mechanical Engineering)

    1999-01-01

    The boundary-layer transition in turbomachinery is accelerated by a number of parameters, especially by the free-stream turbulence. This so-called bypass transition is usually modelled by means of one-equation or two-equation turbulence models based on turbulent viscosity. The use of transport equations for turbulent energy and for the dissipation rate in these models is questionable before the onset of the last stage of the transition, i.e. before the formation of turbulent spots. The approximations used for production and turbulent diffusion are the weak points of turbulence models with turbulent viscosity in the pseudolaminar boundary layer, as the Boussinesq assumption on turbulent viscosity is not fulfilled in this part of the boundary layer. In order to obtain a more reliable prediction of the transitional boundary layer, Mayle and Schulz (1997) proposed, for the solution of the pseudolaminar boundary layer, a special 'laminar-kinetic-energy' equation based on the analysis of the laminar boundary layer in flows with velocity fluctuations. The effect of production and turbulent diffusion on the development of turbulent energy in the pseudolaminar boundary layer was tested using a two-layer turbulence model. (orig.)

  20. Sonic hedgehog in normal and neoplastic proliferation: insight gained from human tumors and animal models.

    Science.gov (United States)

    Wetmore, Cynthia

    2003-02-01

    Cancer arises when a cell accumulates multiple genetic changes that allow it to elude the highly regulated balance between proliferation and apoptosis that an organism employs to suppress inappropriate growth. It has become evident that malignant transformation of a cell or group of cells often involves pathways that are active during normal development but are inappropriately regulated in neoplastic proliferation. Signaling via the Sonic hedgehog pathway is critical to vertebrate development and also appears to play an integral role in the initiation and propagation of some tumors of the muscle, skin and nervous system. Analyses of human tumors have revealed mutations in various components of the Sonic hedgehog signaling pathway that appear to result in the activation of this pathway, as inferred by the increased expression of the transcription factor, Gli1. Interestingly, a proportion of the human tumors and most of those arising in mouse models continue to express the normal Patched allele, suggesting the involvement of additional molecular events in the transformation of the haploinsufficient cells.

  1. Erosion of Fe-W model system under normal and oblique D ion irradiation

    Directory of Open Access Journals (Sweden)

    Bernhard M. Berger

    2017-08-01

    Full Text Available The dynamic erosion behaviour of iron-tungsten (Fe-W) model films (with 1.5 at% W) resulting from 250 eV deuterium (D) irradiation is investigated under well-defined laboratory conditions. For three different impact angles (0°, 45° and 60° with respect to the surface normal) the erosion yield is monitored as a function of incident D fluence using a highly sensitive quartz crystal microbalance (QCM) technique. In addition, the evolution of the Fe-W film topography and roughness with increasing fluence is observed using an atomic force microscope (AFM). The mass removal rate for Fe-W is found to be comparable to the value of a pure Fe film at low incident fluences but strongly decreases with increasing D fluence. This is consistent with earlier observations of a substantial W enrichment at the surface due to preferential Fe sputtering. The reduction of the mass removal rate is initially more pronounced for irradiation under oblique angles as compared to normal incidence, but the differences vanish for fluences > 2·10²³ D/m². High-resolution AFM images reveal that continued ion irradiation leads to significant surface roughening and (depending on ion impact angle) formation of nanodots or nano-ripples. This may indicate that the W enrichment at the surface due to preferential sputtering of Fe is not exclusively responsible for the observed reduction in erosion with increasing D fluence.

  2. Beam dynamics in high intensity cyclotrons including neighboring bunch effects: Model, implementation, and application

    Directory of Open Access Journals (Sweden)

    J. J. Yang (杨建俊)

    2010-06-01

    Full Text Available Space-charge effects, being one of the most significant collective effects, play an important role in high intensity cyclotrons. However, for cyclotrons with small turn separation, other existing effects are of equal importance. Interactions of radially neighboring bunches are also present, but their combined effects have not yet been investigated in any great detail. In this paper, a new particle-in-cell-based self-consistent numerical simulation model is presented for the first time. The model covers neighboring bunch effects and is implemented in the three-dimensional object-oriented parallel code OPAL-cycl, a flavor of the OPAL framework. We discuss this model together with its implementation and validation. Simulation results are presented from the PSI 590 MeV ring cyclotron in the context of the ongoing high intensity upgrade program, which aims to provide a beam power of 1.8 MW (CW) at the target destination.

  3. Finding practical phenomenological models that include both photoresist behavior and etch process effects

    Science.gov (United States)

    Jung, Sunwook; Do, Thuy; Sturtevant, John

    2015-03-01

    For more than five decades, the semiconductor industry has overcome technology challenges with innovative ideas that have continued to enable Moore's Law. It is clear that multi-patterning lithography is vital for 20nm half pitch using 193i. Multi-patterning exposure sequences and pattern multiplication processes can create complicated tolerance accounting due to the variability associated with the component processes. It is essential to ensure good predictive accuracy of compact etch models used in multi-patterning simulation. New model forms have been developed to account for etch bias behavior at 20 nm and below. The new modeling components show good results in terms of global fitness and some improved prediction capability for specific features. We have also investigated a new methodology to make the etch model aware of 3D resist profiles.

  4. A Simple Model of Fields Including the Strong or Nuclear Force and a Cosmological Speculation

    Directory of Open Access Journals (Sweden)

    David L. Spencer

    2016-10-01

    Full Text Available Reexamining the assumptions underlying the General Theory of Relativity and calling an object's gravitational field its inertia, and acceleration simply resistance to that inertia, yields a simple field model where the potential (kinetic) energy of a particle at rest is its capacity to move itself when its inertial field becomes imbalanced. The model then attributes electromagnetic and strong forces to the effects of changes in basic particle shape. Following up on the model's assumption that the relative intensity of a particle's gravitational field is always inversely related to its perceived volume and assuming that all black holes spin, may create the possibility of a cosmic rebound where a final spinning black hole ends with a new Big Bang.

  5. Modeling of the dynamics of wind to power conversion including high wind speed behavior

    DEFF Research Database (Denmark)

    Litong-Palima, Marisciel; Bjerge, Martin Huus; Cutululis, Nicolaos Antonio

    2016-01-01

    This paper proposes and validates an efficient, generic and computationally simple dynamic model for the conversion of the wind speed at hub height into the electrical power by a wind turbine. This proposed wind turbine model was developed as a first step to simulate wind power time series...... speed shutdowns and restarts are represented as on–off switching rules that govern the output of the wind turbine at extreme wind speed conditions. The model uses the concept of equivalent wind speed, estimated from the single point (hub height) wind speed using a second-order dynamic filter...... measurements available from the DONG Energy offshore wind farm Horns Rev 2. Copyright © 2015 John Wiley & Sons, Ltd....

  6. An extended TRANSCAR model including ionospheric convection: simulation of EISCAT observations using inputs from AMIE

    Directory of Open Access Journals (Sweden)

    P.-L. Blelly

    2005-02-01

    Full Text Available The TRANSCAR ionospheric model was extended to account for the convection of the magnetic field lines in the auroral and polar ionosphere. A mixed Eulerian-Lagrangian 13-moment approach was used to describe the dynamics of an ionospheric plasma tube. In the present study, one focuses on large scale transports in the polar ionosphere. The model was used to simulate a 35-h period of EISCAT-UHF observations on 16-17 February 1993. The first day was magnetically quiet, and characterized by elevated electron concentrations: the diurnal F2 layer reached as much as 10¹² m⁻³, which is unusual for a winter and moderate solar activity (F10.7 = 130) period. An intense geomagnetic event occurred on the second day, seen in the data as a strong intensification of the ionosphere convection velocities in the early afternoon (with the northward electric field reaching 150 mV m⁻¹ and corresponding frictional heating of the ions up to 2500 K). The simulation used time-dependent AMIE outputs to infer flux-tube transports in the polar region, and to provide magnetospheric particle and energy inputs to the ionosphere. The overall very good agreement, obtained between the model and the observations, demonstrates the high ability of the extended TRANSCAR model for quantitative modelling of the high-latitude ionosphere; however, some differences are found which are attributed to the precipitation of electrons with very low energy. All these results are finally discussed in the frame of modelling the auroral ionosphere with space weather applications in mind.

  7. Evaluation of European air quality modelled by CAMx including the volatility basis set scheme

    Directory of Open Access Journals (Sweden)

    G. Ciarelli

    2016-08-01

    Full Text Available Four periods of EMEP (European Monitoring and Evaluation Programme) intensive measurement campaigns (June 2006, January 2007, September–October 2008 and February–March 2009) were modelled using the regional air quality model CAMx with the VBS (volatility basis set) approach for the first time in Europe within the framework of the EURODELTA-III model intercomparison exercise. More detailed analysis and sensitivity tests were performed for the periods of February–March 2009 and June 2006 to investigate the uncertainties in emissions as well as to improve the modelling of organic aerosol (OA). Model performance for selected gas phase species and PM2.5 was evaluated using the European air quality database AirBase. Sulfur dioxide (SO2) and ozone (O3) were found to be overestimated for all four periods, with O3 having the largest mean bias during the June 2006 and January–February 2007 periods (8.9 ppb and 12.3 ppb mean biases, respectively). In contrast, nitrogen dioxide (NO2) and carbon monoxide (CO) were found to be underestimated for all four periods. CAMx reproduced both total concentrations and monthly variations of PM2.5 for all four periods with average biases ranging from −2.1 to 1.0 µg m⁻³. Comparisons with AMS (aerosol mass spectrometer) measurements at different sites in Europe during February–March 2009 showed that in general the model overpredicts the inorganic aerosol fraction and underpredicts the organic one, such that the good agreement for PM2.5 is partly due to compensation of errors. The effect of the choice of VBS scheme on OA was investigated as well. Two sensitivity tests with volatility distributions based on previous chamber and ambient measurement data were performed. For February–March 2009 the chamber case reduced the total OA concentrations by about 42 % on average. In contrast, a test based on ambient measurement data increased OA concentrations by about 42 % for the same period, bringing

  8. Molecular Modeling of Aerospace Polymer Matrices Including Carbon Nanotube-Enhanced Epoxy

    Science.gov (United States)

    Radue, Matthew S.

    Carbon fiber (CF) composites are increasingly replacing metals used in major structural parts of aircraft, spacecraft, and automobiles. The current limitations of carbon fiber composites are addressed through computational material design by modeling the salient aerospace matrix materials. Molecular Dynamics (MD) models of epoxies with and without carbon nanotube (CNT) reinforcement and models of pure bismaleimides (BMIs) were developed to elucidate structure-property relationships for improved selection and tailoring of matrices. The influence of monomer functionality on the mechanical properties of epoxies is studied using the Reax Force Field (ReaxFF). From deformation simulations, the Young's modulus, yield point, and Poisson's ratio are calculated and analyzed. The results demonstrate an increase in stiffness and yield strength with increasing resin functionality. Comparison between the network structures of distinct epoxies is further advanced by the Monomeric Degree Index (MDI). Experimental validation demonstrates that the MD results correctly predict the relationship in Young's moduli for all epoxies modeled. Therefore, the ReaxFF is confirmed to be a useful tool for studying the mechanical behavior of epoxies. While epoxies have been well-studied using MD, there has been no concerted effort to model cured BMI polymers due to the complexity of the network-forming reactions. A novel, adaptable crosslinking framework is developed for implementing 5 distinct cure reactions of Matrimid-5292 (a BMI resin) and investigating the network structure using MD simulations. The influence of different cure reactions and the extent of curing on several thermo-mechanical properties, such as mass density, glass transition temperature, coefficient of thermal expansion, elastic moduli, and thermal conductivity, is analyzed. The developed crosslinked models correctly predict experimentally observed trends for various properties. Finally, the epoxies modeled (di-, tri-, and tetra

  9. Modelling of safety barriers including human and organisational factors to improve process safety

    DEFF Research Database (Denmark)

    Markert, Frank; Duijm, Nijs Jan; Thommesen, Jacob

    2013-01-01

    Assessment Methodology for IndustrieS, see Salvi et al 2006). ARAMIS employs the bow-tie approach to modelling hazardous scenarios, and it suggests the outcome of auditing safety management to be connected to a semi-quantitative assessment of the quality of safety barriers. ARAMIS discriminates a number...... of safety barrier (passive, automated, or involving human action). Such models are valuable for many purposes, but are difficult to apply to more complex situations, as the influences are to be set individually for each barrier. The approach described in this paper is trying to improve the state...

  10. The economic production lot size model extended to include more than one production rate

    DEFF Research Database (Denmark)

    Larsen, Christian

    2005-01-01

    We study an extension of the economic production lot size model, where more than one production rate can be used during a cycle. Moreover, the production rates, as well as their corresponding runtimes are decision variables. We decompose the problem into two subproblems. First, we show that all...

  11. The economic production lot size model extended to include more than one production rate

    DEFF Research Database (Denmark)

    Larsen, Christian

    2001-01-01

    We study an extension of the economic production lot size model, where more than one production rate can be used during a cycle. Moreover, the production rates, as well as their corresponding runtimes, are decision variables. First, we show that all production rates should be chosen in the interval...

  12. Static aeroelastic analysis including geometric nonlinearities based on reduced order model

    Directory of Open Access Journals (Sweden)

    Changchuan Xie

    2017-04-01

    Full Text Available This paper describes a method proposed for modeling large deflection of aircraft in nonlinear aeroelastic analysis by developing a reduced order model (ROM). The method is applied to solving the static aeroelastic and static aeroelastic trim problems of flexible aircraft containing geometric nonlinearities; meanwhile, the non-planar effects of aerodynamics and the follower force effect are considered. ROMs are computationally inexpensive mathematical representations compared to the traditional nonlinear finite element method (FEM), especially in aeroelastic solutions. The approach for structure modeling presented here is based on the combined modal/finite element (MFE) method that characterizes the stiffness nonlinearities, and we apply that structure modeling method as the ROM in aeroelastic analysis. Moreover, the non-planar aerodynamic force is computed by the non-planar vortex lattice method (VLM). Structure and aerodynamics can be coupled with the surface spline method. The results show that both the static aeroelastic analysis and the trim analysis of aircraft based on the structure ROM achieve good agreement with analysis based on the FEM and with experimental results.

  13. Dusty Plasma Modeling of the Fusion Reactor Sheath Including Collisional-Radiative Effects

    International Nuclear Information System (INIS)

    Dezairi, Aouatif; Samir, Mhamed; Eddahby, Mohamed; Saifaoui, Dennoun; Katsonis, Konstantinos; Berenguer, Chloe

    2008-01-01

    The structure and the behavior of the sheath in Tokamak collisional plasmas have been studied. The sheath is modeled taking into account the presence of the dust and the effects of the charged particle collisions and radiative processes. The latter may allow for optical diagnostics of the plasma.

  14. Extending the formal model of a spatial data infrastructure to include volunteered geographical information

    CSIR Research Space (South Africa)

    Cooper, Antony K

    2011-07-01

    Full Text Available -to-date VGI, have led to the integration of VGI into some SDIs. Therefore it is necessary to rethink our formal model of an SDI to accommodate VGI. We started our rethinking process with the SDI stakeholders in an attempt to establish which changes...

  15. Loss and thermal model for power semiconductors including device rating information

    DEFF Research Database (Denmark)

    Ma, Ke; Bahman, Amir Sajjad; Beczkowski, Szymon

    2014-01-01

    pre-defined by experience with poor design flexibility. Consequently a more complete loss and thermal model is proposed in this paper, which takes into account not only the electrical loading but also the device rating as input variables. The quantified correlation between the power loss, thermal...

  16. Social Rationality as a Unified Model of Man (Including Bounded Rationality)

    NARCIS (Netherlands)

    Lindenberg, Siegwart

    2001-01-01

    In 1957, Simon published a collection of his essays under the title of “Models of Man: Social and Rational”. In the preface, he explains the choice for this title: All of the essays “are concerned with laying foundations for a science of man that will comfortably accommodate his dual nature as a

  17. Simple vibration modeling of structural fuzzy with continuous boundary by including two-dimensional spatial memory

    DEFF Research Database (Denmark)

    Friis, Lars; Ohlrich, Mogens

    2008-01-01

    is considered as one or more fuzzy substructures that are known in some statistical sense only. Experiments have shown that such fuzzy substructures often introduce a damping in the master which is much higher than the structural losses account for. A special method for modeling fuzzy substructures with a one...

  18. Situational effects of the school factors included in the dynamic model of educational effectiveness

    NARCIS (Netherlands)

    Creerners, Bert; Kyriakides, Leonidas

    We present results of a longitudinal study in which 50 schools, 113 classes and 2,542 Cypriot primary students participated. We tested the validity of the dynamic model of educational effectiveness and especially its assumption that the impact of school factors depends on the current situation of

  19. Modeling the elastic behavior of ductile cast iron including anisotropy in the graphite nodules

    DEFF Research Database (Denmark)

    Andriollo, Tito; Thorborg, Jesper; Hattel, Jesper Henri

    2016-01-01

    by means of a 3D periodic unit cell model. In this respect, an explicit procedure to enforce both periodic displacement and periodic traction boundary conditions in ABAQUS is presented, and the importance of fulfilling the traction continuity conditions at the unit cell boundaries is discussed. It is shown...

  1. The albino chick as a model for studying ocular developmental anomalies, including refractive errors, associated with albinism.

    Science.gov (United States)

    Rymer, Jodi; Choh, Vivian; Bharadwaj, Shrikant; Padmanabhan, Varuna; Modilevsky, Laura; Jovanovich, Elizabeth; Yeh, Brenda; Zhang, Zhan; Guan, Huanxian; Payne, W; Wildsoet, Christine F

    2007-10-01

    Albinism is associated with a variety of ocular anomalies including refractive errors. The purpose of this study was to investigate the ocular development of an albino chick line. The ocular development of both albino and normally pigmented chicks was monitored using retinoscopy to measure refractive errors and high frequency A-scan ultrasonography to measure axial ocular dimensions. Functional tests included an optokinetic nystagmus paradigm to assess visual acuity, and flash ERGs to assess retinal function. The underlying genetic abnormality was characterized using a gene microarray, PCR and a tyrosinase assay. The ultrastructure of the retinal pigment epithelium (RPE) was examined using transmission electron microscopy. PCR confirmed that the genetic abnormality in this line is a deletion in exon 1 of the tyrosinase gene. Tyrosinase gene expression in isolated RPE cells was minimally detectable, and there was minimal enzyme activity in albino feather bulbs. The albino chicks had pink eyes and their eyes transilluminated, reflecting the lack of melanin in all ocular tissues. All three main components, anterior chamber, crystalline lens and vitreous chamber, showed axial expansion over time in both normal and albino animals, but the anterior chambers of albino chicks were consistently shallower than those of normal chicks, while in contrast, their vitreous chambers were longer. Albino chicks remained relatively myopic, with higher astigmatism than the normally pigmented chicks, even though both groups underwent developmental emmetropization. Albino chicks had reduced visual acuity yet the ERG a- and b-wave components had larger amplitudes and shorter than normal implicit times. Developmental emmetropization occurs in the albino chick but is impaired, likely because of functional abnormalities in the RPE and/or retina as well as optical factors. In very young chicks the underlying genetic mutation may also contribute to refractive error and eye shape abnormalities.

  2. A biologically inspired neural model for visual and proprioceptive integration including sensory training.

    Science.gov (United States)

    Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi

    2013-12-01

    Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian and causal Bayesian inference for a single cause (source) and two causes (for two senses such as the visual and auditory systems), respectively. In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding, which is able to mimic the multisensory integration of neural centers in the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process of visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioception training and eight subjects with visual training. Results of the experiments show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the means of visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage because in the main test the subject has to focus on two senses. The results of the experiments in this paper are in agreement with the results of the neural model
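
    As a reading aid, the single-cause Bayesian (maximum-likelihood) integration against which such models are usually benchmarked reduces, for Gaussian cues, to precision-weighted averaging. The sketch below shows only that textbook baseline, not the recurrent population-coding network described above; all numbers are illustrative.

```python
def integrate_cues(x_vis, var_vis, x_prop, var_prop):
    """Single-cause Bayesian cue integration: each cue is weighted by its
    inverse variance (precision), and the combined variance is reduced."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_prop
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x_hat, var_hat

# Illustrative hand-position estimates (cm) from vision and proprioception:
print(integrate_cues(x_vis=10.0, var_vis=1.0, x_prop=12.0, var_prop=4.0))
# -> (10.4, 0.8): the estimate is pulled toward the more reliable (visual) cue.
```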

  3. A scan statistic for continuous data based on the normal probability model

    Directory of Open Access Journals (Sweden)

    Huang Lan

    2009-10-01

    Full Text Available Temporal, spatial and space-time scan statistics are commonly used to detect and evaluate the statistical significance of temporal and/or geographical disease clusters, without any prior assumptions on the location, time period or size of those clusters. Scan statistics are mostly used for count data, such as disease incidence or mortality. Sometimes there is an interest in looking for clusters with respect to a continuous variable, such as lead levels in children or low birth weight. For such continuous data, we present a scan statistic where the likelihood is calculated using the normal probability model. It may also be used for other distributions, while still maintaining the correct alpha level. In an application of the new method, we look for geographical clusters of low birth weight in New York City.
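
    For orientation, the following is a minimal sketch of a purely temporal scan using a normal-model log-likelihood ratio (separate means inside/outside a candidate window versus one common mean, with maximum-likelihood variances). It is an assumed, simplified form of such a statistic, and the Monte Carlo permutation step normally used to obtain p-values is omitted.

```python
import numpy as np

def normal_scan_llr(values, inside):
    """Normal-model log-likelihood ratio for one candidate cluster window:
    H1 = separate means inside/outside the window, H0 = one common mean,
    with maximum-likelihood variance estimates in both cases."""
    x = np.asarray(values, float)
    var0 = np.mean((x - x.mean()) ** 2)
    resid = np.where(inside, x - x[inside].mean(), x - x[~inside].mean())
    var1 = np.mean(resid ** 2)
    return 0.5 * len(x) * (np.log(var0) - np.log(var1))

def temporal_scan(values, max_width):
    """Evaluate every contiguous window up to max_width and keep the best one."""
    x = np.asarray(values, float)
    best_llr, best_window = 0.0, None
    for start in range(len(x)):
        for stop in range(start + 1, min(start + max_width, len(x) - 1) + 1):
            inside = np.zeros(len(x), bool)
            inside[start:stop] = True
            llr = normal_scan_llr(x, inside)
            if llr > best_llr:
                best_llr, best_window = llr, (start, stop)
    return best_llr, best_window

# Toy example: a run of elevated values should be flagged as the best window.
data = [1.0, 0.9, 1.1, 1.0, 2.4, 2.6, 2.5, 1.0, 1.1, 0.9]
print(temporal_scan(data, max_width=4))
```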

  4. An experimental randomized study of six different ventilatory modes in a piglet model with normal lungs

    DEFF Research Database (Denmark)

    Nielsen, J B; Sjöstrand, U H; Henneberg, S W

    1991-01-01

    A randomized study of 6 ventilatory modes was made in 7 piglets with normal lungs. Using a Servo HFV 970 (prototype system) and a Servo ventilator 900 C the ventilatory modes examined were as follows: SV-20V, i.e. volume-controlled intermittent positive-pressure ventilation (IPPV); SV-20VIosc, i...... ventilatory modes. Also the mean airway pressures were lower with the HFV modes (8-9 cm H2O) compared to 11-14 cm H2O for the other modes. The gas distribution was evaluated by N2 wash-out and a modified lung clearance index. All modes showed N2 wash-out according to a two-compartment model. The SV-20P mode had...

  5. Black-Litterman model on non-normal stock return (Case study four banks at LQ-45 stock index)

    Science.gov (United States)

    Mahrivandi, Rizki; Noviyanti, Lienda; Setyanto, Gatot Riwi

    2017-03-01

    The formation of an optimal portfolio is a method that can help investors minimize risk and optimize profitability. One model for the optimal portfolio is the Black-Litterman (BL) model. The BL model can incorporate historical data and the views of investors to form a new prediction about the return of the portfolio as a basis for preparing the asset weighting models. The BL model has two fundamental problems: the assumption of normality, and the estimation of parameters in the Bayesian prior framework when market returns do not come from a normal distribution. This study provides an alternative solution in which the stock returns and investor views of the BL model are modelled with a non-normal distribution.
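
    For context, the classical (normal-theory) BL posterior that the paper departs from blends equilibrium returns with investor views as sketched below; this is the standard formula, not the paper's non-normal extension, and the two-asset numbers are purely illustrative (not the LQ-45 bank data).

```python
import numpy as np

def black_litterman_posterior(Sigma, pi, P, Q, Omega, tau=0.05):
    """Classical (normal) Black-Litterman posterior: blends the equilibrium
    returns pi with the investor views Q, which are mapped to assets by P and
    held with uncertainty Omega."""
    tS_inv = np.linalg.inv(tau * Sigma)
    O_inv = np.linalg.inv(Omega)
    M = np.linalg.inv(tS_inv + P.T @ O_inv @ P)       # posterior covariance of the mean
    mu = M @ (tS_inv @ pi + P.T @ O_inv @ Q)          # blended expected returns
    return mu, M

# Purely illustrative two-asset example:
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])        # return covariance
pi = np.array([0.05, 0.07])                           # equilibrium (prior) returns
P = np.array([[1.0, -1.0]])                           # one view: asset 1 beats asset 2
Q = np.array([0.02])                                  # ... by 2%
Omega = np.array([[0.001]])                           # confidence in that view
print(black_litterman_posterior(Sigma, pi, P, Q, Omega))
```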

  6. A catchment-scale groundwater model including sewer pipe leakage in an urban system

    Science.gov (United States)

    Peche, Aaron; Fuchs, Lothar; Spönemann, Peter; Graf, Thomas; Neuweiler, Insa

    2016-04-01

    Keywords: pipe leakage, urban hydrogeology, catchment scale, OpenGeoSys, HYSTEM-EXTRAN. Wastewater leakage from subsurface sewer pipe defects leads to contamination of the surrounding soil and groundwater (Ellis, 2002; Wolf et al., 2004). Leakage rates at pipe defects have to be known in order to quantify contaminant input. Due to inaccessibility of subsurface pipe defects, direct (in-situ) measurements of leakage rates are tedious and associated with a high degree of uncertainty (Wolf, 2006). Proposed catchment-scale models simplify leakage rates by neglecting unsaturated zone flow or by reducing spatial dimensions (Karpf & Krebs, 2013, Boukhemacha et al., 2015). In the present study, we present a physically based 3-dimensional numerical model incorporating flow in the pipe network, in the saturated zone and in the unsaturated zone to quantify leakage rates on the catchment scale. The model consists of the pipe network flow model HYSTEM-EXTRAN (itwh, 2002), which is coupled to the subsurface flow model OpenGeoSys (Kolditz et al., 2012). We also present the newly developed coupling scheme between the two flow models. Leakage functions specific to a pipe defect are derived from simulations of pipe leakage using spatially refined grids around pipe defects. In order to minimize computational effort, these leakage functions are built into the presented numerical model using unrefined grids around pipe defects. The resulting coupled model is capable of efficiently simulating spatially distributed pipe leakage coupled with subsurface water flow in a 3-dimensional environment. References: Boukhemacha, M. A., Gogu, C. R., Serpescu, I., Gaitanaru, D., & Bica, I. (2015). A hydrogeological conceptual approach to study urban groundwater flow in Bucharest city, Romania. Hydrogeology Journal, 23(3), 437-450. doi:10.1007/s10040-014-1220-3. Ellis, J. B., & Revitt, D. M. (2002). Sewer losses and interactions with groundwater quality. Water Science and Technology, 45(3), 195
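
    The study derives its defect-specific leakage functions numerically, and their exact form is not given in the abstract. Purely for orientation, a common simplification in sewer-exfiltration modelling treats the defect as Darcy flow through a thin clogged (colmation) layer, as sketched below; all names and coefficients are hypothetical placeholders, not the coupled OpenGeoSys/HYSTEM-EXTRAN scheme itself.

```python
def leakage_rate(h_pipe, h_groundwater, defect_area_m2, k_colmation=1e-6, d_colmation=0.01):
    """Illustrative exfiltration/infiltration rate [m^3/s] at one pipe defect,
    modelled as Darcy flow across a thin clogged layer of conductivity
    k_colmation [m/s] and thickness d_colmation [m]. Positive values mean
    wastewater leaks out of the pipe; negative values mean groundwater leaks in."""
    return k_colmation / d_colmation * (h_pipe - h_groundwater) * defect_area_m2

# Hypothetical numbers: a 5 cm^2 defect with 0.3 m more head inside the pipe.
print(leakage_rate(h_pipe=2.3, h_groundwater=2.0, defect_area_m2=5e-4))
```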

  7. Including Antenna Models in Microwave Imaging for Breast-Cancer Screening

    DEFF Research Database (Denmark)

    Rubæk, Tonny; Meincke, Peter

    2006-01-01

    Microwave imaging is emerging as a tool for screening for breast cancer, but the lack of methods for including the characteristics of the antennas of the imaging systems in the imaging algorithms limits their performance. In this paper, a method for incorporating the full antenna characteristics...

  8. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon

    2014-03-01

    Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo maximum likelihood method and the auxiliary variable Markov chain Monte Carlo methods. The Bayesian stochastic approximation Monte Carlo algorithm specifically addresses this problem: It works by sampling from a sequence of approximate distributions with their average converging to the target posterior distribution, where the approximate distributions can be achieved using the stochastic approximation Monte Carlo algorithm. A strong law of large numbers is established for the Bayesian stochastic approximation Monte Carlo estimator under mild conditions. Compared to the Monte Carlo maximum likelihood method, the Bayesian stochastic approximation Monte Carlo algorithm is more robust to the initial guess of model parameters. Compared to the auxiliary variable MCMC methods, the Bayesian stochastic approximation Monte Carlo algorithm avoids the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is not available or very expensive. The Bayesian stochastic approximation Monte Carlo algorithm also provides a general framework for approximate Bayesian analysis. © 2012 Elsevier B.V. All rights reserved.

  9. A Skill Score of Trajectory Model Evaluation Using Reinitialized Series of Normalized Cumulative Lagrangian Separation

    Science.gov (United States)

    Liu, Y.; Weisberg, R. H.

    2017-12-01

    The Lagrangian separation distance between the endpoints of simulated and observed drifter trajectories is often used to assess the performance of numerical particle trajectory models. However, the separation distance fails to indicate relative model performance in weak and strong current regions, such as a continental shelf and its adjacent deep ocean. A skill score is proposed based on the cumulative Lagrangian separation distances normalized by the associated cumulative trajectory lengths. The new metric correctly indicates the relative performance of the Global HYCOM in simulating the strong currents of the Gulf of Mexico Loop Current and the weaker currents of the West Florida Shelf in the eastern Gulf of Mexico. In contrast, the Lagrangian separation distance alone gives a misleading result. Also, the observed drifter position series can be used to reinitialize the trajectory model and evaluate its performance along the observed trajectory, not just at the drifter end position. The proposed dimensionless skill score is particularly useful when the number of drifter trajectories is limited and neither a conventional Eulerian-based velocity nor a Lagrangian-based probability density function may be estimated.
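
    A minimal sketch of this type of dimensionless skill score is given below: the cumulative separation between simulated and observed positions is normalized by the cumulative length of the observed trajectory, and the score falls to zero once that ratio exceeds a tolerance. The exact weighting and the tolerance value are assumptions of the sketch, not necessarily those of the paper.

```python
import numpy as np

def trajectory_skill_score(obs_xy, sim_xy, tolerance=1.0):
    """Dimensionless skill score: cumulative model-observation separation
    normalized by the cumulative length of the observed trajectory.
    obs_xy, sim_xy: arrays of shape (T, 2) holding positions at matching times."""
    obs = np.asarray(obs_xy, float)
    sim = np.asarray(sim_xy, float)
    d = np.linalg.norm(sim - obs, axis=1)                          # separation at each fix
    l = np.cumsum(np.linalg.norm(np.diff(obs, axis=0), axis=1))    # observed path length so far
    c = d[1:].sum() / l.sum()                                      # normalized cumulative separation
    return max(0.0, 1.0 - c / tolerance)                           # 1 = perfect, 0 = no skill

# Toy example: a simulated drifter that slowly diverges from the observed one.
t = np.linspace(0.0, 1.0, 11)
obs = np.column_stack([t, np.zeros_like(t)])                       # observed: straight east
sim = np.column_stack([t, 0.1 * t])                                # simulated: drifts north
print(trajectory_skill_score(obs, sim))
```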

  10. Numerical models of single- and double-negative metamaterials including viscous and thermal losses

    DEFF Research Database (Denmark)

    Cutanda Henriquez, Vicente; Sánchez-Dehesa, José

    2017-01-01

    Negative index acoustic metamaterials are artificial structures made of subwavelength units arranged in a lattice, whose effective acoustic parameters, bulk modulus and mass density, can be negative. In these materials, sound waves propagate inside the periodic structure, assumed rigid, showing...... extraordinary properties. We are interested in two particular cases: a double-negative metamaterial, where both parameters are negative at some frequencies, and a single-negative metamaterial with negative bulk modulus within a broader frequency band. In previous research involving the double-negative...... detailed understanding on how viscous and thermal losses affect the setups at different frequencies. The modeling of a simpler single-negative metamaterial also broadens this overview. Both setups have been modeled with quadratic BEM meshes. Each sample, scaled at two different sizes, has been represented...

  11. Paint Pavement Marking Performance Prediction Model That Includes the Impacts of Snow Removal Operations

    Science.gov (United States)

    2011-03-01

    Hypothesized that snow plows wear down mountain road pavement markings. 2007 Craig et al. -Edge lines degrade slower than center/skip lines 2007...retroreflectivity to create the models. They discovered that paint pavement markings last 80% longer on Portland Cement Concrete than Asphalt Concrete at low AADT...retroreflectivity, while yellow markings lost 21%. Lu and Barter attributed the sizable degradation to snow removal, sand application, and studded

  12. An earth outgoing longwave radiation climate model. II - Radiation with clouds included

    Science.gov (United States)

    Yang, Shi-Keng; Smith, G. Louis; Bartman, Fred L.

    1988-01-01

    The model of the outgoing longwave radiation (OLWR) of Yang et al. (1987) is modified by accounting for the presence of clouds and their influence on OLWR. Cloud top temperature was adjusted so that the calculation agreed with NOAA scanning radiometer measurements. Cloudy sky cases were calculated for global average, zonal average, and worldwide distributed cases. The results were found to agree well with satellite observations.

  13. Modelling topical photodynamic therapy treatment including the continuous production of Protoporphyrin IX

    Science.gov (United States)

    Campbell, C. L.; Brown, C. T. A.; Wood, K.; Moseley, H.

    2016-11-01

    Most existing theoretical models of photodynamic therapy (PDT) assume a uniform initial distribution of the photosensitive molecule, Protoporphyrin IX (PpIX). This is an adequate assumption when the prodrug is systemically administered; however for topical PDT this is no longer a valid assumption. Topical application and subsequent diffusion of the prodrug results in an inhomogeneous distribution of PpIX, especially after short incubation times, prior to light illumination. In this work a theoretical simulation of PDT where the PpIX distribution depends on the incubation time and the treatment modality is described. Three steps of the PpIX production are considered. The first is the distribution of the topically applied prodrug, the second is the conversion from the prodrug to PpIX and the third is the light distribution which affects the PpIX distribution through photobleaching. The light distribution is modelled using a Monte Carlo radiation transfer model and indicates treatment depths of around 2 mm during daylight PDT and approximately 3 mm during conventional PDT. The results suggest that treatment depths are not only limited by the light penetration but also by the PpIX distribution.
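
    The authors use a Monte Carlo radiation transfer model; the sketch below is a much cruder 1-D stand-in intended only to show the two competing ingredients described above: exponential light attenuation with depth and first-order photobleaching of a (possibly inhomogeneous) PpIX profile. All coefficients are assumed, illustrative values.

```python
import numpy as np

def pdt_dose_profile(depth_mm, surface_fluence_rate, mu_eff_per_mm=0.2,
                     ppix0=None, bleach_coeff=1e-3, t_end_s=600.0, dt_s=1.0):
    """Toy 1-D photodynamic dose proxy: time integral of (local fluence rate x PpIX).
    Light decays exponentially with depth; PpIX photobleaches in proportion to the
    local fluence rate, so the effective treatment depth shrinks over time."""
    z = np.asarray(depth_mm, float)
    phi = surface_fluence_rate * np.exp(-mu_eff_per_mm * z)
    ppix = np.ones_like(z) if ppix0 is None else np.asarray(ppix0, float)
    dose = np.zeros_like(z)
    for _ in np.arange(0.0, t_end_s, dt_s):
        dose += phi * ppix * dt_s
        ppix = ppix * np.exp(-bleach_coeff * phi * dt_s)   # first-order photobleaching
    return dose

# Example: a PpIX profile that decreases with depth (short incubation, topical prodrug).
z = np.linspace(0.0, 5.0, 51)
print(pdt_dose_profile(z, surface_fluence_rate=1.0, ppix0=np.exp(-z / 2.0))[:5])
```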

  14. Situational effects of the school factors included in the dynamic model of educational effectiveness

    Directory of Open Access Journals (Sweden)

    Bert Creemers

    2009-08-01

    Full Text Available We present results of a longitudinal study in which 50 schools, 113 classes and 2,542 Cypriot primary students participated. We tested the validity of the dynamic model of educational effectiveness and especially its assumption that the impact of school factors depends on the current situation of the school and on the type of problems/difficulties the school is facing. Reference is made to the methods used to test this assumption of the dynamic model by measuring school effectiveness in mathematics, Greek language, and religious education over two consecutive school years. The main findings are as follows. School factors were found to have situational effects. Specifically, the development of a school policy for teaching and the school evaluation of policy for teaching were found to have stronger effects in schools where the quality of teaching at classroom level was low. Moreover, time stability in the effectiveness status of schools was identified and thereby changes in the functioning of schools were found not to have a significant impact on changes in the effectiveness status of schools. Implications of the findings for the development of the dynamic model and suggestions for further research are presented.

  15. A model of synovial fluid lubricant composition in normal and injured joints

    Directory of Open Access Journals (Sweden)

    M E Blewis

    2007-03-01

    Full Text Available The synovial fluid (SF) of joints normally functions as a biological lubricant, providing low-friction and low-wear properties to articulating cartilage surfaces through the putative contributions of proteoglycan 4 (PRG4), hyaluronic acid (HA), and surface active phospholipids (SAPL). These lubricants are secreted by chondrocytes in articular cartilage and synoviocytes in synovium, and concentrated in the synovial space by the semi-permeable synovial lining. A deficiency in this lubricating system may contribute to the erosion of articulating cartilage surfaces in conditions of arthritis. A quantitative intercompartmental model was developed to predict in vivo SF lubricant concentration in the human knee joint. The model consists of a SF compartment that (a) is lined by cells of appropriate types, (b) is bound by a semi-permeable membrane, and (c) contains factors that regulate lubricant secretion. Lubricant concentration was predicted with different chemical regulators of chondrocyte and synoviocyte secretion, and also with therapeutic interventions of joint lavage and HA injection. The model predicted steady-state lubricant concentrations that were within physiologically observed ranges, and which were markedly altered with chemical regulation. The model also predicted that when starting from a zero lubricant concentration after joint lavage, PRG4 reaches steady-state concentration ~10-40 times faster than HA. Additionally, analysis of the clearance rate of HA after therapeutic injection into SF predicted that the majority of HA leaves the joint after ~1-2 days. This quantitative intercompartmental model allows integration of biophysical processes to identify both environmental factors and clinical therapies that affect SF lubricant composition in whole joints.
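
    A single-lubricant, single-compartment version of the mass balance underlying such a model can be written as secretion into the synovial fluid minus first-order loss through the synovial lining; the rate constants below are illustrative only, not the study's fitted values. With a larger effective turnover rate a lubricant recovers its steady state faster after lavage, mirroring the PRG4-versus-HA contrast above.

```python
import numpy as np

def lubricant_recovery(secretion, clearance, c0=0.0, t_end_h=200.0, dt_h=0.01):
    """Integrate dC/dt = secretion - clearance * C (explicit Euler).
    secretion: secretion rate divided by SF volume [conc/h];
    clearance: first-order loss rate through the synovial lining [1/h].
    Steady state is secretion / clearance."""
    t = np.arange(0.0, t_end_h, dt_h)
    c = np.empty_like(t)
    c[0] = c0
    for i in range(1, len(t)):
        c[i] = c[i - 1] + dt_h * (secretion - clearance * c[i - 1])
    return t, c

# Starting from zero after a lavage (illustrative rate constants): the lubricant with the
# faster turnover approaches its steady-state concentration much sooner.
_, fast = lubricant_recovery(secretion=10.0, clearance=0.5)
_, slow = lubricant_recovery(secretion=1.0, clearance=0.02)
```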

  16. Comparison of lead isotopes with source apportionment models, including SOM, for air particulates

    International Nuclear Information System (INIS)

    Gulson, Brian; Korsch, Michael; Dickson, Bruce; Cohen, David; Mizon, Karen; Michael Davis, J.

    2007-01-01

    We have measured high precision lead isotopes in PM2.5 particulates from a highly-trafficked site (Mascot) and rural site (Richmond) in the Sydney Basin, New South Wales, Australia to compare with isotopic data from total suspended particulates (TSP) from other sites in the Sydney Basin and evaluate relationships with source fingerprints obtained from multi-element PM2.5 data. The isotopic data for the period 1998 to 2004 show seasonal peaks and troughs that are more pronounced in the rural site for the PM2.5 samples but are consistent with the TSP. The Self Organising Map (SOM) method has been applied to the multi-element PM2.5 data to evaluate its use in obtaining fingerprints for comparison with standard statistical procedures (ANSTO model). As seasonal effects are also significant for the multi-element data, the SOM modelling is reported as site and season dependent. At the Mascot site, the ANSTO model exhibits decreasing 206Pb/204Pb ratios with increasing contributions of fingerprints for 'secondary smoke' (industry), 'soil', 'smoke' and 'seaspray'. Similar patterns were shown by SOM winter fingerprints for both sites. At the rural site, there are large isotopic variations but for the majority of samples these are not associated with increased contributions from the main sources with the ANSTO model. For two winter sampling times, there are increased contributions from 'secondary industry', 'smoke', 'soil' and seaspray with one time having a source or sources of Pb similar to that of Mascot. The only positive relationship between increasing 206Pb/204Pb ratio and source contributions is found at the rural site using the SOM summer fingerprints, both of which show a significant contribution from sulphur. Several of the fingerprints using either model have significant contributions from black carbon (BC) and/or sulphur (S) that probably derive from diesel fuels and industrial sources. Increased contributions from sources with the SOM summer

  17. Hypothyroidism after primary radiotherapy for head and neck squamous cell carcinoma: Normal tissue complication probability modeling with latent time correction

    DEFF Research Database (Denmark)

    Rønjom, Marianne Feen; Brink, Carsten; Bentzen, Søren

    2013-01-01

    To develop a normal tissue complication probability (NTCP) model of radiation-induced biochemical hypothyroidism (HT) after primary radiotherapy for head and neck squamous cell carcinoma (HNSCC) with adjustment for latency and clinical risk factors.

  18. A note on the Fisher information matrix for the skew-generalized-normal model

    OpenAIRE

    Arellano-Valle, Reinaldo B.

    2013-01-01

    In this paper, the exact form of the Fisher information matrix for the skew-generalized normal (SGN) distribution is determined. The existence of singularity problems of this matrix for the skew-normal and normal particular cases is investigated. Special attention is given to the asymptotic properties of the MLEs under the skew-normality hypothesis.

  19. Normal and Fibrotic Rat Livers Demonstrate Shear Strain Softening and Compression Stiffening: A Model for Soft Tissue Mechanics.

    Directory of Open Access Journals (Sweden)

    Maryna Perepelyuk

    Full Text Available Tissues including liver stiffen and acquire more extracellular matrix with fibrosis. The relationship between matrix content and stiffness, however, is non-linear, and stiffness is only one component of tissue mechanics. The mechanical response of tissues such as liver to physiological stresses is not well described, and models of tissue mechanics are limited. To better understand the mechanics of the normal and fibrotic rat liver, we carried out a series of studies using parallel plate rheometry, measuring the response to compressive, extensional, and shear strains. We found that the shear storage and loss moduli G' and G" and the apparent Young's moduli measured by uniaxial strain orthogonal to the shear direction increased markedly with both progressive fibrosis and increasing compression, that livers shear strain softened, and that significant increases in shear modulus with compressional stress occurred within a range consistent with increased sinusoidal pressures in liver disease. Proteoglycan content and integrin-matrix interactions were significant determinants of liver mechanics, particularly in compression. We propose a new non-linear constitutive model of the liver. A key feature of this model is that, while it assumes overall liver incompressibility, it takes into account water flow and solid phase compressibility. In sum, we report a detailed study of non-linear liver mechanics under physiological strains in the normal state, early fibrosis, and late fibrosis. We propose a constitutive model that captures compression stiffening, tension softening, and shear softening, and can be understood in terms of the cellular and matrix components of the liver.

  20. Extending Galactic Habitable Zone Modeling to Include the Emergence of Intelligent Life.

    Science.gov (United States)

    Morrison, Ian S; Gowanlock, Michael G

    2015-08-01

    Previous studies of the galactic habitable zone have been concerned with identifying those regions of the Galaxy that may favor the emergence of complex life. A planet is deemed habitable if it meets a set of assumed criteria for supporting the emergence of such complex life. In this work, we extend the assessment of habitability to consider the potential for life to further evolve to the point of intelligence--termed the propensity for the emergence of intelligent life, φI. We assume φI is strongly influenced by the time durations available for evolutionary processes to proceed undisturbed by the sterilizing effects of nearby supernovae. The times between supernova events provide windows of opportunity for the evolution of intelligence. We developed a model that allows us to analyze these window times to generate a metric for φI, and we examine here the spatial and temporal variation of this metric. Even under the assumption that long time durations are required between sterilizations to allow for the emergence of intelligence, our model suggests that the inner Galaxy provides the greatest number of opportunities for intelligence to arise. This is due to the substantially higher number density of habitable planets in this region, which outweighs the effects of a higher supernova rate in the region. Our model also shows that φI is increasing with time. Intelligent life emerged at approximately the present time at Earth's galactocentric radius, but a similar level of evolutionary opportunity was available in the inner Galaxy more than 2 Gyr ago. Our findings suggest that the inner Galaxy should logically be a prime target region for searches for extraterrestrial intelligence and that any civilizations that may have emerged there are potentially much older than our own.
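
    The notion of windows of opportunity between sterilizing supernovae can be illustrated with a toy calculation: treat sterilizations as a Poisson process and count the quiet intervals that exceed the time assumed necessary for intelligence to evolve. This is only a cartoon of the idea, not the authors' φI metric, and the rates used are arbitrary.

```python
import numpy as np

def opportunity_windows(sn_rate_per_gyr, span_gyr, t_required_gyr, seed=0):
    """One realization of sterilizing events (Poisson process over span_gyr):
    returns (number of quiet windows >= t_required_gyr, their total duration in Gyr)."""
    rng = np.random.default_rng(seed)
    n_events = rng.poisson(sn_rate_per_gyr * span_gyr)
    times = np.sort(rng.uniform(0.0, span_gyr, n_events))
    gaps = np.diff(np.concatenate(([0.0], times, [span_gyr])))
    long_enough = gaps >= t_required_gyr
    return int(long_enough.sum()), float(gaps[long_enough].sum())

# Arbitrary example: 2 sterilizations/Gyr over 4 Gyr, 1 Gyr needed undisturbed.
print(opportunity_windows(sn_rate_per_gyr=2.0, span_gyr=4.0, t_required_gyr=1.0))
```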

  1. porewater chemistry experiment at Mont Terri rock laboratory. Reactive transport modelling including bacterial activity

    International Nuclear Information System (INIS)

    Tournassat, Christophe; Gaucher, Eric C.; Leupin, Olivier X.; Wersin, Paul

    2010-01-01

    Document available in extended abstract form only. An in-situ test in the Opalinus Clay formation, termed the Porewater Chemistry (PC) experiment, was run for a period of five years. It was based on the concept of diffusive equilibration, whereby traced water with a composition close to that expected in the formation was continuously circulated and monitored in a packed-off borehole. The main original focus was to obtain reliable data on the pH/pCO2 of the pore water, but because of unexpected microbially-induced redox reactions, the objective was then changed to elucidate the biogeochemical processes happening in the borehole and to understand their impact on pCO2 and pH in the low-permeability clay formation. The biologically perturbed chemical evolution of the PC experiment was simulated with reactive transport models. The aim of this modelling exercise was to develop a 'minimal' model able to reproduce the chemical evolution of the PC experiment, i.e. the chemical evolution of solute inorganic and organic compounds (organic carbon, dissolved inorganic carbon, etc.) that are coupled with each other through the simultaneous occurrence of biological transformation of solute or solid compounds, in-diffusion and out-diffusion of solute species, and precipitation/dissolution of minerals (in the borehole and in the formation). An accurate description of the initial chemical conditions in the surrounding formation, together with simplified kinetic rules mimicking the different phases of bacterial activity, allowed the evolution of all the main measured parameters (e.g. pH, TOC) to be reproduced. Analyses from the overcoring and these simulations evidence the high buffering capacity of Opalinus Clay with respect to chemical perturbations due to bacterial activity. This pH buffering capacity is mainly attributed to the carbonate system as well as to the reactivity of the clay surfaces. Glycerol leaching from the pH-electrode might be the primary organic source responsible for

  2. Coupled modeling of land hydrology–regional climate including human carbon emission and water exploitation

    Directory of Open Access Journals (Sweden)

    Zheng-Hui Xie

    2017-06-01

    Full Text Available Carbon emissions and water use are two major kinds of human activities. To reveal whether these two activities can modify the hydrological cycle and climate system in China, we conducted two sets of numerical experiments using the regional climate model RegCM4. In the first experiment, used to study the climatic responses to human carbon emissions, the model was configured over the whole of China because the impacts of carbon emissions can be detected across the whole country. Results from the first experiment revealed that near-surface air temperature may significantly increase from 2007 to 2059 at a rate exceeding 0.1 °C per decade in most areas across the country; southwestern and southeastern China also showed increasing trends in summer precipitation, with rates exceeding 10 mm per decade over the same period. In summer, only northern China showed an increasing trend of evapotranspiration, with increase rates ranging from 1 to 5 mm per decade; in winter, increase rates ranging from 1 to 5 mm per decade were observed in most regions. These effects are believed to be caused by global warming from human carbon emissions. In the second experiment, used to study the effects of human water use, the model was configured over a limited region, the Haihe River Basin in northern China, because, compared with human carbon emissions, the effects of human water use are much more local and regional, and the Haihe River Basin is the most typical region in China that suffers from both intensive human groundwater exploitation and surface water diversion. We incorporated a scheme of human water regulation into RegCM4 and conducted the second experiment. Model outputs showed that the groundwater table severely declined by ∼10 m in 1971–2000 through human groundwater over-exploitation in the basin; in fact, current conditions are so extreme that even reducing the pumping rate by half cannot eliminate the groundwater depletion cones observed in the area

  3. Able but unintelligent: including positively stereotyped black subgroups in the stereotype content model.

    Science.gov (United States)

    Walzer, Amy S; Czopp, Alexander M

    2011-01-01

    The stereotype content model (SCM) posits that warmth and competence are the key components underlying judgments about social groups. Because competence can encompass different components (e.g., intelligence, talent) different group members may be perceived to be competent for different reasons. Therefore, we believe it may be important to specify the type of competence being assessed when examining perceptions of groups that are positively stereotyped (i.e., Black athletes and musical Blacks). Consistent with the SCM, these subgroups were perceived as high in competence-talent but not in competence-intelligence and low in warmth. Both the intelligence and talent frame of competence fit in the SCM's social structural hypothesis.

  4. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
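
    The distinction drawn above can be mimicked on synthetic data: log response times that are conditionally normal given a skewed latent speed factor still look non-normal when pooled, even before heteroscedastic residuals are added. The sketch below only generates such data and reports the pooled skewness; it is not the authors' estimation procedure, and all parameter values are invented.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
n_person, n_item = 500, 20

# Skewed latent speed factor (standardized chi-square), as the model above allows.
speed = rng.chisquare(df=4, size=n_person)
speed = (speed - speed.mean()) / speed.std()

item_intensity = rng.normal(0.0, 0.3, n_item)      # item time intensities
resid_sd = rng.uniform(0.2, 0.6, n_item)           # heteroscedastic residual SDs per item

log_rt = (item_intensity[None, :] - speed[:, None]
          + rng.normal(0.0, 1.0, (n_person, n_item)) * resid_sd[None, :])

# Conditionally normal residuals, yet the pooled log-RT distribution is skewed:
print("pooled log-RT skewness:", round(float(skew(log_rt.ravel())), 3))
```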

  5. Description and Application of A Model of Seepage under A Weir Including Mechanical Clogging

    Directory of Open Access Journals (Sweden)

    Sroka Zbigniew

    2014-07-01

    Full Text Available The paper discusses seepage flow under a damming structure (a weir) in view of mechanical clogging in a thin layer at the upstream site. It was assumed that in this layer flow may be treated as one-dimensional (perpendicular to the layer), while elsewhere flow was modelled as two-dimensional. The solution in both zones was obtained in discrete form using the finite element method and the Euler method. The effect of the clogging layer on seepage flow was modelled using a third-kind boundary condition. Seepage parameters in the clogging layer were estimated based on laboratory tests conducted by Skolasińska [2006]. A typical problem was simulated to indicate how clogging affects the seepage rate and other parameters of the flow. Results showed that clogging at the upstream site has a significant effect on the distribution of seepage velocity and hydraulic gradients. The flow underneath the structure decreases with time, but these changes are relatively slow.
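
    The third-kind (Robin) condition used for the clogging layer can be read as a flux proportional to the head drop across the thin layer; a one-line sketch of that relation is given below, with made-up parameter values.

```python
def clogging_layer_flux(h_reservoir, h_boundary, k_layer, d_layer):
    """Third-kind (Robin) boundary flux per unit area [m/s]: Darcy flow through a
    thin clogging layer of hydraulic conductivity k_layer [m/s] and thickness
    d_layer [m]. As clogging lowers k_layer, the same head drop drives less seepage."""
    return k_layer / d_layer * (h_reservoir - h_boundary)

# Illustrative: halving the layer conductivity halves the inflow for the same heads.
print(clogging_layer_flux(5.0, 4.2, k_layer=1e-7, d_layer=0.02))
print(clogging_layer_flux(5.0, 4.2, k_layer=5e-8, d_layer=0.02))
```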

  6. Modeling radiation dosimetry to predict cognitive outcomes in pediatric patients with CNS embryonal tumors including medulloblastoma

    International Nuclear Information System (INIS)

    Merchant, Thomas E.; Kiehna, Erin N.; Li Chenghong; Shukla, Hemant; Sengupta, Saikat; Xiong Xiaoping; Gajjar, Amar; Mulhern, Raymond K.

    2006-01-01

    Purpose: Model the effects of radiation dosimetry on IQ among pediatric patients with central nervous system (CNS) tumors. Methods and Materials: Pediatric patients with CNS embryonal tumors (n = 39) were prospectively evaluated with serial cognitive testing, before and after treatment with postoperative, risk-adapted craniospinal irradiation (CSI) and conformal primary-site irradiation, followed by chemotherapy. Differential dose-volume data for 5 brain volumes (total brain, supratentorial brain, infratentorial brain, and left and right temporal lobes) were correlated with IQ after surgery and at follow-up by use of linear regression. Results: When the dose distribution was partitioned into 2 levels, both had a significantly negative effect on longitudinal IQ across all 5 brain volumes. When the dose distribution was partitioned into 3 levels (low, medium, and high), exposure to the supratentorial brain appeared to have the most significant impact. For most models, each Gy of exposure had a similar effect on IQ decline, regardless of dose level. Conclusions: Our results suggest that radiation dosimetry data from 5 brain volumes can be used to predict decline in longitudinal IQ. Despite measures to reduce radiation dose and treatment volume, the volume that receives the highest dose continues to have the greatest effect, which supports current volume-reduction efforts
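
    The kind of fit described above, IQ declining with time after radiotherapy at a rate that depends on the dose to each partitioned volume, can be sketched as an ordinary least-squares regression on dose-by-time interaction terms. The data and coefficients below are synthetic, not the study's results, and the study itself used more elaborate longitudinal modelling.

```python
import numpy as np

rng = np.random.default_rng(2)
n_patients, n_visits = 40, 4

time_yr = np.tile(np.arange(n_visits, dtype=float), n_patients)     # years since RT
dose_low = np.repeat(rng.uniform(5, 25, n_patients), n_visits)      # Gy, low-dose volume
dose_high = np.repeat(rng.uniform(50, 60, n_patients), n_visits)    # Gy, high-dose volume

# Synthetic ground truth: every extra Gy steepens the yearly IQ decline slightly.
iq = (100.0 - (0.02 * dose_low + 0.05 * dose_high) * time_yr
      + rng.normal(0.0, 3.0, n_patients * n_visits))

# Intercept, time, and the dose-by-time interactions (IQ points lost per year per Gy).
X = np.column_stack([np.ones_like(time_yr), time_yr, dose_low * time_yr, dose_high * time_yr])
coef, *_ = np.linalg.lstsq(X, iq, rcond=None)
print("estimated decline per year per Gy (low-dose, high-dose volume):", coef[2:])
```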

  7. An Improved Heat Budget Estimation Including Bottom Effects for General Ocean Circulation Models

    Science.gov (United States)

    Carder, Kendall; Warrior, Hari; Otis, Daniel; Chen, R. F.

    2001-01-01

    This paper studies the effects of the underwater light field on heat-budget calculations of general ocean circulation models for shallow waters. The presence of a bottom significantly alters the estimated heat budget in shallow waters, which affects the corresponding thermal stratification and hence modifies the circulation. Based on the data collected during the COBOP field experiment near the Bahamas, we have used a one-dimensional turbulence closure model to show the influence of the bottom reflection and absorption on the sea surface temperature field. The water depth has an almost one-to-one correlation with the temperature rise. Effects of varying the bottom albedo by replacing the sea grass bed with a coral sand bottom, also has an appreciable effect on the heat budget of the shallow regions. We believe that the differences in the heat budget for the shallow areas will have an influence on the local circulation processes and especially on the evaporative and long-wave heat losses for these areas. The ultimate effects on humidity and cloudiness of the region are expected to be significant as well.

  8. On the modelling of semi-insulating GaAs including surface tension and bulk stresses

    Energy Technology Data Exchange (ETDEWEB)

    Dreyer, W.; Duderstadt, F.

    2004-07-01

    Necessary heat treatment of single crystal semi-insulating Gallium Arsenide (GaAs), which is deployed in micro- and opto-electronic devices, generates undesirable liquid precipitates in the solid phase. The appearance of precipitates is influenced by surface tension at the liquid/solid interface and deviatoric stresses in the solid. The central quantity for the description of the various aspects of phase transitions is the chemical potential, which can be additively decomposed into a chemical and a mechanical part. In particular the calculation of the mechanical part of the chemical potential is of crucial importance. We determine the chemical potential in the framework of the St. Venant-Kirchhoff law, which gives an appropriate stress/strain relation for many solids in the small strain regime. We establish criteria which allow the correct replacement of the St. Venant-Kirchhoff law by the simpler Hooke law. The main objectives of this study are: (i) We develop a thermo-mechanical model that describes diffusion and interface motion, which both are strongly influenced by surface tension effects and deviatoric stresses. (ii) We give an overview and outlook on problems that can be posed and solved within the framework of the model. (iii) We calculate non-standard phase diagrams, i.e. those that take into account surface tension and non-deviatoric stresses, for GaAs above 786 °C, and we compare the results with classical phase diagrams without these phenomena. (orig.)

  9. Effect of neurosteroids on a model lipid bilayer including cholesterol: An Atomic Force Microscopy study.

    Science.gov (United States)

    Sacchi, Mattia; Balleza, Daniel; Vena, Giulia; Puia, Giulia; Facci, Paolo; Alessandrini, Andrea

    2015-05-01

    Amphiphilic molecules which have a biological effect on specific membrane proteins could also affect lipid bilayer properties, possibly resulting in a modulation of the overall membrane behavior. In light of this consideration, it is important to study the possible effects of amphiphilic molecules of pharmacological interest on model systems which recapitulate some of the main properties of the biological plasma membranes. In this work we studied the effect of a neurosteroid, Allopregnanolone (3α,5α-tetrahydroprogesterone or Allo), on a model bilayer composed of the ternary lipid mixture DOPC/bSM/chol. We chose ternary mixtures which present, at room temperature, a phase coexistence of liquid ordered (Lo) and liquid disordered (Ld) domains and which reside near a critical point. We found that Allo, which is able to partition strongly into the lipid bilayer, induces a marked increase in the bilayer area and modifies the relative proportion of the two phases, favoring the Ld phase. We also found that the neurosteroid shifts the miscibility temperature to higher values, in a way similar to what happens when the cholesterol concentration is decreased. Interestingly, an isoform of Allo, isoAllopregnanolone (3β,5α-tetrahydroprogesterone or isoAllo), known to inhibit the effects of Allo on GABAA receptors, has an opposite effect on the bilayer properties. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Including temperature in a wavefunction description of the dynamics of the quantum Rabi model

    Science.gov (United States)

    Werther, Michael; Grossmann, Frank

    2018-01-01

    We present a wavefunction methodology to account for finite temperature initial conditions in the quantum Rabi model. The approach is based on the Davydov Ansatz together with a statistical sampling of the canonical harmonic oscillator initial density matrix. Equations of motion are obtained from a variational principle and numerical results are compared to those of the thermal Hamiltonian approach. For a system consisting of a single spin and a single oscillator and for moderate coupling strength, we compare our new results with full quantum ones as well as with other Davydov-type results based on alternative sampling/summation strategies. All of these perform better than the ones based on the thermal Hamiltonian approach. The best agreement is shown by a Boltzmann weighting of individual eigenstate propagations. Extending this to a bath of many oscillators will, however, be very demanding numerically. The use of any one of the investigated stochastic sampling approaches will then be favorable.
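
    For reference, the quantum Rabi model discussed here is conventionally written as follows (standard form quoted from the general literature, not from this paper):

    \[
    \hat H = \frac{\hbar\omega_0}{2}\,\hat\sigma_z + \hbar\omega\,\hat a^{\dagger}\hat a
    + \hbar g\,\hat\sigma_x\left(\hat a + \hat a^{\dagger}\right),
    \]

    with a two-level system of splitting \(\omega_0\) coupled with strength \(g\) to a single harmonic oscillator of frequency \(\omega\). Finite temperature enters only through the thermal occupation of the oscillator in the initial density matrix, which is what the statistical sampling of the Davydov Ansatz is designed to capture.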

  11. Dynamic Analysis of Thick Plates Including Deep Beams on Elastic Foundations Using Modified Vlasov Model

    Directory of Open Access Journals (Sweden)

    Korhan Ozgan

    2013-01-01

    Full Text Available Dynamic analysis of foundation plate-beam systems with transverse shear deformation is presented using the modified Vlasov foundation model. The finite element formulation of the problem is derived by using an 8-node (PBQ8) finite element based on Mindlin plate theory for the plate and a 2-node Hughes element based on Timoshenko beam theory for the beam. A selective reduced integration technique is used to avoid the shear locking problem in the evaluation of the stiffness matrices of both elements. The effects of beam thickness, the aspect ratio of the plate, and subsoil depth on the response of the plate-beam-soil system are analyzed. Numerical examples show that the displacements, bending moments and shear forces are changed significantly by adding the beams.

  12. Robust and Adaptive OMR System Including Fuzzy Modeling, Fusion of Musical Rules, and Possible Error Detection

    Directory of Open Access Journals (Sweden)

    Bloch Isabelle

    2007-01-01

    Full Text Available This paper describes a system for optical music recognition (OMR) in the case of monophonic typeset scores. After clarifying the difficulties specific to this domain, we propose appropriate solutions at both the image analysis level and the high-level interpretation level. Thus, a recognition and segmentation method is designed that allows dealing with common printing defects and numerous symbol interconnections. Then, musical rules are modeled and integrated in order to make a consistent decision. This high-level interpretation step relies on the fuzzy sets and possibility framework, since it allows dealing with symbol variability, flexibility, and imprecision of music rules, and merging all these heterogeneous pieces of information. Other innovative features are the indication of potential errors and the possibility of applying learning procedures, in order to gain in robustness. Experiments conducted on a large database show that the proposed method constitutes an interesting contribution to OMR.

  13. Energy-based fatigue model for shape memory alloys including thermomechanical coupling

    Science.gov (United States)

    Zhang, Yahui; Zhu, Jihong; Moumni, Ziad; Van Herpen, Alain; Zhang, Weihong

    2016-03-01

    This paper is aimed at developing a low cycle fatigue criterion for pseudoelastic shape memory alloys to take into account thermomechanical coupling. To this end, fatigue tests are carried out at different loading rates under strain control at room temperature using NiTi wires. Temperature distribution on the specimen is measured using a high speed thermal camera. Specimens are tested to failure and fatigue lifetimes of specimens are measured. Test results show that the fatigue lifetime is greatly influenced by the loading rate: as the strain rate increases, the fatigue lifetime decreases. Furthermore, it is shown that the fatigue cracks initiate when the stored energy inside the material reaches a critical value. An energy-based fatigue criterion is thus proposed as a function of the irreversible hysteresis energy of the stabilized cycle and the loading rate. Fatigue life is calculated using the proposed model. The experimental and computational results compare well.

  14. Areal rainfall estimation using moving cars - computer experiments including hydrological modeling

    Science.gov (United States)

    Rabiei, Ehsan; Haberlandt, Uwe; Sester, Monika; Fitzner, Daniel; Wallner, Markus

    2016-09-01

    The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rain rate. The optical sensors used in that study are designed for operating the windscreen wipers and showed promising results for rainfall measurement purposes. Their measurement accuracy has been quantified in laboratory experiments. Considering those errors explicitly, the main objective of this study is to investigate the benefit of using RCs for estimating areal rainfall. For that, computer experiments are carried out in which radar rainfall is considered as the reference and the other sources of data, i.e., RCs and rain gauges, are extracted from the radar data. Comparing the quality of areal rainfall estimation by RCs with that by rain gauges and the reference data helps to investigate the benefit of the RCs. The value of this additional source of data is assessed not only for areal rainfall estimation performance but also for use in hydrological modeling. Considering measurement errors derived from laboratory experiments, the results show that the RCs provide useful additional information for areal rainfall estimation as well as for hydrological modeling. Moreover, when larger uncertainties are assumed for the RCs, they remain useful up to a certain level for areal rainfall estimation and discharge simulation.

  15. ECO: a generic eutrophication model including comprehensive sediment-water interaction.

    Science.gov (United States)

    Smits, Johannes G C; van Beek, Jan K L

    2013-01-01

    The content and calibration of the comprehensive generic 3D eutrophication model ECO for water and sediment quality is presented. Based on a computational grid for water and sediment, ECO is used as a tool for water quality management to simulate concentrations and mass fluxes of nutrients (N, P, Si), phytoplankton species, detrital organic matter, electron acceptors and related substances. ECO combines integral simulation of water and sediment quality with sediment diagenesis and closed mass balances. Its advanced process formulations for substances in the water column and the bed sediment were developed to allow for a much more dynamic calculation of the sediment-water exchange fluxes of nutrients resulting from steep concentration gradients across the sediment-water interface than is possible with other eutrophication models. ECO is intended to calculate more accurately the accumulation of organic matter and nutrients in the sediment, and to allow for more accurate prediction of phytoplankton biomass and water quality in response to mitigative measures such as nutrient load reduction. ECO was calibrated for shallow Lake Veluwe (The Netherlands). Due to restoration measures this lake underwent a transition from hypertrophic conditions to moderately eutrophic conditions, leading to extensive colonization by submerged macrophytes. ECO reproduces the observed water quality well for the transition period of ten years. The values of its process coefficients are in line with ranges derived from the literature. ECO's calculation results underline the importance of redox processes and phosphate speciation for the nutrient return fluxes. Among other things, the results suggest that authigenic formation of a stable apatite-like mineral in the sediment can contribute significantly to oligotrophication of a lake after a phosphorus load reduction.

  16. A phantom model demonstration of tomotherapy dose painting delivery, including managed respiratory motion without motion management

    Energy Technology Data Exchange (ETDEWEB)

    Kissick, Michael W; Mo Xiaohu; McCall, Keisha C; Mackie, Thomas R [Department of Medical Physics, Wisconsin Institutes for Medical Research, 111 Highland Avenue, University of Wisconsin-Madison, Madison, WI 53705 (United States); Schubert, Leah K [Radiation Oncology Department, University of Nebraska Medical Center, Omaha, NE 68198 (United States); Westerly, David C, E-mail: mwkissick@wisc.ed [Department of Radiation Oncology, University of Colorado Denver, Aurora, CO 80045 (United States)

    2010-05-21

    The aim of the study was to demonstrate a potential alternative scenario for accurate dose-painting (non-homogeneous planned dose) delivery at 1 cm beam width with helical tomotherapy (HT) in the presence of 1 cm, three-dimensional, intra-fraction respiratory motion, but without any active motion management. A model dose-painting experiment was planned and delivered to the average position (proper phase of a 4DCT scan) with three spherical PTV levels to approximate dose painting to compensate for hypothetical hypoxia in a model lung tumor. Realistic but regular motion was produced with the Washington University 4D Motion Phantom. A small spherical Virtual Water(TM) phantom was used to simulate a moving lung tumor inside of the LUNGMAN(TM) anthropomorphic chest phantom to simulate realistic heterogeneity uncertainties. A piece of 4 cm Gafchromic EBT(TM) film was inserted into the 6 cm diameter sphere. TomoTherapy, Inc., DQA(TM) software was used to verify the delivery performed on a TomoTherapy Hi-Art II(TM) device. The dose uncertainty in the purposeful absence of motion management and in the absence of large, low frequency drifts (periods greater than the beam width divided by the couch velocity) or randomness in the breathing displacement yields very favorable results. Instead of interference effects, only small blurring is observed because of the averaging of many breathing cycles and beamlets and the avoidance of interference. Dose painting during respiration with helical tomotherapy is feasible in certain situations without motion management. A simple recommendation is to make respiration as regular as possible without low frequency drifting. The blurring is just small enough to suggest that it may be acceptable to deliver without motion management if the motion is equal to the beam width or smaller (at respiration frequencies) when registered to the average position.

  17. A phantom model demonstration of tomotherapy dose painting delivery, including managed respiratory motion without motion management

    International Nuclear Information System (INIS)

    Kissick, Michael W; Mo Xiaohu; McCall, Keisha C; Mackie, Thomas R; Schubert, Leah K; Westerly, David C

    2010-01-01

    The aim of the study was to demonstrate a potential alternative scenario for accurate dose-painting (non-homogeneous planned dose) delivery at 1 cm beam width with helical tomotherapy (HT) in the presence of 1 cm, three-dimensional, intra-fraction respiratory motion, but without any active motion management. A model dose-painting experiment was planned and delivered to the average position (proper phase of a 4DCT scan) with three spherical PTV levels to approximate dose painting to compensate for hypothetical hypoxia in a model lung tumor. Realistic but regular motion was produced with the Washington University 4D Motion Phantom. A small spherical Virtual Water(TM) phantom was used to simulate a moving lung tumor inside of the LUNGMAN(TM) anthropomorphic chest phantom to simulate realistic heterogeneity uncertainties. A piece of 4 cm Gafchromic EBT(TM) film was inserted into the 6 cm diameter sphere. TomoTherapy, Inc., DQA(TM) software was used to verify the delivery performed on a TomoTherapy Hi-Art II(TM) device. The dose uncertainty in the purposeful absence of motion management and in the absence of large, low frequency drifts (periods greater than the beam width divided by the couch velocity) or randomness in the breathing displacement yields very favorable results. Instead of interference effects, only small blurring is observed because of the averaging of many breathing cycles and beamlets and the avoidance of interference. Dose painting during respiration with helical tomotherapy is feasible in certain situations without motion management. A simple recommendation is to make respiration as regular as possible without low frequency drifting. The blurring is just small enough to suggest that it may be acceptable to deliver without motion management if the motion is equal to the beam width or smaller (at respiration frequencies) when registered to the average position.

  18. Association Between the Visceral Adiposity Index and Homeostatic Model Assessment of Insulin Resistance in Participants With Normal Waist Circumference.

    Science.gov (United States)

    Ji, Baolan; Qu, Hua; Wang, Hang; Wei, Huili; Deng, Huacong

    2017-09-01

    We assessed the correlation between the visceral adiposity index (VAI; a useful indicator of adipose distribution and function) and homeostatic model assessment of insulin resistance (HOMA-IR) in participants with normal waist circumference. A cross-sectional study was conducted, which included 1834 Chinese adults. The blood pressure, anthropometric measurements, fasting and postprandial blood glucose, fasting insulin, and lipid profiles were measured. The VAI and HOMA-IR were calculated. Participants were divided into 4 groups according to the HOMA-IR level, and the correlation between the VAI and HOMA-IR was analyzed. The VAI gradually increased across the HOMA-IR quartiles and was positively correlated with the HOMA-IR. A logistic regression analysis indicated that VAI elevation was the main risk factor for increased HOMA-IR in both genders. Overall, the VAI was closely correlated with the HOMA-IR in a population without central obesity.
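
    Neither index is defined in this record; the standard published definitions, assumed here (HOMA-IR after Matthews et al., VAI after Amato et al.), are:

    \[
    \mathrm{HOMA\text{-}IR} = \frac{\text{fasting glucose (mmol/L)} \times \text{fasting insulin (µU/mL)}}{22.5},
    \]
    \[
    \mathrm{VAI}_{\text{men}} = \frac{WC}{39.68 + 1.88\,BMI}\times\frac{TG}{1.03}\times\frac{1.31}{HDL},
    \qquad
    \mathrm{VAI}_{\text{women}} = \frac{WC}{36.58 + 1.89\,BMI}\times\frac{TG}{0.81}\times\frac{1.52}{HDL},
    \]

    with waist circumference WC in cm and triglycerides TG and HDL cholesterol in mmol/L; the exact constants should be checked against the cited literature before reuse.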

  19. Quantitative modeling of electron spectroscopy intensities for supported nanoparticles: The hemispherical cap model for non-normal detection

    Science.gov (United States)

    Sharp, James C.; Campbell, Charles T.

    2015-02-01

    Nanoparticles of one element or compound dispersed across the surface of another substrate element or compound form the basis for many materials of great technological importance, such as heterogeneous catalysts, fuel cells and other electrocatalysts, photocatalysts, chemical sensors and biomaterials. They also form during film growth by deposition in many fabrication processes. The average size and number density of such nanoparticles are often very important, and these can be estimated with electron microscopy or scanning tunneling microscopy. However, this is very time consuming and often unavailable with sufficient resolution when the particle size is ~ 1 nm. Because the probe depth of electron spectroscopies like X-Ray Photoelectron Spectroscopy (XPS) or Auger Electron Spectroscopy (AES) is ~ 1 nm, these provide quantitative information on both the total amount of adsorbed material when it is in the form of such small nanoparticles, and the particle thickness. For electron spectroscopy conducted with electron detection normal to the surface, Diebold et al. (1993) derived analytical relationships between the signal intensities for the adsorbate and substrate and the particles' average size and number density, under the assumption that all the particles have hemispherical shape and the same radius. In this paper, we report a simple angle- and particle-size-dependent correction factor that can be applied to these analytical expressions so that they can also be extended to measurements made at other detection angles away from the surface normal. This correction factor is computed using numerical integration and presented for use in future modeling. This correction factor is large (> 2) for angles beyond 60°, so comparing model predictions to measurements at both 0° and ≥ 60° will also provide a new means for testing the model's assumptions (hemispherical shape and fixed size particles). The ability to compare the hemispherical cap model at several angles
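
    The off-normal correction factor itself is not reproduced in this record, but the flavor of the underlying calculation can be sketched for the simpler normal-detection case: electrons from the substrate beneath a hemispherical cap of radius R traverse a vertical chord of length sqrt(R^2 - r^2), and the area-averaged attenuation follows by numerical integration. The Python sketch below is an illustration under those simplifying assumptions, not the authors' code or their angular correction.

```python
# Hedged sketch: area-averaged attenuation of the substrate signal beneath a
# hemispherical cap of radius R for detection normal to the surface.
# t(r) = sqrt(R^2 - r^2) is the vertical path length through the cap at radial
# offset r, and lam is the electron inelastic mean free path. Illustrative only.
import numpy as np

def substrate_attenuation_normal(R, lam, n=10000):
    r = np.linspace(0.0, R, n)
    t = np.sqrt(R**2 - r**2)      # chord length through the cap
    weights = 2.0 * np.pi * r     # annular area element
    atten = np.exp(-t / lam)      # exponential attenuation along the chord
    return np.trapz(atten * weights, r) / (np.pi * R**2)

# Example: 1 nm radius particles, 1.5 nm mean free path.
print(substrate_attenuation_normal(R=1.0, lam=1.5))
```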

  20. Shiba states and zero-bias anomalies in the hybrid normal-superconductor Anderson model

    Science.gov (United States)

    Žitko, Rok; Lim, Jong Soo; López, Rosa; Aguado, Ramón

    2015-01-01

    Hybrid semiconductor-superconductor systems are interesting melting pots where various fundamental effects in condensed-matter physics coexist. For example, when a quantum dot is coupled to a superconducting electrode two very distinct phenomena, superconductivity and the Kondo effect, compete. As a result of this competition, the system undergoes a quantum phase transition when the superconducting gap Δ is of the order of the Kondo temperature T_K. The underlying physics behind such transition ultimately relies on the physics of the Anderson model where the standard metallic host is replaced by a superconducting one, namely the physics of a (quantum) magnetic impurity in a superconductor. A characteristic feature of this hybrid system is the emergence of subgap bound states, the so-called Yu-Shiba-Rusinov (YSR) states, which cross zero energy across the quantum phase transition, signaling a switching of the fermion parity and spin (doublet or singlet) of the ground state. Interestingly, similar hybrid devices based on semiconducting nanowires with spin-orbit coupling may host exotic zero-energy bound states with Majorana character. Both parity crossings and Majorana bound states (MBSs) are experimentally marked by zero-bias anomalies in transport, which are detected by coupling the hybrid device with an extra normal contact. We here demonstrate theoretically that this extra contact, usually considered as a nonperturbing weak tunneling probe, leads to nontrivial effects. This conclusion is supported by numerical renormalization-group calculations of the phase diagram of an Anderson impurity coupled to both superconducting and normal-state leads. We obtain this phase diagram for an arbitrary ratio Δ/T_K, which allows us to analyze relevant experimental scenarios, such as parity crossings as well as Kondo features induced by the normal lead, as this ratio changes. Spectral functions at finite temperatures and magnetic fields, which can be directly linked to
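
    The model underlying this discussion is the single-impurity Anderson model with a BCS superconducting lead (plus, in this work, an additional normal lead). Its standard form, assumed here rather than quoted from the paper, is:

    \[
    \hat H = \sum_{k\sigma}\varepsilon_{k}\,\hat c^{\dagger}_{k\sigma}\hat c_{k\sigma}
    - \Delta\sum_{k}\left(\hat c^{\dagger}_{k\uparrow}\hat c^{\dagger}_{-k\downarrow} + \mathrm{h.c.}\right)
    + \varepsilon_d\sum_{\sigma}\hat n_{d\sigma} + U\,\hat n_{d\uparrow}\hat n_{d\downarrow}
    + \sum_{k\sigma}\left(V_{k}\,\hat c^{\dagger}_{k\sigma}\hat d_{\sigma} + \mathrm{h.c.}\right),
    \]

    where Δ is the superconducting gap, ε_d and U are the dot level and charging energy, and V_k is the hybridization. The Yu-Shiba-Rusinov subgap states and the singlet-doublet transition emerge from the competition between Δ and the Kondo scale T_K set by U and the hybridization.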

  1. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    Directory of Open Access Journals (Sweden)

    Jin Dai

    2014-01-01

    Full Text Available The similarity between objects is a core research area of data mining. In order to reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is applied to text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish the conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative concept of each text, extracted from the texts of the same category, is jumped up into a whole-category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. Comparisons among different text classifiers over different feature selection sets show that CCJU-TC not only adapts well to different text features but also outperforms the traditional classifiers.

  2. The effects of pravastatin on the normal human placenta: Lessons from ex-vivo models.

    Directory of Open Access Journals (Sweden)

    Adelina Balan

    Full Text Available Research in animal models and preliminary clinical studies in humans support the use of pravastatin for the prevention of preeclampsia. However, its use during pregnancy is still controversial due to limited data about its effect on the human placenta and fetus. In the present study, human placental cotyledons were perfused in the absence or presence of pravastatin in the maternal reservoir (PraM). In addition, placental explants were treated with pravastatin for 5, 24 and 72 h under normoxia and hypoxia. We monitored the secretion of placental growth factor (PlGF), soluble fms-like tyrosine kinase-1 (sFlt-1) and soluble endoglin (sEng), endothelial nitric oxide synthase (eNOS) expression and activation, and the fetal vasoconstriction response to angiotensin-II. The concentrations of PlGF, sFlt-1 and sEng were not significantly altered by pravastatin in PraM cotyledons and in placental explants compared to control. Under hypoxic conditions, pravastatin decreased sFlt-1 concentrations. eNOS expression was significantly increased in PraM cotyledons but not in pravastatin-treated placental explants cultured under normoxia or hypoxia. eNOS phosphorylation was not significantly affected by pravastatin. The feto-placental vascular tone and the fetal vasoconstriction response to angiotensin-II did not change following exposure of the maternal circulation to pravastatin. We found that pravastatin does not alter the essential physiological functions of the placenta investigated in the study. The relevance of the study lies in the fact that it expands the current knowledge obtained thus far regarding the effect of the drug on the normal human placenta. This data is reassuring and important for clinicians that consider the treatment of high-risk patients with pravastatin, a treatment that exposes some normal pregnancies to the drug.

  3. Identification of a developmental gene expression signature, including HOX genes, for the normal human colonic crypt stem cell niche: overexpression of the signature parallels stem cell overpopulation during colon tumorigenesis.

    Science.gov (United States)

    Bhatlekar, Seema; Addya, Sankar; Salunek, Moreh; Orr, Christopher R; Surrey, Saul; McKenzie, Steven; Fields, Jeremy Z; Boman, Bruce M

    2014-01-15

    Our goal was to identify a unique gene expression signature for human colonic stem cells (SCs). Accordingly, we determined the gene expression pattern for a known SC-enriched region--the crypt bottom. Colonic crypts and isolated crypt subsections (top, middle, and bottom) were purified from fresh, normal, human, surgical specimens. We then used an innovative strategy that used two-color microarrays (∼18,500 genes) to compare gene expression in the crypt bottom with expression in the other crypt subsections (middle or top). Array results were validated by PCR and immunostaining. About 25% of genes analyzed were expressed in crypts: 88 preferentially in the bottom, 68 in the middle, and 131 in the top. Among genes upregulated in the bottom, ∼30% were classified as growth and/or developmental genes including several in the PI3 kinase pathway, a six-transmembrane protein STAMP1, and two homeobox (HOXA4, HOXD10) genes. qPCR and immunostaining validated that HOXA4 and HOXD10 are selectively expressed in the normal crypt bottom and are overexpressed in colon carcinomas (CRCs). Immunostaining showed that HOXA4 and HOXD10 are co-expressed with the SC markers CD166 and ALDH1 in cells at the normal crypt bottom, and the number of these co-expressing cells is increased in CRCs. Thus, our findings show that these two HOX genes are selectively expressed in colonic SCs and that HOX overexpression in CRCs parallels the SC overpopulation that occurs during CRC development. Our study suggests that developmental genes play key roles in the maintenance of normal SCs and crypt renewal, and contribute to the SC overpopulation that drives colon tumorigenesis.

  4. Improvement in genetic evaluation of female fertility in dairy cattle using multiple-trait models including milk production traits

    DEFF Research Database (Denmark)

    Sun, C; Madsen, P; Lund, M S

    2010-01-01

    This study investigated the improvement in genetic evaluation of fertility traits by using production traits as secondary traits (MILK = 305-d milk yield, FAT = 305-d fat yield, and PROT = 305-d protein yield). Data including 471,742 records from first lactations of Denmark Holstein cows, covering...... (DATAC1, which only contained the first crop daughters) for proven bulls. In addition, the superiority of the models was evaluated by expected reliability of EBV, calculated from the prediction error variance of EBV. Based on these criteria, the models combining milk production traits showed better model...... stability and predictive ability than single-trait models for all the fertility traits, except for nonreturn rate within 56 d after first service. The stability and predictive ability for the model including MILK or PROT were similar to the model including all 3 milk production traits and better than...

  5. Regional modelling of future African climate north of 15°S including greenhouse warming and land degradation

    Energy Technology Data Exchange (ETDEWEB)

    Paeth, H. [Geographical Institute, University of Wuerzburg, Am Hubland, 97074 Wuerzburg (Germany); Thamm, H.P. [Geographical Institute, University of Bonn, Bonn (Germany)

    2007-08-15

    Previous studies have highlighted the crucial role of land degradation in tropical African climate. This effect urgently has to be taken into account when predicting future African climate under enhanced greenhouse conditions. Here, we present time slice experiments of African climate until 2025, using a high-resolution regional climate model. A plausible scenario of future land use changes, involving vegetation loss and soil degradation, is prescribed simultaneously with increasing greenhouse-gas concentrations in order to detect where the different forcings counterbalance or reinforce each other. This procedure allows us to define the regions of highest vulnerability with respect to future freshwater availability and food security in tropical and subtropical Africa and may provide a decision basis for political measures. The model simulates a considerable reduction in precipitation amount until 2025 over most of tropical Africa, locally amounting to more than 500 mm (20-40% of the annual sum), particularly in the Congo Basin and the Sahel Zone. The change is strongest in boreal summer and basically reflects the pattern of maximum vegetation cover during the seasonal cycle. The related change in the surface energy fluxes induces a substantial near-surface warming by up to 7°C. Owing to the modified temperature gradients over tropical Africa, the summer monsoon circulation intensifies and transports more humid air masses into the southern part of West Africa. This humidifying effect is overcompensated by a remarkable decrease in surface evaporation, leading to the overall drying tendency over most of Africa. Extreme daily rainfall events become stronger in autumn but less intense in spring. Summer and autumn appear to be characterized by more severe heat waves over sub-Saharan West Africa. In addition, the Tropical Easterly Jet is weakening, leading to enhanced drought conditions in the Sahel Zone. All these results suggest that the local impact of land

  6. Weakly coupled map lattice models for multicellular patterning and collective normalization of abnormal single-cell states

    Science.gov (United States)

    García-Morales, Vladimir; Manzanares, José A.; Mafe, Salvador

    2017-04-01

    We present a weakly coupled map lattice model for patterning that explores the effects exerted by weakening the local dynamic rules on model biological and artificial networks composed of two-state building blocks (cells). To this end, we use two cellular automata models based on (i) a smooth majority rule (model I) and (ii) a set of rules similar to those of Conway's Game of Life (model II). The normal and abnormal cell states evolve according to local rules that are modulated by a parameter κ. This parameter quantifies the effective weakening of the prescribed rules due to the limited coupling of each cell to its neighborhood and can be experimentally controlled by appropriate external agents. The emergent spatiotemporal maps of single-cell states should be of significance for positional information processes as well as for intercellular communication in tumorigenesis, where the collective normalization of abnormal single-cell states by a predominantly normal neighborhood may be crucial.
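
    As a concrete, purely illustrative realization of a "majority rule weakened by κ", the Python sketch below relaxes each cell toward the majority state of its Moore neighborhood, with κ controlling how strongly the local rule is enforced. The specific update rule is an assumption made for illustration; it is not the rule used in the paper.

```python
# Hedged sketch of a weakly coupled two-state lattice: each cell relaxes toward
# the majority state of its 8-cell Moore neighborhood with coupling strength
# kappa (kappa = 1 enforces the local rule fully, kappa = 0 decouples the cells).
import numpy as np

def step(grid, kappa):
    # Sum over the 8 Moore neighbors with periodic boundary conditions.
    neigh = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
                for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    majority = (neigh > 4).astype(float)          # neighborhood majority state
    return (1 - kappa) * grid + kappa * majority  # kappa-weakened local rule

rng = np.random.default_rng(1)
grid = (rng.random((64, 64)) > 0.5).astype(float)  # random normal/abnormal states
for _ in range(50):
    grid = step(grid, kappa=0.6)
```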

  7. Rat tibial osteotomy model providing a range of normal to impaired healing.

    Science.gov (United States)

    Miles, Joan D; Weinhold, Paul; Brimmo, Olubusola; Dahners, Laurence

    2011-01-01

    The purpose of this study was to develop an inexpensive and easily implemented rat tibial osteotomy model capable of producing a range of healing outcomes. A saw blade was used to create a transverse osteotomy of the tibia in 89 Sprague-Dawley rats. A 0.89 mm diameter stainless steel wire was then inserted as an intramedullary nail to stabilize the fracture. To impair healing, 1, 2, or 3 mm cylindrical polyetheretherketone (PEEK) spacer beads were threaded onto the wires, between the bone ends. Fracture healing was evaluated radiographically, biomechanically, and histologically at 5 weeks. Means were compared for statistical differences by one-way ANOVA and Holm-Sidak multiple comparison testing. The mean number of "cortices bridged" for the no spacer group was 3.4 (SD ± 0.8), which was significantly greater than in the 1 mm (2.3 ± 1.4), 2 mm (0.8 ± 0.7), and 3 mm (0.3 ± 0.4) groups (p < 0.003). Biomechanical results correlated with radiographic findings, with an ultimate torque of 172 ± 53, 137 ± 41, 90 ± 38, and 24 ± 23 N·mm with a 0, 1, 2, or 3 mm defect, respectively. In conclusion, we have demonstrated that this inexpensive, technically straightforward model can be used to create a range of outcomes from normal healing to impaired healing, to nonunions. This model may be useful for testing new therapeutic strategies to promote fracture healing, materials thought to be able to heal critical-sized defects, or evaluating agents suspected of impairing healing. Copyright © 2010 Orthopaedic Research Society.

  8. Wave propagation simulation in normal and infarcted myocardium: computational and modelling issues.

    Science.gov (United States)

    Maglaveras, N; Van Capelle, F J; De Bakker, J M

    1998-01-01

    Simulation of propagating action potentials (PAP) in normal and abnormal myocardium is used for the understanding of mechanisms responsible for eliciting dangerous arrhythmias. One- and two-dimensional models dealing with PAP properties are reviewed in this paper from both computational and mathematical aspects. These models are used for linking theoretical and experimental results. The discontinuous nature of the PAP is demonstrated through the combination of experimental and theoretically derived results. In particular it can be shown that for increased intracellular coupling resistance the PAP upstroke phase properties (V_max, dV/dt_max and τ_foot) change considerably, and in some cases non-monotonically with increased coupling resistance. It is shown that τ_foot is a parameter that is very sensitive to the cell's distance to the stimulus site, the stimulus strength and the coupling resistance. In particular it can be shown that in a one-dimensional structure the τ_foot value can increase dramatically for lower coupling resistance values near the stimulus site and subsequently can be reduced as we move to distances larger than five resting length constants from the stimulus site. The τ_foot variability is reduced with increased coupling resistance, rendering the lower coupling resistance structures, under abnormal excitation sequences, more vulnerable to conduction block and arrhythmias. Using the theory of discontinuous propagation of the PAP in the myocardium it is demonstrated that for specific abnormal situations in the myocardium, such as infarcted tissue, one- and two-dimensional models can reliably simulate propagation characteristics and explain complex phenomena such as propagation at bifurcation sites and mechanisms of block and re-entry. In conclusion it is shown that applied mathematics and informatics can help in elucidating electrophysiologically complex mechanisms such as arrhythmias and conduction disturbances in the myocardium.
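
    Models of this kind are typically built on the cable equation (continuous, or discretized into resistively coupled cells). A standard one-dimensional form, given here for orientation only and not taken from the paper, is:

    \[
    C_m \frac{\partial V}{\partial t} = \frac{a}{2 R_i}\,\frac{\partial^2 V}{\partial x^2} - I_{\mathrm{ion}}(V),
    \]

    where V is the transmembrane potential, C_m the membrane capacitance per unit area, a the fiber radius, R_i the intracellular (coupling) resistivity and I_ion the ionic current density. Increasing R_i slows conduction and reshapes exactly the upstroke quantities V_max, dV/dt_max and τ_foot discussed above.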

  9. Clarifying the use of aggregated exposures in multilevel models: self-included vs. self-excluded measures.

    Directory of Open Access Journals (Sweden)

    Etsuji Suzuki

    Full Text Available Multilevel analyses are ideally suited to assess the effects of ecological (higher level) and individual (lower level) exposure variables simultaneously. In applying such analyses to measures of ecologies in epidemiological studies, individual variables are usually aggregated into the higher level unit. Typically, the aggregated measure includes responses of every individual belonging to that group (i.e. it constitutes a self-included measure). More recently, researchers have developed an aggregate measure which excludes the response of the individual to whom the aggregate measure is linked (i.e. a self-excluded measure). In this study, we clarify the substantive and technical properties of these two measures when they are used as exposures in multilevel models. Although the differences between the two aggregated measures are mathematically subtle, distinguishing between them is important in terms of the specific scientific questions to be addressed. We then show how these measures can be used in two distinct types of multilevel models-self-included model and self-excluded model-and interpret the parameters in each model by imposing hypothetical interventions. The concept is tested on empirical data of workplace social capital and employees' systolic blood pressure. Researchers assume group-level interventions when using a self-included model, and individual-level interventions when using a self-excluded model. Analytical re-parameterizations of these two models highlight their differences in parameter interpretation. Cluster-mean centered self-included models enable researchers to decompose the collective effect into its within- and between-group components. The benefit of cluster-mean centering procedure is further discussed in terms of hypothetical interventions. When investigating the potential roles of aggregated variables, researchers should carefully explore which type of model-self-included or self-excluded-is suitable for a given situation

  10. Clarifying the use of aggregated exposures in multilevel models: self-included vs. self-excluded measures.

    Science.gov (United States)

    Suzuki, Etsuji; Yamamoto, Eiji; Takao, Soshi; Kawachi, Ichiro; Subramanian, S V

    2012-01-01

    Multilevel analyses are ideally suited to assess the effects of ecological (higher level) and individual (lower level) exposure variables simultaneously. In applying such analyses to measures of ecologies in epidemiological studies, individual variables are usually aggregated into the higher level unit. Typically, the aggregated measure includes responses of every individual belonging to that group (i.e. it constitutes a self-included measure). More recently, researchers have developed an aggregate measure which excludes the response of the individual to whom the aggregate measure is linked (i.e. a self-excluded measure). In this study, we clarify the substantive and technical properties of these two measures when they are used as exposures in multilevel models. Although the differences between the two aggregated measures are mathematically subtle, distinguishing between them is important in terms of the specific scientific questions to be addressed. We then show how these measures can be used in two distinct types of multilevel models-self-included model and self-excluded model-and interpret the parameters in each model by imposing hypothetical interventions. The concept is tested on empirical data of workplace social capital and employees' systolic blood pressure. Researchers assume group-level interventions when using a self-included model, and individual-level interventions when using a self-excluded model. Analytical re-parameterizations of these two models highlight their differences in parameter interpretation. Cluster-mean centered self-included models enable researchers to decompose the collective effect into its within- and between-group components. The benefit of cluster-mean centering procedure is further discussed in terms of hypothetical interventions. When investigating the potential roles of aggregated variables, researchers should carefully explore which type of model-self-included or self-excluded-is suitable for a given situation, particularly

  11. Circuit modeling of the electrical impedance: II. Normal subjects and system reproducibility

    International Nuclear Information System (INIS)

    Shiffman, C A; Rutkove, S B

    2013-01-01

    Part I of this series showed that the five-element circuit model accurately mimics impedances measured using multi-frequency electrical impedance myography (MFEIM), focusing on changes brought on by disease. This paper addresses two requirements which must be met if the method is to qualify for clinical use. First, the extracted parameters must be reproducible over long time periods such as those involved in the treatment of muscular disease, and second, differences amongst normal subjects should be attributable to known differences in the properties of healthy muscle. It applies the method to five muscle groups in 62 healthy subjects, closely following the procedure used earlier for the diseased subjects. Test–retest comparisons show that parameters are reproducible at levels from 6 to 16% (depending on the parameter) over time spans of up to 267 days, levels far below the changes occurring in serious disease. Also, variations with age, gender and muscle location are found to be consistent with established expectations for healthy muscle tissue. We conclude that the combination of MFEIM measurements and five-element circuit analysis genuinely reflects properties of muscle and is reliable enough to recommend its use in following neuromuscular disease. (paper)

  12. Amygdala Hyperactivity in MAM Model of Schizophrenia is Normalized by Peripubertal Diazepam Administration.

    Science.gov (United States)

    Du, Yijuan; Grace, Anthony A

    2016-09-01

    In addition to the prefrontal cortex (PFC) and hippocampus, the amygdala may have a role in the pathophysiology of schizophrenia, given its pivotal role in emotion and extensive connectivity with the PFC and hippocampus. Moreover, abnormal activity of the amygdala may be related to the anxiety observed in schizophrenia patients and at-risk adolescents. These at-risk subjects demonstrated heightened levels of anxiety, which are correlated with the onset of psychosis later in life. Similarly, rats that received methylazoxymethanol acetate (MAM) gestationally exhibited higher levels of anxiety peripubertally. In the current study, heightened anxiety was also observed in adult MAM animals, along with higher firing rates of BLA neurons in both peripubertal and adult MAM rats. In addition, the power of BLA theta oscillations of adult MAM rats showed a larger increase in response to conditioned stimuli (CS). We showed previously that administration of the antianxiety drug diazepam during the peripubertal period prevents the hyperdopaminergic state in adult MAM rats. In this study, we found that peripubertal diazepam treatment reduced heightened anxiety, decreased BLA neuron firing rates and attenuated the CS-induced increase in BLA theta power in adult MAM rats, supporting a persistent normalization by this treatment. This study provides a link between BLA hyperactivity and anxiety in schizophrenia model rats and suggests that circumvention of stress may prevent the emergence of pathology in the adult.

  13. 3D Normal Human Neural Progenitor Tissue-Like Assemblies: A Model of Persistent VZV Infection

    Science.gov (United States)

    Goodwin, Thomas J.

    2013-01-01

    Varicella-zoster virus (VZV) is a neurotropic human alphaherpesvirus that causes varicella upon primary infection, establishes latency in multiple ganglionic neurons, and can reactivate to cause zoster. Live attenuated VZV vaccines are available; however, they can also establish latent infections and reactivate. Studies of VZV latency have been limited to the analyses of human ganglia removed at autopsy, as the virus is strictly a human pathogen. Recently, terminally differentiated human neurons have received much attention as a means to study the interaction between VZV and human neurons; however, the short life-span of these cells in culture has limited their application. Herein, we describe the construction of a model of normal human neural progenitor cells (NHNP) in tissue-like assemblies (TLAs), which can be successfully maintained for at least 180 days in three-dimensional (3D) culture, and exhibit an expression profile similar to that of human trigeminal ganglia. Infection of NHNP TLAs with cell-free VZV resulted in a persistent infection that was maintained for three months, during which the virus genome remained stable. Immediate-early, early and late VZV genes were transcribed, and low-levels of infectious VZV were recurrently detected in the culture supernatant. Our data suggest that NHNP TLAs are an effective system to investigate long-term interactions of VZV with complex assemblies of human neuronal cells.

  14. Combining prior knowledge with data-driven modeling of a batch distillation column including start-up

    NARCIS (Netherlands)

    van Lith, P.F.; van Lith, Pascal F.; Betlem, Bernardus H.L.; Roffel, B.

    2003-01-01

    This paper presents the development of a simple model which describes the product quality and production over time of an experimental batch distillation column, including start-up. The model structure is based on a simple physical framework, which is augmented with fuzzy logic. This provides a way

  15. Application of the Oral Minimal Model to Korean Subjects with Normal Glucose Tolerance and Type 2 Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Min Hyuk Lim

    2016-06-01

    Full Text Available Background: The oral minimal model is a simple, useful tool for the assessment of β-cell function and insulin sensitivity across the spectrum of glucose tolerance, including normal glucose tolerance (NGT), prediabetes, and type 2 diabetes mellitus (T2DM), in humans. Methods: Plasma glucose, insulin, and C-peptide levels were measured during a 180-minute, 75-g oral glucose tolerance test in 24 Korean subjects with NGT (n=10) and T2DM (n=14). The parameters in the computational model were estimated, and the indexes for insulin sensitivity and β-cell function were compared between the NGT and T2DM groups. Results: The insulin sensitivity index was lower in the T2DM group than the NGT group. The basal index of β-cell responsivity, basal hepatic insulin extraction ratio, and post-glucose challenge hepatic insulin extraction ratio were not different between the NGT and T2DM groups. The dynamic, static, and total β-cell responsivity indexes were significantly lower in the T2DM group than the NGT group. The dynamic, static, and total disposition indexes were also significantly lower in the T2DM group than the NGT group. Conclusion: The oral minimal model can be reproducibly applied to evaluate β-cell function and insulin sensitivity in Koreans.
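
    The oral glucose minimal model referred to here is usually written as the following pair of differential equations (standard Bergman/Cobelli formulation, quoted from the general literature rather than from this abstract):

    \[
    \frac{dG(t)}{dt} = -\left[S_G + X(t)\right] G(t) + S_G\,G_b + \frac{Ra_{\mathrm{meal}}(t)}{V},
    \qquad
    \frac{dX(t)}{dt} = -p_2\left[X(t) - S_I\left(I(t) - I_b\right)\right],
    \]

    where G and I are plasma glucose and insulin, G_b and I_b their basal values, X is remote insulin action, S_G is glucose effectiveness, S_I is the insulin sensitivity index, Ra_meal is the rate of appearance of ingested glucose and V is the glucose distribution volume; the β-cell responsivity and disposition indexes quoted above come from a companion C-peptide secretion model.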

  16. [Metacarpophalangeal prosthesis modeling. Comparison of stress factors in a normal and a prosthetic metacarpophalangeal joint using finite element analysis].

    Science.gov (United States)

    Gayet, L E; Texereau, J; Dumas, S

    2000-07-01

    Prosthetic replacement of the metacarpophalangeal joints of the long fingers is a problematic technique for the surgeon. The aim of the present study was to examine, by means of finite element analysis, the stress distribution in a normal metacarpophalangeal joint and to compare it with the findings in a similar joint fitted with a prosthesis, in order to better determine the risk of aseptic loosening and to examine possible solutions to limit these risks. Finite element modelling was carried out using Abaqus software. Various criteria were taken into account, including anatomical data, stress distribution, mechanical characteristics of the materials used, and different positions of the phalanx. A comparison of the results showed two significant stress distribution factors, i.e., a reduction of normal stress in the cortical bone of the finger fitted with a prosthesis, and the appearance of a flexion moment which completely modified the stress distribution throughout the metacarpal and therefore also in the opposite phalanx. To reduce the risk of aseptic loosening, two solutions were proposed: a) to reduce the Young's modulus. The problem which arises, as in the case of total hip prostheses, is that of finding a material with a Young's modulus closer to that of cortical bone, and which at the same time has a high elastic limit and breakage point and good biocompatibility; b) to reduce the inertia of the prosthesis, which seems the more likely of the two propositions, as it is based on the results of the modelling. The effect of the prosthesis inertia on stress distribution can be reduced by modifying two parameters, namely by producing a hollow section and shortening the structure of the prosthesis.

  17. Standardized echocardiographic assessment of cardiac function in normal adult zebrafish and heart disease models

    Directory of Open Access Journals (Sweden)

    Louis W. Wang

    2017-01-01

    Full Text Available The zebrafish (Danio rerio) is an increasingly popular model organism in cardiovascular research. Major insights into cardiac developmental processes have been gained by studies of embryonic zebrafish. However, the utility of zebrafish for modeling adult-onset heart disease has been limited by a lack of robust methods for in vivo evaluation of cardiac function. We established a physiological protocol for underwater zebrafish echocardiography using high frequency ultrasound, and evaluated its reliability in detecting altered cardiac function in two disease models. Serial assessment of cardiac function was performed in wild-type zebrafish aged 3 to 12 months and the effects of anesthetic agents, age, sex and background strain were evaluated. There was a varying extent of bradycardia and ventricular contractile impairment with different anesthetic drugs and doses, with tricaine 0.75 mmol l−1 having a relatively more favorable profile. When compared with males, female fish were larger and had more measurement variability. Although age-related increments in ventricular chamber size were greater in females than males, there were no sex differences when data were normalized to body size. Systolic ventricular function was similar in both sexes at all time points, but differences in diastolic function were evident from 6 months onwards. Wild-type fish of both sexes showed a reliance on atrial contraction for ventricular diastolic filling. Echocardiographic evaluation of adult zebrafish with diphtheria toxin-induced myocarditis or anemia-induced volume overload accurately identified ventricular dilation and altered contraction, with suites of B-mode, ventricular strain, pulsed-wave Doppler and tissue Doppler indices showing concordant changes indicative of myocardial hypocontractility or hypercontractility, respectively. Repeatability, intra-observer and inter-observer correlations for echocardiographic measurements were high. We demonstrate that

  18. Modeling the Cell Muscle Membrane from Normal and Desmin- or Dystrophin-null Mice as an Elastic System

    Science.gov (United States)

    García-Pelagio, Karla P.; Santamaría-Holek, Ivan; Bloch, Robert J.; Ortega, Alicia; González-Serratos, Hugo

    2010-12-01

    Two of the most important proteins linking the contractile apparatus and costameres at the sarcolemma of skeletal muscle fibers are dystrophin and desmin. We have developed an elastic model of the proteins that link the sarcolemma to the myofibrils. This is a distributed model, with an elastic constant, k, that includes the main protein components of the costameres. The distributed spring model is composed of parallel units attached in series. To test the model, we performed experiments in which we applied negative pressure, generated by an elastimeter, to a small area of the sarcolemma of a single myofiber. The negative pressure formed a bleb of variable height, dependent on the pressure applied. We normalized our measurements of k in dystrophin-null (mdx) and desmin-null (des-/-) mice to the value we obtained for wild type (WT) mice, which was set at 1.0. The relative experimental values for the stiffness of myofibers from mice lacking dystrophin or desmin were 0.5 and 0.7, respectively. The theoretical k values of the individual elements were obtained using neural networks (NN), in which the input was the k value for each parallel spring component and the output was the solution of each resulting parallel system. We compare the experimental values of k in control and mutant muscles to the theoretical values obtained by NN for each protein. Computed theoretical values were 0.4 and 0.8 for dystrophin- and desmin-null muscles, respectively, and 0.9 for WT, in reasonable agreement with our experimental results. This suggests that, although it is a simplified spring model solved by NN, it provides a good approximation of the distribution of spring elements and the elastic constants of the proteins that form the costameres. Our results show that dystrophin is the protein that contributes more than any other to the strength of the connections between the sarcolemma and the contractile apparatus, the costameres.
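
    The "parallel units attached in series" wording corresponds to the usual rules for combining spring constants, stated below as standard mechanics rather than as a result of the paper:

    \[
    k_{\mathrm{unit}} = \sum_{i} k_i,
    \qquad
    \frac{1}{k_{\mathrm{chain}}} = \sum_{j} \frac{1}{k_{\mathrm{unit}}^{(j)}},
    \]

    so each parallel unit sums the constants of its constituent proteins, the series chain combines the units reciprocally, and removing one protein from every unit (as in dystrophin- or desmin-null fibers) lowers each k_unit and hence the overall stiffness.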

  19. Can a one-layer optical skin model including melanin and inhomogeneously distributed blood explain spatially resolved diffuse reflectance spectra?

    Science.gov (United States)

    Karlsson, Hanna; Pettersson, Anders; Larsson, Marcus; Strömberg, Tomas

    2011-02-01

    Model-based analysis of calibrated diffuse reflectance spectroscopy can be used for determining oxygenation and concentration of skin chromophores. This study aimed at assessing the effect of including melanin in addition to hemoglobin (Hb) as chromophores, and of compensating for inhomogeneously distributed blood (vessel packaging), in a single-layer skin model. Spectra from four humans were collected during different provocations using a two-channel fiber optic probe with source-detector separations of 0.4 and 1.2 mm. Absolute calibrated spectra using data from either a single distance or both distances were analyzed using inverse Monte Carlo for light transport and Levenberg-Marquardt for non-linear fitting. The model fitting was excellent using a single distance. However, the estimated model failed to explain spectra from the other distance. The two-distance model did not fit the data well at either distance. Model fitting was significantly improved by including melanin and vessel packaging. The most prominent effect when fitting data from the larger separation compared to the smaller separation was a different light scattering decay with wavelength, while the tissue fraction of Hb and saturation were similar. For modeling spectra at both distances, we propose using either a multi-layer skin model or a more advanced model for the scattering phase function.
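
    The "vessel packaging" compensation mentioned here is commonly implemented by scaling the blood absorption with a vessel-size-dependent factor; one widely used form, assumed here rather than taken from this study, is:

    \[
    C_{\mathrm{pack}}(\lambda) = \frac{1 - e^{-2\,\mu_{a,\mathrm{blood}}(\lambda)\,R}}{2\,\mu_{a,\mathrm{blood}}(\lambda)\,R},
    \qquad
    \mu_{a,\mathrm{skin}}(\lambda) \approx C_{\mathrm{pack}}\, f_{\mathrm{blood}}\,\mu_{a,\mathrm{blood}} + f_{\mathrm{mel}}\,\mu_{a,\mathrm{mel}} + \ldots,
    \]

    where R is the mean vessel radius and f_blood, f_mel are the tissue fractions of blood and melanin. C_pack approaches 1 for vanishing vessel radius and falls below 1 when blood is confined to discrete vessels, which flattens the hemoglobin absorption bands in the modeled spectra.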

  20. Development of a CFD Model Including Tree's Drag Parameterizations: Application to Pedestrian's Wind Comfort in an Urban Area

    Science.gov (United States)

    Kang, G.; Kim, J.

    2017-12-01

    This study investigated the effect of trees on wind comfort at pedestrian height in an urban area using a computational fluid dynamics (CFD) model. We implemented a tree drag parameterization scheme in the CFD model and validated the simulated results against wind-tunnel measurement data as well as LES data via several statistical methods. The CFD model underestimated (overestimated) the concentrations on the leeward (windward) walls inside the street canyon in the presence of trees, because the CFD model can't resolve the latticed cage and can't reflect the concentration increase and decrease caused by the latticed cage in the simulations. However, the scalar pollutants' dispersion simulated by the CFD model was quite similar to that in the wind-tunnel measurement in pattern and magnitude, on the whole. The CFD model overall satisfied the statistical validation indices (root normalized mean square error, geometric mean variance, correlation coefficient, and FAC2) but failed to satisfy the fractional bias and geometric mean bias due to the underestimation on the leeward wall and overestimation on the windward wall, showing that its performance was comparable to the LES's performance. We applied the CFD model to evaluate the effect of trees on pedestrian wind comfort in an urban area. To investigate sensory levels for human activities, wind-comfort criteria based on Beaufort wind-force scales (BWSs) were used. In the tree-free scenario, BWS 4 and 5 (unpleasant conditions for sitting long and sitting short, respectively) appeared in the narrow spaces between buildings, on the upwind side of buildings, and in the unobstructed areas. In the tree scenario, BWSs decreased by 1-3 grades inside the campus of Pukyong National University located in the target area, which indicates that the trees planted on the campus effectively improve pedestrian wind comfort.
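
    Tree drag parameterizations of this type typically add a quadratic sink to the momentum equations (with matching production/dissipation terms in the turbulence closure); a generic form of the momentum sink, given here as an illustration rather than the exact scheme implemented in this study, is:

    \[
    S_{u_i} = -\,C_d\; a(z)\; \lvert \mathbf{U} \rvert\; u_i,
    \]

    where C_d is the sectional drag coefficient of the foliage, a(z) the leaf area density (m^2 m^-3) at height z, |U| the local wind speed and u_i the velocity component. The reduced wind speeds downstream of the canopy are what lower the Beaufort class experienced at pedestrian height.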

  1. Towards an Explanation of Overeating Patterns Among Normal Weight College Women: Development and Validation of a Structural Equation Model

    OpenAIRE

    Russ, Christine Runyan II

    1998-01-01

    Although research describing relationships between psychosocial factors and various eating patterns is growing, a model which explains the mechanisms through which these factors may operate is lacking. A model to explain overeating patterns among normal weight college females was developed and tested. The model contained the following variables: global adjustment, eating and weight cognitions, emotional eating, and self-efficacy. Three hundred ninety-o...

  2. Estimating net ecosystem exchange of carbon using the normalized difference vegetation index and an ecosystem model

    International Nuclear Information System (INIS)

    Veroustraete, F.; Patyn, J.; Myneni, R.B.

    1996-01-01

    The evaluation and prediction of changes in carbon dynamics at the ecosystem level is a key issue in studies of global change. An operational concept for the determination of carbon fluxes for the Belgian territory is the goal of the presented study. The approach is based on the integration of remotely sensed data into ecosystem models in order to evaluate photosynthetic assimilation and net ecosystem exchange (NEE). Remote sensing can be developed as an operational tool to determine the fraction of absorbed photosynthetically active radiation (fPAR). A review of the methodological approach of mapping fPAR dynamics at the regional scale by means of NOAA-11 AVHRR/2 data for the year 1990 is given. The processing sequence from raw radiance values to fPAR is presented. An interesting aspect of incorporating remote sensing derived fPAR in ecosystem models is the potential for modeling actual as opposed to potential vegetation. Further work should prove whether the concepts presented and the assumptions made in this study are valid. Complex ecosystem models with a highly predictive value for a specific ecosystem are generally not suitable for global or regional applications, since they require a substantial set of ancillary data that becomes increasingly larger with increasing complexity of the model. The ideal model for our purpose is one that is simple enough to be used in global scale modeling, and which can be adapted for different ecosystems or vegetation types. The fraction of absorbed photosynthetically active radiation (fPAR) during the growing season determines in part net photosynthesis and phytomass production (Ruimy, 1995). Remotely measured red and near-infrared spectral reflectances can be used to estimate fPAR. Therefore, a possible approach is to estimate net photosynthesis, phytomass, and NEE from a combination of satellite data and an ecosystem model that includes carbon dynamics. It has to be stated that some parts of the work presented in this
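
    A minimal sketch of the red/near-infrared step mentioned above: computing NDVI from the two reflectance bands and mapping it to fPAR with a simple linear relation. The linear coefficients are assumptions for illustration; the study's actual AVHRR-based fPAR retrieval (calibration, atmospheric correction, compositing) is not reproduced here.

```python
# Illustrative sketch: NDVI from red/NIR reflectances, then a linear
# (assumed) NDVI-to-fPAR scaling clipped to [0, 1].
import numpy as np

def ndvi(red, nir):
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red)

def fpar_from_ndvi(ndvi_values, ndvi_min=0.05, ndvi_max=0.90):
    """Linear scaling of NDVI to fPAR (placeholder coefficients)."""
    f = (np.asarray(ndvi_values, float) - ndvi_min) / (ndvi_max - ndvi_min)
    return np.clip(f, 0.0, 1.0)

red = np.array([0.08, 0.05, 0.12])   # example red-band reflectances
nir = np.array([0.45, 0.50, 0.30])   # example near-infrared reflectances
print(fpar_from_ndvi(ndvi(red, nir)))
```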

  3. A depth-averaged debris-flow model that includes the effects of evolving dilatancy. I. physical basis

    Science.gov (United States)

    Iverson, Richard M.; George, David L.

    2014-01-01

    To simulate debris-flow behaviour from initiation to deposition, we derive a depth-averaged, two-phase model that combines concepts of critical-state soil mechanics, grain-flow mechanics and fluid mechanics. The model's balance equations describe coupled evolution of the solid volume fraction, m, basal pore-fluid pressure, flow thickness and two components of flow velocity. Basal friction is evaluated using a generalized Coulomb rule, and fluid motion is evaluated in a frame of reference that translates with the velocity of the granular phase, vs. Source terms in each of the depth-averaged balance equations account for the influence of the granular dilation rate, defined as the depth integral of ∇⋅vs. Calculation of the dilation rate involves the effects of an elastic compressibility and an inelastic dilatancy angle proportional to m−meq, where meq is the value of m in equilibrium with the ambient stress state and flow rate. Normalization of the model equations shows that predicted debris-flow behaviour depends principally on the initial value of m−meq and on the ratio of two fundamental timescales. One of these timescales governs downslope debris-flow motion, and the other governs pore-pressure relaxation that modifies Coulomb friction and regulates evolution of m. A companion paper presents a suite of model predictions and tests.

  4. Nonlinear flow model of multiple fractured horizontal wells with stimulated reservoir volume including the quadratic gradient term

    Science.gov (United States)

    Ren, Junjie; Guo, Ping

    2017-11-01

    The real fluid flow in porous media is consistent with the mass conservation which can be described by the nonlinear governing equation including the quadratic gradient term (QGT). However, most of the flow models have been established by ignoring the QGT and little work has been conducted to incorporate the QGT into the flow model of the multiple fractured horizontal (MFH) well with stimulated reservoir volume (SRV). This paper first establishes a semi-analytical model of an MFH well with SRV including the QGT. Introducing the transformed pressure and flow-rate function, the nonlinear model of a point source in a composite system including the QGT is linearized. Then the Laplace transform, principle of superposition, numerical discrete method, Gaussian elimination method and Stehfest numerical inversion are employed to establish and solve the seepage model of the MFH well with SRV. Type curves are plotted and the effects of relevant parameters are analyzed. It is found that the nonlinear effect caused by the QGT can increase the flow capacity of fluid flow and influence the transient pressure positively. The relevant parameters not only have an effect on the type curve but also affect the error in the pressure calculated by the conventional linear model. The proposed model, which is consistent with the mass conservation, reflects the nonlinear process of the real fluid flow, and thus it can be used to obtain more accurate transient pressure of an MFH well with SRV.
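
    One ingredient of the solution procedure listed above is Stehfest numerical inversion of the Laplace-space solution. The sketch below shows the standard Stehfest algorithm, checked against F(s) = 1/(s + 1), whose inverse transform is exp(-t); the reservoir-flow solution itself is not reproduced.

```python
# Minimal sketch of Stehfest numerical Laplace inversion.
import math

def stehfest_coefficients(n):
    """Stehfest weights V_i for an even number of terms n."""
    assert n % 2 == 0
    half = n // 2
    v = []
    for i in range(1, n + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            s += (k ** half * math.factorial(2 * k)
                  / (math.factorial(half - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        v.append((-1) ** (half + i) * s)
    return v

def stehfest_invert(laplace_fn, t, n=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2_t = math.log(2.0) / t
    v = stehfest_coefficients(n)
    return ln2_t * sum(v[i - 1] * laplace_fn(i * ln2_t) for i in range(1, n + 1))

# Check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
```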

  5. Currents, HF Radio-derived, SF Bay Outlet, Normal Model, Meridional, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The data is the meridional component of ocean surface currents derived from High Frequency Radio-derived measurements, with missing values filled in by a normal...

  6. Integrating normal and pathological personality: relating the DSM-5 trait-dimensional model to general traits of personality.

    Science.gov (United States)

    Watson, David; Stasik, Sara M; Ro, Eunyoe; Clark, Lee Anna

    2013-06-01

    The Personality Inventory for DSM-5 (PID-5) assesses traits relevant for diagnosing personality disorder in the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5). We examined the PID-5 in relation to the Big-Three and Big-Five personality traits in outpatient and community adult samples. Domain-level analyses revealed that PID-5 Negative Affectivity correlated strongly with Neuroticism, and PID-5 Antagonism and Disinhibition correlated strongly negatively with Agreeableness and Conscientiousness, respectively; Antagonism and Disinhibition also were both linked strongly to Big-Three trait Disinhibition. PID-5 Detachment related strongly to personality, including Extraversion/Positive Temperament, but did not show its expected specificity to this factor. Finally, PID-5 Psychoticism correlated only modestly with Openness. Facet-level analyses indicated that some PID-5 scales demonstrated replicable deviations from their DSM-5 model placements. We discuss implications of these data for the DSM-5 model of personality disorder, and for integrating it with well-established structures of normal personality.

  7. Global stability for infectious disease models that include immigration of infected individuals and delay in the incidence

    Directory of Open Access Journals (Sweden)

    Chelsea Uggenti

    2018-03-01

    Full Text Available We begin with a detailed study of a delayed SI model of disease transmission with immigration into both classes. The incidence function allows for a nonlinear dependence on the infected population, including mass action and saturating incidence as special cases. Due to the immigration of infectives, there is no disease-free equilibrium and hence no basic reproduction number. We show there is a unique endemic equilibrium and that this equilibrium is globally asymptotically stable for all parameter values. The results include vector-style delay and latency-style delay. Next, we show that previous global stability results for an SEI model and an SVI model that include immigration of infectives and non-linear incidence but not delay can be extended to systems with vector-style delay and latency-style delay.
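
    As a rough illustration of the model family analysed above, the sketch below integrates a non-delayed SI system with mass-action incidence and constant immigration into both classes. The delay terms and the general incidence function of the paper are omitted, and the parameter values are illustrative only.

```python
# Minimal sketch of an SI model with immigration into both classes
# (no delay); trajectories approach the unique endemic equilibrium.
import numpy as np
from scipy.integrate import solve_ivp

beta = 0.001              # transmission coefficient (mass-action incidence)
mu = 0.02                 # per-capita removal rate
lam_S, lam_I = 5.0, 0.5   # immigration rates into S and I

def rhs(t, y):
    S, I = y
    dS = lam_S - beta * S * I - mu * S
    dI = lam_I + beta * S * I - mu * I
    return [dS, dI]

sol = solve_ivp(rhs, (0.0, 500.0), [200.0, 1.0])
S_end, I_end = sol.y[:, -1]
print(f"approach to the endemic equilibrium: S ≈ {S_end:.1f}, I ≈ {I_end:.1f}")
```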

  8. The effect of inertia, viscous damping, temperature and normal stress on chaotic behaviour of the rate and state friction model

    Science.gov (United States)

    Sinha, Nitish; Singh, Arun K.; Singh, Trilok N.

    2018-04-01

    A fundamental understanding of frictional sliding at rock surfaces is of practical importance for the nucleation and propagation of earthquakes and for rock slope stability. We investigate numerically the effect of different physical parameters such as inertia, viscous damping, temperature and normal stress on the chaotic behaviour of the two-state-variable rate and state friction (2sRSF) model. In general, a slight variation in any of inertia, viscous damping, temperature and effective normal stress reduces the chaotic behaviour of the sliding system. However, the present study has shown the appearance of chaos for specific values of normal stress before it disappears again as the normal stress varies further. It is also observed that the magnitude of system stiffness at which chaotic motion occurs is less than the corresponding value of critical stiffness determined by linear stability analysis. These results help explain the practical observation, reported in the literature, that chaotic nucleation of an earthquake is a rare phenomenon.
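
    For orientation, the sketch below integrates a one-state-variable, quasi-static rate-and-state spring-slider with the Dieterich aging law. The study's 2sRSF model has a second state variable plus inertia and damping, which this simplified version cannot reproduce (in particular it cannot be chaotic); all parameter values are illustrative.

```python
# One-state-variable, quasi-static rate-and-state spring-slider sketch
# (Dieterich aging law); a perturbed start relaxes back to steady sliding.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 0.010, 0.015          # rate and state friction parameters
Dc = 1e-5                    # characteristic slip distance (m)
sigma = 1e7                  # effective normal stress (Pa)
mu0, v_ref = 0.6, 1e-6       # reference friction and slip rate
v_lp = 1e-6                  # load-point velocity (m/s)
k = 1e10                     # spring stiffness (Pa/m), above critical sigma*(b-a)/Dc

def slip_rate(tau, theta):
    return v_ref * np.exp((tau / sigma - mu0 - b * np.log(v_ref * theta / Dc)) / a)

def rhs(t, y):
    tau, theta = y
    v = slip_rate(tau, theta)
    dtau = k * (v_lp - v)                 # quasi-static elastic loading
    dtheta = 1.0 - v * theta / Dc         # aging (Dieterich) state evolution
    return [dtau, dtheta]

# Start at steady state, perturbed so that the initial slip rate is 2*v_lp
y0 = [sigma * (mu0 + a * np.log(2.0)), Dc / v_lp]
sol = solve_ivp(rhs, (0.0, 200.0), y0, method="LSODA", max_step=0.5)
print("final slip rate (m/s):", slip_rate(sol.y[0, -1], sol.y[1, -1]))
```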

  9. Modeling the Nonlinear, Strain Rate Dependent Deformation of Woven Ceramic Matrix Composites With Hydrostatic Stress Effects Included

    Science.gov (United States)

    Goldberg, Robert K.; Carney, Kelly S.

    2004-01-01

    An analysis method based on a deformation (as opposed to damage) approach has been developed to model the strain rate dependent, nonlinear deformation of woven ceramic matrix composites with a plain weave fiber architecture. In the developed model, the differences in the tension and compression response have also been considered. State variable based viscoplastic equations originally developed for metals have been modified to analyze the ceramic matrix composites. To account for the tension/compression asymmetry in the material, the effective stress and effective inelastic strain definitions have been modified. The equations have also been modified to account for the fact that in an orthotropic composite the in-plane shear stiffness is independent of the stiffness in the normal directions. The developed equations have been implemented into a commercially available transient dynamic finite element code, LS-DYNA, through the use of user defined subroutines (UMATs). The tensile, compressive, and shear deformation of a representative plain weave woven ceramic matrix composite are computed and compared to experimental results. The computed values correlate well to the experimental data, demonstrating the ability of the model to accurately compute the deformation response of woven ceramic matrix composites.

  10. Modeling the Nonlinear, Strain Rate Dependent Deformation of Shuttle Leading Edge Materials with Hydrostatic Stress Effects Included

    Science.gov (United States)

    Goldberg, Robert K.; Carney, Kelly S.

    2004-01-01

    An analysis method based on a deformation (as opposed to damage) approach has been developed to model the strain rate dependent, nonlinear deformation of woven ceramic matrix composites, such as the Reinforced Carbon Carbon (RCC) material used on the leading edges of the Space Shuttle. In the developed model, the differences in the tension and compression deformation behaviors have also been accounted for. State variable viscoplastic equations originally developed for metals have been modified to analyze the ceramic matrix composites. To account for the tension/compression asymmetry in the material, the effective stress and effective inelastic strain definitions have been modified. The equations have also been modified to account for the fact that in an orthotropic composite the in-plane shear response is independent of the stiffness in the normal directions. The developed equations have been implemented into LS-DYNA through the use of user defined subroutines (UMATs). Several sample qualitative calculations have been conducted, which demonstrate the ability of the model to qualitatively capture the features of the deformation response present in woven ceramic matrix composites.

  11. Surface potential based modeling of charge, current, and capacitances in DGTFET including mobile channel charge and ambipolar behaviour

    Science.gov (United States)

    Jain, Prateek; Yadav, Chandan; Agarwal, Amit; Chauhan, Yogesh Singh

    2017-08-01

    We present a surface potential based analytical model for double gate tunnel field effect transistor (DGTFET) for the current, terminal charges, and terminal capacitances. The model accounts for the effect of the mobile charge in the channel and captures the device physics in depletion as well as in the strong inversion regime. The narrowing of the tunnel barrier in the presence of mobile charges in the channel is incorporated via modeling of the inverse decay length, which is constant under channel depletion condition and bias dependent under inversion condition. To capture the ambipolar current behavior in the model, tunneling at the drain junction is also included. The proposed model is validated against TCAD simulation data and it shows close match with the simulation data.

  12. Model Selection and Evaluation Based on Emerging Infectious Disease Data Sets including A/H1N1 and Ebola

    Directory of Open Access Journals (Sweden)

    Wendi Liu

    2015-01-01

    Full Text Available The aim of the present study is to apply simple ODE models in the area of modeling the spread of emerging infectious diseases and show the importance of model selection in estimating parameters, the basic reproduction number, turning point, and final size. To quantify the plausibility of each model, given the data and the set of four models including the Logistic, Gompertz, Rosenzweg, and Richards models, the Bayes factors are calculated and precise estimates of the best fitted model parameters and key epidemic characteristics have been obtained. In particular, for Ebola the basic reproduction numbers are 1.3522 (95% CI 1.3506, 1.3537), 1.2101 (95% CI 1.2084, 1.2119), 3.0234 (95% CI 2.6063, 3.4881), and 1.9018 (95% CI 1.8565, 1.9478), the turning points are November 7, November 17, October 2, and November 3, 2014, and the final sizes until December 2015 are 25794 (95% CI 25630, 25958), 3916 (95% CI 3865, 3967), 9886 (95% CI 9740, 10031), and 12633 (95% CI 12515, 12750) for West Africa, Guinea, Liberia, and Sierra Leone, respectively. The main results confirm that model selection is crucial in evaluating and predicting the important quantities describing the emerging infectious diseases, and arbitrarily picking a model without any consideration of alternatives is problematic.
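
    As a small illustration of one of the candidate curves, the sketch below fits a three-parameter logistic growth model to synthetic cumulative case counts and reads off the estimated final size and turning point. The Bayes-factor comparison of the four models and the real Ebola data are not reproduced here.

```python
# Minimal sketch of fitting a logistic growth curve to cumulative cases.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t_mid):
    """Cumulative cases: final size K, growth rate r, turning point t_mid."""
    return K / (1.0 + np.exp(-r * (t - t_mid)))

t = np.arange(0, 60, 3.0)                                  # days since outbreak start
cases = logistic(t, 4000.0, 0.15, 30.0) + 30.0 * np.random.randn(t.size)  # synthetic data

popt, _ = curve_fit(logistic, t, cases, p0=[3000.0, 0.1, 25.0])
K_hat, r_hat, t_mid_hat = popt
print(f"final size ≈ {K_hat:.0f}, turning point ≈ day {t_mid_hat:.1f}")
```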

  13. A randomised double-blind placebo-controlled pilot trial of a combined extract of sage, rosemary and melissa, traditional herbal medicines, on the enhancement of memory in normal healthy subjects, including influence of age.

    Science.gov (United States)

    Perry, N S L; Menzies, R; Hodgson, F; Wedgewood, P; Howes, M-J R; Brooker, H J; Wesnes, K A; Perry, E K

    2018-01-15

    To evaluate for the first time the effects of a combination of sage, rosemary and melissa (Salvia officinalis L., Rosmarinus officinalis L. and Melissa officinalis L.; SRM), traditional European medicines, on verbal recall in normal healthy subjects. To devise a suitable study design for assessing the clinical efficacy of traditional herbal medicines for memory and brain function. Forty-four normal healthy subjects (mean age 61 ± 9.26y SD; m/f 6/38) participated in this study. A double-blind, randomised, placebo-controlled pilot study was performed with subjects randomised into an active and placebo group. The study consisted of a single 2-week term of an ethanol extract of SRM that was chemically characterised using high resolution LC-UV-MS/MS analysis. Immediate and delayed word recall were used to assess memory after taking SRM or placebo (ethanol extract of Myrrhis odorata (L.) Scop.). In addition, analysis was performed with subjects divided into younger and older subgroups (≤ 62 years mean age n = 26: SRM n = 10, Placebo n = 16; ≥ 63 years n = 19: SRM n = 13, Placebo n = 6). Overall there were no significant differences between treatment and placebo change from baseline for immediate or delayed word recall. However, subgroup analysis showed significant improvements to delayed word recall in the under 63 year age group (p memory in healthy subjects under 63 years of age. Short- and long-term supplementation with SRM extract merits more robust investigation as an adjunctive treatment for patients with Alzheimer's disease and in the general ageing population. The study design proved a simple, cost-effective trial protocol to test the efficacy of herbal medicines on verbal episodic memory, with future studies including broader cognitive assessment. Copyright © 2017 Elsevier GmbH. All rights reserved.

  14. Mathematical multi-scale model of the cardiovascular system including mitral valve dynamics. Application to ischemic mitral insufficiency

    Science.gov (United States)

    2011-01-01

    Background Valve dysfunction is a common cardiovascular pathology. Despite significant clinical research, there is little formal study of how valve dysfunction affects overall circulatory dynamics. Validated models would offer the ability to better understand these dynamics and thus optimize diagnosis, as well as surgical and other interventions. Methods A cardiovascular and circulatory system (CVS) model has already been validated in silico, and in several animal model studies. It accounts for valve dynamics using Heaviside functions to simulate a physiologically accurate "open on pressure, close on flow" law. However, it does not consider real-time valve opening dynamics and therefore does not fully capture valve dysfunction, particularly where the dysfunction involves partial closure. This research describes an updated version of this previous closed-loop CVS model that includes the progressive opening of the mitral valve, and is defined over the full cardiac cycle. Results Simulations of the cardiovascular system with a healthy mitral valve are performed, and the global hemodynamic behaviour is studied and compared with previously validated results. The error between the resulting pressure-volume (PV) loops of the already validated CVS model and the new CVS model that includes the progressive opening of the mitral valve is assessed and remains within typical measurement error and variability. Simulations of ischemic mitral insufficiency are also performed. Pressure-Volume loops, transmitral flow evolution and mitral valve aperture area evolution follow reported measurements in shape, amplitude and trends. Conclusions The resulting cardiovascular system model including mitral valve dynamics provides a foundation for clinical validation and the study of valvular dysfunction in vivo. The overall models and results could readily be generalised to other cardiac valves. PMID:21942971

  15. Mathematical multi-scale model of the cardiovascular system including mitral valve dynamics. Application to ischemic mitral insufficiency

    Directory of Open Access Journals (Sweden)

    Moonen Marie

    2011-09-01

    Full Text Available Abstract Background Valve dysfunction is a common cardiovascular pathology. Despite significant clinical research, there is little formal study of how valve dysfunction affects overall circulatory dynamics. Validated models would offer the ability to better understand these dynamics and thus optimize diagnosis, as well as surgical and other interventions. Methods A cardiovascular and circulatory system (CVS) model has already been validated in silico, and in several animal model studies. It accounts for valve dynamics using Heaviside functions to simulate a physiologically accurate "open on pressure, close on flow" law. However, it does not consider real-time valve opening dynamics and therefore does not fully capture valve dysfunction, particularly where the dysfunction involves partial closure. This research describes an updated version of this previous closed-loop CVS model that includes the progressive opening of the mitral valve, and is defined over the full cardiac cycle. Results Simulations of the cardiovascular system with a healthy mitral valve are performed, and the global hemodynamic behaviour is studied and compared with previously validated results. The error between the resulting pressure-volume (PV) loops of the already validated CVS model and the new CVS model that includes the progressive opening of the mitral valve is assessed and remains within typical measurement error and variability. Simulations of ischemic mitral insufficiency are also performed. Pressure-Volume loops, transmitral flow evolution and mitral valve aperture area evolution follow reported measurements in shape, amplitude and trends. Conclusions The resulting cardiovascular system model including mitral valve dynamics provides a foundation for clinical validation and the study of valvular dysfunction in vivo. The overall models and results could readily be generalised to other cardiac valves.

  16. NASTRAN thermal analyzer: Theory and application including a guide to modeling engineering problems, volume 2. [sample problem library guide

    Science.gov (United States)

    Jackson, C. E., Jr.

    1977-01-01

    A sample problem library containing 20 problems covering most facets of Nastran Thermal Analyzer modeling is presented. Areas discussed include radiative interchange, arbitrary nonlinear loads, transient temperature and steady-state structural plots, temperature-dependent conductivities, simulated multi-layer insulation, and constraint techniques. The use of the major control options and important DMAP alters is demonstrated.

  17. Simultaneous Bayesian inference for skew-normal semiparametric nonlinear mixed-effects models with covariate measurement errors.

    Science.gov (United States)

    Huang, Yangxin; Dagne, Getachew A

    2012-01-01

    Longitudinal data arise frequently in medical studies and it is a common practice to analyze such complex data with nonlinear mixed-effects (NLME) models which enable us to account for between-subject and within-subject variations. To partially explain the variations, covariates are usually introduced to these models. Some covariates, however, may be often measured with substantial errors. It is often the case that model random error is assumed to be distributed normally, but the normality assumption may not always give robust and reliable results, particularly if the data exhibit skewness. Although there has been considerable interest in accommodating either skewness or covariate measurement error in the literature, there is relatively little work that considers both features simultaneously. In this article, our objectives are to address simultaneous impact of skewness and covariate measurement error by jointly modeling the response and covariate processes under a general framework of Bayesian semiparametric nonlinear mixed-effects models. The method is illustrated in an AIDS data example to compare potential models which have different distributional specifications. The findings from this study suggest that the models with a skew-normal distribution may provide more reasonable results if the data exhibit skewness and/or have measurement errors in covariates.
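
    For reference, the skew-normal density that serves as the random-error distribution in models of this kind has the form 2/ω · φ((x − ξ)/ω) · Φ(α(x − ξ)/ω), reducing to the normal density when the shape parameter α is zero. The sketch below evaluates it; this is only the distributional building block, not the full Bayesian semiparametric NLME model.

```python
# Minimal sketch of the skew-normal density: 2/scale * phi(z) * Phi(alpha*z).
import numpy as np
from scipy.stats import norm

def skew_normal_pdf(x, loc=0.0, scale=1.0, alpha=0.0):
    z = (np.asarray(x, float) - loc) / scale
    return 2.0 / scale * norm.pdf(z) * norm.cdf(alpha * z)

x = np.linspace(-4, 4, 9)
print(skew_normal_pdf(x, alpha=0.0))   # symmetric (normal) case
print(skew_normal_pdf(x, alpha=4.0))   # right-skewed case
```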

  18. Imaging and Pathological Features of Percutaneous Cryosurgery on Normal Lung Evaluated in a Porcine Model

    Directory of Open Access Journals (Sweden)

    Lizhi NIU

    2010-07-01

    Full Text Available Background and objective Lung cancer is one of the most commonly occurring malignancies and frequent causes of death in the world. Cryoablation is a safe, alternative treatment for unresectable lung cancer. Because the lung is a gas-containing organ, unlike solid organs such as the liver and pancreas, it is difficult to achieve a freezing range extending 1 cm beyond the tumor edge as a safety border. The aim of this study is to examine the effect of different numbers of freeze cycles on the effectiveness of cryoablation on normal lung tissue and to create an operation guideline that gives the best effect. Methods Six healthy Tibetan miniature pigs were given a CT scan and histological investigation after percutaneous cryosurgery. Cryoablation was performed as 2 cycles of 10 min of active freezing in the left lung, each freeze followed by a 5 min thaw. In the right lung, we performed the same 2 cycles of 5 min of freezing followed by 5 min of thawing; however, for the right lung, we included a third cycle consisting of 10 min of freezing followed by 5 min of thawing. Three cryoprobes were inserted into the left lung and three cryoprobes into the right lung per animal, one in the upper and two in the lower lobe, so as to be well away from each other. Comparison under the same experimental conditions was necessary. During the experiment, observations were made regarding the imaging change of the ice-ball. The lungs were removed postoperatively at 3 intervals: 4 h, 3 d and 7 d after operation, respectively, to assess microscopic and pathological changes. Results The ice-ball grew gradually with increasing time and increasing number of cycles. The size of the cryolesion (hypothesized necrotic area) in specimens became, over time, larger than the size of the ice-ball during the operation, regardless of whether 2 or 3 freeze-thaw cycles were performed. The area of necrosis gradually increased over the course of time

  19. Recent Progress on Labfit: a Multispectrum Analysis Program for Fitting Lineshapes Including the Htp Model and Temperature Dependence

    Science.gov (United States)

    Cich, Matthew J.; Guillaume, Alexandre; Drouin, Brian; Benner, D. Chris

    2017-06-01

    Multispectrum analysis can be a challenge for a variety of reasons. It can be computationally intensive to fit a proper line shape model especially for high resolution experimental data. Band-wide analyses including many transitions along with interactions, across many pressures and temperatures are essential to accurately model, for example, atmospherically relevant systems. Labfit is a fast multispectrum analysis program originally developed by D. Chris Benner with a text-based interface. More recently at JPL a graphical user interface was developed with the goal of increasing the ease of use but also the number of potential users. The HTP lineshape model has been added to Labfit keeping it up-to-date with community standards. Recent analyses using labfit will be shown to demonstrate its ability to competently handle large experimental datasets, including high order lineshape effects, that are otherwise unmanageable.

  20. Evaluation of factors in development of Vis/NIR spectroscopy models for discriminating PSE, DFD and normal broiler breast meat

    Science.gov (United States)

    1. To evaluate the performance of visible and near-infrared (Vis/NIR) spectroscopic models for discriminating true pale, soft and exudative (PSE), normal and dark, firm and dry (DFD) broiler breast meat in different conditions of preprocessing methods, spectral ranges, characteristic wavelength sele...

  1. On the Training Model of China's Local Normal University Students during the Transitional Period from the Perspective of Happiness Management

    Science.gov (United States)

    Weiwei, Huang

    2016-01-01

    As a theory based on the hypothesis of "happy man" about human nature, happiness management plays a significant guiding role in the optimization of the training model of local Chinese normal university students during the transitional period. Under the guidance of this theory, China should adhere to the people-oriented principle,…

  2. External validation of a normal tissue complication probability model for radiation-induced hypothyroidism in an independent cohort

    DEFF Research Database (Denmark)

    Rønjom, Marianne F; Brink, Carsten; Bentzen, Søren M

    2015-01-01

    BACKGROUND: A normal tissue complication probability (NTCP) model for radiation-induced hypothyroidism (RIHT) was previously derived in patients with squamous cell carcinoma of the head and neck (HNSCC) discerning thyroid volume (Vthyroid), mean thyroid dose (Dmean), and latency as predictive...

  3. Using direct normal irradiance models and utility electrical loading to assess benefit of a concentrating solar power plant

    Science.gov (United States)

    Direct normal irradiance (DNI) is required to evaluate performance of concentrating solar energy systems. The objective of this paper is to analyze the effect of time interval (e.g. year, month, hour) on the accuracy of three different DNI models. The DNI data were measured at three different labora...

  4. Application of a new combined model including radiological indicators to predict difficult airway in patients undergoing surgery for cervical spondylosis.

    Science.gov (United States)

    Xu, Mao; Li, Xiaoxi; Wang, Jun; Guo, Xiangyang

    2014-01-01

    Airway management is crucial in clinical anesthesia. Many complications associated with airway management result from unexpected difficult airway, but predicting a difficult airway is a major challenge. We investigated the efficacy of a new combined model including radiological indicators to predict difficult airway in patients undergoing surgery for cervical spondylosis, a population with a high incidence of difficult airway. We randomly enrolled 303 patients scheduled for elective surgery for cervical spondylosis at Peking University Third Hospital between August 2012 and March 2013. Preoperatively, patients were evaluated for difficult airway according to a clinical index and parameters on lateral cervical radiographs and magnetic resonance images. Difficult airway was defined as Cormack-Lehane grades III-IV. Logistic regression was used to identify a combined (clinical and radiological) model for difficult airway. A receiver operating characteristic (ROC) curve was used to describe the effectiveness of prediction. We identified three clinical predictive factors using the ROC curve: mouth opening, sternomental distance, and neck mobility. We created a clinical model using three factors: gender, age, and mouth opening, with odds ratios (OR) of 0.370, 1.034, and 0.358, respectively. Using the clinical and radiological parameters, we formulated a combined model with five risk factors: gender, mouth opening, atlanto-occipital gap, the angle from the second to sixth cervical vertebrae in the neutral position, and the angle difference of d (the angle between the laryngeal axis and the epiglottic axis) from the neutral position to extension (OR: 0.107, 0.355, 0.846, 1.057, and 0.952, respectively). The sensitivity and specificity of the combined model were 80.0% and 65.7%, respectively, and the ROC curve confirmed that the combined model was better than any single clinical predictor and the clinical model. The efficacy of the combined model including both clinical and
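
    The modelling workflow described above (multivariable logistic regression summarised by the area under the ROC curve) can be sketched as below; the predictor names, coefficients and data are synthetic stand-ins for the clinical and radiological measurements, not the study's values.

```python
# Minimal sketch: logistic regression on candidate predictors, summarised
# by odds ratios and the area under the ROC curve (AUC). Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.normal(4.0, 1.0, n),    # e.g. mouth opening (cm), illustrative
    rng.normal(15.0, 2.0, n),   # e.g. sternomental distance (cm), illustrative
    rng.normal(45.0, 10.0, n),  # e.g. neck mobility (degrees), illustrative
])
# Synthetic outcome loosely tied to the predictors (difficult airway = 1)
logit = -1.0 - 0.8 * (X[:, 0] - 4.0) - 0.3 * (X[:, 1] - 15.0) + 0.02 * (45.0 - X[:, 2])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("odds ratios:", odds_ratios.round(3), " AUC:", round(auc, 3))
```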

  5. Normal social seeking behavior, hypoactivity and reduced exploratory range in a mouse model of Angelman syndrome

    Directory of Open Access Journals (Sweden)

    Reiter Lawrence T

    2011-01-01

    Full Text Available Abstract Background Angelman syndrome (AS) is a neurogenetic disorder characterized by severe developmental delay with mental retardation, a generally happy disposition, ataxia and characteristic behaviors such as inappropriate laughter, social-seeking behavior and hyperactivity. The majority of AS cases are due to loss of the maternal copy of the UBE3A gene. Maternal Ube3a deficiency (Ube3am-/p+), as well as complete loss of Ube3a expression (Ube3am-/p-), have been reproduced in the mouse model used here. Results Here we asked if two characteristic AS phenotypes - social-seeking behavior and hyperactivity - are reproduced in the Ube3a deficient mouse model of AS. We quantified social-seeking behavior as time spent in close proximity to a stranger mouse and activity as total time spent moving during exploration, movement speed and total length of the exploratory path. Mice of all three genotypes (Ube3am+/p+, Ube3am-/p+, Ube3am-/p-) were tested and found to spend the same amount of time in close proximity to the stranger, indicating that Ube3a deficiency in mice does not result in increased social seeking behavior or social dis-inhibition. Also, Ube3a deficient mice were hypoactive compared to their wild-type littermates as shown by significantly lower levels of activity, slower movement velocities, shorter exploratory paths and a reduced exploratory range. Conclusions Although hyperactivity and social-seeking behavior are characteristic phenotypes of Angelman Syndrome in humans, the Ube3a deficient mouse model does not reproduce these phenotypes in comparison to their wild-type littermates. These phenotypic differences may be explained by differences in the size of the genetic defect as ~70% of AS patients have a deletion that includes several other genes surrounding the UBE3A locus.

  6. LEMming: A Linear Error Model to Normalize Parallel Quantitative Real-Time PCR (qPCR) Data as an Alternative to Reference Gene Based Methods.

    Directory of Open Access Journals (Sweden)

    Ronny Feuer

    Full Text Available Gene expression analysis is an essential part of biological and medical investigations. Quantitative real-time PCR (qPCR) is characterized by excellent sensitivity, dynamic range and reproducibility, and is still regarded as the gold standard for quantifying transcript abundance. Parallelization of qPCR, such as by the microfluidic TaqMan Fluidigm Biomark platform, enables evaluation of multiple transcripts in samples treated under various conditions. Despite advanced technologies, correct evaluation of the measurements remains challenging. The most widely used methods for evaluating or calculating gene expression data include geNorm and ΔΔCt, respectively. They rely on one or several stable reference genes (RGs) for normalization, thus potentially causing biased results. We therefore applied multivariable regression with a tailored error model to overcome the necessity of stable RGs. We developed an RG-independent data normalization approach based on a tailored linear error model for parallel qPCR data, called LEMming. It uses the assumption that the mean Ct values within samples of similarly treated groups are equal. Performance of LEMming was evaluated in three data sets with different stability patterns of RGs and compared to the results of geNorm normalization. Data set 1 showed that both methods gave similar results if stable RGs are available. Data set 2 included RGs which are stable according to geNorm criteria, but became differentially expressed in normalized data evaluated by a t-test. geNorm-normalized data showed an effect of a shifted mean per gene per condition whereas LEMming-normalized data did not. Comparing the decrease of standard deviation from raw data to geNorm and to LEMming, the latter was superior. In data set 3, stable RGs were available according to the geNorm-calculated average expression stability and pairwise variation, but t-tests of raw data contradicted this. Normalization with RGs resulted in distorted data contradicting
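
    A much-simplified illustration of the core assumption stated above (equal mean Ct across similarly treated samples, so a per-sample offset can be removed without designating reference genes) is sketched below; it is not the published tailored linear error model, and the Ct values are invented.

```python
# Simplified sketch of reference-gene-free normalization: remove a
# per-sample offset estimated from the mean Ct over all assayed genes.
import numpy as np

# rows = genes, columns = samples (Ct values); synthetic example
ct = np.array([
    [21.0, 21.9, 20.8, 22.1],
    [25.3, 26.1, 25.1, 26.4],
    [30.2, 31.0, 29.9, 31.3],
])

sample_offset = ct.mean(axis=0) - ct.mean()   # per-sample shift relative to grand mean
ct_normalized = ct - sample_offset            # remove sample-to-sample loading/efficiency offsets
print(ct_normalized.round(2))
```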

  7. The normal anterior cruciate ligament as a model for tensioning strategies in anterior cruciate ligament grafts.

    NARCIS (Netherlands)

    Arnold, M.P.; Verdonschot, N.J.J.; Kampen, A. van

    2005-01-01

    BACKGROUND: There is some confusion about the relationship between the tension placed on the graft and the joint position used in the fixation of anterior cruciate ligament grafts. This is because of deficiency in accurate basic science about this important interaction in the normal and

  8. Recombinant human endostatin normalizes tumor vasculature and enhances radiation response in xenografted human nasopharyngeal carcinoma models.

    Directory of Open Access Journals (Sweden)

    Fang Peng

    Full Text Available BACKGROUND: Hypoxic tumor cells can reduce the efficacy of radiation. Antiangiogenic therapy may transiently "normalize" the tumor vasculature to make it more efficient for oxygen delivery. The aim of this study is to investigate whether recombinant human endostatin (endostar) can create a "vascular normalization window" to alleviate hypoxia and enhance the inhibitory effects of radiation therapy in human nasopharyngeal carcinoma (NPC) in mice. METHODOLOGY/PRINCIPAL FINDINGS: Transient changes in the morphology of tumor vasculature and the hypoxic tumor cell fraction in response to endostar were detected in mice bearing CNE-2 and 5-8F human NPC xenografts. Various treatment schedules were tested to assess the influence of endostar on the effect of radiation therapy. Several important factors relevant to angiogenesis were identified through immunohistochemical staining. During endostar treatment, tumor vascularity decreased, while the basement membrane and pericyte coverage associated with endothelial cells increased, which supported the idea of vessel normalization. The hypoxic tumor cell fraction also decreased after the treatment. The transient modulation of tumor physiology caused by endostar improved the effect of radiation treatment compared with other treatment schedules. The expressions of vascular endothelial growth factor (VEGF), matrix metalloproteinase-2 (MMP-2), MMP-9, and MMP-14 decreased, while the level of pigment epithelium-derived factor (PEDF) increased. CONCLUSIONS: Endostar normalized the tumor vasculature, which alleviated hypoxia and significantly enhanced the anti-tumor effect of radiation in human NPC. The results provide an important experimental basis for combining endostar with radiation therapy in human NPC.

  9. Anisotropy of Earth's inner core intrinsic attenuation from seismic normal mode models

    NARCIS (Netherlands)

    Mäkinen, Anna M.; Deuss, Arwen; Redfern, Simon A T

    2014-01-01

    The Earth's inner core, the slowly growing sphere of solid iron alloy at the centre of our planet, is known to exhibit seismic anisotropy. Both normal mode and body wave studies have established that, when the global average is taken, compressional waves propagate faster in the North-South direction

  10. The normal anterior cruciate ligament as a model for tensioning strategies in anterior cruciate ligament grafts

    NARCIS (Netherlands)

    Arnold, MP; Verdonschot, N; van Kampen, A

    Background: There is some confusion about the relationship between the tension placed on the graft and the joint position used in the fixation of anterior cruciate ligament grafts. This is because of deficiency in accurate basic science about this important interaction in the normal and

  11. Inferred joint multigram models for medical term normalization according to ICD.

    Science.gov (United States)

    Pérez, Alicia; Atutxa, Aitziber; Casillas, Arantza; Gojenola, Koldo; Sellart, Álvaro

    2018-02-01

    Electronic Health Records (EHRs) are written using spontaneous natural language. Often, terms do not match standard terminology like the one available through the International Classification of Diseases (ICD). Information retrieval and exchange can be improved using standard terminology. Our aim is to render diagnostic terms written in spontaneous language in EHRs into the standard framework provided by the ICD. We tackle diagnostic term normalization employing Weighted Finite-State Transducers (WFSTs). These machines learn how to translate sequences, in the case of our concern, spontaneous representations into standard representations given a set of samples. They are highly flexible and easily adaptable to terminological singularities of each different hospital and practitioner. Besides, we implemented a similarity metric to enhance spontaneous-standard term matching. From the 2850 spontaneous DTs randomly selected we found that only 7.71% were written in their standard form matching the ICD. This WFST-based system enabled matching spontaneous ICDs with a Mean Reciprocal Rank of 0.68, which means that, on average, the right ICD code is found between the first and second position among the normalized set of candidates. This guarantees efficient document exchange and, furthermore, information retrieval. Medical term normalization was achieved with high performance. We found that direct matching of spontaneous terms using standard lexicons leads to unsatisfactory results while normalized hypothesis generation by means of WFST helped to overcome the gap between spontaneous and standard language. Copyright © 2017 Elsevier B.V. All rights reserved.
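
    For reference, the Mean Reciprocal Rank reported above can be computed as sketched below: each spontaneous term contributes the reciprocal of the rank at which its correct ICD code appears in the system's ordered candidate list (or 0 if it is absent). The candidate lists are invented examples, not system output.

```python
# Minimal sketch of the Mean Reciprocal Rank (MRR) metric.
def mean_reciprocal_rank(gold_codes, ranked_candidates):
    total = 0.0
    for gold, candidates in zip(gold_codes, ranked_candidates):
        if gold in candidates:
            total += 1.0 / (candidates.index(gold) + 1)   # 1/rank of the correct code
    return total / len(gold_codes)

gold = ["I10", "E11.9", "J45"]
candidates = [
    ["I10", "I15.0"],          # correct code ranked first  -> 1
    ["E10.9", "E11.9", "E13"], # correct code ranked second -> 1/2
    ["J44.9", "J40"],          # correct code missing       -> 0
]
print(mean_reciprocal_rank(gold, candidates))   # (1 + 0.5 + 0) / 3 = 0.5
```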

  12. A coupled oscillator model describes normal and strange zooplankton swimming behaviour

    NARCIS (Netherlands)

    Ringelberg, J.; Lingeman, R.

    2003-01-01

    "Normal" swimming in marine and freshwater zooplankton is often intermittent with active upward and more passive downward displacements. In the freshwater cladoceran Daphnia, the pattern is sometimes regular enough to demonstrate the presence of a rhythm. Abnormal swimming patterns were also

  13. Tracer experiment data sets for the verification of local and meso-scale atmospheric dispersion models including topographic effects

    International Nuclear Information System (INIS)

    Sartori, E.; Schuler, W.

    1992-01-01

    Software and data for nuclear energy applications are acquired, tested and distributed by several information centres; in particular, relevant computer codes are distributed internationally by the OECD/NEA Data Bank (France) and by ESTSC and EPIC/RSIC (United States). This activity is coordinated among the centres and is extended outside the OECD area through an arrangement with the IAEA. This article proposes more specifically a scheme for acquiring, storing and distributing atmospheric tracer experiment data (ATE) required for verification of atmospheric dispersion models especially the most advanced ones including topographic effects and specific to the local and meso-scale. These well documented data sets will form a valuable complement to the set of atmospheric dispersion computer codes distributed internationally. Modellers will be able to gain confidence in the predictive power of their models or to verify their modelling skills. (au)

  14. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam

    International Nuclear Information System (INIS)

    Marsolat, F; De Marzi, L; Mazal, A; Pouzoulet, F

    2016-01-01

    In proton therapy, the relative biological effectiveness (RBE) depends on various parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens' model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens' model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined as the ratio of the LET_d distributions of all protons and deuterons to that of primary protons only. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated in a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens' model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis less than 0.05 keV μm⁻¹. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis. (paper)
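
    As a rough sketch of how such a correction factor could be formed from Monte Carlo scored quantities, the code below computes the dose-averaged LET over all protons and deuterons and over primary protons only, and takes their ratio as L_sec. The numbers are invented placeholders, not GATE/GEANT4 output.

```python
# Minimal sketch: dose-averaged LET (LET_d) and the ratio L_sec in one voxel.
import numpy as np

def dose_averaged_let(dose, let):
    """LET_d = sum(d_i * LET_i) / sum(d_i) over the scored contributions."""
    dose, let = np.asarray(dose, float), np.asarray(let, float)
    return np.sum(dose * let) / np.sum(dose)

# Scored per-track dose deposits (Gy) and track LET (keV/um); placeholder values
primary_dose, primary_let = [0.8, 0.7, 0.9], [1.0, 1.1, 0.9]
secondary_dose, secondary_let = [0.1, 0.05], [4.0, 6.5]      # secondary p and d

let_d_primary = dose_averaged_let(primary_dose, primary_let)
let_d_all = dose_averaged_let(primary_dose + secondary_dose, primary_let + secondary_let)
l_sec = let_d_all / let_d_primary
print(f"LET_d (primaries) = {let_d_primary:.2f} keV/um, L_sec = {l_sec:.2f}")
```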

  15. Finite Element Modeling and Analysis of Nonlinear Impact and Frictional Motion Responses Including Fluid—Structure Coupling Effects

    Directory of Open Access Journals (Sweden)

    Yong Zhao

    1997-01-01

    Full Text Available A nonlinear three dimensional (3D) single rack model and a nonlinear 3D whole pool multi-rack model are developed for the spent fuel storage racks of a nuclear power plant (NPP) to determine impacts and frictional motion responses when subjected to 3D excitations from the supporting building floor. The submerged free standing rack system and surrounding water are coupled due to hydrodynamic fluid-structure interaction (FSI) using potential theory. The models developed have features that allow consideration of geometric and material nonlinearities including (1) the impacts of fuel assemblies to rack cells, a rack to adjacent racks or pool walls, and rack support legs to the pool floor; (2) the hydrodynamic coupling of fuel assemblies with their storing racks, and of a rack with adjacent racks, pool walls, and the pool floor; and (3) the dynamic motion behavior of rocking, twisting, and frictional sliding of rack modules. Using these models 3D nonlinear time history dynamic analyses are performed per the U.S. Nuclear Regulatory Commission (USNRC) criteria. Since few such modeling, analyses, and results using both the 3D single and whole pool multiple rack models are available in the literature, this paper emphasizes description of modeling and analysis techniques using the SOLVIA general purpose nonlinear finite element code. Typical response results with different Coulomb friction coefficients are presented and discussed.

  16. Extensions of the Rosner-Colditz breast cancer prediction model to include older women and type-specific predicted risk.

    Science.gov (United States)

    Glynn, Robert J; Colditz, Graham A; Tamimi, Rulla M; Chen, Wendy Y; Hankinson, Susan E; Willett, Walter W; Rosner, Bernard

    2017-08-01

    A breast cancer risk prediction rule previously developed by Rosner and Colditz has reasonable predictive ability. We developed a re-fitted version of this model, based on more than twice as many cases now including women up to age 85, and further extended it to a model that distinguished risk factor prediction of tumors with different estrogen/progesterone receptor status. We compared the calibration and discriminatory ability of the original, the re-fitted, and the type-specific models. Evaluation used data from the Nurses' Health Study during the period 1980-2008, when 4384 incident invasive breast cancers occurred over 1.5 million person-years. Model development used two-thirds of study subjects and validation used one-third. Predicted risks in the validation sample from the original and re-fitted models were highly correlated (ρ = 0.93), but several parameters, notably those related to use of menopausal hormone therapy and age, had different estimates. The re-fitted model was well-calibrated and had an overall C-statistic of 0.65. The extended, type-specific model identified several risk factors with varying associations with occurrence of tumors of different receptor status. However, this extended model relative to the prediction of any breast cancer did not meaningfully reclassify women who developed breast cancer to higher risk categories, nor women remaining cancer free to lower risk categories. The re-fitted Rosner-Colditz model has applicability to risk prediction in women up to age 85, and its discrimination is not improved by consideration of varying associations across tumor subtypes.

  17. A probit/log-skew-normal mixture model for repeated measures data with excess zeros, with application to a cohort study of paediatric respiratory symptoms

    Directory of Open Access Journals (Sweden)

    Johnston Neil W

    2010-06-01

    Full Text Available Abstract Background A zero-inflated continuous outcome is characterized by occurrence of "excess" zeros that more than a single distribution can explain, with the positive observations forming a skewed distribution. Mixture models are employed for regression analysis of zero-inflated data. Moreover, for repeated measures zero-inflated data the clustering structure should also be modeled for an adequate analysis. Methods The Diary of Asthma and Viral Infections Study (DAVIS) was a one-year (2004) cohort study conducted at McMaster University to monitor viral infection and respiratory symptoms in children aged 5-11 years with and without asthma. Respiratory symptoms were recorded daily using either an Internet or paper-based diary. Changes in symptoms were assessed by study staff and led to collection of nasal fluid specimens for virological testing. The study objectives included investigating the response of respiratory symptoms to respiratory viral infection in children with and without asthma over a one year period. Due to sparse data, daily respiratory symptom scores were aggregated into weekly average scores. More than 70% of the weekly average scores were zero, with the positive scores forming a skewed distribution. We propose a random effects probit/log-skew-normal mixture model to analyze the DAVIS data. The model parameters were estimated using a maximum marginal likelihood approach. A simulation study was conducted to assess the performance of the proposed mixture model if the underlying distribution of the positive response is different from log-skew-normal. Results Viral infection status was highly significant in both the probit and log-skew-normal model components respectively. The probability of being symptom free was much lower for the week a child was viral positive relative to the week she/he was viral negative. The severity of the symptoms was also greater for the week a child was viral positive. The probability of being symptom free was
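
    A much-simplified two-part sketch of the mixture idea described above is given below: a logistic part for the probability of a zero weekly score and a log-normal part for the positive scores. The paper's model uses a probit link, a log-skew-normal positive component and random effects for the repeated measures, none of which are reproduced here; the data are synthetic.

```python
# Simplified two-part (zero vs. positive) model for zero-inflated scores.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
n = 400
viral_positive = rng.binomial(1, 0.3, n)                    # covariate: viral infection week

# Simulate zero-inflated weekly symptom scores
p_zero = 1.0 / (1.0 + np.exp(-(1.2 - 1.5 * viral_positive)))
is_zero = rng.binomial(1, p_zero)
scores = np.where(is_zero == 1, 0.0,
                  np.exp(0.5 + 0.8 * viral_positive + 0.4 * rng.normal(size=n)))

X = viral_positive.reshape(-1, 1)
zero_part = LogisticRegression().fit(X, (scores == 0).astype(int))   # P(zero score)
pos = scores > 0
positive_part = LinearRegression().fit(X[pos], np.log(scores[pos]))  # log-normal positive part

print("P(zero | viral+):", zero_part.predict_proba([[1]])[0, 1].round(2))
print("log-scale effect of viral infection on positive scores:",
      positive_part.coef_[0].round(2))
```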

  18. Geochemical Model And Normalization For Heavy Metals In The Bottom Sediments, Dubai, UAE.

    OpenAIRE

    El Sammak, A.A. [عمرو عبدالعزيز السماك

    1999-01-01

    Dubai creek can be considered as the focal point of Dubai. It has great importance for trading and aesthetic values. Total and leachable heavy metals (Cd, Co, Ni, Pb and Zn), organic carbon and total carbonate were studied in the bottom sediments of the creek. Pollution Load Index, statistical analysis, were used in order to quantify the pollution load as well as to discriminate the data into significant groups. Normalization of the data using organic carbon and total carbonate was done in or...

  19. The impact of Roux-en-Y gastric bypass surgery on normal metabolism in a porcine model.

    Science.gov (United States)

    Lindqvist, Andreas; Ekelund, Mikael; Garcia-Vaz, Eliana; Ståhlman, Marcus; Pierzynowski, Stefan; Gomez, Maria F; Rehfeld, Jens F; Groop, Leif; Hedenbro, Jan; Wierup, Nils; Spégel, Peter

    2017-01-01

    A growing body of literature on Roux-en-Y gastric bypass surgery (RYGB) has generated inconclusive results on the mechanism underlying the beneficial effects on weight loss and glycaemia, partially due to the problems of designing clinical studies with the appropriate controls. Moreover, RYGB is only performed in obese individuals, in whom metabolism is perturbed and not completely understood. In an attempt to isolate the effects of RYGB and its effects on normal metabolism, we investigated the effect of RYGB in lean pigs, using sham-operated pair-fed pigs as controls. Two weeks post-surgery, pigs were subjected to an intravenous glucose tolerance test (IVGTT) and circulating metabolites, hormones and lipids measured. Bile acid composition was profiled after extraction from blood, faeces and the gallbladder. A similar weight development in both groups of pigs validated our experimental model. Despite similar changes in fasting insulin, RYGB-pigs had lower fasting glucose levels. During an IVGTT RYGB-pigs had higher insulin and lower glucose levels. VLDL and IDL were lower in RYGB- than in sham-pigs. RYGB-pigs had increased levels of most amino acids, including branched-chain amino acids, but these were more efficiently suppressed by glucose. Levels of bile acids in the gallbladder were higher, whereas plasma and faecal bile acid levels were lower in RYGB- than in sham-pigs. In a lean model RYGB caused lower plasma lipid and bile acid levels, which were compensated for by increased plasma amino acids, suggesting a switch from lipid to protein metabolism during fasting in the immediate postoperative period.

  20. The impact of Roux-en-Y gastric bypass surgery on normal metabolism in a porcine model.

    Directory of Open Access Journals (Sweden)

    Andreas Lindqvist

    Full Text Available A growing body of literature on Roux-en-Y gastric bypass surgery (RYGB) has generated inconclusive results on the mechanism underlying the beneficial effects on weight loss and glycaemia, partially due to the problems of designing clinical studies with the appropriate controls. Moreover, RYGB is only performed in obese individuals, in whom metabolism is perturbed and not completely understood. In an attempt to isolate the effects of RYGB and its effects on normal metabolism, we investigated the effect of RYGB in lean pigs, using sham-operated pair-fed pigs as controls. Two weeks post-surgery, pigs were subjected to an intravenous glucose tolerance test (IVGTT) and circulating metabolites, hormones and lipids were measured. Bile acid composition was profiled after extraction from blood, faeces and the gallbladder. A similar weight development in both groups of pigs validated our experimental model. Despite similar changes in fasting insulin, RYGB-pigs had lower fasting glucose levels. During an IVGTT RYGB-pigs had higher insulin and lower glucose levels. VLDL and IDL were lower in RYGB- than in sham-pigs. RYGB-pigs had increased levels of most amino acids, including branched-chain amino acids, but these were more efficiently suppressed by glucose. Levels of bile acids in the gallbladder were higher, whereas plasma and faecal bile acid levels were lower in RYGB- than in sham-pigs. In a lean model RYGB caused lower plasma lipid and bile acid levels, which were compensated for by increased plasma amino acids, suggesting a switch from lipid to protein metabolism during fasting in the immediate postoperative period.

  1. Nonlocal continuum-based modeling of breathing mode of nanowires including surface stress and surface inertia effects

    International Nuclear Information System (INIS)

    Ghavanloo, Esmaeal; Fazelzadeh, S. Ahmad; Rafii-Tabar, Hashem

    2014-01-01

    Nonlocal and surface effects significantly influence the mechanical response of nanomaterials and nanostructures. In this work, the breathing mode of a circular nanowire is studied on the basis of the nonlocal continuum model. Both the surface elastic properties and surface inertia effect are included. Nanowires can be modeled as long cylindrical solid objects. The classical model is reformulated using the nonlocal differential constitutive relations of Eringen and Gurtin–Murdoch surface continuum elasticity formalism. A new frequency equation for the breathing mode of nanowires, including small scale effect, surface stress and surface inertia is presented by employing the Bessel functions. Numerical results are computed, and are compared to confirm the validity and accuracy of the proposed method. Furthermore, the model is used to elucidate the effect of nonlocal parameter, the surface stress, the surface inertia and the nanowire orientation on the breathing mode of several types of nanowires with size ranging from 0.5 to 4 nm. Our results reveal that the combined surface and small scale effects are significant for nanowires with diameter smaller than 4 nm.

  2. Nonlocal continuum-based modeling of breathing mode of nanowires including surface stress and surface inertia effects

    Science.gov (United States)

    Ghavanloo, Esmaeal; Fazelzadeh, S. Ahmad; Rafii-Tabar, Hashem

    2014-05-01

    Nonlocal and surface effects significantly influence the mechanical response of nanomaterials and nanostructures. In this work, the breathing mode of a circular nanowire is studied on the basis of the nonlocal continuum model. Both the surface elastic properties and surface inertia effect are included. Nanowires can be modeled as long cylindrical solid objects. The classical model is reformulated using the nonlocal differential constitutive relations of Eringen and Gurtin-Murdoch surface continuum elasticity formalism. A new frequency equation for the breathing mode of nanowires, including small scale effect, surface stress and surface inertia is presented by employing the Bessel functions. Numerical results are computed, and are compared to confirm the validity and accuracy of the proposed method. Furthermore, the model is used to elucidate the effect of nonlocal parameter, the surface stress, the surface inertia and the nanowire orientation on the breathing mode of several types of nanowires with size ranging from 0.5 to 4 nm. Our results reveal that the combined surface and small scale effects are significant for nanowires with diameter smaller than 4 nm.

  3. One Dimensional Analysis Model of a Condensing Spray Chamber Including Rocket Exhaust Using SINDA/FLUINT and CEA

    Science.gov (United States)

    Sakowski, Barbara; Edwards, Daryl; Dickens, Kevin

    2014-01-01

    Modeling droplet condensation via CFD codes can be very tedious, time consuming, and inaccurate. CFD codes may be tedious and time consuming in terms of using Lagrangian particle tracking approaches or particle sizing bins. Also, since many codes ignore conduction through the droplet and/or the degrading effect on heat and mass transfer when noncondensible species are present, the solutions may be inaccurate. In a condensing spray chamber, the significant size of the water droplets and the time and distance over which they fall make droplet conduction a physical factor that needs to be considered in the model. Furthermore, the presence of even a relatively small amount of noncondensible gas has been shown to reduce the amount of condensation [Ref 1]. It is desirable then to create a modeling tool that addresses these issues. The path taken to create such a tool is illustrated. The application of this tool and subsequent results are based on the spray chamber in the Spacecraft Propulsion Research Facility (B2) located at NASA's Plum Brook Station that tested an RL-10 engine. The platform upon which the condensation physics is modeled is SINDA/FLUINT. The use of SINDA/FLUINT enables modeling of various aspects of the entire testing facility, including the rocket exhaust duct flow and heat transfer to the exhaust duct wall. The ejector pumping system of the spray chamber is also easily implemented via SINDA/FLUINT. The goal is to create a transient one dimensional flow and heat transfer model beginning at the rocket, continuing through the condensing spray chamber, and finally ending with the ejector pumping system. However the model of the condensing spray chamber may be run independently of the rocket and ejector systems detail, with only appropriate mass flow boundary conditions placed at the entrance and exit of the condensing spray chamber model. The model of the condensing spray chamber takes into account droplet

  4. A comparison of different ways of including baseline counts in negative binomial models for data from falls prevention trials.

    Science.gov (United States)

    Zheng, Han; Kimber, Alan; Goodwin, Victoria A; Pickering, Ruth M

    2018-01-01

    A common design for a falls prevention trial is to assess falling at baseline, randomize participants into an intervention or control group, and ask them to record the number of falls they experience during a follow-up period of time. This paper addresses how best to include the baseline count in the analysis of the follow-up count of falls in negative binomial (NB) regression. We examine the performance of various approaches in simulated datasets where both counts are generated from a mixed Poisson distribution with shared random subject effect. Including the baseline count after log-transformation as a regressor in NB regression (NB-logged) or as an offset (NB-offset) resulted in greater power than including the untransformed baseline count (NB-unlogged). Cook and Wei's conditional negative binomial (CNB) model replicates the underlying process generating the data. In our motivating dataset, a statistically significant intervention effect resulted from the NB-logged, NB-offset, and CNB models, but not from NB-unlogged, and large, outlying baseline counts were overly influential in NB-unlogged but not in NB-logged. We conclude that there is little to lose by including the log-transformed baseline count in standard NB regression compared to CNB for moderate to larger sized datasets. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
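
    The contrast between the NB-logged and NB-offset approaches described above can be made concrete with a small simulation. The sketch below is illustrative only (it is not the authors' code, and the simulated rates, dispersion and sample size are made up): follow-up counts are drawn from a mixed Poisson model with a shared subject frailty, and the log-transformed baseline count is then entered either as an ordinary regressor or as an offset in a negative binomial GLM fitted with statsmodels.

```python
# Hypothetical sketch: contrasting "NB-logged" (log baseline as regressor) with
# "NB-offset" (log baseline as offset) on simulated falls-trial data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)                      # 0 = control, 1 = intervention
frailty = rng.gamma(2.0, 0.5, n)                   # shared random subject effect
baseline = rng.poisson(frailty * 2.0)              # baseline fall count
followup = rng.poisson(frailty * 2.0 * np.exp(-0.3 * group))  # intervention lowers rate

df = pd.DataFrame({
    "followup": followup,
    "group": group,
    "log_base": np.log(baseline + 1.0),            # +1 guards against log(0)
})

nb = sm.families.NegativeBinomial(alpha=1.0)       # dispersion fixed here for illustration

# NB-logged: log-transformed baseline enters as an ordinary regressor
m_logged = smf.glm("followup ~ group + log_base", data=df, family=nb).fit()

# NB-offset: log-transformed baseline enters with its coefficient fixed at 1
m_offset = smf.glm("followup ~ group", data=df, family=nb, offset=df["log_base"]).fit()

print(m_logged.params)
print(m_offset.params)
```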

  5. Normal and Pathological NCAT Image and Phantom Data Based on Physiologically Realistic Left Ventricle Finite-Element Models

    International Nuclear Information System (INIS)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui, Benjamin M.W.; Gullberg, Grant T.

    2006-01-01

    The 4D NURBS-based Cardiac-Torso (NCAT) phantom, which provides a realistic model of the normal human anatomy and cardiac and respiratory motions, is used in medical imaging research to evaluate and improve imaging devices and techniques, especially dynamic cardiac applications. One limitation of the phantom is that it lacks the ability to accurately simulate altered functions of the heart that result from cardiac pathologies such as coronary artery disease (CAD). The goal of this work was to enhance the 4D NCAT phantom by incorporating a physiologically based, finite-element (FE) mechanical model of the left ventricle (LV) to simulate both normal and abnormal cardiac motions. The geometry of the FE mechanical model was based on gated high-resolution x-ray multi-slice computed tomography (MSCT) data of a healthy male subject. The myocardial wall was represented as transversely isotropic hyperelastic material, with the fiber angle varying from -90 degrees at the epicardial surface, through 0 degrees at the mid-wall, to 90 degrees at the endocardial surface. A time varying elastance model was used to simulate fiber contraction, and physiological intraventricular systolic pressure-time curves were applied to simulate the cardiac motion over the entire cardiac cycle. To demonstrate the ability of the FE mechanical model to accurately simulate the normal cardiac motion as well as abnormal motions indicative of CAD, a normal case and two pathologic cases were simulated and analyzed. In the first pathologic model, a subendocardial anterior ischemic region was defined. A second model was created with a transmural ischemic region defined in the same location. The FE based deformations were incorporated into the 4D NCAT cardiac model through the control points that define the cardiac structures in the phantom which were set to move according to the predictions of the mechanical model. A simulation study was performed using the FE-NCAT combination to investigate how the

  6. Toward a normalized clinical drug knowledge base in China-applying the RxNorm model to Chinese clinical drugs.

    Science.gov (United States)

    Wang, Li; Zhang, Yaoyun; Jiang, Min; Wang, Jingqi; Dong, Jiancheng; Liu, Yun; Tao, Cui; Jiang, Guoqian; Zhou, Yi; Xu, Hua

    2018-04-04

    In recent years, electronic health record systems have been widely implemented in China, making clinical data available electronically. However, little effort has been devoted to making drug information exchangeable among these systems. This study aimed to build a Normalized Chinese Clinical Drug (NCCD) knowledge base, by applying and extending the information model of RxNorm to Chinese clinical drugs. Chinese drugs were collected from 4 major resources-China Food and Drug Administration, China Health Insurance Systems, Hospital Pharmacy Systems, and China Pharmacopoeia-for integration and normalization in NCCD. Chemical drugs were normalized using the information model in RxNorm without much change. Chinese patent drugs (i.e., Chinese herbal extracts), however, were represented using an expanded RxNorm model to incorporate the unique characteristics of these drugs. A hybrid approach combining automated natural language processing technologies and manual review by domain experts was then applied to drug attribute extraction, normalization, and further generation of drug names at different specification levels. Lastly, we reported the statistics of NCCD, as well as the evaluation results using several sets of randomly selected Chinese drugs. The current version of NCCD contains 16 976 chemical drugs and 2663 Chinese patent medicines, resulting in 19 639 clinical drugs, 250 267 unique concepts, and 2 602 760 relations. By manual review of 1700 chemical drugs and 250 Chinese patent drugs randomly selected from NCCD (about 10%), we showed that the hybrid approach could achieve an accuracy of 98.60% for drug name extraction and normalization. Using a collection of 500 chemical drugs and 500 Chinese patent drugs from other resources, we showed that NCCD achieved coverages of 97.0% and 90.0% for chemical drugs and Chinese patent drugs, respectively. Evaluation results demonstrated the potential to improve interoperability across various electronic drug systems

  7. Modeling the effects of age and gender on normal pediatric brain metabolism using F18-FDG PET/CT.

    Science.gov (United States)

    Turpin, Sophie; Martineau, Patrick J; Levasseur, Marc-André; Lambert, Raymond

    2017-12-28

    Normal databases of pediatric brain metabolism are uncommon, as local brain metabolism evolves significantly with age throughout childhood, limiting their clinical applicability. The aim of this study was to develop mathematical models of regional relative brain metabolism (RRBM) using pediatric F18-FluoroDeoxyGlucose (FDG) Positron Emission Tomography (PET) with Computed Tomography (CT) data of normal pediatric brains, accounting for gender and age. Methods: PET/CT brain acquisitions were obtained from 88 neurologically-normal subjects, aged 6 months to 18 years. Subjects were assigned to either development (n = 59) or validation groups (n = 29). For each subject, commercially available software (NeuroQ™) was used to quantify the relative metabolism of 47 separate brain regions using whole-brain normalized (WBN) and pons normalized (PN) activity. The effects of age on RRBM were modeled using multiple linear and non-linear mathematical equations and the significance of gender was assessed using the Student t-test. Optimal models were selected using the Akaike Information Criterion. Mean predicted values and 95% prediction intervals were derived for all regions. Model predictions were compared to the validation data set and mean predicted error was calculated for all regions using both WBN and PN models. Results: As a function of age, optimal models of RRBM were linear for 7 regions, quadratic for 13, cubic for 6, logarithmic for 12, power for 7, and modified power laws for 4 regions using WBN data; linear for 9 regions, quadratic for 27, cubic for 2, logarithmic for 5 and power for 2 using PN data. Gender differences were found to be statistically significant only in the posterior cingulate cortex for the WBN data. Comparing our models to the validation group resulted in 94.3% of regions falling within the 95% prediction interval for WBN and 94.1% for PN. For all the brain regions in the validation group, the percentage of error in prediction was 3 ± 0.96% using
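
    As a rough illustration of the model-selection step described above, the sketch below fits a few candidate functional forms (linear, quadratic, logarithmic, power) to synthetic uptake-versus-age data and picks the lowest-AIC fit. The candidate set, the least-squares AIC formula and the synthetic data are assumptions for illustration; this is not the study's software or its regional models.

```python
# Hypothetical sketch: choosing among candidate age models for one brain region
# by the Akaike Information Criterion, assuming Gaussian least-squares fits.
import numpy as np
from scipy.optimize import curve_fit

def aic(rss, n, k):
    # Least-squares AIC: n*ln(RSS/n) + 2k, with k the number of fitted parameters
    return n * np.log(rss / n) + 2 * k

candidates = {
    "linear":      (lambda a, b0, b1: b0 + b1 * a, 2),
    "quadratic":   (lambda a, b0, b1, b2: b0 + b1 * a + b2 * a**2, 3),
    "logarithmic": (lambda a, b0, b1: b0 + b1 * np.log(a), 2),
    "power":       (lambda a, b0, b1: b0 * a**b1, 2),
}

def best_model(age, uptake):
    results = {}
    for name, (f, k) in candidates.items():
        try:
            popt, _ = curve_fit(f, age, uptake, maxfev=10000)
            rss = np.sum((uptake - f(age, *popt)) ** 2)
            results[name] = (aic(rss, len(age), k), popt)
        except RuntimeError:
            continue  # skip candidates that fail to converge
    return min(results.items(), key=lambda kv: kv[1][0])

# Illustrative synthetic data: relative uptake rising then plateauing with age
age = np.linspace(0.5, 18, 88)
uptake = 0.6 + 0.15 * np.log(age) + np.random.default_rng(1).normal(0, 0.02, age.size)
print(best_model(age, uptake))
```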

  8. Normal and Pathological NCAT Image and Phantom Data Based on Physiologically Realistic Left Ventricle Finite-Element Models

    Energy Technology Data Exchange (ETDEWEB)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui, Benjamin M.W.; Gullberg, Grant T.

    2006-08-02

    The 4D NURBS-based Cardiac-Torso (NCAT) phantom, which provides a realistic model of the normal human anatomy and cardiac and respiratory motions, is used in medical imaging research to evaluate and improve imaging devices and techniques, especially dynamic cardiac applications. One limitation of the phantom is that it lacks the ability to accurately simulate altered functions of the heart that result from cardiac pathologies such as coronary artery disease (CAD). The goal of this work was to enhance the 4D NCAT phantom by incorporating a physiologically based, finite-element (FE) mechanical model of the left ventricle (LV) to simulate both normal and abnormal cardiac motions. The geometry of the FE mechanical model was based on gated high-resolution x-ray multi-slice computed tomography (MSCT) data of a healthy male subject. The myocardial wall was represented as transversely isotropic hyperelastic material, with the fiber angle varying from -90 degrees at the epicardial surface, through 0 degrees at the mid-wall, to 90 degrees at the endocardial surface. A time varying elastance model was used to simulate fiber contraction, and physiological intraventricular systolic pressure-time curves were applied to simulate the cardiac motion over the entire cardiac cycle. To demonstrate the ability of the FE mechanical model to accurately simulate the normal cardiac motion as well as abnormal motions indicative of CAD, a normal case and two pathologic cases were simulated and analyzed. In the first pathologic model, a subendocardial anterior ischemic region was defined. A second model was created with a transmural ischemic region defined in the same location. The FE based deformations were incorporated into the 4D NCAT cardiac model through the control points that define the cardiac structures in the phantom which were set to move according to the predictions of the mechanical model. A simulation study was performed using the FE-NCAT combination to investigate how the differences in contractile function

  9. Gamma-Normal-Gamma Mixture Model for Detecting Differentially Methylated Loci in Three Breast Cancer Cell Lines

    Directory of Open Access Journals (Sweden)

    Joe Gray

    2007-01-01

    Full Text Available With state-of-the-art microarray technologies now available for whole genome CpG island (CGI) methylation profiling, there is a need to develop statistical models that are specifically geared toward the analysis of such data. In this article, we propose a Gamma-Normal-Gamma (GNG) mixture model for describing three groups of CGI loci: hypomethylated, undifferentiated, and hypermethylated, from a single methylation microarray. This model was applied to study the methylation signatures of three breast cancer cell lines: MCF7, T47D, and MDAMB361. Biologically interesting and interpretable results are obtained, which highlight the heterogeneous nature of the three cell lines. This underscores the need to analyze each microarray slide individually rather than pooling them together for a single analysis. Our comparisons with the fitted densities from the Normal-Uniform (NU) mixture model proposed in the literature for gene expression analysis show an improved goodness of fit of the GNG model over the NU model. Although the GNG model was proposed in the context of single-slide methylation analysis, it can be readily adapted to analyze multi-slide methylation data as well as other types of microarray data.
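
    To make the mixture structure concrete, the sketch below writes down one possible Gamma-Normal-Gamma density (a reflected gamma for hypomethylated loci, a normal for undifferentiated loci, a gamma for hypermethylated loci) and fits it to synthetic log-ratios by direct maximum likelihood. The parameterization and the Nelder-Mead fit are illustrative assumptions, not the estimation procedure used in the article.

```python
# Hypothetical sketch of a Gamma-Normal-Gamma mixture for methylation log-ratios,
# fitted by direct MLE on synthetic data; not the authors' estimation procedure.
import numpy as np
from scipy import stats
from scipy.optimize import minimize
from scipy.special import softmax

def gng_pdf(y, w, a_lo, s_lo, mu, sd, a_hi, s_hi):
    lo = stats.gamma.pdf(-y, a_lo, scale=s_lo)   # hypomethylated branch (y < 0)
    mid = stats.norm.pdf(y, mu, sd)              # undifferentiated branch
    hi = stats.gamma.pdf(y, a_hi, scale=s_hi)    # hypermethylated branch (y > 0)
    return w[0] * lo + w[1] * mid + w[2] * hi

def neg_log_lik(theta, y):
    w = softmax(theta[:3])                              # weights on the simplex
    a_lo, s_lo, sd, a_hi, s_hi = np.exp(theta[3:8])     # positivity via log-parameters
    mu = theta[8]
    return -np.sum(np.log(gng_pdf(y, w, a_lo, s_lo, mu, sd, a_hi, s_hi) + 1e-300))

# Illustrative synthetic log-ratios from three latent groups
rng = np.random.default_rng(2)
y = np.concatenate([-rng.gamma(2.0, 0.8, 300),
                    rng.normal(0.0, 0.3, 800),
                    rng.gamma(2.0, 0.8, 300)])

theta0 = np.array([0.0, 0.0, 0.0, np.log(2.0), np.log(1.0), np.log(0.5),
                   np.log(2.0), np.log(1.0), 0.0])
fit = minimize(neg_log_lik, theta0, args=(y,), method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 20000})
print(softmax(fit.x[:3]))                               # estimated group proportions
```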

  10. A Model for the representation of Speech Signals in Normal and Impaired Ears

    DEFF Research Database (Denmark)

    Christiansen, Thomas Ulrich

    2004-01-01

    A model of human auditory periphery, ranging from the outer ear to the auditory nerve, was developed. The model consists of the following components: outer ear transfer function, middle ear transfer function, basilar membrane velocity, inner hair cell receptor potential, inner hair cell probability of neurotransmitter release and auditory nerve fibre refractoriness. The model builds on previously published models; however, parameters for basilar membrane velocity and inner hair cell probability of neurotransmitter release were successfully fitted to model data from psychophysical and physiological data... Impaired hearing was modelled as a combination of outer- and inner hair cell loss. The percentage of dead inner hair cells was calculated based on a new computational method relating auditory nerve fibre thresholds to behavioural thresholds. Finally, a model of the entire auditory nerve fibre population...

  11. Analysis of Two Stroke Marine Diesel Engine Operation Including Turbocharger Cut-Out by Using a Zero-Dimensional Model

    Directory of Open Access Journals (Sweden)

    Cong Guan

    2015-06-01

    Full Text Available In this article, the operation of a large two-stroke marine diesel engine including various cases with turbocharger cut-out was thoroughly investigated by using a modular zero-dimensional engine model built in MATLAB/Simulink environment. The model was developed by using as a basis an in-house modular mean value engine model, in which the existing cylinder block was replaced by a more detailed one that is capable of representing the scavenging ports-cylinder-exhaust valve processes. Simulation of the engine operation at steady state conditions was performed and the derived engine performance parameters were compared with the respective values obtained by the engine shop trials. The investigation of engine operation under turbocharger cut-out conditions in the region from 10% to 50% load was carried out and the influence of turbocharger cut-out on engine performance including the in-cylinder parameters was comprehensively studied. The recommended schedule for the combination of the turbocharger cut-out and blower activation was discussed for the engine operation under part load conditions. Finally, the influence of engine operating strategies on the annual fuel savings, CO2 emissions reduction and blower operating hours for a Panamax container ship operating at slow steaming conditions is presented and discussed.

  12. Mitochondrial base excision repair in mouse synaptosomes during normal aging and in a model of Alzheimer's disease

    DEFF Research Database (Denmark)

    Diaz, Ricardo Gredilla; Weissman, Lior; Yang, JL

    2012-01-01

    Brain aging is associated with synaptic decline and synaptic function is highly dependent on mitochondria. Increased levels of oxidative DNA base damage and accumulation of mitochondrial DNA (mtDNA) mutations or deletions lead to mitochondrial dysfunction, playing an important role in the aging process and the pathogenesis of several neurodegenerative diseases. Here we have investigated the repair of oxidative base damage in synaptosomes of mouse brain during normal aging and in an AD model. During normal aging, a reduction in the base excision repair (BER) capacity was observed... These findings suggest that the age-related reduction in BER capacity in the synaptosomal fraction might contribute to mitochondrial and synaptic dysfunction during aging. The development of AD-like pathology in the 3xTgAD mouse model was, however, not associated with deficiencies of the BER mechanisms...

  13. Modelling the normal bouncing dynamics of spheres in a viscous fluid

    Directory of Open Access Journals (Sweden)

    Izard Edouard

    2017-01-01

    Full Text Available Bouncing motions of spheres in a viscous fluid are numerically investigated by an immersed boundary method, which resolves the fluid flow around the solids and is combined with a discrete element method for the particle motion and contact resolution. Two well-known configurations of bouncing are considered: the normal bouncing of a sphere on a wall in a viscous fluid and a normal particle-particle bouncing in a fluid. Previous experiments have shown the effective restitution coefficient to be a function of a single parameter, namely the Stokes number, which compares the inertia of the solid particle with the fluid viscous dissipation. The present simulations show a good agreement with experimental observations for the whole range of investigated parameters. However, a new definition of the coefficient of restitution presented here shows a dependence on the Stokes number as in previous works but, in addition, on the fluid-to-particle density ratio. It allows identification of the viscous, inertial and dry regimes found in the immersed granular avalanche experiments of Courrech du Pont et al. [Phys. Rev. Lett. 90, 044301 (2003)], e.g. in a multi-particle configuration.
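
    For orientation, the sketch below evaluates the classical single-parameter picture referred to above: a Stokes number built from particle inertia and fluid viscosity, and a simple cut-off fit in which no rebound occurs below a critical Stokes number. The functional form, the critical value St_c = 10 and the material properties are illustrative assumptions; the paper's new density-ratio-dependent definition is not reproduced here.

```python
# Hypothetical sketch: wet restitution coefficient as a function of the Stokes
# number only, using a commonly used cut-off fit; parameters are illustrative.
import numpy as np

def stokes_number(rho_p, d, v_impact, mu_f):
    # St = rho_p * d * v / (9 * mu_f): particle inertia vs viscous dissipation
    return rho_p * d * v_impact / (9.0 * mu_f)

def wet_restitution(st, e_dry=0.97, st_c=10.0):
    # No rebound below a critical Stokes number St_c
    return np.where(st > st_c, e_dry * (1.0 - st_c / st), 0.0)

# Example: 3 mm steel sphere impacting a wall in silicone oil
rho_p, d, mu_f = 7800.0, 3e-3, 0.1          # kg/m^3, m, Pa.s
for v in (0.01, 0.1, 0.5, 1.0):             # impact velocities in m/s
    st = stokes_number(rho_p, d, v, mu_f)
    print(f"v = {v:4.2f} m/s  St = {st:7.1f}  e_eff = {wet_restitution(st):.2f}")
```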

  14. Adaptive fuzzy neural network control design via a T-S fuzzy model for a robot manipulator including actuator dynamics.

    Science.gov (United States)

    Wai, Rong-Jong; Yang, Zhi-Wei

    2008-10-01

    This paper focuses on the development of adaptive fuzzy neural network control (AFNNC), including indirect and direct frameworks for an n-link robot manipulator, to achieve high-precision position tracking. In general, it is difficult to adopt a model-based design to achieve this control objective due to the uncertainties in practical applications, such as friction forces, external disturbances, and parameter variations. In order to cope with this problem, an indirect AFNNC (IAFNNC) scheme and a direct AFNNC (DAFNNC) strategy are investigated without the requirement of prior system information. In these model-free control topologies, a continuous-time Takagi-Sugeno (T-S) dynamic fuzzy model with online learning ability is constructed to represent the system dynamics of an n-link robot manipulator. In the IAFNNC, an FNN estimator is designed to tune the nonlinear dynamic function vector in fuzzy local models, and then, the estimative vector is used to indirectly develop a stable IAFNNC law. In the DAFNNC, an FNN controller is directly designed to imitate a predetermined model-based stabilizing control law, and then, the stable control performance can be achieved by only using joint position information. All the IAFNNC and DAFNNC laws and the corresponding adaptive tuning algorithms for FNN weights are established in the sense of Lyapunov stability analyses to ensure the stable control performance. Numerical simulations and experimental results of a two-link robot manipulator actuated by dc servomotors are given to verify the effectiveness and robustness of the proposed methodologies. In addition, the superiority of the proposed control schemes is indicated in comparison with proportional-differential control, fuzzy-model-based control, T-S-type FNN control, and robust neural fuzzy network control systems.

  15. Statistical methodology for discrete fracture model - including fracture size, orientation uncertainty together with intensity uncertainty and variability

    International Nuclear Information System (INIS)

    Darcel, C.; Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O.

    2009-11-01

    the other, addresses the issue of the nature of the transition. We develop a new 'mechanistic' model that could help in modeling why and where this transition can occur. The transition between both regimes would occur for a fracture length of 1-10 m and even at a smaller scale for the few outcrops that follow the self-similar density model. A consequence for the disposal issue is that the model that is likely to apply in the 'blind' scale window between 10-100 m is the self-similar model as it is defined for large-scale lineaments. The self-similar model, as it is measured for some outcrops and most lineament maps, is definitely worth being investigated as a reference for scales above 1-10 m. In the rest of the report, we develop a methodology for incorporating uncertainty and variability into the DFN modeling. Fracturing properties arise from complex processes which produce an intrinsic variability; characterizing this variability as an admissible variation of model parameter or as the division of the site into subdomains with distinct DFN models is a critical point of the modeling effort. Moreover, the DFN model encompasses a part of uncertainty, due to data inherent uncertainties and sampling limits. Both effects must be quantified and incorporated into the DFN site model definition process. In that context, all available borehole data including recording of fracture intercept positions, pole orientation and relative uncertainties are used as the basis for the methodological development and further site model assessment. An elementary dataset contains a set of discrete fracture intercepts from which a parent orientation/density distribution can be computed. The elementary bricks of the site, from which these initial parent density distributions are computed, rely on the former Single Hole Interpretation division of the boreholes into sections whose local boundaries are expected to reflect - locally - geology and fracturing properties main characteristics. From that

  16. Laboratory Studies of the Reactive Chemistry and Changing CCN Properties of Secondary Organic Aerosol, Including Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Scot Martin

    2013-01-31

    The chemical evolution of secondary-organic-aerosol (SOA) particles and how this evolution alters their cloud-nucleating properties were studied. Simplified forms of full Koehler theory were targeted, specifically forms that contain only those aspects essential to describing the laboratory observations, because of the requirement to minimize computational burden for use in integrated climate and chemistry models. The associated data analysis and interpretation have therefore focused on model development in the framework of modified kappa-Koehler theory. Kappa is a single parameter describing effective hygroscopicity, grouping together several separate physicochemical parameters (e.g., molar volume, surface tension, and van't Hoff factor) that otherwise must be tracked and evaluated in an iterative full-Koehler equation in a large-scale model. A major finding of the project was that secondary organic materials produced by the oxidation of a range of biogenic volatile organic compounds for diverse conditions have kappa values bracketed in the range of 0.10 +/- 0.05. In these same experiments, somewhat incongruently there was significant chemical variation in the secondary organic material, especially oxidation state, as was indicated by changes in the particle mass spectra. Taken together, these findings then support the use of kappa as a simplified yet accurate general parameter to represent the CCN activation of secondary organic material in large-scale atmospheric and climate models, thereby greatly reducing the computational burden while simultaneously including the most recent mechanistic findings of laboratory studies.
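
    The single-parameter kappa-Koehler description referred to above can be sketched as follows: the equilibrium saturation ratio over a droplet combines a kappa-based Raoult term with a Kelvin term, and the critical supersaturation is the maximum over wet diameters. The constants, the kappa value of 0.10 and the dry diameters are illustrative; this is a generic kappa-Koehler calculation in the sense of Petters and Kreidenweis, not the project's analysis code.

```python
# Hypothetical sketch: critical supersaturation from single-parameter (kappa)
# Koehler theory for a few dry particle diameters; all inputs are illustrative.
import numpy as np

M_W, RHO_W, R, SIGMA_W, T = 0.018015, 997.0, 8.314, 0.072, 298.15  # SI units

def saturation_ratio(D, Dd, kappa):
    kelvin = np.exp(4.0 * SIGMA_W * M_W / (R * T * RHO_W * D))      # curvature term
    raoult = (D**3 - Dd**3) / (D**3 - Dd**3 * (1.0 - kappa))        # solute (kappa) term
    return raoult * kelvin

def critical_supersaturation(Dd, kappa, n=20000):
    D = np.geomspace(Dd * 1.001, 100 * Dd, n)       # wet diameters to scan
    s = saturation_ratio(D, Dd, kappa)
    return (s.max() - 1.0) * 100.0                  # percent supersaturation

for Dd_nm in (50, 100, 200):
    sc = critical_supersaturation(Dd_nm * 1e-9, kappa=0.10)
    print(f"Dd = {Dd_nm:3d} nm  kappa = 0.10  Sc = {sc:.2f} %")
```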

  17. Modelling the effect of acoustic waves on the thermodynamics and kinetics of phase transformation in a solution: Including mass transportation

    Science.gov (United States)

    Haqshenas, S. R.; Ford, I. J.; Saffari, N.

    2018-01-01

    Effects of acoustic waves on a phase transformation in a metastable phase were investigated in our previous work [S. R. Haqshenas, I. J. Ford, and N. Saffari, "Modelling the effect of acoustic waves on nucleation," J. Chem. Phys. 145, 024315 (2016)]. We developed a non-equimolar dividing surface cluster model and employed it to determine the thermodynamics and kinetics of crystallisation induced by an acoustic field in a mass-conserved system. In the present work, we developed a master equation based on a hybrid Szilard-Fokker-Planck model, which accounts for mass transportation due to acoustic waves. This model can determine the kinetics of nucleation and the early stage of growth of clusters including the Ostwald ripening phenomenon. It was solved numerically to calculate the kinetics of an isothermal sonocrystallisation process in a system with mass transportation. The simulation results show that the effect of mass transportation for different excitations depends on the waveform as well as the imposed boundary conditions and tends to be noticeable in the case of shock waves. The derivations are generic and can be used with any acoustic source and waveform.

  18. Design and modeling of an advanced marine machinery system including waste heat recovery and removal of sulphur oxides

    DEFF Research Database (Denmark)

    Frimann Nielsen, Rasmus; Haglind, Fredrik; Larsen, Ulrik

    2014-01-01

    the efficiency of machinery systems. The wet sulphuric acid process is an effective way of removing flue gas sulphur oxides from land-based coal-fired power plants. Moreover, organic Rankine cycles (ORC) are suitable for heat to power conversion for low temperature heat sources. This paper describes the design and modeling of a highly efficient machinery system which includes the removal of exhaust gas sulphur oxides. The system consists of a two-stroke diesel engine, the wet sulphuric process for sulphur removal, a conventional steam Rankine cycle and an ORC. Results of numerical modeling efforts suggest that an ORC placed after the conventional waste heat recovery system is able to extract the sulphuric acid from the exhaust gas, while at the same time increasing the combined cycle thermal efficiency by 2.6%. The findings indicate that the technology has potential in marine applications regarding both energy...

  19. A Bayesian Model For The Estimation Of Latent Interaction And Quadratic Effects When Latent Variables Are Non-Normally Distributed.

    Science.gov (United States)

    Kelava, Augustin; Nagengast, Benjamin

    2012-09-01

    Structural equation models with interaction and quadratic effects have become a standard tool for testing nonlinear hypotheses in the social sciences. Most of the current approaches assume normally distributed latent predictor variables. In this article, we present a Bayesian model for the estimation of latent nonlinear effects when the latent predictor variables are nonnormally distributed. The nonnormal predictor distribution is approximated by a finite mixture distribution. We conduct a simulation study that demonstrates the advantages of the proposed Bayesian model over contemporary approaches (Latent Moderated Structural Equations [LMS], Quasi-Maximum-Likelihood [QML], and the extended unconstrained approach) when the latent predictor variables follow a nonnormal distribution. The conventional approaches show biased estimates of the nonlinear effects; the proposed Bayesian model provides unbiased estimates. We present an empirical example from work and stress research and provide syntax for substantive researchers. Advantages and limitations of the new model are discussed.

  20. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MR have become more and more useful. The finite element method (FEM) is a widely used technique to estimate the mechanical behaviors of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to the development of both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle from reconstruction of MR and CT images have been investigated by some authors. In this study, the geometry data (used in modeling) of a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complexes. Except for the spaces between the adjacent surfaces of the phalanges, which were fused, the metatarsals, cuneiforms, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were developed independently to form the foot and ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other. There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones, in

  1. In vivo modelling of normal and pathological human T-cell development

    NARCIS (Netherlands)

    Wiekmeijer, A.S.

    2016-01-01

    This thesis describes novel insights into human T-cell development obtained by transplanting human HSPCs into severely immunodeficient NSG mice. First, an in vivo model was optimized to allow engraftment of hematopoietic stem cells derived from human bone marrow. This model was used to study aberrant human T-cell

  2. Cucker-Smale model with normalized communication weights and time delay

    KAUST Repository

    Choi, Young-Pil

    2017-03-06

    We study a Cucker-Smale-type system with time delay in which agents interact with each other through normalized communication weights. We construct a Lyapunov functional for the system and provide sufficient conditions for asymptotic flocking, i.e., convergence to a common velocity vector. We also carry out a rigorous limit passage to the mean-field limit of the particle system as the number of particles tends to infinity. For the resulting Vlasov-type equation we prove the existence, stability and large-time behavior of measure-valued solutions. This is, to our best knowledge, the first such result for a Vlasov-type equation with time delay. We also present numerical simulations of the discrete system with few particles that provide further insights into the flocking and oscillatory behaviors of the particle velocities depending on the size of the time delay.
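
    A minimal discrete sketch of the type of system studied here is given below: a few agents in one dimension, communication weights normalized over the neighbours, and a constant time delay handled by indexing into the stored history. The weight function, parameters and explicit Euler scheme are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch: explicit-Euler simulation of a Cucker-Smale-type system
# with normalized communication weights and a constant time delay tau, in 1-D.
import numpy as np

def psi(r, beta=0.5):
    return (1.0 + r**2) ** (-beta)

def simulate(n=5, tau=0.5, dt=0.01, t_end=20.0, seed=3):
    rng = np.random.default_rng(seed)
    steps, lag = int(t_end / dt), int(tau / dt)
    x = np.zeros((steps + 1, n)); v = np.zeros((steps + 1, n))
    x[0], v[0] = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
    for k in range(steps):
        kd = max(k - lag, 0)                              # delayed index (history frozen at t=0)
        w = psi(np.abs(x[kd][None, :] - x[k][:, None]))   # w[i, j] = psi(|x_j(t-tau) - x_i(t)|)
        np.fill_diagonal(w, 0.0)
        w /= w.sum(axis=1, keepdims=True)                 # normalized communication weights
        dv = (w * (v[kd][None, :] - v[k][:, None])).sum(axis=1)
        v[k + 1] = v[k] + dt * dv
        x[k + 1] = x[k] + dt * v[k]
    return x, v

x, v = simulate()
print("velocity spread at t_end:", v[-1].max() - v[-1].min())   # flocking -> spread ~ 0
```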

  3. Short- and Mid-term Effects of Irreversible Electroporation on Normal Renal Tissue: An Animal Model

    Energy Technology Data Exchange (ETDEWEB)

    Wendler, J. J., E-mail: johann.wendler@med.ovgu.de; Porsch, M.; Huehne, S.; Baumunk, D. [University of Magdeburg, Department of Urology (Germany); Buhtz, P. [Institute of Pathology, University of Magdeburg (Germany); Fischbach, F.; Pech, M. [University of Magdeburg, Department of Radiology (Germany); Mahnkopf, D. [Institute of Medical Technology and Research (Germany); Kropf, S. [Institute of Biometry, University of Magdeburg (Germany); Roessner, A. [Institute of Pathology, University of Magdeburg (Germany); Ricke, J. [University of Magdeburg, Department of Radiology (Germany); Schostak, M.; Liehr, U.-B. [University of Magdeburg, Department of Urology (Germany)

    2013-04-15

    Irreversible electroporation (IRE) is a novel nonthermal tissue ablation technique in which high-current application leads to apoptosis without affecting the extracellular matrix. Previous results of renal IRE are supplemented here by functional MRI and differentiated histological analysis of renal parenchyma in a chronic treatment setting. Three swine were treated with two to three multifocal percutaneous IRE applications of the right kidney. MRI was performed before, 30 min (immediate-term), 7 days (short-term), and 28 days (mid-term) after IRE. A statistical analysis of the signal intensities of the renal parenchyma surrounding the lesions was made to analyze functional differences depending on renal part, side and posttreatment time. Histological follow-up of cortex and medulla was performed after 28 days. A total of eight ablations were created. MRI showed no collateral damage of the surrounding tissue. The highest visual contrast between lesions and normal parenchyma was obtained by the T2-HR-SPIR-TSE-w sequence of DCE-MRI. Ablation zones showed inhomogeneous necroses with small perifocal edema in the short term and sharply delimitable scars in the mid term. MRI showed no significant differences between the renal parenchyma adjoining the ablations and the parenchyma of the untreated kidney. Histological analysis demonstrated complete destruction of cortical glomeruli and tubules, while collecting ducts, renal calyxes, and the pelvis of the medulla were preserved. Kidney parenchyma adjoining the IRE lesions showed no qualitative differences to normal parenchyma of the untreated kidney. This porcine IRE study reveals multifocal renal ablation while protecting the surrounding renal parenchyma and collecting system over a mid-term period, which offers preservation of renal function when ablating centrally located or multifocal renal masses.

  4. Kinematic Modeling of Normal Voluntary Mandibular Opening and Closing Velocity-Initial Study.

    Science.gov (United States)

    Gawriołek, Krzysztof; Gawriołek, Maria; Komosa, Marek; Piotrowski, Paweł R; Azer, Shereen S

    2015-06-01

    The velocity of voluntary mandibular movement has not been a thoroughly studied parameter of masticatory movement. This study attempted to objectively define the kinematics of mandibular movement based on numerical (digital) analysis of the relations and interactions of velocity diagram records in healthy female individuals. Using a computerized mandibular scanner (K7 Evaluation Software), 72 diagrams of voluntary mandibular velocity movements (36 for opening, 36 for closing) were recorded for women with clinically normal motor and functional activities of the masticatory system. Multiple measurements were analyzed focusing on the curve for maximum velocity records. For each movement, the loop of temporary velocities was determined. The diagram was then entered into AutoCAD calculation software, where movement analysis was performed. The real maximum velocity values on opening (Vmax), closing (V0), and average velocity values (Vav), as well as movement accelerations (a), were recorded. Additionally, functional (A1-A2) and geometric (P1-P4) analyses of the loop constituent phases were performed, and the relations between the obtained areas were defined. Velocity means and correlation coefficient values for various velocity phases were calculated. The Wilcoxon test produced the following maximum and average velocity results: Vmax = 394 ± 102 mm/s, Vav = 222 ± 61 mm/s for opening, and Vmax = 409 ± 94 mm/s, Vav = 225 ± 55 mm/s for closing. Both mandibular movement range and velocity change showed significant variability, with the highest velocity achieved in the P2 phase. Voluntary mandibular velocity presents significant variations between healthy individuals. Maximum velocity is obtained when incisal separation is between 12.8 and 13.5 mm. An improved understanding of the patterns of normal mandibular movements may provide an invaluable diagnostic aid to pathological changes within the masticatory system. © 2014 by the American College of Prosthodontists.

  5. Short- and Mid-term Effects of Irreversible Electroporation on Normal Renal Tissue: An Animal Model

    International Nuclear Information System (INIS)

    Wendler, J. J.; Porsch, M.; Hühne, S.; Baumunk, D.; Buhtz, P.; Fischbach, F.; Pech, M.; Mahnkopf, D.; Kropf, S.; Roessner, A.; Ricke, J.; Schostak, M.; Liehr, U.-B.

    2013-01-01

    Irreversible electroporation (IRE) is a novel nonthermal tissue ablation technique in which high-current application leads to apoptosis without affecting the extracellular matrix. Previous results of renal IRE are supplemented here by functional MRI and differentiated histological analysis of renal parenchyma in a chronic treatment setting. Three swine were treated with two to three multifocal percutaneous IRE applications of the right kidney. MRI was performed before, 30 min (immediate-term), 7 days (short-term), and 28 days (mid-term) after IRE. A statistical analysis of the signal intensities of the renal parenchyma surrounding the lesions was made to analyze functional differences depending on renal part, side and posttreatment time. Histological follow-up of cortex and medulla was performed after 28 days. A total of eight ablations were created. MRI showed no collateral damage of the surrounding tissue. The highest visual contrast between lesions and normal parenchyma was obtained by the T2-HR-SPIR-TSE-w sequence of DCE-MRI. Ablation zones showed inhomogeneous necroses with small perifocal edema in the short term and sharply delimitable scars in the mid term. MRI showed no significant differences between the renal parenchyma adjoining the ablations and the parenchyma of the untreated kidney. Histological analysis demonstrated complete destruction of cortical glomeruli and tubules, while collecting ducts, renal calyxes, and the pelvis of the medulla were preserved. Kidney parenchyma adjoining the IRE lesions showed no qualitative differences to normal parenchyma of the untreated kidney. This porcine IRE study reveals multifocal renal ablation while protecting the surrounding renal parenchyma and collecting system over a mid-term period, which offers preservation of renal function when ablating centrally located or multifocal renal masses.

  6. Statistical methodology for discrete fracture model - including fracture size, orientation uncertainty together with intensity uncertainty and variability

    Energy Technology Data Exchange (ETDEWEB)

    Darcel, C. (Itasca Consultants SAS (France)); Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O. (Geosciences Rennes, UMR 6118 CNRS, Univ. de Rennes, Rennes (France))

    2009-11-15

    the lineament scale (k_t = 2) on the other, addresses the issue of the nature of the transition. We develop a new 'mechanistic' model that could help in modeling why and where this transition can occur. The transition between both regimes would occur for a fracture length of 1-10 m and even at a smaller scale for the few outcrops that follow the self-similar density model. A consequence for the disposal issue is that the model that is likely to apply in the 'blind' scale window between 10-100 m is the self-similar model as it is defined for large-scale lineaments. The self-similar model, as it is measured for some outcrops and most lineament maps, is definitely worth being investigated as a reference for scales above 1-10 m. In the rest of the report, we develop a methodology for incorporating uncertainty and variability into the DFN modeling. Fracturing properties arise from complex processes which produce an intrinsic variability; characterizing this variability as an admissible variation of model parameter or as the division of the site into subdomains with distinct DFN models is a critical point of the modeling effort. Moreover, the DFN model encompasses a part of uncertainty, due to data inherent uncertainties and sampling limits. Both effects must be quantified and incorporated into the DFN site model definition process. In that context, all available borehole data including recording of fracture intercept positions, pole orientation and relative uncertainties are used as the basis for the methodological development and further site model assessment. An elementary dataset contains a set of discrete fracture intercepts from which a parent orientation/density distribution can be computed. The elementary bricks of the site, from which these initial parent density distributions are computed, rely on the former Single Hole Interpretation division of the boreholes into sections whose local boundaries are expected to reflect - locally - geology

  7. Mathematical modeling of HIV prevention measures including pre-exposure prophylaxis on HIV incidence in South Korea.

    Science.gov (United States)

    Kim, Sun Bean; Yoon, Myoungho; Ku, Nam Su; Kim, Min Hyung; Song, Je Eun; Ahn, Jin Young; Jeong, Su Jin; Kim, Changsoo; Kwon, Hee-Dae; Lee, Jeehyun; Smith, Davey M; Choi, Jun Yong

    2014-01-01

    Multiple prevention measures have the possibility of impacting HIV incidence in South Korea, including early diagnosis, early treatment, and pre-exposure prophylaxis (PrEP). We investigated how each of these interventions could impact the local HIV epidemic, especially among men who have sex with men (MSM), who have become the major risk group in South Korea. A mathematical model was used to estimate the effects of each of these interventions on the HIV epidemic in South Korea over the next 40 years, as compared to the current situation. We constructed a mathematical model of HIV infection among MSM in South Korea, dividing the MSM population into seven groups, and simulated the effects of early antiretroviral therapy (ART), early diagnosis, PrEP, and combination interventions on the incidence and prevalence of HIV infection, as compared to the current situation that would be expected without any new prevention measures. Overall, the model suggested that the most effective prevention measure would be PrEP. Even though PrEP effectiveness could be lessened by increased unsafe sex behavior, PrEP use was still more beneficial than the current situation. In the model, early diagnosis of HIV infection also effectively decreased HIV incidence. However, early ART did not show considerable effectiveness. As expected, it would be most effective if all interventions (PrEP, early diagnosis and early treatment) were implemented together. This model suggests that PrEP and early diagnosis could be a very effective way to reduce HIV incidence in South Korea among MSM.
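
    The kind of compartmental bookkeeping described above can be caricatured with a deliberately simplified model in which a fraction of the susceptible population is covered by PrEP with a given efficacy. All compartments, rates and the 40-year horizon below are made-up illustrations; the study's seven-group MSM model is not reproduced here.

```python
# Hypothetical sketch: toy susceptible / on-PrEP / infected model showing how
# PrEP coverage and efficacy enter the force of infection; all rates are made up.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, beta=0.12, eff=0.9, cover=0.3, gamma=0.05, mu=0.02, recruit=0.02):
    s, sp, i = y                                  # susceptible, on-PrEP, infected (fractions)
    n = s + sp + i
    foi = beta * i / n                            # force of infection
    ds  = recruit * (1 - cover) * n - foi * s - mu * s
    dsp = recruit * cover * n - (1 - eff) * foi * sp - mu * sp
    di  = foi * s + (1 - eff) * foi * sp - (gamma + mu) * i
    return [ds, dsp, di]

sol = solve_ivp(rhs, (0, 40), [0.92, 0.0, 0.08])  # 40-year horizon
print("HIV prevalence after 40 years:", sol.y[2, -1] / sol.y[:, -1].sum())
```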

  8. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    International Nuclear Information System (INIS)

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

    In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem. Also, the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it can produce higher quality images when compared to the algorithm based on the parallel or series models for the cases tested in this paper. It provides a new algorithm for ECT application
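
    For reference, the two normalization models mentioned above, and their convex combination, can be written down directly: the parallel model normalizes the measured capacitance itself, while the series model normalizes its reciprocal. In the sketch below the combination coefficient p is simply a free parameter, whereas the paper derives the adaptive coefficient by numerical optimization; the calibration values are illustrative.

```python
# Hypothetical sketch of ECT capacitance normalization under the parallel,
# series and combined models; c_l and c_h are the low/high-permittivity calibrations.
import numpy as np

def normalize_parallel(c_m, c_l, c_h):
    # Parallel model: normalization acts directly on the capacitances
    return (c_m - c_l) / (c_h - c_l)

def normalize_series(c_m, c_l, c_h):
    # Series model: normalization acts on the reciprocals of the capacitances
    return (1.0 / c_m - 1.0 / c_l) / (1.0 / c_h - 1.0 / c_l)

def normalize_combined(c_m, c_l, c_h, p=0.5):
    # Convex combination; p plays the role of the adaptive coefficient
    return p * normalize_parallel(c_m, c_l, c_h) + (1 - p) * normalize_series(c_m, c_l, c_h)

c_l, c_h = 1.0, 2.0                      # illustrative calibration capacitances
c_m = np.array([1.1, 1.4, 1.8])          # illustrative measured capacitances
print(normalize_parallel(c_m, c_l, c_h))
print(normalize_series(c_m, c_l, c_h))
print(normalize_combined(c_m, c_l, c_h, p=0.4))
```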

  9. A weighted mean shift, normalized cuts initialized color gradient based geodesic active contour model: applications to histopathology image segmentation

    Science.gov (United States)

    Xu, Jun; Janowczyk, Andrew; Chandran, Sharat; Madabhushi, Anant

    2010-03-01

    While geodesic active contours (GAC) have become very popular tools for image segmentation, they are sensitive to model initialization. In order to get an accurate segmentation, the model typically needs to be initialized very close to the true object boundary. Apart from accuracy, automated initialization of the objects of interest is an important pre-requisite to being able to run the active contour model on very large images (such as those found in digitized histopathology). A second limitation of GAC model is that the edge detector function is based on gray scale gradients; color images typically being converted to gray scale prior to computing the gradient. For color images, however, the gray scale gradient results in broken edges and weak boundaries, since the other channels are not exploited for the gradient determination. In this paper we present a new geodesic active contour model that is driven by an accurate and rapid object initialization scheme-weighted mean shift normalized cuts (WNCut). WNCut draws its strength from the integration of two powerful segmentation strategies-mean shift clustering and normalized cuts. WNCut involves first defining a color swatch (typically a few pixels) from the object of interest. A multi-scale mean shift coupled normalized cuts algorithm then rapidly yields an initial accurate detection of all objects in the scene corresponding to the colors in the swatch. This detection result provides the initial boundary for GAC model. The edge-detector function of the GAC model employs a local structure tensor based color gradient, obtained by calculating the local min/max variations contributed from each color channel (e.g. R,G,B or H,S,V). Our color gradient based edge-detector function results in more prominent boundaries compared to classical gray scale gradient based function. We evaluate segmentation results of our new WNCut initialized color gradient based GAC (WNCut-CGAC) model against a popular region-based model (Chan

  10. Disorders of Reading and Their Implications for Models of Normal Reading.

    Science.gov (United States)

    Coltheart, Max

    1981-01-01

    Illustrates how one can test a multicomponent model of reading by observing the different forms that acquired reading disorder takes as a consequence of different patterns of damage to the brain. (HOD)

  11. Modeling interface roughness scattering in a layered seabed for normal-incident chirp sonar signals.

    Science.gov (United States)

    Tang, Dajun; Hefner, Brian T

    2012-04-01

    Downward looking sonar, such as the chirp sonar, is widely used as a sediment survey tool in shallow water environments. Inversion of geo-acoustic parameters from such sonar data precedes the availability of forward models. An exact numerical model is developed to initiate the simulation of the acoustic field produced by such a sonar in the presence of multiple rough interfaces. The sediment layers are assumed to be fluid layers with non-intersecting rough interfaces.

  12. Analytical model to describe fluorescence spectra of normal and preneoplastic epithelial tissue: comparison with Monte Carlo simulations and clinical measurements.

    Science.gov (United States)

    Chang, Sung K; Arifler, Dizem; Drezek, Rebekah; Follen, Michele; Richards-Kortum, Rebecca

    2004-01-01

    Fluorescence spectroscopy has shown promise for the detection of precancerous changes in vivo. The epithelial and stromal layers of tissue have very different optical properties; the albedo is relatively low in the epithelium and approaches one in the stroma. As precancer develops, the optical properties of the epithelium and stroma are altered in markedly different ways: epithelial scattering and fluorescence increase, and stromal scattering and fluorescence decrease. We present an analytical model of the fluorescence spectrum of a two-layer medium such as epithelial tissue. Our hypothesis is that accounting for the two different tissue layers will provide increased diagnostic information when used to analyze tissue fluorescence spectra measured in vivo. The Beer-Lambert law is used to describe light propagation in the epithelial layer, while light propagation in the highly scattering stromal layer is described with diffusion theory. Predictions of the analytical model are compared to results from Monte Carlo simulations of light propagation under a range of optical properties reported for normal and precancerous epithelial tissue. In all cases, the mean square error between the Monte Carlo simulations and the analytical model is within 15%. Finally, model predictions are compared to fluorescence spectra of normal and precancerous cervical tissue measured in vivo; the lineshape of fluorescence agrees well in both cases, and the decrease in fluorescence intensity from normal to precancerous tissue is correctly predicted to within 5%. Future work will explore the use of this model to extract information about changes in epithelial and stromal optical properties from clinical measurements and the diagnostic value of these parameters. (c) 2004 Society of Photo-Optical Instrumentation Engineers.

  13. Modelled hydraulic redistribution by sunflower (Helianthus annuus L.) matches observed data only after including night-time transpiration.

    Science.gov (United States)

    Neumann, Rebecca B; Cardon, Zoe G; Teshera-Levye, Jennifer; Rockwell, Fulton E; Zwieniecki, Maciej A; Holbrook, N Michele

    2014-04-01

    The movement of water from moist to dry soil layers through the root systems of plants, referred to as hydraulic redistribution (HR), occurs throughout the world and is thought to influence carbon and water budgets and ecosystem functioning. The realized hydrologic, biogeochemical and ecological consequences of HR depend on the amount of redistributed water, whereas the ability to assess these impacts requires models that correctly capture HR magnitude and timing. Using several soil types and two ecotypes of sunflower (Helianthus annuus L.) in split-pot experiments, we examined how well the widely used HR modelling formulation developed by Ryel et al. matched experimental determination of HR across a range of water potential driving gradients. H. annuus carries out extensive night-time transpiration, and although over the last decade it has become more widely recognized that night-time transpiration occurs in multiple species and many ecosystems, the original Ryel et al. formulation does not include the effect of night-time transpiration on HR. We developed and added a representation of night-time transpiration into the formulation, and only then was the model able to capture the dynamics and magnitude of HR we observed as soils dried and night-time stomatal behaviour changed, both influencing HR. © 2013 John Wiley & Sons Ltd.

  14. A volume-equivalent spherical necrosis-tumor-normal liver model for estimating absorbed dose in yttrium-90 microsphere therapy.

    Science.gov (United States)

    Wu, Chin-Hui; Liao, Yi-Jen; Lin, Tzung-Yi; Chen, Yu-Cheng; Sun, Shung-Shung; Liu, Yen-Wan Hsueh; Hsu, Shih-Ming

    2016-11-01

    Primary hepatocellular carcinoma and metastatic liver tumors are highly malignant tumors in Asia. The incidence of fatal liver cancer is also increasing in the United States. The aim of this study was to establish a spherical tumor model and determine its accuracy in predicting the absorbed dose in yttrium-90 (Y-90) microsphere therapy for liver cancer. Liver morphology can be approximated by a spherical model comprising three concentric regions representing necrotic, tumor, and normal liver tissues. The volumes of these three regions represent those in the actual liver. A spherical tumor model was proposed to calculate the absorbed fractions in the spherical tumor, necrotic, and normal tissue regions. The THORplan treatment planning system and Monte Carlo N-particle extended codes were used for this spherical tumor model. Using the volume-equivalent method, a spherical tumor model was created to calculate the total absorbed fraction [under different tumor-to-healthy-liver ratios (TLRs)]. The patient-specific model (THORplan) results were used to verify the spherical tumor model results. The results for both the Y-90 spectrum and the Y-90 mean energy indicated that the absorbed fraction was a function of the tumor radius and mass. The absorbed fraction increased with tumor radius. The total absorbed fractions calculated using the spherical tumor model for necrotic, liver tumor, and normal liver tissues were in good agreement with the THORplan results, with differences of less than 3% for TLRs of 2-5. The results for the effect of TLR indicate that for the same tumor configuration, the total absorbed fraction decreased with increasing TLR; for the same shell tumor thickness and TLR, the total absorbed fraction was approximately constant; and for tumors with the same radius, the total fraction absorbed by the tumor increased with the shell thickness. The results from spherical tumor models with different tumor-to-healthy-liver ratios were highly consistent with the
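
    The volume-equivalent construction described above amounts to converting the necrotic, tumor and normal-liver volumes into the radii of three concentric spheres that enclose the same cumulative volumes. The sketch below shows that conversion for illustrative volumes; it is not the dosimetric calculation itself, which in the study is carried out with THORplan and the Monte Carlo N-particle extended codes.

```python
# Hypothetical sketch: radii of the concentric necrosis / tumor / normal-liver
# regions of a volume-equivalent spherical model; volumes are illustrative (cm^3).
import numpy as np

def concentric_radii(v_necrosis, v_tumor, v_normal):
    # Each radius encloses the cumulative volume of all inner regions
    cumulative = np.cumsum([v_necrosis, v_tumor, v_normal])
    return (3.0 * cumulative / (4.0 * np.pi)) ** (1.0 / 3.0)

r_nec, r_tum, r_liv = concentric_radii(v_necrosis=5.0, v_tumor=120.0, v_normal=1400.0)
print(f"necrosis radius {r_nec:.2f} cm, tumor radius {r_tum:.2f} cm, liver radius {r_liv:.2f} cm")
```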

  15. Which percentile-based approach should be preferred for calculating normalized citation impact values? An empirical comparison of five approaches including a newly developed citation-rank approach (P100)

    NARCIS (Netherlands)

    Bornmann, L.; Leydesdorff, L.; Wang, J.

    2013-01-01

    For comparisons of citation impacts across fields and over time, bibliometricians normalize the observed citation counts with reference to an expected citation value. Percentile-based approaches have been proposed as a non-parametric alternative to parametric central-tendency statistics. Percentiles
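
    As a toy illustration of percentile-based normalization, the sketch below computes, for one reference set (papers from the same field and publication year), a conventional percentile rank and a P100-style rank based on the distinct citation values. The tie-handling and rounding conventions of the published indicators may differ; this is not the authors' implementation.

```python
# Hypothetical sketch: two percentile-style normalizations of raw citation counts
# within a single reference set.
import numpy as np

def percentile_rank(citations):
    c = np.asarray(citations, dtype=float)
    # share of papers in the reference set with fewer citations, in percent
    return np.array([100.0 * np.mean(c < x) for x in c])

def p100_rank(citations):
    c = np.asarray(citations, dtype=float)
    unique = np.unique(c)                       # distinct citation values, ascending
    scale = 100.0 / max(len(unique) - 1, 1)
    lookup = {v: i * scale for i, v in enumerate(unique)}
    return np.array([lookup[x] for x in c])

cites = [0, 0, 1, 3, 3, 7, 12, 40]
print(percentile_rank(cites))
print(p100_rank(cites))
```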

  16. Including the effects of elastic compressibility and volume changes in geodynamical modeling of crust-lithosphere-mantle deformation

    Science.gov (United States)

    de Monserrat, Albert; Morgan, Jason P.

    2016-04-01

    Materials in Earth's interior are exposed to thermomechanical (e.g. variations in stress/pressure and temperature) and chemical (e.g. phase changes, serpentinization, melting) processes that are associated with volume changes. Most geodynamical codes assume the incompressible Boussinesq approximation, where changes in density due to temperature or phase change affect buoyancy, yet volumetric changes are not allowed, and mass is not locally conserved. Elastic stresses induced by volume changes due to thermal expansion, serpentinization, and melt intrusion should cause 'cold' rocks to fail in a brittle manner at ~1% strain. When failure/yielding is an important rheological feature, we think it plausible that volume-change-linked stresses may have a significant influence on the localization of deformation. Here we discuss a new Lagrangian formulation for "elasto-compressible-visco-plastic" flow. In this formulation, the continuity equation has been generalised from a Boussinesq incompressible formulation to include recoverable, elastic, volumetric deformations linked to the local state of mean compressive stress. This formulation differs from the 'anelastic approximation' used in compressible viscous flow in that pressure- and temperature-dependent volume changes are treated as elastic deformation for a given pressure, temperature, and composition/phase. This leads to a visco-elasto-plastic formulation that can model the effects of thermal stresses, pressure-dependent volume changes, and local phase changes. We use a modified version of the (Miliman-based) FEM code M2TRI to run a set of numerical experiments for benchmarking purposes. Three benchmarks are being used to assess the accuracy of this formulation: (1) model the effects on density of a compressible mantle under the influence of gravity; (2) model the deflection of a visco-elastic beam under the influence of gravity, and its recovery when gravitational loading is artificially removed; (3) model the stresses
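
    The record does not spell out the generalised continuity equation. One plausible way to write the idea, shown below as a sketch rather than the paper's actual equation, is to let the divergence of the velocity field track the recoverable elastic volume change driven by mean compressive stress, temperature and phase or compositional changes; the coefficients (bulk modulus K, thermal expansivity α) and the form of the last term are assumptions of this sketch.

```latex
% Boussinesq (incompressible) continuity equation:
%   \nabla\cdot\mathbf{v} = 0
% Sketch of a generalised, elastically compressible form:
\nabla\cdot\mathbf{v}
  \;=\; -\frac{1}{K}\,\frac{D\bar{p}}{Dt}
  \;+\; \alpha\,\frac{DT}{Dt}
  \;+\; \dot{\varepsilon}^{\,\mathrm{phase}}_{\mathrm{vol}}
% \bar{p}: mean compressive stress; \dot{\varepsilon}^{phase}_{vol}:
% prescribed volumetric strain rate from phase/compositional changes.
```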

  17. Spectral element modelling of seismic wave propagation in visco-elastoplastic media including excess-pore pressure development

    Science.gov (United States)

    Oral, Elif; Gélis, Céline; Bonilla, Luis Fabián; Delavaud, Elise

    2017-12-01

    Numerical modelling of seismic wave propagation, considering soil nonlinearity, has become a major topic in seismic hazard studies when strong shaking is involved under particular soil conditions. Indeed, when strong ground motion propagates in saturated soils, pore pressure is another important parameter to take into account when successive phases of contractive and dilatant soil behaviour are expected. Here, we model 1-D seismic wave propagation in linear and nonlinear media using the spectral element numerical method. The study uses a three-component (3C) nonlinear rheology and includes pore-pressure excess. The 1-D-3C model is used to study the 1987 Superstition Hills earthquake (ML 6.6), which was recorded at the Wildlife Refuge Liquefaction Array, USA. The data of this event present strong soil nonlinearity involving pore-pressure effects. The ground motion is numerically modelled for different assumptions on soil rheology and input motion (1C versus 3C), using the recorded borehole signals as input motion. The computed acceleration-time histories show low-frequency amplification and strong high-frequency damping due to the development of pore pressure in one of the soil layers. Furthermore, the soil is found to be more nonlinear and more dilatant under triaxial loading compared to the classical 1C analysis, and significant differences in surface displacements are observed between the 1C and 3C approaches. This study contributes to identifying and understanding the dominant phenomena occurring in superficial layers, depending on local soil properties and input motions, conditions relevant for site-specific studies.

  18. Normal Tissue Complication Probability (NTCP) Modelling of Severe Acute Mucositis using a Novel Oral Mucosal Surface Organ at Risk.

    Science.gov (United States)

    Dean, J A; Welsh, L C; Wong, K H; Aleksic, A; Dunne, E; Islam, M R; Patel, A; Patel, P; Petkar, I; Phillips, I; Sham, J; Schick, U; Newbold, K L; Bhide, S A; Harrington, K J; Nutting, C M; Gulliford, S L

    2017-04-01

    A normal tissue complication probability (NTCP) model of severe acute mucositis would be highly useful to guide clinical decision making and inform radiotherapy planning. We aimed to improve upon our previous model by using a novel oral mucosal surface organ at risk (OAR) in place of an oral cavity OAR. Predictive models of severe acute mucositis were generated using radiotherapy dose to the oral cavity OAR or mucosal surface OAR and clinical data. Penalised logistic regression and random forest classification (RFC) models were generated for both OARs and compared. Internal validation was carried out with 100-iteration stratified shuffle split cross-validation, using multiple metrics to assess different aspects of model performance. Associations between treatment covariates and severe mucositis were explored using RFC feature importance. Penalised logistic regression and RFC models using the oral cavity OAR performed at least as well as the models using mucosal surface OAR. Associations between dose metrics and severe mucositis were similar between the mucosal surface and oral cavity models. The volumes of oral cavity or mucosal surface receiving intermediate and high doses were most strongly associated with severe mucositis. The simpler oral cavity OAR should be preferred over the mucosal surface OAR for NTCP modelling of severe mucositis. We recommend minimising the volume of mucosa receiving intermediate and high doses, where possible. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
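
    As a sketch of the internal-validation scheme described (penalised logistic regression and random forest classification compared under a 100-iteration stratified shuffle split), the snippet below uses scikit-learn with synthetic placeholder features; the real feature set (OAR dose metrics plus clinical covariates), penalties and hyperparameters are not given in the record and are assumptions here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import roc_auc_score

# X: placeholder dose metrics plus clinical covariates; y: 1 = severe mucositis
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, size=200)

models = {
    "penalised LR": LogisticRegression(penalty="l2", C=0.1, max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
}

# 100-iteration stratified shuffle split, scored here with AUC only
cv = StratifiedShuffleSplit(n_splits=100, test_size=0.3, random_state=0)
for name, model in models.items():
    aucs = []
    for train, test in cv.split(X, y):
        model.fit(X[train], y[train])
        p = model.predict_proba(X[test])[:, 1]
        aucs.append(roc_auc_score(y[test], p))
    print(name, np.mean(aucs))
```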

  19. A cognitive-based model of tool use in normal aging.

    Science.gov (United States)

    Lesourd, Mathieu; Baumard, Josselin; Jarry, Christophe; Le Gall, Didier; Osiurak, François

    2017-07-01

    While several cognitive domains have been widely investigated in the field of aging, the age-related effects on tool use are still an open issue and hardly any studies on tool use and aging is available. A significant body of literature has indicated that tool use skills might be supported by at least two different types of knowledge, namely, mechanical knowledge and semantic knowledge. However, neither the contribution of these kinds of knowledge to familiar tool use, nor the effects of aging on mechanical and semantic knowledge have been explored in normal aging. The aim of the present study was to fill this gap. To do so, 98 healthy elderly adults were presented with three tasks: a classical, familiar tool use task, a novel tool use task assessing mechanical knowledge, and a picture matching task assessing semantic knowledge. The results showed that aging has a negative impact on tool use tasks and on knowledge supporting tool use skills. We also found that aging did not impact mechanical and semantic knowledge in the same way, confirming the distinct nature of those forms of knowledge. Finally, our results stressed that mechanical and semantic knowledge are both involved in the ability to use familiar tools.

  20. Black Hispanic and Black non-Hispanic breast cancer survival data analysis with half-normal model application.

    Science.gov (United States)

    Khan, Hafiz Mohammad Rafiqullah; Saxena, Anshul; Vera, Veronica; Abdool-Ghany, Faheema; Gabbidon, Kemesha; Perea, Nancy; Stewart, Tiffanie Shauna-Jeanne; Ramamoorthy, Venkataraghavan

    2014-01-01

    Breast cancer is the second leading cause of cancer death for women in the United States. Differences in survival of breast cancer have been noted among racial and ethnic groups, but the reasons for these disparities remain unclear. This study presents the characteristics and the survival curves of two racial and ethnic groups and evaluates the effects of race on survival times by fitting a half-normal model to the lifetime data. The distributions among racial and ethnic groups are compared using female breast cancer patients from nine states in the country, all taken from the National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER) cancer registry. The main end points observed are: age at diagnosis, survival time in months, and marital status. The right-skewed half-normal statistical probability model is used to show the differences in the survival times between black Hispanic (BH) and black non-Hispanic (BNH) female breast cancer patients. The Kaplan-Meier and Cox proportional hazard ratio are used to estimate and compare the relative risk of death in the two minority groups, BH and BNH. A probability random sample method was used to select representative samples from BNH and BH female breast cancer patients, who were diagnosed during the years 1973-2009 in the United States. The sample contained 1,000 BNH and 298 BH female breast cancer patients. The median age at diagnosis was 57.75 years among BNH and 54.11 years among BH. The results of the half-normal model showed that the survival times followed positively skewed distributions, with higher variability in BNH compared with BH. The Kaplan-Meier estimate was used to plot the survival curves for the cancer patients; these curves were also positively skewed. The Kaplan-Meier and Cox proportional hazard ratio for survival analysis showed that BNH had a significantly longer survival time as compared to BH, which is consistent with the results of the half-normal model. The findings with the proposed model strategy will assist
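
    As an illustration of fitting the right-skewed half-normal model to lifetime data, the sketch below fits scipy's half-normal distribution (location fixed at zero) to two small placeholder samples of survival times; the actual SEER-based samples and estimates are those reported above.

```python
import numpy as np
from scipy import stats

# Hypothetical survival times in months for two groups (placeholder data,
# not the SEER samples used in the study)
bnh = np.array([5, 12, 18, 25, 33, 41, 58, 72, 95, 130], dtype=float)
bh = np.array([4, 9, 15, 22, 30, 44, 60, 80], dtype=float)

for name, times in [("BNH", bnh), ("BH", bh)]:
    # Half-normal fitted with location fixed at zero; the scale sigma is the
    # single free parameter of this right-skewed model
    loc, scale = stats.halfnorm.fit(times, floc=0)
    print(name, "sigma =", round(scale, 1),
          "median =", round(stats.halfnorm.median(loc=0, scale=scale), 1))
```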

  1. Estimation of sinusoidal flow heterogeneity in normal and diseased rat livers from tracer dilution data using a fractal model.

    Science.gov (United States)

    Weiss, Michael; Li, Peng; Roberts, Michael S

    2012-11-01

    Up to now, vascular indicator-dilution curves have been analyzed by numerical integration or by fitting empirical functions to the data. Here, we apply a recently developed mechanistic model with the goal of quantitatively describing flow distribution in the sinusoidal network of normal rat livers and those with high-fat emulsion-induced NASH. Single-pass outflow concentration data of sucrose were obtained from in situ perfused rat livers after impulse injection. The model fitted to the data consists of a continuous mixture of inverse Gaussian densities assuming a normal distribution of regional flow. It accounts for the fractal flow heterogeneity in the organ and has three adjustable parameters with a clear physiological interpretation. The model fitted the data well and revealed that the intrahepatic flow dispersion of 49.6 % in the control group increased significantly to 87.2 % in the NASH group (p < 0.01). In contrast to previously used empirical functions, the present model exhibits a power-law tail (~t^(-2.4)), which is a signature of fractal microvascular networks. The approach offers the possibility to determine hepatic blood flow heterogeneity in perfused livers and to evaluate the functional implications. © 2012 John Wiley & Sons Ltd.
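
    A minimal numerical sketch of the kind of model described, a continuous mixture of inverse Gaussian transit-time densities weighted by a normal distribution of relative regional flow, is given below; how flow enters the inverse Gaussian parameters and all parameter values are assumptions of this sketch, not the fitted model.

```python
import numpy as np

def inverse_gaussian_pdf(t, mu, lam):
    """Inverse Gaussian (Wald) transit-time density."""
    return np.sqrt(lam / (2.0 * np.pi * t**3)) * np.exp(-lam * (t - mu)**2 / (2.0 * mu**2 * t))

def mixture_pdf(t, mtt=30.0, lam=60.0, flow_cv=0.5, n=400):
    """Continuous mixture of IG densities over a (truncated) normal distribution
    of relative regional flow f (mean 1, coefficient of variation flow_cv).
    Assumption of this sketch: flow rescales the mean transit time as mtt / f."""
    f = np.linspace(max(1e-3, 1 - 4 * flow_cv), 1 + 4 * flow_cv, n)
    w = np.exp(-0.5 * ((f - 1.0) / flow_cv) ** 2)
    w /= np.trapz(w, f)                      # normalise the flow weights
    return np.array([np.trapz(w * inverse_gaussian_pdf(ti, mtt / f, lam), f) for ti in t])

t = np.linspace(1.0, 200.0, 200)
print(mixture_pdf(t)[:5])
```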

  2. Bacteria penetrate the normally impenetrable inner colon mucus layer in both murine colitis models and patients with ulcerative colitis.

    Science.gov (United States)

    Johansson, Malin E V; Gustafsson, Jenny K; Holmén-Larsson, Jessica; Jabbar, Karolina S; Xia, Lijun; Xu, Hua; Ghishan, Fayez K; Carvalho, Frederic A; Gewirtz, Andrew T; Sjövall, Henrik; Hansson, Gunnar C

    2014-02-01

    The inner mucus layer in mouse colon normally separates bacteria from the epithelium. Do humans have a similar inner mucus layer and are defects in this mucus layer a common denominator for spontaneous colitis in mice models and ulcerative colitis (UC)? The colon mucus layer from mice deficient in Muc2 mucin, Core 1 O-glycans, Tlr5, interleukin 10 (IL-10) and Slc9a3 (Nhe3) together with that from dextran sodium sulfate-treated mice was immunostained for Muc2, and bacterial localisation in the mucus was analysed. All murine colitis models revealed bacteria in contact with the epithelium. Additional analysis of the less inflamed IL-10(-/-) mice revealed a thicker mucus layer than wild-type, but the properties were different, as the inner mucus layer could be penetrated both by bacteria in vivo and by fluorescent beads the size of bacteria ex vivo. Clear separation between bacteria or fluorescent beads and the epithelium mediated by the inner mucus layer was also evident in normal human sigmoid colon biopsy samples. In contrast, mucus on colon biopsy specimens from patients with UC with acute inflammation was highly penetrable. Most patients with UC in remission had an impenetrable mucus layer similar to that of controls. Normal human sigmoid colon has an inner mucus layer that is impenetrable to bacteria. The colon mucus in animal models that spontaneously develop colitis and in patients with active UC allows bacteria to penetrate and reach the epithelium. Thus colon mucus properties can be modulated, and this suggests a novel model of UC pathophysiology.

  3. Including cetaceans in multi-species assessment models using strandings data: why, how and what can we do about it?

    Directory of Open Access Journals (Sweden)

    Camilo Saavedra

    2014-07-01

    Full Text Available Single-species models have been commonly used to assess fish stocks in the past. Since these models have relatively simple data requirements, they sometimes provide the only tool available to assess the status of a stock when the data are not sufficient to develop more complex models. However, these models have been criticized for several reasons, since they provide reference points independently for each species assessed, ignoring their interactions. For example, several studies suggest that even more substantial reductions in fishing mortality may be necessary to ensure MSY is reached when taking multispecies interactions into consideration. Therefore, and as Pauly et al. (1998) stated, single-species analysis may mislead researchers and managers into neglecting the gear and trophic interactions which ultimately determine stocks' long-term yields and ecosystem health. Ecosystem or multispecies models offer a number of advantages over single-species models. As stated in the workshop “Incorporating ecosystem considerations into stock assessments and management advice” (Mace, 2000), two general improvements are: a better appreciation of the effects of fishing on ecosystem structure and function, and a better appreciation of the need to consider the value of marine ecosystems for functions other than harvesting fish. As disadvantages, multispecies models are statistically complex and include trophic relationships requiring more information (e.g. good estimates of the biological parameters of each species and generally a full quantification of the diet, sometimes available through the analysis of stomach contents). To reduce the number of species and therefore the amount of information needed, Minimum Realistic Models (MRMs) represent an intermediate level of complexity, where only the subset of the ecosystem important for the issue under consideration is modeled. This approach offers the advantage of allowing a refinement of our estimates and can help answer more targeted

  4. Localized Sympathectomy Reduces Mechanical Hypersensitivity by Restoring Normal Immune Homeostasis in Rat Models of Inflammatory Pain.

    Science.gov (United States)

    Xie, Wenrui; Chen, Sisi; Strong, Judith A; Li, Ai-Ling; Lewkowich, Ian P; Zhang, Jun-Ming

    2016-08-17

    Some forms of chronic pain are maintained or enhanced by activity in the sympathetic nervous system (SNS), but attempts to model this have yielded conflicting findings. The SNS has both pro- and anti-inflammatory effects on immunity, confounding the interpretation of experiments using global sympathectomy methods. We performed a "microsympathectomy" by cutting the ipsilateral gray rami where they entered the spinal nerves near the L4 and L5 DRG. This led to profound sustained reductions in pain behaviors induced by local DRG inflammation (a rat model of low back pain) and by a peripheral paw inflammation model. Effects of microsympathectomy were evident within one day, making it unlikely that blocking sympathetic sprouting in the local DRGs or hindpaw was the sole mechanism. Prior microsympathectomy greatly reduced hyperexcitability of sensory neurons induced by local DRG inflammation observed 4 d later. Microsympathectomy reduced local inflammation and macrophage density in the affected tissues (as indicated by paw swelling and histochemical staining). Cytokine profiling in locally inflamed DRG showed increases in pro-inflammatory Type 1 cytokines and decreases in the Type 2 cytokines present at baseline, changes that were mitigated by microsympathectomy. Microsympathectomy was also effective in reducing established pain behaviors in the local DRG inflammation model. We conclude that the effect of sympathetic fibers in the L4/L5 gray rami in these models is pro-inflammatory. This raises the possibility that therapeutic interventions targeting gray rami might be useful in some chronic inflammatory pain conditions. Sympathetic blockade is used for many pain conditions, but preclinical studies show both pro- and anti-nociceptive effects. The sympathetic nervous system also has both pro- and anti-inflammatory effects on immune tissues and cells. We examined effects of a very localized sympathectomy. By cutting the gray rami to the spinal nerves near the lumbar sensory

  5. Age- and sex-dependent model for estimating radioiodine dose to a normal thyroid

    International Nuclear Information System (INIS)

    Killough, G.G.; Eckerman, K.F.

    1985-01-01

    This paper describes the derivation of an age- and sex-dependent model of radioiodine dosimetry in the thyroid and the application of the model to estimating the thyroid dose for each of 4215 patients who were exposed to ¹³¹I in diagnostic and therapeutic procedures. The model was made to conform to these data requirements by the use of age-specific estimates of the biological half-time of iodine in the thyroid and an age- and sex-dependent representation of the mass of the thyroid. Also, it was assumed that the thyroid burden was maximum 24 hours after administration (the ¹³¹I dose is not critically sensitive to this assumption). The metabolic model is of the form A(t) = K[exp(-μ_1 t) - exp(-μ_2 t)] (μCi), where μ_i = λ_r + λ_i^b (i = 1, 2), λ_r is the radiological decay-rate coefficient, and the λ_i^b are biological removal-rate coefficients. The values of λ_i^b are determined by solving a nonlinear equation that depends on assumptions about the time of maximum uptake and the eventual biological loss rate (through which age dependence enters). The value of K may then be calculated from knowledge of the uptake at a particular time. The dosimetric S-factor (rad/μCi-day) is based on specific absorbed fractions for photons of energy ranging from 0.01 to 4.0 MeV for thyroid masses from 1.29 to 19.6 g; the functional form of the S-factor also involves the thyroid mass explicitly, through which the dependence on age and sex enters. An analysis of sensitivity of the model to uncertainties in the thyroid mass and the biological removal rate for several age groups is reported. The model could prove useful in the dosimetry of very short-lived radioiodines. Tables of age- and sex-dependent coefficients are provided to enable readers to make their own calculations. 12 refs., 5 figs., 4 tabs
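
    A small numerical sketch of the biexponential retention model is given below: it evaluates A(t) and integrates it analytically to obtain the time-integrated activity, which multiplied by the S-factor gives the thyroid dose. All numerical values other than the ¹³¹I physical half-life are placeholders, not the paper's age- and sex-dependent coefficients.

```python
import numpy as np

def thyroid_activity(t_days, K, mu1, mu2):
    """A(t) = K[exp(-mu_1 t) - exp(-mu_2 t)] in microcuries (the record's form)."""
    return K * (np.exp(-mu1 * t_days) - np.exp(-mu2 * t_days))

def thyroid_dose(K, mu1, mu2, s_factor):
    """Dose (rad) = S (rad/uCi-day) x time-integrated activity (uCi-day),
    integrating the biexponential analytically from 0 to infinity."""
    integrated_activity = K * (1.0 / mu1 - 1.0 / mu2)
    return s_factor * integrated_activity

# Illustration only: K, the biological removal rates and the S-factor are placeholders
lam_r = np.log(2) / 8.02                 # 131-I physical decay constant, 1/day
mu1, mu2 = lam_r + 0.009, lam_r + 3.0    # mu_i = lam_r + lam_i^b
print(thyroid_activity(1.0, K=10.0, mu1=mu1, mu2=mu2))
print(thyroid_dose(K=10.0, mu1=mu1, mu2=mu2, s_factor=0.3))
```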

  6. A Self-consistent Model of the Coronal Heating and Solar Wind Acceleration Including Compressible and Incompressible Heating Processes

    Science.gov (United States)

    Shoda, Munehito; Yokoyama, Takaaki; Suzuki, Takeru K.

    2018-02-01

    We propose a novel one-dimensional model that includes both shock and turbulence heating and quantify how these processes contribute to heating the corona and driving the solar wind. Compressible MHD simulations allow us to automatically consider shock formation and dissipation, while turbulent dissipation is modeled via a one-point closure based on Alfvén wave turbulence. Numerical simulations were conducted with different photospheric perpendicular correlation lengths λ_0, which is a critical parameter of Alfvén wave turbulence, and different root-mean-square photospheric transverse-wave amplitudes δv_0. For the various λ_0, we obtain a low-temperature chromosphere, high-temperature corona, and supersonic solar wind. Our analysis shows that turbulence heating is always dominant when λ_0 ≲ 1 Mm. This result does not mean that we can ignore the compressibility, because the analysis indicates that the compressible waves and their associated density fluctuations enhance the Alfvén wave reflection and therefore the turbulence heating. The density fluctuation and the cross-helicity are strongly affected by λ_0, while the coronal temperature and mass-loss rate depend weakly on λ_0.

  7. Hydraulic Model for Drinking Water Networks, Including Household Connections; Modelo hidraulico para redes de agua potable con tomas domiciliarias

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero Angulo, Jose Oscar [Universidad Autonoma de Sinaloa (Mexico); Arreguin Cortes, Felipe [Instituto Mexicano de Tecnologia del Agua, Jiutepec, Morelos (Mexico)

    2002-03-01

    This paper presents a hydraulic simulation model for drinking water networks that includes elements not currently considered: household connections, distribution pipes with spatially variable flow rate, and the secondary network. The model solves the number of equations that a conventional model would require by following an indirect procedure for the solution of large systems of equations. Household connection performance is considered to depend on the water pressure and on the way in which users operate the taps of these connections. This approach allows a better understanding of the performance of drinking water supply networks and helps solve problems that demand a more precise hydraulic simulation, such as water quality variations, leaks in networks, and the influence of home water tanks as regulating devices.

  8. Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models

    OpenAIRE

    Liang, Yuli

    2015-01-01

    This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of some specific multivariate two-level data when both compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both circularity and exchangeability present in the data....
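
    The thesis abstract does not reproduce the covariance structures themselves. As a rough illustration of what a block circular Toeplitz covariance matrix can look like for two-level data, the sketch below assembles a block-circulant matrix from compound-symmetric blocks, with the circular-symmetry condition B_k = B_{n-k}; the block sizes and parameter values are placeholders.

```python
import numpy as np

def compound_symmetric(p, var, rho):
    """p x p compound-symmetry block: var on the diagonal, var*rho off it."""
    return var * (rho * np.ones((p, p)) + (1 - rho) * np.eye(p))

def block_circulant(blocks):
    """Assemble a block-circulant matrix from p x p blocks B_0..B_{n-1};
    block row i uses block B_{(j - i) mod n} in block column j."""
    n, p = len(blocks), blocks[0].shape[0]
    out = np.zeros((n * p, n * p))
    for i in range(n):
        for j in range(n):
            out[i*p:(i+1)*p, j*p:(j+1)*p] = blocks[(j - i) % n]
    return out

# Hypothetical example: n = 4 second-level units, p = 3 repeated measures;
# circular symmetry requires B_1 = B_3, so the block list is [B0, B1, B2, B1].
b0 = compound_symmetric(3, var=2.0, rho=0.5)
b1 = compound_symmetric(3, var=2.0, rho=0.3)
b2 = compound_symmetric(3, var=2.0, rho=0.1)
sigma = block_circulant([b0, b1, b2, b1])
print(sigma.shape, np.allclose(sigma, sigma.T))
```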

  9. Age- and sex-dependent model for estimating radioiodine dose to a normal thyroid

    International Nuclear Information System (INIS)

    Killough, G.G.; Eckerman, K.F.

    1986-01-01

    This paper describes the derivation of an age- and sex-dependent model of radioiodine dosimetry in the thyroid and the application of the model to estimating the thyroid dose for each of 4215 patients who were exposed to ¹³¹I in diagnostic and therapeutic procedures. In most cases, the available data consisted of the patient's age at the time of administration, the patient's sex, the quantity of activity administered, the clinically-determined uptake of radioiodine by the thyroid, and the time after administration at which the uptake was determined. The metabolic model is of the form A(t) = K[exp(-μ_1 t) - exp(-μ_2 t)] (μCi), where μ_i = λ_r + λ_i^b (i = 1, 2), λ_r is the radiological decay-rate coefficient, and the λ_i^b are biological removal-rate coefficients. The values of λ_i^b are determined by solving a nonlinear equation that depends on assumptions about the time of maximum uptake and the eventual biological loss rate (through which age dependence enters). The value of K may then be calculated from knowledge of the uptake at a particular time. The dosimetric S-factor (rad/μCi-day) is based on specific absorbed fractions for photons of energy ranging from 0.01 to 4.0 MeV for thyroid masses from 1.29 to 19.6 g; the functional form of the S-factor also involves the thyroid mass explicitly, through which the dependence on age and sex enters. An analysis of sensitivity of the model to uncertainties in the thyroid mass and the biological removal rate for several age groups is reported. 12 references, 5 figures, 5 tables

  10. Identification of a Developmental Gene Expression Signature, Including HOX Genes, for the Normal Human Colonic Crypt Stem Cell Niche: Overexpression of the Signature Parallels Stem Cell Overpopulation During Colon Tumorigenesis

    OpenAIRE

    Bhatlekar, Seema; Addya, Sankar; Salunek, Moreh; Orr, Christopher R.; Surrey, Saul; McKenzie, Steven; Fields, Jeremy Z.; Boman, Bruce M.

    2013-01-01

    Our goal was to identify a unique gene expression signature for human colonic stem cells (SCs). Accordingly, we determined the gene expression pattern for a known SC-enriched region—the crypt bottom. Colonic crypts and isolated crypt subsections (top, middle, and bottom) were purified from fresh, normal, human, surgical specimens. We then used an innovative strategy that used two-color microarrays (∼18,500 genes) to compare gene expression in the crypt bottom with expression in the other cryp...

  11. Dilatant normal faulting in jointed cohesive rocks: insights from physical modeling

    Science.gov (United States)

    Kettermann, Michael; von Hagke, Christoph; Urai, Janos

    2016-04-01

    Dilatant faults often form in rocks containing pre-existing joints, but the effects of joints on fault segment linkage and fracture connectivity are not well understood. Studying the evolution of dilatancy and the influence of fractures on fault development provides insights into the geometry of fault zones in brittle rocks and eventually allows for predicting their subsurface appearance. We assess the evolution of dilatant faults in fractured rocks using analogue models with cohesive powder. The upper layer contains pre-formed joint sets, and we vary the angle between joints and a rigid basement fault in our experiments. Analogue models were carried out in a manually driven deformation box (30x28x20 cm) with a 60° dipping pre-defined basement fault and 4.5 cm of displacement. To produce open joints prior to faulting, sheets of paper were mounted in the box to a depth of 5 cm at a spacing of 2.5 cm. Powder was then sieved into the box, embedding the paper almost entirely (column height of 19 cm), and the paper was removed. We tested the influence of different angles between the strike of the basement fault and the joint set (joint-fault (JF) angles of 0°, 4°, 8°, 12°, 16°, 20°, and 25°). During deformation we captured structural information by time-lapse photography that allows particle image velocimetry (PIV) analyses to detect localized deformation at every increment of displacement. Post-mortem photogrammetry preserves the final 3-dimensional structure of the fault zone. Results show robust structural features in the models: damage zone width increases by about 50% and the number of secondary fractures within this zone by more than 100% with increasing JF-angle. Interestingly, the map-view area fraction of open gaps increases by only 3%. Secondary joints and fault step-overs are oriented at a high angle to the primary joint orientation. Due to the length of the pre-existing open joints, areas far beyond the fractured regions are connected to the system. In contrast

  12. GOCO05c: A New Combined Gravity Field Model Based on Full Normal Equations and Regionally Varying Weighting

    Science.gov (United States)

    Fecher, T.; Pail, R.; Gruber, T.

    2017-05-01

    GOCO05c is a gravity field model computed as a combined solution of a satellite-only model and a global data set of gravity anomalies. It is resolved up to degree and order 720. It is the first model applying regionally varying weighting. Since this causes strong correlations among all gravity field parameters, the resulting full normal equation system with a size of 2 TB had to be solved rigorously by applying high-performance computing. GOCO05c is the first combined gravity field model independent of EGM2008 that contains GOCE data of the whole mission period. The performance of GOCO05c is externally validated by GNSS-levelling comparisons, orbit tests, and computation of the mean dynamic topography, achieving at least the quality of existing high-resolution models. Results show that the additional GOCE information is highly beneficial in insufficiently observed areas, and that due to the weighting scheme of individual data the spectral and spatial consistency of the model is significantly improved. Due to usage of fill-in data in specific regions, the model cannot be used for physical interpretations in these regions.

  13. Current status of the FASTGRASS/PARAGRASS models for fission product release from LWR fuel during normal and accident conditions

    International Nuclear Information System (INIS)

    Rest, J.; Zawadski, S.A.; Piasecka, M.

    1983-10-01

    The theoretical FASTGRASS model for the prediction of the behavior of the gaseous and volatile fission products in nuclear fuels under normal and transient conditions has undergone substantial improvements. The major improvements have been in the atomistic and bubble diffusive flow models, in the models for the behavior of gas bubbles on grain surfaces, and in the models for the behavior of the volatile fission products iodine and cesium. The theory has received extensive verification over a wide range of fuel operating conditions, and can be regarded as a state-of-the-art model based on our current level of understanding of fission product behavior. PARAGRASS is an extremely efficient, mechanistic computer code with the capability of modeling steady-state and transient fission-product behavior. The models in PARAGRASS are based on the more detailed ones in FASTGRASS. PARAGRASS updates for the FRAPCON (PNL), FRAP-T (INEL), and SCDAP (INEL) codes have recently been completed and implemented. Results from an extensive FASTGRASS verification are presented and discussed for steady-state and transient conditions. In addition, FASTGRASS predictions for fission product release rate constants are compared with those in NUREG-0772. 21 references, 13 figures

  14. An experimental randomized study of six different ventilatory modes in a piglet model with normal lungs

    DEFF Research Database (Denmark)

    Nielsen, J B; Sjöstrand, U H; Henneberg, S W

    1991-01-01

    ventilatory modes. Also the mean airway pressures were lower with the HFV modes 8-9 cm H2O compared to 11-14 cm H2O for the other modes. The gas distribution was evaluated by N2 wash-out and a modified lung clearance index. All modes showed N2 wash-out according to a two-compartment model. The SV-20P mode had...... the fastest wash-out, but the HFV-60 and HFV-20 ventilatory modes also showed a faster N2 wash-out than the others.(ABSTRACT TRUNCATED AT 250 WORDS)...

  15. Evaluation of samarium-153 and holmium-166-EDTMP in the normal baboon model

    Energy Technology Data Exchange (ETDEWEB)

    Louw, W.K.A.; Dormehl, I.C.; Rensburg, A.J. van; Hugo, N.; Alberts, A.S.; Forsyth, O.E.; Beverley, G.; Sweetlove, M.A.; Marais, J.; Loetter, M.G.; Aswegen, A. van

    1996-11-01

    Bone-seeking radiopharmaceuticals such as ethylenediaminetetramethylene phosphonate (EDTMP) complexes of samarium-153 and holmium-166 are receiving considerable attention for therapeutic treatment of bone metastases. In this study, using the baboon experimental model, multicompartmental analysis revealed that with regard to pharmacokinetics, biodistribution, and skeletal localisation, {sup 166}Ho-EDTMP was significantly inferior to {sup 153}Sm-EDTMP and {sup 99m}Tc-MDP. A more suitable {sup 166}Ho-bone-seeking agent should thus be sought for closer similarity to {sup 153}Sm-EDTMP to exploit fully the therapeutic potential of its shorter half-life and more energetic beta radiation.

  16. Escitalopram and NHT normalized stress-induced anhedonia and molecular neuroadaptations in a mouse model of depression.

    Directory of Open Access Journals (Sweden)

    Or Burstein

    Full Text Available Anhedonia is defined as a diminished ability to obtain pleasure from otherwise positive stimuli. Anxiety and mood disorders have been previously associated with dysregulation of the reward system, with anhedonia as a core element of major depressive disorder (MDD). The aim of the present study was to investigate whether stress-induced anhedonia could be prevented by treatment with escitalopram or a novel herbal treatment (NHT) in an animal model of depression. Unpredictable chronic mild stress (UCMS) was administered for 4 weeks to ICR outbred mice. Following stress exposure, animals were randomly assigned to pharmacological treatment groups (i.e., saline, escitalopram or NHT). Treatments were delivered for 3 weeks. Hedonic tone was examined via ethanol and sucrose preferences. Biological indices pertinent to MDD and anhedonia were assessed: namely, hippocampal brain-derived neurotrophic factor (BDNF) and striatal dopamine receptor D2 (Drd2) mRNA expression levels. The results indicate that the UCMS-induced reductions in ethanol or sucrose preferences were normalized by escitalopram or NHT. This implies a resemblance between sucrose and ethanol in their hedonic-eliciting property. On the neurobiological level, the UCMS-induced reduction in hippocampal BDNF levels was normalized by escitalopram or NHT, while the UCMS-induced reduction in striatal Drd2 mRNA levels was normalized solely by NHT. The results accentuate the association of stress and anhedonia, and pinpoint a distinct effect for NHT on striatal Drd2 expression.

  17. Determination of normal values for navicular drop during walking: a new model correcting for foot length and gender

    DEFF Research Database (Denmark)

    Nielsen, Rasmus G; Rathleff, Michael S; Simonsen, Ole H

    2009-01-01

    participants. Normal values have not yet been established as foot length, age, gender, and Body Mass Index (BMI) may influence the navicular drop. The purpose of the study was to investigate the influence of foot length, age, gender, and BMI on the navicular drop during walking. METHODS: Navicular drop...... increased by 0.40 mm for males and 0.31 mm for females. Linear models were created to calculate the navicular drop relative to foot length. CONCLUSION: The study demonstrated that the dynamic navicular drop is influenced by foot length and gender. Lack of adjustment for these factors may explain, at least...

  18. 75 FR 70090 - Special Conditions: Bombardier Inc. Model CL-600-2E25 Airplane, Operation Without Normal...

    Science.gov (United States)

    2010-11-17

    ... Airplane, Operation Without Normal Electrical Power AGENCY: Federal Aviation Administration (FAA), DOT... without normal electrical power,'' requires safe operation in VFR conditions for at least five minutes with inoperative normal electrical power. The applicable airworthiness regulations do not contain...

  19. Dynamic changes in sleep pattern during post-partum in normal pregnancy in rat model.

    Science.gov (United States)

    Sivadas, Neelima; Radhakrishnan, Arathi; Aswathy, B S; Kumar, Velayudhan Mohan; Gulia, Kamalesh K

    2017-03-01

    To develop an animal model for studies on peri-partum sleep disorders, sleep patterns in female Wistar rats during pregnancy, post-partum and after weaning, were assessed and associated adaptive changes in their anxiety were examined. Adult nulliparous female rats, maintained in standard laboratory conditions with ad libitum food and water, were surgically implanted with electroencephalogram and electromyogram electrodes under anaesthesia for objective assessment of sleep-wakefulness (S-W). After post-surgical recovery, three control recordings of S-W were taken for 24h before the animals were kept for mating. After confirmation of pregnancy, S-W recordings were acquired during different days of pregnancy, post-partum lactation/nursing days, and also after weaning. Their anxiety levels were tested in the elevated plus maze. During pregnancy, sleep increased primarily due to increase in light non-REM sleep during dark period. There was an increase in non-REM sleep delta power after parturition, though the sleep was fragmented, especially during daytime. Simultaneous behavioural recording showed increased anxiety during third trimester of pregnancy and gradual reversal of it after parturition. This is the first report where diurnal and nocturnal variations in S-W and delta power, along with adaptive changes in anxiety, were studied before, during and after pregnancy. This study also provides an animal model for drug trials and studies on sleep disorders during peri-partum window. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Dynamic experiments with high bisphenol-A concentrations modelled with an ASM model extended to include a separate XOC degrading microorganism

    DEFF Research Database (Denmark)

    Lindblom, Erik Ulfson; Press-Kristensen, Kåre; Vanrolleghem, P.A.

    2009-01-01

    with the endocrine disrupting XOC bisphenol-A (BPA) in an activated sludge process with real wastewater were used to hypothesize an ASM-based process model including aerobic growth of a specific BPA-degrading microorganism and sorption of BPA to sludge. A parameter estimation method was developed, which...
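
    The record is truncated, so the sketch below is only a generic illustration of the two extensions it mentions, aerobic Monod-type growth of a specific BPA-degrading microorganism and sorption of BPA to sludge, written as a small ODE system; the parameter values and state variables are placeholders, not the calibrated ASM extension.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (illustrative only, not calibrated values)
MU_MAX, K_S, Y, B_DECAY = 0.08, 0.5, 0.4, 0.01   # 1/h, mg/L, gX/gBPA, 1/h
K_SORP, K_DESORP = 0.2, 0.05                      # sorption/desorption rates, 1/h

def rhs(t, y):
    s, x, s_sorbed = y            # dissolved BPA, BPA degraders, sorbed BPA
    growth = MU_MAX * s / (K_S + s) * x
    ds = -growth / Y - K_SORP * s + K_DESORP * s_sorbed
    dx = growth - B_DECAY * x
    dss = K_SORP * s - K_DESORP * s_sorbed
    return [ds, dx, dss]

sol = solve_ivp(rhs, [0.0, 48.0], [5.0, 0.2, 0.0], dense_output=True)
print(sol.y[:, -1])               # state after 48 h
```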

  1. Modelling the residual stresses and microstructural evolution in Friction Stir Welding of AA2024-T3 including the Wagner-Kampmann precipitation model

    DEFF Research Database (Denmark)

    Sonne, Mads Rostgaard; Hattel, Jesper Henri

    In this work, a numerical finite element model for friction stir welding of 2024-T3 aluminum alloy, consisting of a heat transfer analysis and a sequentially coupled quasi-static stress analysis is proposed. Metallurgical softening of the material is properly considered and included...

  2. Hippocampal proteomics defines pathways associated with memory decline and resilience in normal aging and Alzheimer's disease mouse models.

    Science.gov (United States)

    Neuner, Sarah M; Wilmott, Lynda A; Hoffmann, Brian R; Mozhui, Khyobeni; Kaczorowski, Catherine C

    2017-03-30

    Alzheimer's disease (AD), the most common form of dementia in the elderly, has no cure. Thus, the identification of key molecular mediators of cognitive decline in AD remains a top priority. As aging is the most significant risk factor for AD, the goal of this study was to identify altered proteins and pathways associated with the development of normal aging and AD memory deficits, and identify unique proteins and pathways that may contribute to AD-specific symptoms. We used contextual fear conditioning to diagnose 8-month-old 5XFAD and non-transgenic (Ntg) mice as having either intact or impaired memory, followed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) to quantify hippocampal membrane proteins across groups. Subsequent analysis detected 113 proteins differentially expressed relative to memory status (intact vs impaired) in Ntg mice and 103 proteins in 5XFAD mice. Thirty-six proteins, including several involved in neuronal excitability and synaptic plasticity (e.g., GRIA1, GRM3, and SYN1), were altered in both normal aging and AD. Pathway analysis highlighted HDAC4 as a regulator of observed protein changes in both genotypes and identified the REST epigenetic regulatory pathway and Gi intracellular signaling as AD-specific pathways involved in regulating the onset of memory deficits. Comparing the hippocampal membrane proteome of Ntg versus AD, regardless of cognitive status, identified 138 differentially expressed proteins, including confirmatory proteins APOE and CLU. Overall, we provide a novel list of putative targets and pathways with therapeutic potential, including a set of proteins associated with cognitive status in normal aging mice or gene mutations that cause AD. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions.

    Science.gov (United States)

    Testolin, Alberto; Zorzi, Marco

    2016-01-01

    Connectionist models can be characterized within the more general framework of probabilistic graphical models, which allow complex statistical distributions involving a large number of interacting variables to be described efficiently. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage.
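
    As a concrete, minimal example of a stochastic generative neural network of the kind discussed, the sketch below trains a small restricted Boltzmann machine with one-step contrastive divergence on placeholder binary data; it is an illustration of the model class, not a reproduction of any of the cognitive models reviewed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 20, 8, 0.05
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

# Placeholder binary "data"; a real model would use structured input patterns
data = (rng.random((500, n_visible)) < 0.3).astype(float)

for epoch in range(20):
    for v0 in data:
        ph0 = sigmoid(v0 @ W + b_h)          # hidden probabilities given the data
        h0 = sample(ph0)
        pv1 = sigmoid(h0 @ W.T + b_v)        # one-step stochastic reconstruction
        v1 = sample(pv1)
        ph1 = sigmoid(v1 @ W + b_h)
        # CD-1 update: positive phase minus negative phase
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b_v += lr * (v0 - v1)
        b_h += lr * (ph0 - ph1)
```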

  4. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    Science.gov (United States)

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become negligible at the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases, or inaccuracy in the calculation of these modes, may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of a `family of secular functions', which we herein call `adaptive mode observers', is naturally introduced to implement this strategy; the underlying idea, distinctly noted here for the first time, may be generalized to other applications such as free oscillations, or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method: mode observers associated with only the free surface and the low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee, without excessive calculation, that no physically existent modes are lost and that all modes are computed with high precision. Finally, the conventional definition of the fundamental mode is reconsidered, a reconsideration that is required in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation by using a smaller number of layers, aided by the concept of the `turning point', our algorithm is remarkably efficient as well as stable and accurate, and can be used as a powerful tool for a wide range of related applications.
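
    The generalized reflection/transmission-coefficient machinery is not reproducible from the abstract, but the basic root-determining task can be illustrated with a toy secular function: the classic Love-wave dispersion relation for a single layer over a half-space, written in a pole-free form, scanned over phase velocity and refined with Brent's method. All model parameters below are placeholders.

```python
import numpy as np
from scipy.optimize import brentq

# Toy single-layer-over-half-space Love-wave secular function (illustration only;
# the paper's generalized R/T-coefficient formulation is far more general).
H, BETA1, BETA2, MU1, MU2 = 2000.0, 2000.0, 3500.0, 1.2e10, 4.0e10
OMEGA = 2.0 * np.pi * 1.0        # angular frequency for f = 1 Hz

def secular(c):
    s1 = np.sqrt(1.0 / BETA1**2 - 1.0 / c**2)
    s2 = np.sqrt(1.0 / c**2 - 1.0 / BETA2**2)
    # pole-free form of tan(OMEGA*H*s1) = MU2*s2 / (MU1*s1)
    return MU1 * s1 * np.sin(OMEGA * H * s1) - MU2 * s2 * np.cos(OMEGA * H * s1)

# Coarse scan between the layer and half-space shear velocities, then refine
# every bracketed sign change with Brent's method.
grid = np.linspace(BETA1 * 1.001, BETA2 * 0.999, 2000)
vals = secular(grid)
roots = [brentq(secular, a, b)
         for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
         if fa * fb < 0.0]
print([round(r, 1) for r in roots])   # phase velocities of the trapped modes
```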

  5. INCLUDING RISK IN ECONOMIC FEASIBILITY ANALYSIS:A STOCHASTIC SIMULATION MODEL FOR BLUEBERRY INVESTMENT DECISIONS IN CHILE

    Directory of Open Access Journals (Sweden)

    GERMÁN LOBOS

    2015-12-01

    Full Text Available ABSTRACT The traditional net present value (NPV) method used to analyze the economic profitability of an investment (based on a deterministic approach) does not adequately represent the implicit risk associated with different but correlated input variables. Using a stochastic simulation approach for evaluating the profitability of blueberry (Vaccinium corymbosum L.) production in Chile, the objective of this study is to illustrate the complexity of including risk in economic feasibility analysis when the project is subject to several correlated risks. The results of the simulation analysis suggest that not including the intratemporal correlation between input variables underestimates the risk associated with investment decisions. The methodological contribution of this study illustrates the complexity of the interrelationships between uncertain variables and their impact on the advisability of carrying out this type of business in Chile. The steps for the analysis of economic viability were: First, adjusted probability distributions for the stochastic input variables (SIV) were simulated and validated. Second, the random values of the SIV were used to calculate random values of variables such as production, revenues, costs, depreciation, taxes and net cash flows. Third, the complete stochastic model was simulated with 10,000 iterations using random values for the SIV. This gave the information needed to estimate the probability distributions of the stochastic output variables (SOV), such as the net present value, internal rate of return, value at risk, average cost of production, contribution margin and return on capital. Fourth, the complete stochastic model simulation results were used to analyze alternative scenarios and provide the results to decision makers in the form of probabilities, probability distributions, and probabilistic forecasts for the SOV. The main conclusion is that this project is a profitable alternative investment in fruit trees in
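
    A minimal sketch of the kind of stochastic NPV simulation described, with intratemporally correlated input variables drawn from a multivariate normal distribution and 10,000 iterations, is given below; the distributions, correlation, costs and discount rate are placeholders, not the study's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(1)
N_SIM, YEARS, RATE, INVESTMENT = 10_000, 10, 0.08, 100_000.0

# Hypothetical correlated inputs: yield (t/ha) and price (USD/kg); the
# intratemporal correlation is exactly what a deterministic NPV ignores.
mean = np.array([10.0, 4.0])
sd = np.array([2.0, 0.8])
corr = np.array([[1.0, -0.4], [-0.4, 1.0]])
cov = corr * np.outer(sd, sd)

draws = rng.multivariate_normal(mean, cov, size=(N_SIM, YEARS))
yields, prices = draws[..., 0].clip(min=0), draws[..., 1].clip(min=0)

revenue = yields * 1000.0 * prices            # kg/ha times USD/kg
cash_flow = revenue - 20_000.0                # hypothetical annual cost, USD/ha
discount = (1.0 + RATE) ** -np.arange(1, YEARS + 1)
npv = cash_flow @ discount - INVESTMENT

print("mean NPV:", round(npv.mean()), " P(NPV < 0):", round((npv < 0).mean(), 3))
```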

  6. A new simulation model estimates micronutrient levels to include in fortified blended foods used in food aid programs.

    Science.gov (United States)

    Fleige, Lisa E; Sahyoun, Nadine R; Murphy, Suzanne P

    2010-02-01

    Current micronutrient levels in Public Law 480 fortified blended foods (FBF) may not be appropriate for all food aid beneficiaries, particularly infants and/or young children and pregnant and/or lactating women. A simulation model was developed to determine the micronutrient fortification levels to include in FBF for food aid programs with the goal of reducing the risk of inadequate micronutrient intakes without exceeding the tolerable upper intake level (UL) for any recipient group. For each micronutrient, the age and gender group with the highest daily Recommended Nutrient Intake (RNI) relative to energy requirement was identified and the effect of providing different percentages of that RNI (66, 75, and 100%) was simulated. In this modeling exercise, we also examined consumption of the FBF at 25 (the usual level), 50, and 100% of daily energy requirement. Results indicated that 2 FBF products are needed: a complementary food for age 6-36 mo and a supplementary food for the older groups. Both of the FBF could be fortified to supply at least 75% of the RNI to all groups, without exceeding the UL for most nutrients, if consumed at 25% of the energy requirement. Even if consumed at 50% of energy requirements, mean intakes of most micronutrients would not exceed the UL, although at 100% of the energy requirement, several micronutrients were undesirably high. We conclude that fortifying an FBF to provide 75% of the RNI would be appropriate for most micronutrients, but this level of fortification would not be appropriate for long-term consumption of the FBF at 100% of the energy requirements.

  7. Model of spur processes in aqueous radiation chemistry including spur overlap and a novel initial hydrated electron distribution

    International Nuclear Information System (INIS)

    Short, D.R.

    1980-01-01

    Results are presented from computer calculations based upon an improved diffusion-kinetic model of the spur which includes a novel initial distribution for the hydrated electron and an approximate mathematical treatment of the overlap of spurs in three dimensions. Experimental data for the early decay of the hydrated electron and hydroxyl radical in electron-pulse-irradiated, solute-free and air-free water are fit within experimental uncertainty by adjustment of the initial spatial distributions of spur intermediates and the average energy deposited in the spur. Using the same values of these parameters, the hydrated electron decay is computed for times from 1 ps to 10 μs after the radiation pulse. The results of such calculations for various conditions of pulse dose and concentrations of scavengers of individual primary chemical species in the spur are compared with corresponding experimental data obtained predominantly from water and aqueous solutions irradiated with 10 to 15 MeV electron pulses. Very good agreement between calculated and experimental hydrated electron decay in pure water is observed for the entire time range studied when a pulse dose of approximately 7900 rads is modeled, but the calculated and experimental curves are observed to deviate for times greater than 10 ns when low pulse doses and low scavenger concentrations are considered. It is shown that this deviation between the experimental and calculated hydrated electron decay cannot be explained by assuming the presence of a hydrated-electron-scavenging impurity, nor by employing a distribution of nearest-neighbor interspur distances to refine the overlap approximation.

  8. Modeling of FMISO [F18] nanoparticle PET tracer in normal-cancerous tissue based on real clinical image.

    Science.gov (United States)

    Asgari, Hanie; Soltani, M; Sefidgar, Mostafa

    2018-02-03

    Hypoxia, one of the principal properties of tumor cells, is a reaction to the deprivation of oxygen. The location of tumor cells could be identified by assessment of oxygen and nutrient levels in the human body. Positron emission tomography (PET) is a well-known non-invasive method that is able to measure hypoxia based on the dynamics of the FMISO (fluoromisonidazole) tracer. This paper aims to study the PET tracer concentration through convection-diffusion-reaction equations in a real human capillary-like network. A non-uniform oxygen pressure along the capillary path and a convection mechanism for FMISO transport are taken into account to accurately model the characteristics of the tracer. To this end, a multi-scale model consisting of laminar blood flow through the capillary network, interstitial pressure, oxygen pressure, and FMISO diffusion and convection transport in the extravascular region is developed. The present model considers both normal and tumor tissue regions in the computational domain. The accuracy of the numerical model is verified against experimental results available in the literature. Both convection and diffusion transport mechanisms are employed to calculate the concentration of FMISO in the normal and tumor sub-domains. The influences of intravascular oxygen pressure, FMISO transport mechanisms, capillary density and different types of tissue on the FMISO concentration have been investigated. According to the results (Table 4), the convective transport of FMISO molecules is negligible, but including it improves the accuracy of the proposed model. The approach of the present study can be employed to investigate the effects of various parameters, such as tumor shape, on the dynamic behavior of different PET tracers, such as FDG, and can be extended to different case-study problems, such as drug delivery. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Cone photoreceptors develop normally in the absence of functional rod photoreceptors in a transgenic swine model of retinitis pigmentosa.

    Science.gov (United States)

    Fernandez de Castro, Juan P; Scott, Patrick A; Fransen, James W; Demas, James; DeMarco, Paul J; Kaplan, Henry J; McCall, Maureen A

    2014-04-17

    Human and swine retinas have morphological and functional similarities. In the absence of primate models, the swine is an attractive model to study retinal function and disease, with its cone-rich visual streak, our ability to manipulate their genome, and the differences in susceptibility of rod and cone photoreceptors to disease. We characterized the normal development of cone function and its subsequent decline in a P23H rhodopsin transgenic (TgP23H) miniswine model of autosomal dominant RP. Semen from TgP23H miniswine 53-1 inseminated domestic swine and produced TgP23H and Wt hybrid littermates. Retinal function was evaluated using ERGs between postnatal days (P) 14 and 120. Retinal ganglion cell (RGC) responses were recorded to full-field stimuli at several intensities. Retinal morphology was assessed using light and electron microscopy. Scotopic retinal function matures in Wt pigs up to P60, but never develops in TgP23H pigs. Wt and TgP23H photopic vision matures similarly up to P30 and diverges at P60 where TgP23H cone vision declines. There are fewer TgP23H RGCs with visually evoked responses at all ages and their response to light is compromised. Photoreceptor morphological changes mirror these functional changes. Lack of early scotopic function in TgP23H swine suggests it as a model of an aggressive form of RP. In this mammalian model of RP, normal cone function develops independent of rod function. Therefore, its retina represents a system in which therapies to rescue cones can be developed to prolong photopic visual function in RP patients.

  10. Quantifying the Earthquake Clustering that Independent Sources with Stationary Rates (as Included in Current Risk Models) Can Produce.

    Science.gov (United States)

    Fitzenz, D. D.; Nyst, M.; Apel, E. V.; Muir-Wood, R.

    2014-12-01

    The recent Canterbury earthquake sequence (CES) renewed public and academic awareness concerning the clustered nature of seismicity. Multiple event occurrence in short time and space intervals is reminiscent of aftershock sequences, but aftershock is a statistical definition, not a label one can give an earthquake in real-time. Aftershocks are defined collectively as what creates the Omori event rate decay after a large event or are defined as what is taken away as "dependent events" using a declustering method. It is noteworthy that depending on the declustering method used on the Canterbury earthquake sequence, the number of independent events varies a lot. This lack of unambiguous definition of aftershocks leads to the need to investigate the amount of clustering inherent in "declustered" risk models. This is the task we concentrate on in this contribution. We start from a background source model for the Canterbury region, in which 1) centroids of events of given magnitude are distributed using a latin-hypercube lattice, 2) following the range of preferential orientations determined from stress maps and focal mechanism, 3) with length determined using the local scaling relationship and 4) rates from a and b values derived from the declustered pre-2010 catalog. We then proceed to create tens of thousands of realizations of 6 to 20 year periods, and we define criteria to identify which successions of events in the region would be perceived as a sequence. Note that the spatial clustering expected is a lower end compared to a fully uniform distribution of events. Then we perform the same exercise with rates and b-values determined from the catalog including the CES. If the pre-2010 catalog was long (or rich) enough, then the computed "stationary" rates calculated from it would include the CES declustered events (by construction, regardless of the physical meaning of or relationship between those events). In regions of low seismicity rate (e.g., Canterbury before
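
    A toy version of the exercise described, generating many synthetic catalogs from independent sources with stationary Poisson rates and counting how often a "sequence" would be perceived, is sketched below; the rate, catalog length and sequence criterion are placeholders, since the paper's actual criteria are not given in the record.

```python
import numpy as np

rng = np.random.default_rng(42)
RATE_PER_YEAR = 0.6                  # hypothetical regional rate of events above threshold
CATALOG_YEARS = 15.0
WINDOW_DAYS, MIN_EVENTS = 365.0, 3   # toy "perceived sequence" criterion
N_CATALOGS = 10_000

def looks_like_sequence(times_days):
    """True if any sliding window of WINDOW_DAYS holds >= MIN_EVENTS events."""
    t = np.sort(times_days)
    for i in range(len(t) - MIN_EVENTS + 1):
        if t[i + MIN_EVENTS - 1] - t[i] <= WINDOW_DAYS:
            return True
    return False

hits = 0
for _ in range(N_CATALOGS):
    n = rng.poisson(RATE_PER_YEAR * CATALOG_YEARS)
    times = rng.uniform(0.0, CATALOG_YEARS * 365.0, size=n)
    hits += looks_like_sequence(times)

print("fraction of stationary catalogs showing a 'sequence':", hits / N_CATALOGS)
```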

  11. A comparison of foot kinematics in people with normal- and flat-arched feet using the Oxford Foot Model.

    Science.gov (United States)

    Levinger, Pazit; Murley, George S; Barton, Christian J; Cotchett, Matthew P; McSweeney, Simone R; Menz, Hylton B

    2010-10-01

    Foot posture is thought to influence predisposition to overuse injuries of the lower limb. Although the mechanisms underlying this proposed relationship are unclear, it is thought that altered foot kinematics may play a role. Therefore, this study was designed to investigate differences in foot motion between people with normal- and flat-arched feet using the Oxford Foot Model (OFM). Foot posture in 19 participants was documented as normal-arched (n=10) or flat-arched (n=9) using a foot screening protocol incorporating measurements from weight-bearing antero-posterior and lateral foot radiographs. Differences between the groups in triplanar motion of the tibia, rearfoot and forefoot during walking were evaluated using a three-dimensional motion analysis system incorporating a multi-segment foot model (OFM). Participants with flat-arched feet demonstrated greater peak forefoot plantar-flexion (-13.7° ± 5.6° vs -6.5° ± 3.7°; p=0.004), forefoot abduction (-12.9° ± 6.9° vs -1.8° ± 6.3°; p=0.002), and rearfoot internal rotation (10.6° ± 7.5° vs -0.2° ± 9.9°; p=0.018) compared with those with normal-arched feet. Additionally, participants with flat-arched feet demonstrated decreased peak forefoot adduction (-7.0° ± 9.2° vs 5.6° ± 7.3°; p=0.004) and a trend towards increased rearfoot eversion (-5.8° ± 4.4° vs -2.5° ± 2.6°; p=0.06). These findings support the notion that flat-arched feet exhibit altered motion associated with greater pronation during gait, factors that may increase the risk of overuse injury. Copyright © 2010 Elsevier B.V. All rights reserved.
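    The kind of between-group comparison reported above (e.g., peak forefoot plantar-flexion of -13.7° ± 5.6° in flat-arched versus -6.5° ± 3.7° in normal-arched feet) can be reproduced in spirit with a simple two-sample test. The sketch below generates synthetic peak angles from the reported group means and standard deviations and applies a Welch t-test; both the random data and the choice of test variant are assumptions for illustration, not the study's actual measurements or analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic peak forefoot plantar-flexion angles (degrees), drawn to match the
# reported group means and SDs; the real study used measured OFM kinematics.
flat_arched   = rng.normal(loc=-13.7, scale=5.6, size=9)
normal_arched = rng.normal(loc=-6.5,  scale=3.7, size=10)

# Welch's t-test (assumed here; the paper does not state which variant was used)
t_stat, p_value = stats.ttest_ind(flat_arched, normal_arched, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```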

  12. Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states

    Science.gov (United States)

    Liu, Jianbo; Khalil, Hassan K.; Oweiss, Karim G.

    2011-08-01

    Controlling the spatiotemporal firing pattern of an intricately connected network of neurons through microstimulation is highly desirable in many applications. In this paper we investigated the feasibility of using a model-based approach to the analysis and control of a basal ganglia (BG) network model of Hodgkin-Huxley (HH) spiking neurons through microstimulation. Detailed analysis of this network model suggests that it can reproduce the experimentally observed characteristics of BG neurons under normal and pathological Parkinsonian states. A simplified neuronal firing rate model, identified from the detailed HH network model, is shown to capture the essential network dynamics. Mathematical analysis of the simplified model reveals a systematic relationship between the network's structure and its dynamic response to spatiotemporally patterned microstimulation. We show that both the network synaptic organization and the local mechanism of microstimulation can impose tight constraints on the spatiotemporal firing patterns that the microstimulated network can generate, which may hinder the effectiveness of microstimulation in achieving a desired objective under certain conditions. Finally, we demonstrate that the feedback control design aided by the mathematical analysis of the simplified model is indeed effective in driving the BG network, in both the normal and Parkinsonian states, to follow a prescribed spatiotemporal firing pattern. We further show that the rhythmic/oscillatory patterns that characterize a dopamine-depleted BG network can be suppressed as a direct consequence of controlling the spatiotemporal pattern of a subpopulation of the output Globus Pallidus internalis (GPi) neurons in the network. This work may provide plausible explanations for the mechanisms underlying the therapeutic effects of deep brain stimulation (DBS) in Parkinson's disease and pave the way towards a model-based, network level analysis and clos
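    The control idea summarized above can be sketched, under strong simplifying assumptions, as a firing-rate network driven toward a prescribed spatial activity pattern by a feedback stimulation command. The network size, weights, time constant, gain, and target pattern below are all hypothetical, and the controller is a generic proportional tracker rather than the authors' model-based design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 10-unit firing-rate network: tau * dr/dt = -r + W @ phi(r) + B @ u
n = 10
W = rng.normal(scale=0.3, size=(n, n))   # recurrent weights (illustrative)
B = np.eye(n)                            # each unit receives its own stimulation input
tau, dt, n_steps = 10.0, 0.1, 2000       # ms, ms, integration steps

phi = np.tanh                            # static rate nonlinearity
target = 0.5 * (1.0 + np.sin(np.linspace(0.0, 4.0 * np.pi, n)))  # prescribed pattern

r = np.zeros(n)                          # initial firing rates
K = 5.0                                  # proportional feedback gain (assumed)
for _ in range(n_steps):
    u = K * (target - r)                 # feedback microstimulation command
    r = r + (dt / tau) * (-r + W @ phi(r) + B @ u)   # forward-Euler update

print("max tracking error:", np.max(np.abs(target - r)))
```

    Because the controller is purely proportional, a residual tracking error remains at steady state; a model-based design of the kind described in the abstract would exploit the identified network structure to shape the response more precisely.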