WorldWideScience

Sample records for model include normal

  1. Optimum parameters in a model for tumour control probability, including interpatient heterogeneity: evaluation of the log-normal distribution

    International Nuclear Information System (INIS)

    Keall, P J; Webb, S

    2007-01-01

    The heterogeneity of human tumour radiation response is well known. Researchers have used the normal distribution to describe interpatient tumour radiosensitivity. However, many natural phenomena show a log-normal distribution. Log-normal distributions are common when mean values are low, variances are large and values cannot be negative. These conditions apply to radiosensitivity. The aim of this work was to evaluate the log-normal distribution to predict clinical tumour control probability (TCP) data and to compare the results with the homogeneous (δ-function with single α-value) and normal distributions. The clinically derived TCP data for four tumour types (melanoma, breast, squamous cell carcinoma and nodes) were used to fit the TCP models. Three forms of interpatient tumour radiosensitivity were considered: the log-normal, normal and δ-function. The free parameters in the models were the radiosensitivity mean, standard deviation and clonogenic cell density. The evaluation metric was the deviance of the maximum likelihood estimation of the fit of the TCP calculated using the predicted parameters to the clinical data. We conclude that (1) the log-normal and normal distributions of interpatient tumour radiosensitivity heterogeneity more closely describe clinical TCP data than a single radiosensitivity value and (2) the log-normal distribution has some theoretical and practical advantages over the normal distribution. Further work is needed to test these models on higher quality clinical outcome datasets.
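
    The population-averaged TCP underlying such fits can be sketched numerically. The following minimal Python example (not the authors' code; all parameter values are illustrative assumptions) compares a single-α (δ-function) response with populations whose α follows a moment-matched normal or log-normal distribution; interpatient heterogeneity visibly shallows the population dose-response curve.

    ```python
    # Minimal sketch of population TCP under three interpatient radiosensitivity
    # distributions: delta (single alpha), normal, and log-normal. Illustrative
    # parameter values only; the LQ beta term and repopulation are ignored.
    import numpy as np
    from scipy import stats

    N0 = 1e7           # assumed clonogen number (cell density x volume)
    alpha_bar = 0.3    # mean radiosensitivity (Gy^-1), assumed
    sigma = 0.07       # interpatient standard deviation (Gy^-1), assumed

    def tcp_single(alpha, dose):
        """Poissonian TCP for one patient with a single alpha value."""
        return np.exp(-N0 * np.exp(-alpha * dose))

    def tcp_population(dose, dist):
        """Population TCP: average individual TCP over the alpha distribution."""
        a = np.linspace(1e-4, 1.0, 4001)
        return np.sum(tcp_single(a, dose) * dist.pdf(a)) * (a[1] - a[0])

    normal = stats.norm(loc=alpha_bar, scale=sigma)
    # Log-normal with the same mean and standard deviation (moment matching).
    s2 = np.log(1 + (sigma / alpha_bar) ** 2)
    lognormal = stats.lognorm(s=np.sqrt(s2), scale=alpha_bar * np.exp(-s2 / 2))

    for D in np.arange(40.0, 81.0, 10.0):
        print(f"{D:4.0f} Gy  delta: {tcp_single(alpha_bar, D):.3f}   "
              f"normal: {tcp_population(D, normal):.3f}   "
              f"log-normal: {tcp_population(D, lognormal):.3f}")
    ```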

  2. The Benefits of Including Clinical Factors in Rectal Normal Tissue Complication Probability Modeling After Radiotherapy for Prostate Cancer

    International Nuclear Information System (INIS)

    Defraene, Gilles; Van den Bergh, Laura; Al-Mamgani, Abrahim; Haustermans, Karin; Heemsbergen, Wilma; Van den Heuvel, Frank; Lebesque, Joos V.

    2012-01-01

    Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman [LKB] and Relative Seriality [RS]) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011–0.013) clinical factor was “previous abdominal surgery.” As second significant (p = 0.012–0.016) factor, “cardiac history” was included in all three rectal bleeding fits, whereas including “diabetes” was significant (p = 0.039–0.048) in fecal incontinence modeling, but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003–0.006) from the inclusion of the baseline toxicity score. For all models, rectal bleeding fits had the highest AUC (0.77), whereas it was 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two end points.
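
    A hedged sketch of the LKB machinery with one binary clinical factor follows: the gEUD-based NTCP with the position parameter TD50 scaled by a dose-modifying factor for patients with previous abdominal surgery, one common way of folding a clinical factor into a dose-based model. It illustrates the general form only; the parameterization and values are assumptions, not the fitted trial results.

    ```python
    # Sketch of the LKB NTCP model with a binary clinical factor included as a
    # dose-modifying factor on TD50. Illustrative values, not the trial fit.
    import numpy as np
    from scipy.stats import norm

    def geud(dose_bins, vol_fracs, n):
        """Generalized EUD from a DVH: (sum v_i * D_i^(1/n))^n."""
        return np.sum(vol_fracs * dose_bins ** (1.0 / n)) ** n

    def ntcp_lkb(dose_bins, vol_fracs, td50, m, n, surgery=0, dmf=0.9):
        """LKB NTCP; 'surgery' (0/1) scales TD50 by an assumed factor dmf."""
        td50_eff = td50 * (dmf if surgery else 1.0)
        t = (geud(dose_bins, vol_fracs, n) - td50_eff) / (m * td50_eff)
        return norm.cdf(t)

    # Toy anorectal-wall DVH: dose bins (Gy) and volume fraction in each bin.
    dose = np.array([20.0, 40.0, 60.0, 75.0])
    vol = np.array([0.40, 0.30, 0.20, 0.10])

    td50, m, n = 80.0, 0.15, 0.10    # assumed, not the published estimates
    print("NTCP, no surgery:   ", round(ntcp_lkb(dose, vol, td50, m, n, 0), 3))
    print("NTCP, prior surgery:", round(ntcp_lkb(dose, vol, td50, m, n, 1), 3))
    ```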

  3. The Benefits of Including Clinical Factors in Rectal Normal Tissue Complication Probability Modeling After Radiotherapy for Prostate Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Defraene, Gilles, E-mail: gilles.defraene@uzleuven.be [Radiation Oncology Department, University Hospitals Leuven, Leuven (Belgium); Van den Bergh, Laura [Radiation Oncology Department, University Hospitals Leuven, Leuven (Belgium); Al-Mamgani, Abrahim [Department of Radiation Oncology, Erasmus Medical Center - Daniel den Hoed Cancer Center, Rotterdam (Netherlands); Haustermans, Karin [Radiation Oncology Department, University Hospitals Leuven, Leuven (Belgium); Heemsbergen, Wilma [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital, Amsterdam (Netherlands); Van den Heuvel, Frank [Radiation Oncology Department, University Hospitals Leuven, Leuven (Belgium); Lebesque, Joos V. [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital, Amsterdam (Netherlands)

    2012-03-01

    Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman [LKB] and Relative Seriality [RS]) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling, but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. For all models, rectal bleeding fits had the highest AUC (0.77), whereas it was 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two end points.

  4. Modeling and simulation of normal and hemiparetic gait

    Science.gov (United States)

    Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni

    2015-09-01

    Gait is the collective term for the two types of bipedal locomotion, walking and running. This paper focuses on walking. The analysis of human gait is of interest to many different disciplines, including biomechanics, human-movement science, rehabilitation and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, both normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking by using the Lagrange method. Constraint forces from a Rayleigh dissipation function, through which the effect of gait on the tissues is taken into account, are included. Depending on the value of the factor in the Rayleigh dissipation function, both normal and pathological gait can be simulated. We first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data of an adult person are used in the simulation; anthropometric data for children could also be used, provided the corresponding anthropometric tables are consulted. Validation of these models includes simulations of passive dynamic walking on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few have focused on abnormal, especially hemiparetic, gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.
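
    The mechanism can be illustrated with a deliberately simple toy: a swing leg treated as a damped pendulum whose equation of motion follows from Lagrange's equations with a Rayleigh dissipation function, the dissipation coefficient playing the role of the factor that separates normal from hemiparetic swing. This is a sketch of the idea, not the authors' multi-segment model; all values are assumptions.

    ```python
    # Toy illustration: a swing leg as a damped pendulum. The damping term
    # arises from a Rayleigh dissipation function R = 0.5*c*omega^2 in
    # Lagrange's equations. Not the authors' full gait model; values assumed.
    import numpy as np
    from scipy.integrate import solve_ivp

    m, l, g = 8.0, 0.9, 9.81    # assumed leg mass (kg), length (m), gravity

    def swing_leg(t, y, c):
        """m*l^2*thetaddot = -m*g*l*sin(theta) - c*thetadot."""
        theta, omega = y
        return [omega, (-m * g * l * np.sin(theta) - c * omega) / (m * l ** 2)]

    y0 = [0.5, 0.0]             # initial swing angle (rad), angular velocity
    for label, c in [("normal (low dissipation)", 1.0),
                     ("hemiparetic (high dissipation)", 8.0)]:
        sol = solve_ivp(swing_leg, (0.0, 2.0), y0, args=(c,),
                        t_eval=np.linspace(0.0, 2.0, 201))
        print(f"{label}: angle after 2 s = {sol.y[0, -1]:+.3f} rad")
    ```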

  5. Model-based normalization for iterative 3D PET image reconstruction

    International Nuclear Information System (INIS)

    Bai, B.; Li, Q.; Asma, E.; Leahy, R.M.; Holdsworth, C.H.; Chatziioannou, A.; Tai, Y.C.

    2002-01-01

    We describe a method for normalization in 3D PET for use with maximum a posteriori (MAP) or other iterative model-based image reconstruction methods. This approach is an extension of previous factored normalization methods in which we include separate factors for detector sensitivity, geometric response, block effects and deadtime. Since our MAP reconstruction approach already models some of the geometric factors in the forward projection, the normalization factors must be modified to account only for effects not already included in the model. We describe a maximum likelihood approach to joint estimation of the count-rate independent normalization factors, which we apply to data from a uniform cylindrical source. We then compute block-wise and block-profile deadtime correction factors using singles and coincidence data, respectively, from a multiframe cylindrical source. We have applied this method for reconstruction of data from the Concorde microPET P4 scanner. Quantitative evaluation of this method using well-counter measurements of activity in a multicompartment phantom compares favourably with normalization based directly on cylindrical source measurements. (author)

  6. Trinucleon asymptotic normalization constants including Coulomb effects

    International Nuclear Information System (INIS)

    Friar, J.L.; Gibson, B.F.; Lehman, D.R.; Payne, G.L.

    1982-01-01

    Exact theoretical expressions for calculating the trinucleon S- and D-wave asymptotic normalization constants, with and without Coulomb effects, are presented. Coordinate-space Faddeev-type equations are used to generate the trinucleon wave functions, and integral relations for the asymptotic norms are derived within this framework. The definition of the asymptotic norms in the presence of the Coulomb interaction is emphasized. Numerical calculations are carried out for the s-wave NN interaction models of Malfliet and Tjon and the tensor force model of Reid. Comparison with previously published results is made. The first estimate of Coulomb effects for the D-wave asymptotic norm is given. All theoretical values are carefully compared with experiment and suggestions are made for improving the experimental situation. We find that Coulomb effects increase the ³He S-wave asymptotic norm by less than 1% relative to that of ³H, that Coulomb effects decrease the ³He D-wave asymptotic norm by approximately 8% relative to that of ³H, and that the distorted-wave Born approximation D-state parameter, D₂, is only 1% smaller in magnitude for ³He than for ³H due to compensating Coulomb effects.

  7. Cylindrical shell under impact load including transverse shear and normal stress

    International Nuclear Information System (INIS)

    Shakeri, M.; Eslami, M.R.; Ghassaa, M.; Ohadi, A.R.

    1993-01-01

    The general governing equations of shells of revolution under shock loads are reduced to equations describing the elastic behavior of a cylindrical shell under axisymmetric impact load. The effects of lateral normal stress, transverse shear, and rotary inertia are included, and the equations are solved by the Galerkin finite element method. The results are compared with the authors' previous works. (author)

  8. Generating a normalized geometric liver model with warping

    International Nuclear Information System (INIS)

    Boes, J.L.; Weymouth, T.E.; Meyer, C.R.; Quint, L.E.; Bland, P.H.; Bookstein, F.L.

    1990-01-01

    This paper reports on the automated determination of the liver surface in abdominal CT scans for radiation treatment, surgery planning, and anatomic visualization. The normalized geometric model of the liver is generated by averaging registered outlines from a set of 15 studies of normal liver. The outlines have been registered with thin-plate spline warping based on a set of five homologous landmarks. Thus, the model consists of an average surface and a set of five anatomic landmarks. The accuracy of the model is measured against both the set of studies used in model generation and an alternate set of 15 normal studies, using as an error measure the ratio of non-overlapping model and study volume to total model volume.
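
    The registration step can be sketched with SciPy's radial basis function interpolator, whose thin-plate-spline kernel reproduces this kind of landmark-driven warp. The 2D landmark coordinates below are made up for illustration; the paper registers five anatomic landmarks on 3D liver outlines.

    ```python
    # Minimal sketch of landmark-based thin-plate spline warping with SciPy.
    # Invented 2D example coordinates; the paper works in 3D.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Five homologous landmarks: positions in the study (src) and model (dst).
    src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.4]])
    dst = np.array([[0.1, 0.0], [1.0, 0.1], [0.0, 0.9], [1.1, 1.0], [0.55, 0.5]])

    # Thin-plate spline mapping src -> dst, fit jointly for both coordinates.
    warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')

    # Apply the warp to an outline (points on a circle standing in for a liver
    # contour) so outlines from different studies can be averaged in one frame.
    angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    outline = 0.5 + 0.3 * np.c_[np.cos(angles), np.sin(angles)]
    print(warp(outline))
    ```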

  9. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    Science.gov (United States)

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of the depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a target height for each layer. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tool for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.
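
    The core operation, recovering a compressed depth field from a normal image by solving a sparse linear system, can be sketched as a least-squares fit of the depth gradients to the gradient field implied by the normals. Attenuating the gradients stands in for style control here; this is an illustration of the general technique, not the paper's exact formulation.

    ```python
    # Sketch: recover a compressed depth field from a normal image via a
    # Poisson-style least-squares fit to the implied gradients, assembled as
    # one sparse linear system. Illustration only, with a synthetic input.
    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.linalg import lsqr

    H, W = 32, 32
    # Synthetic normal image: a hemispherical bump on a flat background.
    yy, xx = np.mgrid[0:H, 0:W]
    cx, cy, r = W / 2.0, H / 2.0, 12.0
    dist2 = (xx - cx) ** 2 + (yy - cy) ** 2
    inside = dist2 < r ** 2
    zz = np.sqrt(np.clip(r ** 2 - dist2, 1e-6, None))
    n = np.dstack([np.where(inside, xx - cx, 0.0),
                   np.where(inside, yy - cy, 0.0),
                   np.where(inside, zz, 1.0)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)

    atten = 0.3                            # gradient attenuation = compression
    p = -n[..., 0] / n[..., 2] * atten     # target dz/dx
    q = -n[..., 1] / n[..., 2] * atten     # target dz/dy

    # One equation per forward difference: neighbour depth difference = gradient.
    idx = np.arange(H * W).reshape(H, W)
    A = lil_matrix((2 * H * W, H * W))
    b, eq = [], 0
    for y in range(H):
        for x in range(W):
            if x + 1 < W:
                A[eq, idx[y, x + 1]], A[eq, idx[y, x]] = 1.0, -1.0
                b.append(p[y, x]); eq += 1
            if y + 1 < H:
                A[eq, idx[y + 1, x]], A[eq, idx[y, x]] = 1.0, -1.0
                b.append(q[y, x]); eq += 1

    z = lsqr(A.tocsr()[:eq], np.array(b))[0].reshape(H, W)
    print("relief depth range:", round(z.max() - z.min(), 3))
    ```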

  10. Mathematical models of tumour and normal tissue response

    International Nuclear Information System (INIS)

    Jones, B.; Dale, R.G.; Charing Cross Group of Hospitals, London

    1999-01-01

    The historical application of mathematics in the natural sciences and in radiotherapy is compared. The various forms of mathematical models and their limitations are discussed. The Linear Quadratic (LQ) model can be modified to include (i) radiobiological parameter changes that occur during fractionated radiotherapy, (ii) situations such as focal forms of radiotherapy, (iii) normal tissue responses, and (iv) the process of optimization. The inclusion of a variable cell loss factor in the LQ model repopulation term produces a more flexible clonogenic doubling time, which can simulate the phenomenon of 'accelerated repopulation'. Differential calculus can be applied to the LQ model after elimination of the fraction number integers. The optimum dose per fraction (maximum cell kill relative to a given normal tissue fractionation sensitivity) is then estimated from the clonogen doubling times and the radiosensitivity parameters (or α/β ratios). Economic treatment optimization is described. Tumour volume studies during or following teletherapy are used to optimize brachytherapy. The radiation responses of both individual tumours and tumour populations (by random sampling 'Monte-Carlo' techniques from statistical ranges of radiobiological and physical parameters) can be estimated. Computerized preclinical trials can be used to guide the choice of dose fractionation scheduling in clinical trials. The potential impact of gene and other biological therapies on the results of radical radiotherapy is testable. New and experimentally testable hypotheses are generated from limited clinical data by exploratory modelling exercises. (orig.)
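
    The dose-per-fraction optimisation mentioned above can be reproduced numerically: fix the biologically effective dose (BED) delivered to a late-responding normal tissue, then scan the dose per fraction for the value that maximises tumour BED once LQ repopulation is included. The sketch below uses textbook-style illustrative parameters, not values from the paper.

    ```python
    # Numerical sketch of LQ dose-per-fraction optimisation under a fixed
    # normal-tissue BED constraint, with repopulation. Illustrative values only.
    import numpy as np

    ab_nt, ab_t = 3.0, 10.0      # alpha/beta (Gy): late normal tissue, tumour
    alpha_t = 0.3                # tumour radiosensitivity (Gy^-1)
    bed_nt_max = 100.0           # fixed permissible normal-tissue BED (Gy_3)
    t_k, t_d = 21.0, 3.0         # repopulation kick-off and doubling time (days)

    d = np.linspace(1.0, 6.0, 501)           # dose per fraction (Gy)
    D = bed_nt_max / (1.0 + d / ab_nt)       # total dose allowed by constraint
    T = D / d                                # overall time, one daily fraction
    bed_t = (D * (1.0 + d / ab_t)
             - (np.log(2) / alpha_t) * np.maximum(T - t_k, 0.0) / t_d)

    i = np.argmax(bed_t)
    print(f"optimum d = {d[i]:.2f} Gy/fraction, total dose {D[i]:.1f} Gy, "
          f"tumour BED {bed_t[i]:.1f} Gy")
    ```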

  11. The Semiparametric Normal Variance-Mean Mixture Model

    DEFF Research Database (Denmark)

    Korsholm, Lars

    1997-01-01

    We discuss the normal variance-mean mixture model from a semi-parametric point of view, i.e. we let the mixing distribution belong to a nonparametric family. The main results are consistency of the nonparametric maximum likelihood estimator in this case, and construction of an asymptotically normal and efficient estimator.

  12. Normal and Special Models of Neutrino Masses and Mixings

    CERN Document Server

    Altarelli, Guido

    2005-01-01

    One can make a distinction between "normal" and "special" models. For normal models $\theta_{23}$ is not too close to maximal and $\theta_{13}$ is not too small, typically a small power of the self-suggesting order parameter $\sqrt{r}$, with $r=\Delta m_{sol}^2/\Delta m_{atm}^2 \sim 1/35$. Special models are those where some symmetry or dynamical feature assures in a natural way the near vanishing of $\theta_{13}$ and/or of $\theta_{23}-\pi/4$. Normal models are conceptually more economical and much simpler to construct. Here we focus on special models, in particular a recent one based on A4 discrete symmetry and extra dimensions that leads in a natural way to a Harrison-Perkins-Scott mixing matrix.

  13. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution, so that deviations in solutions caused by unrealistic parameter assumptions are avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
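
    The key step in this class of models is converting a probabilistic constraint with a log-normal right-hand side into a deterministic equivalent: Pr{a·x ≤ b} ≥ p with b ~ LogNormal(μ, σ²) becomes a·x ≤ exp(μ + σΦ⁻¹(1−p)). The sketch below shows that conversion on an invented two-crop toy problem; it is not the Erhai Lake model.

    ```python
    # Sketch of a log-normal chance constraint reduced to its deterministic
    # equivalent and solved as an ordinary LP. All data invented.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import linprog

    profit = np.array([300.0, 480.0])   # profit per hectare, two crops
    runoff = np.array([2.0, 5.0])       # pollutant runoff per hectare
    mu, sigma = np.log(400.0), 0.35     # log-normal allowable-load parameters
    land = 150.0                        # available land (ha)

    for p in (0.80, 0.90, 0.99):
        cap = np.exp(mu + sigma * norm.ppf(1.0 - p))  # deterministic equivalent
        res = linprog(c=-profit,
                      A_ub=np.vstack([runoff, np.ones(2)]),
                      b_ub=[cap, land], bounds=[(0, None), (0, None)])
        print(f"p={p:.2f}: load cap {cap:6.1f}, areas {res.x.round(1)}, "
              f"profit {-res.fun:,.0f}")
    ```

    Raising the satisfaction level p shrinks the load cap and the optimal profit, which is exactly the economy-reliability trade-off the abstract describes.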

  14. New Riemannian Priors on the Univariate Normal Model

    Directory of Open Access Journals (Sweden)

    Salem Said

    2014-07-01

    The current paper introduces new prior distributions on the univariate normal model, with the aim of applying them to the classification of univariate normal populations. These new prior distributions are entirely based on the Riemannian geometry of the univariate normal model, so that they can be thought of as “Riemannian priors”. Precisely, if {pθ ; θ ∈ Θ} is any parametrization of the univariate normal model, the paper considers prior distributions G(θ̄, γ) with hyperparameters θ̄ ∈ Θ and γ > 0, whose density with respect to Riemannian volume is proportional to exp(−d²(θ, θ̄)/2γ²), where d²(θ, θ̄) is the square of Rao's Riemannian distance. The distributions G(θ̄, γ) are termed Gaussian distributions on the univariate normal model. The motivation for considering a distribution G(θ̄, γ) is that this distribution gives a geometric representation of a class or cluster of univariate normal populations. Indeed, G(θ̄, γ) has a unique mode θ̄ (precisely, θ̄ is the unique Riemannian center of mass of G(θ̄, γ), as shown in the paper), and its dispersion away from θ̄ is given by γ. Therefore, one thinks of members of the class represented by G(θ̄, γ) as being centered around θ̄ and lying within a typical distance determined by γ. The paper defines rigorously the Gaussian distributions G(θ̄, γ) and describes an algorithm for computing maximum likelihood estimates of their hyperparameters. Based on this algorithm and on the Laplace approximation, it describes how the distributions G(θ̄, γ) can be used as prior distributions for Bayesian classification of large univariate normal populations. In a concrete application to texture image classification, it is shown that this leads to an improvement in performance over the use of conjugate priors.
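
    The two ingredients are straightforward to compute. Rao's distance on the univariate normal model has a closed form via the standard identification (up to a factor of √2) of the model with the hyperbolic half-plane, and the prior kernel is then exp(−d²(θ, θ̄)/2γ²). The sketch below is an independent illustration of these formulas, not the paper's code.

    ```python
    # Rao's Fisher-Rao distance between univariate normals and the unnormalised
    # "Riemannian prior" kernel built on it. Verify the closed form against a
    # reference before relying on it; it follows from mapping (mu, sigma) to
    # (mu/sqrt(2), sigma) in the Poincare half-plane.
    import numpy as np

    def rao_distance(mu1, sig1, mu2, sig2):
        """Fisher-Rao distance between N(mu1, sig1^2) and N(mu2, sig2^2)."""
        du2 = (mu1 - mu2) ** 2 / 2.0
        arg = 1.0 + (du2 + (sig1 - sig2) ** 2) / (2.0 * sig1 * sig2)
        return np.sqrt(2.0) * np.arccosh(arg)

    def riemannian_prior_kernel(mu, sig, mu_bar, sig_bar, gamma):
        """Unnormalised density of G(theta_bar, gamma) w.r.t. Riemannian volume."""
        d = rao_distance(mu, sig, mu_bar, sig_bar)
        return np.exp(-d ** 2 / (2.0 * gamma ** 2))

    # Pure scale change: distance N(0,1) -> N(0,4) equals sqrt(2)*ln(2).
    print(rao_distance(0.0, 1.0, 0.0, 2.0))
    print(riemannian_prior_kernel(0.3, 1.2, 0.0, 1.0, gamma=0.5))
    ```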

  15. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGM) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. A numerical experiment investigates the fitting ability of the SRGMs with normal distribution on 16 sets of failure time data collected in real software projects.
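
    The paper's estimation algorithm is an EM scheme; as a hedged stand-in, the sketch below fits the same model, a non-homogeneous Poisson process whose mean value function is a scaled normal CDF, m(t) = ωΦ((t−μ)/σ), by direct numerical maximum likelihood on synthetic failure times.

    ```python
    # Direct MLE for an NHPP SRGM with normal-CDF mean value function,
    # standing in for the paper's EM algorithm. Synthetic data, assumed values.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    t = np.sort(rng.normal(50.0, 15.0, size=40))   # synthetic failure times (days)
    T = 100.0                                      # end of observation window

    def neg_loglik(params):
        """NHPP log-likelihood: sum(log lambda(t_i)) - m(T)."""
        omega, mu, sigma = params
        if omega <= 0 or sigma <= 0:
            return np.inf
        lam = omega * norm.pdf(t, mu, sigma)       # intensity = m'(t)
        return -(np.sum(np.log(lam)) - omega * norm.cdf(T, mu, sigma))

    fit = minimize(neg_loglik, x0=[50.0, 40.0, 10.0], method="Nelder-Mead")
    omega, mu, sigma = fit.x
    print(f"expected total faults {omega:.1f}, peak time {mu:.1f}, "
          f"spread {sigma:.1f}")
    ```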

  16. Normality of raw data in general linear models: The most widespread myth in statistics

    Science.gov (United States)

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
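
    The point is easy to demonstrate by simulation: with a large group effect, the raw response is strongly bimodal while the residuals are exactly normal. (A Shapiro-Wilk test is used below purely for illustration; the article deliberately sidesteps the choice of normality test.)

    ```python
    # Two-group "ANOVA" with normal errors but a big group effect: the raw
    # response fails a normality test, the residuals do not.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    group = np.repeat([0, 1], 200)
    y = 10.0 * group + rng.normal(0.0, 1.0, size=400)   # normal errors

    # Residuals after removing the group means (the fitted linear model).
    residuals = y - np.array([y[group == g].mean() for g in group])

    print("raw response: Shapiro-Wilk p =", stats.shapiro(y).pvalue)
    print("residuals:    Shapiro-Wilk p =", stats.shapiro(residuals).pvalue)
    ```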

  17. Normalization of Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.

    2011-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities, which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities, by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining the normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.

  18. A Box-Cox normal model for response times.

    Science.gov (United States)

    Klein Entink, R H; van der Linden, W J; Fox, J-P

    2009-11-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows for comparing model fit between models with different transformation parameters. Showing an enhanced description of the shape of the response time distributions, its application in an educational measurement context is discussed at length.

  19. Normal tissue dose-effect models in biological dose optimisation

    International Nuclear Information System (INIS)

    Alber, M.

    2008-01-01

    Sophisticated radiotherapy techniques like intensity modulated radiotherapy with photons and protons rely on numerical dose optimisation. The evaluation of normal tissue dose distributions that deviate significantly from the common clinical routine and also the mathematical expression of desirable properties of a dose distribution is difficult. In essence, a dose evaluation model for normal tissues has to express the tissue specific volume effect. A formalism of local dose effect measures is presented, which can be applied to serial and parallel responding tissues as well as target volumes and physical dose penalties. These models allow a transparent description of the volume effect and an efficient control over the optimum dose distribution. They can be linked to normal tissue complication probability models and the equivalent uniform dose concept. In clinical applications, they provide a means to standardize normal tissue doses in the face of inevitable anatomical differences between patients and a vastly increased freedom to shape the dose, without being overly limiting like sets of dose-volume constraints. (orig.)

  20. Analysis of shallow water experimental acoustic data including normal mode model comparisons

    NARCIS (Netherlands)

    McHugh, R.; Simons, D.G.

    2000-01-01

    As part of a propagation model validation exercise, experimental acoustic and oceanographic data were collected from a shallow-water, long-range channel off the west coast of Scotland. Temporal variability effects in this channel were assessed through visual inspection of stacked plots, each of which

  1. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former suffers from the problem that a huge count number is required in the blank scan data. The latter methods were therefore proposed to obtain normalization coefficients of high statistical accuracy from a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighting approach, in which normalization coefficients are applied directly to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth of that required in the direct method. (author)

  2. The Influence of Normalization Weight in Population Pharmacokinetic Covariate Models.

    Science.gov (United States)

    Goulooze, Sebastiaan C; Völler, Swantje; Välitalo, Pyry A J; Calvier, Elisa A M; Aarons, Leon; Krekels, Elke H J; Knibbe, Catherijne A J

    2018-03-23

    In covariate (sub)models of population pharmacokinetic models, most covariates are normalized to the median value; however, for body weight, normalization to 70 kg or 1 kg is often applied. In this article, we illustrate the impact of normalization weight on the precision of population clearance (CLpop) parameter estimates. The influence of normalization weight (70 kg, 1 kg or median weight) on the precision of the CLpop estimate, expressed as relative standard error (RSE), was illustrated using data from a pharmacokinetic study in neonates with a median weight of 2.7 kg. In addition, a simulation study was performed to show the impact of normalization to 70 kg in pharmacokinetic studies with paediatric or obese patients. The RSE of the CLpop parameter estimate in the neonatal dataset was lowest with normalization to median weight (8.1%), compared with normalization to 1 kg (10.5%) or 70 kg (48.8%). Typical clearance (CL) predictions were independent of the normalization weight used. Simulations showed that the increase in RSE of the CLpop estimate with 70 kg normalization was highest in studies with a narrow weight range and a geometric mean weight far from 70 kg. When a weight outside the observed range is used for normalization instead of the median weight, the RSE of the CLpop estimate is inflated and should therefore not be used for model selection. Instead, established mathematical principles can be used to calculate the RSE of the typical CL (CLTV) at a relevant weight to evaluate the precision of CL predictions.
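
    The effect is easy to reproduce with a toy allometric model: simulate clearances for a neonatal weight range, then refit the log-linear power model with different normalization weights and compare the standard error of the intercept. All values below are invented for illustration.

    ```python
    # Toy demonstration: the RSE of the population clearance estimate depends
    # on the normalization weight and is smallest near the median weight.
    import numpy as np

    rng = np.random.default_rng(7)
    wt = rng.lognormal(np.log(2.7), 0.25, size=60)      # neonatal weights (kg)
    cl_true = 0.9 * (wt / 2.7) ** 0.75                  # allometric CL (L/h)
    cl_obs = cl_true * np.exp(rng.normal(0.0, 0.2, 60)) # log-normal noise

    for w_norm in (70.0, 1.0, np.median(wt)):
        # OLS on log CL = log CLpop + theta * log(WT / Wnorm).
        X = np.column_stack([np.ones_like(wt), np.log(wt / w_norm)])
        y = np.log(cl_obs)
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = res[0] / (len(y) - 2)
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])  # SE of intercept
        print(f"normalization to {w_norm:5.2f} kg: "
              f"CLpop = {np.exp(beta[0]):.3f} L/h, RSE ~ {100 * se:.1f}%")
    ```

    Normalizing to 70 kg extrapolates the intercept far outside the observed weights, which is exactly why its RSE blows up while the typical CL predictions themselves are unaffected.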

  3. Advection-diffusion model for normal grain growth and the stagnation of normal grain growth in thin films

    International Nuclear Information System (INIS)

    Lou, C.

    2002-01-01

    An advection-diffusion model has been set up to describe normal grain growth. In this model grains are divided into different groups according to their topological classes (number of sides of a grain). Topological transformations are modelled by advective and diffusive flows governed by advective and diffusive coefficients respectively, which are assumed to be proportional to topological classes. The ordinary differential equations governing self-similar time-independent grain size distribution can be derived analytically from continuity equations. It is proved that the time-independent distributions obtained by solving the ordinary differential equations have the same form as the time-dependent distributions obtained by solving the continuity equations. The advection-diffusion model is extended to describe the stagnation of normal grain growth in thin films. Grain boundary grooving prevents grain boundaries from moving, and the correlation between neighbouring grains accelerates the stagnation of normal grain growth. After introducing grain boundary grooving and the correlation between neighbouring grains into the model, the grain size distribution is close to a lognormal distribution, which is usually found in experiments. A vertex computer simulation of normal grain growth has also been carried out to make a cross comparison with the advection-diffusion model. The result from the simulation did not verify the assumption that the advective and diffusive coefficients are proportional to topological classes. Instead, we have observed that topological transformations usually occur on certain topological classes. This suggests that the advection-diffusion model can be improved by making a more realistic assumption on topological transformations. (author)

  4. Modeling of the Direct Current Generator Including the Magnetic Saturation and Temperature Effects

    Directory of Open Access Journals (Sweden)

    Alfonso J. Mercado-Samur

    2013-11-01

    In this paper, the inclusion of the temperature effect on the field resistance in the DC1A direct current generator model, which is valid for stability studies, is proposed. First, the linear generator model is presented; then the effect of magnetic saturation and the change in the resistance value due to the temperature produced by the field current are included. Experimental results are compared with model simulations to validate the model. A direct current generator model which is a better representation of the generator is thereby obtained. Visual comparison between simulations and experimental results shows the success of the proposed model, as it presents the lowest error of the compared models. The accuracy of the proposed model is confirmed by a Modified Normalized Sum of Squared Errors index of 3.8979%.

  5. A normalization model suggests that attention changes the weighting of inputs between visual areas.

    Science.gov (United States)

    Ruff, Douglas A; Cohen, Marlene R

    2017-05-16

    Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.

  6. A Box-Cox normal model for response times

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, J.P.; Linden, W.J. van der

    2009-01-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box–Cox transformations for response

  7. Modified Normal Demand Distributions in (R,S)-Inventory Models

    NARCIS (Netherlands)

    Strijbosch, L.W.G.; Moors, J.J.A.

    2003-01-01

    To model demand, the normal distribution is by far the most popular; the disadvantage that it takes negative values is taken for granted. This paper proposes two modifications of the normal distribution, both taking non-negative values only. Safety factors and order-up-to-levels for the familiar (R,S) model
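
    The modification can be illustrated with the simplest non-negative variant, a normal distribution left-truncated at zero (the paper's exact modifications may differ): the order-up-to level for a target cycle service level then comes from the truncated distribution's quantile function.

    ```python
    # Order-up-to level under a plain normal vs. a left-truncated-at-zero
    # normal demand model. Illustrative parameter values.
    import numpy as np
    from scipy.stats import norm, truncnorm

    mu, sd = 20.0, 15.0        # review-plus-lead-time demand: mean, std dev
    service = 0.95             # target cycle service level

    # Plain normal: S = mu + z * sd, ignoring the mass on negative demand.
    s_normal = mu + norm.ppf(service) * sd

    # Truncated normal on [0, inf) with the same underlying mu and sd.
    a = (0.0 - mu) / sd
    s_trunc = truncnorm.ppf(service, a, np.inf, loc=mu, scale=sd)

    print(f"order-up-to level, normal model:    {s_normal:.1f}")
    print(f"order-up-to level, truncated model: {s_trunc:.1f}")
    ```

    The gap between the two levels grows with the coefficient of variation, which is precisely when the negative-demand mass of the plain normal stops being harmless.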

  8. Schema Design and Normalization Algorithm for XML Databases Model

    Directory of Open Access Journals (Sweden)

    Samir Abou El-Seoud

    2009-06-01

    In this paper we study the problem of schema design and normalization in the XML database model. We show that, like relational databases, XML documents may contain redundant information, and this redundancy may cause update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Based on our research work, in which we presented functional dependencies and normal forms of XML Schema, we present a decomposition algorithm for converting any XML Schema into a normalized one that satisfies X-BCNF.

  9. Notes on power of normality tests of error terms in regression models

    International Nuclear Information System (INIS)

    Střelec, Luboš

    2015-01-01

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary for inferences not to be misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.

  10. Notes on power of normality tests of error terms in regression models

    Energy Technology Data Exchange (ETDEWEB)

    Střelec, Luboš [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, Brno, 61300 (Czech Republic)

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary for inferences not to be misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.

  11. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    International Nuclear Information System (INIS)

    Baidillah, Marlin R; Takei, Masahiro

    2017-01-01

    A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions has been developed. The exponential model normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived using an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system condition. The exponential model normalization was applied to two-dimensional low and high contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of measured capacitance for low and high contrast dielectric distributions. (paper)

  12. Local stem cell depletion model for normal tissue damage

    International Nuclear Information System (INIS)

    Yaes, R.J.; Keland, A.

    1987-01-01

    The hypothesis that radiation causes normal tissue damage by completely depleting local regions of tissue of viable stem cells leads to a simple mathematical model for such damage. In organs like skin and spinal cord, where destruction of a small volume of tissue leads to a clinically apparent complication, the complication probability is expressed as a function of dose, volume and stem cell number by a simple triple negative exponential function analogous to the double exponential function of Munro and Gilbert for tumor control. The steep dose response curves for radiation myelitis that are obtained with our model are compared with the experimental data for radiation myelitis in laboratory rats. The model can be generalized to include other types of organs, high LET radiation, fractionated courses of radiation, and cases where an organ with a heterogeneous stem cell population receives an inhomogeneous dose of radiation. In principle it would thus be possible to determine the probability of tumor control and of damage to any organ within the radiation field if the dose distribution in three-dimensional space within a patient is known.

  13. Pseudo SU(3) shell model: Normal parity bands in odd-mass nuclei

    International Nuclear Information System (INIS)

    Vargas, C.E.; Hirsch, J.G.; Draayer, J.P.

    2000-01-01

    A pseudo SU(3) shell model description of normal parity bands in ¹⁵⁹Tb is presented. The Hamiltonian includes spherical Nilsson single-particle energies, the quadrupole-quadrupole and pairing interactions, as well as three rotor terms. A systematic parametrization is introduced, accompanied by a detailed discussion of the effect each term in the Hamiltonian has on the energy spectrum. Yrast and excited band wavefunctions are analyzed together with their B(E2) values.

  14. Food addiction spectrum: a theoretical model from normality to eating and overeating disorders.

    Science.gov (United States)

    Piccinni, Armando; Marazziti, Donatella; Vanelli, Federica; Franceschini, Caterina; Baroni, Stefano; Costanzo, Davide; Cremone, Ivan Mirko; Veltri, Antonello; Dell'Osso, Liliana

    2015-01-01

    The authors comment on the recently proposed food addiction spectrum, a theoretical model for understanding the continuum between several conditions ranging from normality to pathological states, including eating disorders and obesity, as well as why some individuals show a peculiar attachment to food that can become an addiction. Further, they review the possible neurobiological underpinnings of these conditions, which include dopaminergic neurotransmission and circuits that have long been implicated in drug addiction. The aim of this article is also to stimulate a debate on the possible model of a food (or eating) addiction spectrum that may be helpful in the search for novel therapeutic approaches to different pathological states related to disturbed feeding or overeating.

  15. Numerical modeling of normal turbulent plane jet impingement on solid wall

    Energy Technology Data Exchange (ETDEWEB)

    Guo, C.Y.; Maxwell, W.H.C.

    1984-10-01

    Attention is given to a numerical turbulence model for the impingement of a well developed normal plane jet on a solid wall, by means of which it is possible to express different jet impingement geometries in terms of different boundary conditions. Examples of these jets include those issuing from VTOL aircraft, chemical combustors, etc. The two-equation, turbulent kinetic energy-turbulent dissipation rate model is combined with the continuity equation and the transport equation of vorticity, using an iterative finite difference technique in the computations. Peak levels of turbulent kinetic energy occur not only in the impingement zone, but also in the intermingling zone between the edges of the free jet and the wall jet. 20 references.

  16. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters.
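
    The underlying equivalence is easy to demonstrate without the frailty term: explode each subject's follow-up into hazard pieces and fit a Poisson GLM with the event indicator as response and log(time at risk) as offset. The sketch below (Python with statsmodels, standing in for the paper's SAS macro) recovers a known log hazard ratio; the frailty version would additionally need a log-normal random intercept.

    ```python
    # Piecewise exponential survival model fitted as a Poisson GLM on the
    # "exploded" data set. Simulated data; the frailty term is omitted here.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 500
    x = rng.binomial(1, 0.5, n)                    # binary covariate
    time = rng.exponential(1.0 / np.exp(0.7 * x))  # true log hazard ratio 0.7
    event = (time < 2.0).astype(int)               # administrative censoring
    time = np.minimum(time, 2.0)

    cuts = [0.0, 0.5, 1.0, 2.0]                    # 3 baseline-hazard pieces
    rows = []
    for ti, ei, xi in zip(time, event, x):
        for j in range(len(cuts) - 1):             # explode subject by piece
            lo, hi = cuts[j], cuts[j + 1]
            if ti <= lo:
                break
            exposure = min(ti, hi) - lo            # time at risk in this piece
            d = int(ei and ti <= hi)               # event occurs in this piece
            rows.append((d, np.log(exposure), j, xi))

    d, off, piece, xx = map(np.array, zip(*rows))
    X = sm.add_constant(np.column_stack([piece == 1, piece == 2, xx]).astype(float))
    fit = sm.GLM(d, X, family=sm.families.Poisson(), offset=off).fit()
    print("estimated log hazard ratio:", fit.params[-1].round(3))  # near 0.7
    ```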

  17. Normal Brain-Skull Development with Hybrid Deformable VR Models Simulation.

    Science.gov (United States)

    Jin, Jing; De Ribaupierre, Sandrine; Eagleson, Roy

    2016-01-01

    This paper describes a simulation framework for a clinical application involving skull-brain co-development in infants, leading to a platform for craniosynostosis modeling. Craniosynostosis occurs when one or more sutures are fused early in life, resulting in an abnormal skull shape. Surgery is required to reopen the suture and reduce intracranial pressure, but is difficult without any predictive model to assist surgical planning. We aim to study normal brain-skull growth by computer simulation, which requires a head model and appropriate mathematical methods for brain and skull growth, respectively. On the basis of our previous model, we further specify the suture model into fibrous and cartilaginous sutures and develop an algorithm for skull extension. We evaluate the resulting simulation by comparison with datasets of cases and normal growth.

  18. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    This paper develops Bayesian inference in reliability for a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraints on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  19. Bayesian Option Pricing Using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars Peter

    While stochastic volatility models improve on the option pricing error when compared to the Black-Scholes-Merton model, mispricings remain. This paper uses mixed normal heteroskedasticity models to price options. Our model allows for significant negative skewness and time-varying higher-order moments of the risk-neutral distribution. Parameter inference using Gibbs sampling is explained, and we detail how to compute risk-neutral predictive densities taking into account parameter uncertainty. When forecasting out-of-sample options on the S&P 500 index, substantial improvements are found compared

  20. Normal Mode Derived Models of the Physical Properties of Earth's Outer Core

    Science.gov (United States)

    Irving, J. C. E.; Cottaar, S.; Lekic, V.; Wu, W.

    2017-12-01

    Earth's outer core, the largest reservoir of metal in our planet, is composed of an iron alloy of uncertain composition. Its dynamical behaviour is responsible for the generation of Earth's magnetic field, with convection driven by both thermal and chemical buoyancy fluxes. Existing models of the seismic velocity and density of the outer core exhibit some variation, and there are only a small number of models which aim to represent the outer core's density. It is therefore important that we develop a better understanding of the physical properties of the outer core. Though most of the outer core is likely to be well mixed, it is possible that the uppermost outer core is stably stratified: it may be enriched in light elements released during the growth of the solid, iron-enriched inner core; by elements dissolved from the mantle into the outer core; or by exsolution of compounds previously dissolved in the liquid metal which will eventually be swept into the mantle. The stratified layer may host MAC or Rossby waves, and it could impede communication between the chemically differentiated mantle and outer core, including screening out some of the geodynamo's signal. We use normal mode center frequencies to estimate the physical properties of the outer core in a Bayesian framework. We estimate the mineral physics parameters needed to best produce velocity and density models of the outer core which are consistent with the normal mode observations. We require that our models satisfy realistic physical constraints. We create models of the outer core with and without a distinct uppermost layer and assess the importance of this region. Our normal mode-derived models are compared with observations of body waves which travel through the outer core. In particular, we consider SmKS waves, which are especially sensitive to the uppermost outer core and are therefore an important way to understand the robustness of our models.

  1. Kinetic models of gene expression including non-coding RNAs

    Energy Technology Data Exchange (ETDEWEB)

    Zhdanov, Vladimir P., E-mail: zhdanov@catalysis.r

    2011-03-15

    In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
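
    The simplest model class reviewed, silencing of a target mRNA by an ncRNA through pairing and joint degradation, reduces to three mean-field kinetic equations. The sketch below integrates one such system; all rate constants are arbitrary illustrative values, not taken from the review.

    ```python
    # Mean-field kinetics of ncRNA-mediated silencing: an ncRNA pairs with a
    # target mRNA and the complex is degraded, reducing protein output.
    import numpy as np
    from scipy.integrate import solve_ivp

    k_m, k_s, k_p = 1.0, 1.5, 5.0     # synthesis rates: mRNA, ncRNA, protein
    g_m, g_s, g_p = 0.1, 0.1, 0.05    # first-order degradation rates
    k_pair = 0.2                      # mRNA-ncRNA association (-> degradation)

    def rhs(t, y):
        m, s, p = y                   # mRNA, ncRNA, protein copy numbers
        pairing = k_pair * m * s      # complex formation removes both RNAs
        dm = k_m - g_m * m - pairing
        ds = k_s - g_s * s - pairing
        dp = k_p * m - g_p * p
        return [dm, ds, dp]

    sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0])
    m, s, p = sol.y[:, -1]
    print(f"steady state: mRNA {m:.2f}, ncRNA {s:.2f}, protein {p:.2f}")
    # Without the ncRNA (k_pair = 0) the steady-state protein level would be
    # (k_p/g_p)*(k_m/g_m) = 1000; pairing silences part of that output.
    ```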

  2. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage

  3. Modeling the Circle of Willis Using Electrical Analogy Method under both Normal and Pathological Circumstances

    Science.gov (United States)

    Abdi, Mohsen; Karimi, Alireza; Navidbakhsh, Mahdi; Rahmati, Mohammadali; Hassani, Kamran; Razmkon, Ali

    2013-01-01

    Background and objective: The circle of Willis (COW) supports adequate blood supply to the brain. In the current study, the cardiovascular system is modeled using an equivalent electronic system focusing on the COW. Methods: In our previous study we used 42 compartments to model the whole cardiovascular system; in the current study we extended the model to 63 compartments. Each cardiovascular artery is modeled using electrical elements, including resistors, capacitors, and inductors. The MATLAB Simulink software is used to obtain the left and right ventricular pressures as well as the pressure distribution at the efferent arteries of the circle of Willis. First, the normal operation of the system is shown; then stenosis of the cerebral arteries is induced in the circuit and its effects are studied. Results: In the normal condition, the difference between the pressure distributions of the right and left efferent arteries (left and right ACA–A2, left and right MCA, left and right PCA–P2) is calculated to indicate the effect of the anatomical difference between the left and right supplying arteries of the COW. In the stenosis cases, the effect of internal carotid artery occlusion on efferent artery pressures is investigated. The modeling results are verified by comparison with clinical observations reported in the literature. Conclusion: We believe the presented model is a useful tool for representing the normal operation of the cardiovascular system and the study of its pathologies. PMID:25505747

  4. Log-Normal Turbulence Dissipation in Global Ocean Models

    Science.gov (United States)

    Pearson, Brodie; Fox-Kemper, Baylor

    2018-03-01

    Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality—robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
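
    A small diagnostic sketch of the kind of log-normality check described: the skewness and excess kurtosis of log ε vanish for an exactly log-normal field, so their departures from zero quantify the reported deviations. Synthetic samples stand in for ocean-model output.

```python
# Diagnostic sketch: test a dissipation field for log-normality via the
# moments of log(eps). Synthetic data stand in for ocean-model output.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
eps = rng.lognormal(mean=-20.0, sigma=2.0, size=100_000)  # fake dissipation (W/kg)

log_eps = np.log(eps)
print(f"skewness of log(eps): {stats.skew(log_eps):+.3f}  (0 for log-normal)")
print(f"excess kurtosis     : {stats.kurtosis(log_eps):+.3f}  (0 for log-normal)")

# Fit a log-normal for comparison; systematic departures in these moments
# are what the paper reports as functions of depth and subgrid scheme.
shape, loc, scale = stats.lognorm.fit(eps, floc=0.0)
print(f"fitted sigma={shape:.3f}, median={scale:.3e}")
```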

  5. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    Science.gov (United States)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distribution model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010, using a two-component univariate normal mixture distribution model. First, we fit the model to the empirical return data. Second, we apply the fitted model in risk analysis to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distribution model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the return distribution.
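
    A sketch of the risk computation, assuming a two-component normal mixture has already been fitted (the weights, means and standard deviations below are illustrative): VaR is obtained by root-finding on the mixture CDF, and CVaR from the closed-form partial expectation of each normal component.

```python
# VaR and CVaR for a two-component normal mixture of returns (illustrative
# parameters; in practice they would be fitted, e.g. by EM, to index returns).
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

w  = np.array([0.85, 0.15])     # component weights (assumed)
mu = np.array([0.008, -0.020])  # component means
sd = np.array([0.03, 0.09])     # component standard deviations
alpha = 0.05                    # tail probability

def mix_cdf(x):
    return np.sum(w * norm.cdf((x - mu) / sd))

# VaR: the alpha-quantile of the return distribution, sign-flipped to a loss.
q = brentq(lambda x: mix_cdf(x) - alpha, -1.0, 1.0)
var = -q

# CVaR = -E[R | R <= q], using E[X 1{X<=q}] = mu*Phi(z) - sd*phi(z) per component.
z = (q - mu) / sd
partial = w * (mu * norm.cdf(z) - sd * norm.pdf(z))
cvar = -partial.sum() / alpha

print(f"VaR(95%) = {var:.4f}, CVaR(95%) = {cvar:.4f}")
```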

  6. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary, and worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage, i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10) violations of this normality assumption often do not noticeably impact results. By contrast, the assumptions of a correct parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
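
    A minimal simulation in the spirit of the coverage check described: skewed, mean-zero errors, with the 95% confidence interval of the OLS slope evaluated over repeated samples. Sample sizes and the error distribution are illustrative choices.

```python
# Coverage sketch: with skewed (non-normal) errors, the 95% CI for the OLS
# slope still covers the true value close to 95% of the time in large samples.
import numpy as np

rng = np.random.default_rng(1)
n, true_slope, n_sim = 1000, 0.5, 2000
cover = 0
for _ in range(n_sim):
    x = rng.normal(size=n)
    e = rng.exponential(1.0, size=n) - 1.0  # skewed, mean-zero errors
    y = 1.0 + true_slope * x + e
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)            # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    cover += (beta[1] - 1.96 * se <= true_slope <= beta[1] + 1.96 * se)
print(f"empirical coverage: {cover / n_sim:.3f}  (nominal 0.95)")
```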

  7. Presenting Thin Media Models Affects Women's Choice of Diet or Normal Snacks

    Science.gov (United States)

    Krahe, Barbara; Krause, Christina

    2010-01-01

    Our study explored the influence of thin- versus normal-size media models and of self-reported restrained eating behavior on women's observed snacking behavior. Fifty female undergraduates saw a set of advertisements for beauty products showing either thin or computer-altered normal-size female models, allegedly as part of a study on effective…

  8. Option Pricing with Asymmetric Heteroskedastic Normal Mixture Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars

    This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time-varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities...

  9. Semi-empirical models for the estimation of clear sky solar global and direct normal irradiances in the tropics

    International Nuclear Information System (INIS)

    Janjai, S.; Sricharoen, K.; Pattarapanitchai, S.

    2011-01-01

    Highlights: → New semi-empirical models for predicting clear sky irradiance were developed. → The proposed models compare favorably with other empirical models. → Performance of the proposed models is comparable with that of widely used physical models. → The proposed models have an advantage over the physical models in terms of simplicity. -- Abstract: This paper presents semi-empirical models for estimating global and direct normal solar irradiances under clear sky conditions in the tropics. The models are based on a one-year period of clear sky global and direct normal irradiance data collected at three solar radiation monitoring stations in Thailand: Chiang Mai (18.78°N, 98.98°E) located in the North of the country, Nakhon Pathom (13.82°N, 100.04°E) in the Centre and Songkhla (7.20°N, 100.60°E) in the South. The models describe global and direct normal irradiances as functions of the Angstrom turbidity coefficient, the Angstrom wavelength exponent, precipitable water and total column ozone. The data of Angstrom turbidity coefficient, wavelength exponent and precipitable water were obtained from AERONET sunphotometers, and column ozone was retrieved from the OMI/AURA satellite. Model validation was accomplished using data from these three stations for the data periods which were not included in the model formulation. The models were also validated against an independent data set collected at Ubon Ratchathani (15.25°N, 104.87°E) in the Northeast. The global and direct normal irradiances calculated from the models and those obtained from measurements are in good agreement, with a root mean square difference (RMSD) of 7.5% for both global and direct normal irradiances. The performance of the models was also compared with that of other models. The performance of the models compared favorably with that of empirical models. Additionally, the accuracy of irradiances predicted from the proposed model are comparable with that obtained from some

  10. Calculation of the Normal Cost of a Pension Program Assuming the Interest Rate Follows the Vasicek Model

    Directory of Open Access Journals (Sweden)

    I Nyoman Widana

    2017-12-01

    Full Text Available Labor plays a very important role in national development. One way to optimize labor productivity is to guarantee an income after retirement. Therefore the government and the private sector must have programs that can ensure the sustainability of this financial support. One option is a pension plan. The purpose of this study is to calculate the normal cost of a pension program with the interest rate assumed to follow the Vasicek model, and to analyze the normal contributions of the program participants. The Vasicek model is used to better match actual interest-rate conditions. The methods used in this research are the Projected Unit Credit method and the Entry Age Normal method. The data source of this research is lecturers of FMIPA Unud. In addition, secondary data are used in the form of Bank Indonesia interest rates for the period January 2006-December 2015. The results of this study indicate that the older the age of a participant when entering the pension program, the greater the first-year normal cost and the smaller the benefit he or she will receive. Moreover, the normal cost computed with a constant interest rate is greater than the normal cost computed with the Vasicek rate. This occurs because the Vasicek model predicts interest rates between 4.8879% and 6.8384%, whereas the constant rate is only 4.25%. In addition, using a normal cost proportional to salary, it is found that the older the participant, the greater the proportion of salary devoted to the normal cost.
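
    A minimal sketch of the Vasicek short-rate model dr = a(b − r)dt + σ dW used in the study, with illustrative parameters (not the paper's Bank Indonesia calibration), comparing Monte Carlo discount factors against a constant 4.25% rate.

```python
# Vasicek short-rate simulation (illustrative parameters, not the paper's
# calibration): dr = a*(b - r)*dt + sigma*dW, Euler discretization.
import numpy as np

rng = np.random.default_rng(42)
a, b, sigma = 0.5, 0.06, 0.005  # mean-reversion speed, long-run mean, volatility
r0, T, steps, n_paths = 0.0425, 30.0, 360, 10_000
dt = T / steps

r = np.full(n_paths, r0)
discount = np.ones(n_paths)
for _ in range(steps):
    discount *= np.exp(-r * dt)  # accumulate stochastic discounting
    r += a * (b - r) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

print(f"mean simulated rate at T: {r.mean():.4%}")
print(f"Monte Carlo 30y discount factor: {discount.mean():.4f}")
print(f"constant-rate discount factor  : {np.exp(-r0 * T):.4f}")
```

    Higher simulated rates discount future benefits more heavily, which is consistent with the study's finding that the Vasicek-based normal cost lies below the constant-rate one.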

  11. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
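
    A compact sketch of the validation loop described, on synthetic data, with scikit-learn standing in for the authors' implementation; for brevity a single cross-validation layer replaces the full double (nested) cross-validation, and the permutation test shuffles the outcome labels to build the null distribution of the AUC.

```python
# Sketch of LASSO NTCP modeling with permutation testing (synthetic data;
# scikit-learn stands in for the authors' implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 200, 20                  # patients, candidate dosimetric/clinical features
X = rng.normal(size=(n, p))
logit = 0.8 * X[:, 0] - 0.6 * X[:, 1]         # two truly predictive features
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # binary complication outcome

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

# Permutation test: refit on label-shuffled data to get the null AUC distribution.
null = [cross_val_score(model, X, rng.permutation(y), cv=5,
                        scoring="roc_auc").mean() for _ in range(200)]
p_value = np.mean(np.array(null) >= auc)
print(f"cross-validated AUC = {auc:.3f}, permutation p = {p_value:.3f}")
```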

  12. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  13. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    Science.gov (United States)

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to a considerable loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible modeling based on a gamma distributed signal and normally distributed background noise and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures representing various experimental designs. Surprisingly, we observe that implementing a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement
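
    For reference, the normal-exponential ("normexp") correction that the record compares against has a well-known closed-form posterior mean (used, e.g., in limma); a sketch with parameters assumed known rather than estimated:

```python
# Closed-form normal-exponential ("normexp") background correction: observed
# x = s + b with signal s ~ Exp(mean theta) and noise b ~ N(mu, sigma^2).
# Parameters are assumed known here; in practice they are estimated by MLE.
import numpy as np
from scipy.stats import norm

def normexp_correct(x, mu, sigma, theta):
    a = x - mu - sigma**2 / theta  # mean of the truncated-normal posterior
    # E[s | x]: posterior mean of the signal, always positive.
    return a + sigma * norm.pdf(a / sigma) / norm.cdf(a / sigma)

x = np.array([80.0, 110.0, 300.0, 1500.0])  # raw probe intensities (synthetic)
print(normexp_correct(x, mu=100.0, sigma=15.0, theta=250.0))
```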

  14. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    Science.gov (United States)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; this ratio has the greatest uncertainty when the disease is rare or the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced to address the SMR problem. This study estimates the relative risk of bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using WinBUGS software. The study begins with a brief review of these models, starting with the SMR method followed by the log-normal model, which are then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method. The log-normal model can overcome the SMR problem when no bladder cancer cases are observed in an area.
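
    A toy contrast of the raw SMR with a log-normal-shrinkage estimate of relative risk. The simple empirical-Bayes shrinkage below is a crude stand-in for the paper's WinBUGS log-normal model, and the counts are synthetic; it nonetheless shows how areas with zero observed cases still receive a finite risk estimate.

```python
# Raw SMR versus a crude log-normal shrinkage estimate of relative risk
# (empirical-Bayes stand-in for the paper's WinBUGS model; synthetic counts).
import numpy as np

O = np.array([0, 2, 5, 12, 40])            # observed cases per district
E = np.array([1.5, 2.0, 4.0, 10.0, 35.0])  # expected cases

smr = O / E  # note: SMR = 0 whenever O = 0, which is exactly the problem

# Work on the log scale with a 0.5 continuity correction so zero counts are usable.
log_rr = np.log((O + 0.5) / (E + 0.5))
var_i = 1.0 / (O + 0.5)  # approximate sampling variance of the log SMR
m = log_rr.mean()
tau2 = max(log_rr.var() - var_i.mean(), 0.01)  # crude between-area variance

w = tau2 / (tau2 + var_i)  # shrinkage weight toward the global mean
rr_smooth = np.exp(w * log_rr + (1 - w) * m)

for o, e, s, r in zip(O, E, smr, rr_smooth):
    print(f"O={o:3d} E={e:5.1f}  SMR={s:5.2f}  log-normal RR={r:5.2f}")
```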

  15. Modelling growth curves of Nigerian indigenous normal feather ...

    African Journals Online (AJOL)

    This study was conducted to predict the growth curve parameters using Bayesian Gompertz and logistic models and also to compare the two growth functions in describing the body weight changes across age in Nigerian indigenous normal feather chicken. Each chick was wing-tagged at day old and body weights were ...

  16. Normal stresses in semiflexible polymer hydrogels

    Science.gov (United States)

    Vahabi, M.; Vos, Bart E.; de Cagny, Henri C. G.; Bonn, Daniel; Koenderink, Gijsje H.; MacKintosh, F. C.

    2018-03-01

    Biopolymer gels such as fibrin and collagen networks are known to develop tensile axial stress when subject to torsion. This negative normal stress is opposite to the classical Poynting effect observed for most elastic solids including synthetic polymer gels, where torsion provokes a positive normal stress. As shown recently, this anomalous behavior in fibrin gels depends on the open, porous network structure of biopolymer gels, which facilitates interstitial fluid flow during shear and can be described by a phenomenological two-fluid model with viscous coupling between network and solvent. Here we extend this model and develop a microscopic model for the individual diagonal components of the stress tensor that determine the axial response of semiflexible polymer hydrogels. This microscopic model predicts that the magnitude of these stress components depends inversely on the characteristic strain for the onset of nonlinear shear stress, which we confirm experimentally by shear rheometry on fibrin gels. Moreover, our model predicts a transient behavior of the normal stress, which is in excellent agreement with the full time-dependent normal stress we measure.

  17. A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

    Directory of Open Access Journals (Sweden)

    Michael Pfaffermayr

    2014-10-01

    Full Text Available The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test of whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate, and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.

  18. Reflectance spectrometry of normal and bruised human skins: experiments and modeling

    International Nuclear Information System (INIS)

    Kim, Oleg; Alber, Mark; McMurdy, John; Lines, Collin; Crawford, Gregory; Duffy, Susan

    2012-01-01

    A stochastic photon transport model in multilayer skin tissue combined with reflectance spectroscopy measurements is used to study normal and bruised skin. The model is shown to provide a very good approximation to both normal and bruised real skin tissues, as demonstrated by comparing experimental and simulated reflectance spectra. A sensitivity analysis of the skin reflectance spectrum to variations in skin layer thicknesses, the blood oxygenation parameter and the concentrations of the main chromophores is performed to optimize the model parameters. The reflectance spectrum of a developing bruise in a healthy adult is simulated, and the concentrations of bilirubin, the blood volume fraction and the blood oxygenation parameter are determined at different times as the bruise progresses. It is shown that bilirubin and the blood volume fraction reach their peak values at 80 and 55 h after contusion, respectively, and the oxygenation parameter remains below its normal value for 80 h after the contusion occurred. The obtained time correlations of chromophore concentrations in developing contusions are shown to be consistent with previous studies. The developed model uses a detailed seven-layer skin approximation for contusion and allows one to obtain more biologically relevant results than those obtained with previous models using one- to three-layer skin approximations. A combination of modeling with spectroscopy measurements provides a new tool for detailed biomedical studies of human skin tissue and for age determination of contusions. (paper)

  19. MOS modeling hierarchy including radiation effects

    International Nuclear Information System (INIS)

    Alexander, D.R.; Turfler, R.M.

    1975-01-01

    A hierarchy of modeling procedures has been developed for MOS transistors, circuit blocks, and integrated circuits which include the effects of total dose radiation and photocurrent response. The models were developed for use with the SCEPTRE circuit analysis program, but the techniques are suitable for other modern computer aided analysis programs. The modeling hierarchy permits the designer or analyst to select the level of modeling complexity consistent with circuit size, parametric information, and accuracy requirements. Improvements have been made in the implementation of important second order effects in the transistor MOS model, in the definition of MOS building block models, and in the development of composite terminal models for MOS integrated circuits

  20. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  1. From explicit to implicit normal mode initialization of a limited-area model

    Energy Technology Data Exchange (ETDEWEB)

    Bijlsma, S.J.

    2013-02-15

    In this note the implicit normal mode initialization of a limited-area model is discussed from a different point of view. To that end it is shown that the equations describing the explicit normal mode initialization applied to the shallow water equations in differentiated form on the sphere can readily be derived in normal mode space if the model equations are separable, but only in the case of stationary Rossby modes can be transformed into the implicit equations in physical space. This is a consequence of the simple relations between the components of the different modes in that case. In addition a simple eigenvalue problem is given for the frequencies of the gravity waves. (orig.)

  2. Normalization and Implementation of Three Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.; Gottlieb, Robert G.

    2016-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the asphericity of their generating central bodies. The gravitational potential of an aspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities that must be removed to generalize the method and solve for any possible orbit, including polar orbits. Samuel Pines, Bill Lear, and Robert Gottlieb developed three unique algorithms to eliminate these singularities. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear and Gottlieb algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and Associated Legendre Functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
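
    As an illustration of the normalization parameters mentioned, the sketch below applies the fully normalized convention common in geodesy, N̄nm = sqrt((2 − δ0m)(2n + 1)(n − m)!/(n + m)!), to the unnormalized associated Legendre functions from SciPy (whose Condon-Shortley phase is removed). This shows the normalization idea only, not the singularity-free Pines/Lear/Gottlieb recursions themselves.

```python
# Fully normalized associated Legendre functions (geodesy convention):
#   Nbar_{nm} = sqrt((2 - delta_{0m}) * (2n + 1) * (n - m)! / (n + m)!)
# scipy returns unnormalized ALFs with the Condon-Shortley phase, removed here.
import numpy as np
from scipy.special import lpmn
from math import factorial

def normalized_alfs(nmax, x):
    P, _ = lpmn(nmax, nmax, x)  # P[m, n] = P_n^m(x), with Condon-Shortley phase
    Pbar = np.zeros_like(P)
    for n in range(nmax + 1):
        for m in range(n + 1):
            delta = 1.0 if m == 0 else 0.0
            N = np.sqrt((2 - delta) * (2 * n + 1)
                        * factorial(n - m) / factorial(n + m))
            Pbar[m, n] = (-1) ** m * N * P[m, n]  # (-1)^m cancels the phase
    return Pbar

Pbar = normalized_alfs(4, np.sin(np.deg2rad(30.0)))  # argument x = sin(latitude)
print(Pbar[0, 2], Pbar[2, 2])                        # e.g. Pbar_{2,0}, Pbar_{2,2}
```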

  3. Population Synthesis Models for Normal Galaxies with Dusty Disks

    Directory of Open Access Journals (Sweden)

    Kyung-Won Suh

    2003-09-01

    Full Text Available To investigate the SEDs of galaxies considering the dust extinction processes in the galactic disks, we present population synthesis models for normal galaxies with dusty disks. We use PEGASE (Fioc & Rocca-Volmerange 1997) to model them with standard input parameters for stars and new dust parameters. We find that the model results are strongly dependent on the dust parameters as well as other parameters (e.g. star formation history). We compare the model results with the observations and discuss possible explanations. We find that the dust opacity functions derived from studies of asymptotic giant branch stars are useful for modeling a galaxy with a dusty disk.

  4. Computer modeling the boron compound factor in normal brain tissue

    International Nuclear Information System (INIS)

    Gavin, P.R.; Huiskamp, R.; Wheeler, F.J.; Griebenow, M.L.

    1993-01-01

    The macroscopic distribution of borocaptate sodium (Na₂B₁₂H₁₁SH, or BSH) in normal tissues has been determined and can be accurately predicted from the blood concentration. The compound para-borono-phenylalanine (p-BPA) has also been studied in dogs and normal tissue distribution has been determined. The total physical dose required to reach a biological isoeffect appears to increase directly as the proportion of boron capture dose increases. This effect, together with knowledge of the macrodistribution, led to estimates of the influence of the microdistribution of the BSH compound. This paper reports a computer model that was used to predict the compound factor for BSH and p-BPA and, hence, the equivalent radiation in normal tissues. The compound factor would need to be calculated for other compounds with different distributions. This information is needed to design appropriate normal tissue tolerance studies for different organ systems and/or different boron compounds

  5. Currents, HF Radio-derived, Monterey Bay, Normal Model, Zonal, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The data is the zonal component of ocean surface currents derived from High Frequency Radio-derived measurements, with missing values filled in by a normal model....

  6. Estimating structural equation models with non-normal variables by using transformations

    NARCIS (Netherlands)

    Montfort, van K.; Mooijaart, A.; Meijerink, F.

    2009-01-01

    We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample

  7. Understanding the implementation of complex interventions in health care: the normalization process model

    Directory of Open Access Journals (Sweden)

    Rogers Anne

    2007-09-01

    Full Text Available Abstract Background The Normalization Process Model is a theoretical model that assists in explaining the processes by which complex interventions become routinely embedded in health care practice. It offers a framework for process evaluation and also for comparative studies of complex interventions. It focuses on the factors that promote or inhibit the routine embedding of complex interventions in health care practice. Methods A formal theory structure is used to define the model, and its internal causal relations and mechanisms. The model is broken down to show that it is consistent and adequate in generating accurate description, systematic explanation, and the production of rational knowledge claims about the workability and integration of complex interventions. Results The model explains the normalization of complex interventions by reference to four factors demonstrated to promote or inhibit the operationalization and embedding of complex interventions (interactional workability, relational integration, skill-set workability, and contextual integration. Conclusion The model is consistent and adequate. Repeated calls for theoretically sound process evaluations in randomized controlled trials of complex interventions, and policy-makers who call for a proper understanding of implementation processes, emphasize the value of conceptual tools like the Normalization Process Model.

  8. Loss and thermal model for power semiconductors including device rating information

    DEFF Research Database (Denmark)

    Ma, Ke; Bahman, Amir Sajjad; Beczkowski, Szymon

    2014-01-01

    The electrical loading and device rating are both important factors that determine the loss and thermal behaviors of power semiconductor devices. In the existing loss and thermal models, only the electrical loadings are focused and treated as design variables, while the device rating is normally...

  9. Condition monitoring with wind turbine SCADA data using Neuro-Fuzzy normal behavior models

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar

    2012-01-01

    This paper presents the latest research results of a project that focuses on normal behavior models for condition monitoring of wind turbines and their components, via ordinary Supervisory Control And Data Acquisition (SCADA) data. In this machine learning approach, Adaptive Neuro-Fuzzy Inference System (ANFIS) models are employed to learn the normal behavior in a training phase, where the component condition can be considered healthy. In the application phase the trained models are applied to predict the target signals, e.g. temperatures, pressures, currents, power output, etc. The behavior of the prediction error is used as an indicator for normal and abnormal behavior, with respect to the learned behavior. The advantage of this approach is that the prediction error is widely decoupled from the typical fluctuations of the SCADA data caused by the different turbine operational modes. To classify...
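
    A sketch of the normal-behavior approach with a generic regressor standing in for ANFIS: train on data from a healthy period, set an error band from hold-out residuals, and flag later samples whose prediction error leaves the band. The SCADA signals and the fault offset are synthetic assumptions.

```python
# Normal-behavior condition monitoring sketch: a generic regressor (stand-in
# for ANFIS) learns a component temperature from SCADA inputs on healthy data;
# large prediction errors later flag abnormal behavior. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def scada(n, fault=0.0):
    power = rng.uniform(0, 2000, n)    # kW
    ambient = rng.uniform(-5, 25, n)   # deg C
    temp = 30 + 0.01 * power + 0.5 * ambient + rng.normal(0, 1, n) + fault
    return np.column_stack([power, ambient]), temp

X_tr, y_tr = scada(4000)               # healthy training period
X_val, y_val = scada(1000)             # healthy hold-out for the error band
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

thresh = 3 * np.std(y_val - model.predict(X_val))  # normal-error band

X_new, y_new = scada(1000, fault=8.0)  # overheating component
frac = np.mean(np.abs(y_new - model.predict(X_new)) > thresh)
print(f"fraction of samples flagged abnormal: {frac:.2f}")
```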

  10. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization.

    Science.gov (United States)

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-28

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  11. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization

    Science.gov (United States)

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-01

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  12. Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors.

    Directory of Open Access Journals (Sweden)

    Umair Khalil

    Full Text Available Exponential Smooth Transition Autoregressive (ESTAR) models can capture non-linear adjustment of deviations from equilibrium conditions, which may explain the economic behavior of many variables that appear non-stationary from a linear viewpoint. Many researchers employ the Kapetanios test, which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained critical values for both the finite-variance and infinite-variance cases; however, Cook's test statistics are oversized. Researchers have found that using conventional tests is dangerous, though the best performance among these is achieved by a heteroskedasticity-consistent covariance matrix estimator (HCCME). The oversizing of LM tests can be reduced by employing fixed-design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper the size of the Kapetanios test statistic employing heteroskedasticity-consistent covariance matrices is derived, and results are reported for various sample sizes in which the size distortion is reduced. The properties of estimates of ESTAR models are investigated when the errors are assumed non-normal. We compare the results obtained by nonlinear least squares fitting with those of quantile regression fitting in the presence of outliers, with the error distribution taken to be a t-distribution, for various sample sizes.
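
    A sketch of the Kapetanios-type test statistic on simulated ESTAR data: regress Δy_t on y_{t−1}³ and form the t-ratio of the cubic term. The statistic must be compared with the nonstandard critical values tabulated in the literature, which are not reproduced here.

```python
# Sketch of the KSS-type nonlinear unit root test: regress dy_t on y_{t-1}^3
# and examine the t-statistic of the cubic term. Compare it with the
# nonstandard critical values tabulated by Kapetanios et al. (2003), not with
# normal quantiles. Data here are simulated.
import numpy as np

rng = np.random.default_rng(3)
n = 500
# Simulate an ESTAR-type stationary process: mean reversion strengthens far from 0.
y = np.zeros(n)
for t in range(1, n):
    theta = 1 - np.exp(-0.5 * y[t - 1] ** 2)  # smooth transition weight
    y[t] = y[t - 1] - 0.5 * theta * y[t - 1] + rng.normal()

dy, ylag3 = np.diff(y), y[:-1] ** 3
beta = (ylag3 @ dy) / (ylag3 @ ylag3)         # no-intercept OLS slope
resid = dy - beta * ylag3
se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag3 @ ylag3))
print(f"t_NL = {beta / se:.2f}  (reject unit root if below the tabulated CV)")
```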

  13. Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian

    Science.gov (United States)

    Teneng, Dean

    2013-09-01

    We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models by the strategy proposed by Käärik and Umbleja (2011). We observe that the daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters for JPY/CHF (a computational problem in the maximum likelihood estimation), but CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, it may be impossible the other way around. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
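
    A sketch of the fitting-and-testing step using SciPy's norminvgauss as a stand-in for the authors' R workflow; synthetic draws replace the actual closing prices.

```python
# Fit a normal inverse Gaussian distribution and check the fit with a KS test
# (SciPy stands in for the authors' R workflow; data are synthetic).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
prices = stats.norminvgauss.rvs(a=2.0, b=0.3, loc=100.0, scale=5.0,
                                size=1200, random_state=rng)

params = stats.norminvgauss.fit(prices)  # MLE; can fail to converge for some
                                         # series, as the paper notes
D, p = stats.kstest(prices, "norminvgauss", args=params)
print(f"KS statistic = {D:.4f}, p-value = {p:.3f}")
```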

  14. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

    Full Text Available The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, optimal linear interpolation thresholding algorithm (OLI-Shrink is employed to guarantee a gentler thresholding effect. The results of comparative experiments conducted indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM index values that are comparable to those of the block-matching 3D transformation (BM3D method.

  15. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  16. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  17. Normal and Abnormal Scenario Modeling with GoldSim for Radioactive Waste Disposal System

    International Nuclear Information System (INIS)

    Lee, Youn Myoung; Jeong, Jong Tae

    2010-08-01

    A modeling study and the development of a total system performance assessment (TSPA) template program, by which the safety and performance of a radioactive waste repository can be assessed for normal and/or abnormal nuclide release cases, have been carried out utilizing the commercial development tool GoldSim. Scenarios associated with the various FEPs involved in the performance of the proposed repository, in view of nuclide transport and transfer both in the geosphere and biosphere, have also been studied. Selected normal and abnormal scenarios that could alter the groundwater flow scheme, and hence nuclide transport, are modeled with the template program. To this end, in-depth system models for the normal case and for the abnormal well and earthquake scenarios, described conceptually and rather practically and then implemented in the GoldSim TSPA template program, are introduced with conceptual schemes for each repository system. Illustrative evaluations with currently available data are also shown

  18. Inflammatory Cytokine Tumor Necrosis Factor α Confers Precancerous Phenotype in an Organoid Model of Normal Human Ovarian Surface Epithelial Cells

    Directory of Open Access Journals (Sweden)

    Joseph Kwong

    2009-06-01

    Full Text Available In this study, we established an in vitro organoid model of normal human ovarian surface epithelial (HOSE) cells. The spheroids of these normal HOSE cells resembled epithelial inclusion cysts in the human ovarian cortex, which are the cells of origin of ovarian epithelial tumors. Because there are strong correlations between chronic inflammation and the incidence of ovarian cancer, we used the organoid model to test whether the protumor inflammatory cytokine tumor necrosis factor α would induce a malignant phenotype in normal HOSE cells. Prolonged treatment with tumor necrosis factor α induced phenotypic changes in the HOSE spheroids, which exhibited the characteristics of precancerous lesions of ovarian epithelial tumors, including reinitiation of cell proliferation, structural disorganization, epithelial stratification, loss of epithelial polarity, degradation of basement membrane, cell invasion, and overexpression of ovarian cancer markers. The results of this study provide not only evidence supporting the link between chronic inflammation and ovarian cancer formation but also a relevant and novel in vitro model for studying the early events of ovarian cancer.

  19. Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging.

    Science.gov (United States)

    Li, Yusheng; Matej, Samuel; Karp, Joel S; Metzler, Scott D

    2017-05-01

    Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each LOR based on the measured, expected and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing the two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization coefficients with adequate statistical quality within a feasible scan time.

  20. Combined use of rapid-prototyping model and surgical guide in correction of mandibular asymmetry malformation patients with normal occlusal relationship.

    Science.gov (United States)

    Xu, Haisong; Zhang, Ce; Shim, Yoong Hoon; Li, Hongliang; Cao, Dejun

    2015-03-01

    The aim of this study is to discuss the application of rapid-prototyping models and surgical guides in the treatment of mandibular asymmetry malformation with a normal occlusal relationship. Twenty-four patients with mandibular asymmetry malformation and a relatively normal occlusal relationship were included in this study. Three-dimensional rapid-prototyping mandibular models were made for all patients from computed tomography (CT) DICOM data. The presurgical plan was designed on the model, and surgical guides for the osteotomy lines were manufactured. Genioplasty and/or mandibular osteotomy based on the presurgical plan were performed on these patients with the combined use of the rapid-prototyping model and surgical guides. All patients underwent a postoperative CT scan and had at least 3 months of follow-up. All patients were satisfied with the final results. According to the postoperative CT images and the 3-month follow-up, the mandibular asymmetry malformation was significantly improved in all patients, and the operation time was distinctly shortened relative to the conventional method. Rapid-prototyping models and surgical guides are viable auxiliary devices for the treatment of mandibular asymmetry malformation with a relatively normal occlusal relationship. Their combined use allows precise preoperative design, improves the surgical outcome, and shortens operating time.

  1. Simulation of reactive nanolaminates using reduced models: II. Normal propagation

    Energy Technology Data Exchange (ETDEWEB)

    Salloum, Maher; Knio, Omar M. [Department of Mechanical Engineering, The Johns Hopkins University, Baltimore, MD 21218-2686 (United States)

    2010-03-15

    Transient normal flame propagation in reactive Ni/Al multilayers is analyzed computationally. Two approaches are implemented, based on generalization of earlier methodology developed for axial propagation, and on extension of the model reduction formalism introduced in Part I. In both cases, the formulation accommodates non-uniform layering as well as the presence of inert layers. The equations of motion for the reactive system are integrated using a specially-tailored integration scheme, that combines extended-stability, Runge-Kutta-Chebychev (RKC) integration of diffusion terms with exact treatment of the chemical source term. The detailed and reduced models are first applied to the analysis of self-propagating fronts in uniformly-layered materials. Results indicate that both the front velocities and the ignition threshold are comparable for normal and axial propagation. Attention is then focused on analyzing the effect of a gap composed of inert material on reaction propagation. In particular, the impacts of gap width and thermal conductivity are briefly addressed. Finally, an example is considered illustrating reaction propagation in reactive composites combining regions corresponding to two bilayer widths. This setup is used to analyze the effect of the layering frequency on the velocity of the corresponding reaction fronts. In all cases considered, good agreement is observed between the predictions of the detailed model and the reduced model, which provides further support for adoption of the latter. (author)

  2. Modelling of tension stiffening for normal and high strength concrete

    DEFF Research Database (Denmark)

    Christiansen, Morten Bo; Nielsen, Mogens Peter

    1998-01-01

    ... form the model is extended to apply to biaxial stress fields as well. To determine the biaxial stress field, the theorem of minimum complementary elastic energy is used. The theory has been compared with tests on rods, disks, and beams of both normal and high strength concrete, and very good results...

  3. Black-Litterman model on non-normal stock return (Case study four banks at LQ-45 stock index)

    Science.gov (United States)

    Mahrivandi, Rizki; Noviyanti, Lienda; Setyanto, Gatot Riwi

    2017-03-01

    The formation of an optimal portfolio is a method that can help investors minimize risk and optimize profitability. One model for the optimal portfolio is the Black-Litterman (BL) model. The BL model can incorporate historical data and the views of investors to form a new prediction of portfolio returns as a basis for constructing the asset-weighting model. The BL model has two fundamental problems: the assumption of normality, and the estimation of the parameters of the Bayesian market prior when returns do not come from a normal distribution. This study provides an alternative solution in which the stock returns and investor views of the BL model are modeled with a non-normal distribution.
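
    For reference, the standard (Gaussian) Black-Litterman posterior mean that the paper generalizes, with small illustrative inputs (four assets, one relative view); the non-normal extension itself is not reproduced.

```python
# Standard (Gaussian) Black-Litterman posterior returns, as the baseline the
# paper generalizes; inputs are small illustrative values for four stocks.
import numpy as np

Sigma = np.array([[0.040, 0.006, 0.004, 0.002],  # return covariance
                  [0.006, 0.025, 0.005, 0.003],
                  [0.004, 0.005, 0.030, 0.004],
                  [0.002, 0.003, 0.004, 0.020]])
w_mkt = np.array([0.3, 0.3, 0.2, 0.2])           # market-cap weights
delta, tau = 2.5, 0.05                           # risk aversion, prior scale

pi = delta * Sigma @ w_mkt                       # implied equilibrium returns

# One relative view: asset 1 outperforms asset 2 by 2% (annualized).
P = np.array([[1.0, -1.0, 0.0, 0.0]])
q = np.array([0.02])
Omega = tau * (P @ Sigma @ P.T)                  # view uncertainty (1x1)

A = np.linalg.inv(np.linalg.inv(tau * Sigma) + P.T @ np.linalg.inv(Omega) @ P)
mu_bl = A @ (np.linalg.inv(tau * Sigma) @ pi + P.T @ np.linalg.inv(Omega) @ q)
print("posterior expected returns:", np.round(mu_bl, 4))
```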

  4. Correlated random sampling for multivariate normal and log-normal distributions

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.

    2012-01-01

    A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
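
    A minimal sketch of one standard realization of the idea: correlated multivariate normal samples via a Cholesky factor, and correlated log-normals by exponentiation (the covariance below is interpreted as that of the log values; the paper's exact sampling scheme is not reproduced).

```python
# Correlated sampling: multivariate normal via Cholesky, and correlated
# log-normals by exponentiating normals generated with the same factor.
import numpy as np

rng = np.random.default_rng(11)
mu = np.array([0.0, 1.0, -0.5])
cov = np.array([[1.0, 0.6, 0.2],
                [0.6, 2.0, 0.4],
                [0.2, 0.4, 0.5]])

L = np.linalg.cholesky(cov)                 # cov = L @ L.T
z = rng.standard_normal((100_000, 3))
normal_samples = mu + z @ L.T               # correlated multivariate normal

lognormal_samples = np.exp(normal_samples)  # correlated log-normals (the cov
                                            # above is that of the *log* values)

print(np.round(np.corrcoef(normal_samples.T), 3))
print(np.round(np.corrcoef(lognormal_samples.T), 3))  # correlations shrink
```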

  5. Interpretation of metabolic memory phenomenon using a physiological systems model: What drives oxidative stress following glucose normalization?

    Science.gov (United States)

    Voronova, Veronika; Zhudenkov, Kirill; Helmlinger, Gabriel; Peskov, Kirill

    2017-01-01

    Hyperglycemia is generally associated with oxidative stress, which plays a key role in diabetes-related complications. A complex, quantitative relationship has been established between glucose levels and oxidative stress, both in vitro and in vivo. For example, oxidative stress is known to persist after glucose normalization, a phenomenon described as metabolic memory. Also, uncontrolled (non-constant) glucose levels appear to be more detrimental to patients with diabetes than high but constant glucose levels. The objective of the current study was to delineate the mechanisms underlying such behaviors, using a mechanistic physiological systems modeling approach that captures and integrates the essential underlying pathophysiological processes. The proposed model was based on a system of ordinary differential equations. It describes the interplay between reactive oxygen species (ROS) production potential, ROS-induced cell alterations, and subsequent adaptation mechanisms. Model parameters were calibrated using different sources of experimental information, including ROS production in cell cultures exposed to various concentration profiles of constant and oscillating glucose levels. The model adequately reproduced the excess ROS generation after glucose normalization. Such behavior appeared to be driven by positive feedback regulation between ROS and ROS-induced cell alterations. The further oxidative stress-related detrimental effect induced by unstable glucose levels can be explained by the inability of cells to adapt to a dynamic environment. Cell adaptation to unstable high glucose declines during glucose normalization phases, and a further glucose increase promotes similar or higher oxidative stress. In contrast, a gradual decrease of ROS production potential, driven by adaptation, is observed in cells exposed to constant high glucose.
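
    A toy two-variable illustration (not the paper's calibrated model) of the memory mechanism: ROS production is driven by glucose and by slowly decaying ROS-induced cell alterations, so ROS remains elevated long after glucose is normalized. All rate constants are arbitrary.

```python
# Toy illustration of the metabolic-memory mechanism: ROS level r is driven
# by glucose g and by slowly decaying ROS-induced cell alterations c.
import numpy as np
from scipy.integrate import solve_ivp

def glucose(t):
    return 15.0 if t < 100.0 else 5.0  # high glucose, then normalized (mM)

def rhs(t, y):
    r, c = y                                   # ROS level, cell alterations
    dr = 0.1 * glucose(t) + 0.2 * c - 0.2 * r  # production + feedback - clearance
    dc = 0.005 * r - 0.01 * c                  # alterations build up, decay slowly
    return [dr, dc]

sol = solve_ivp(rhs, (0.0, 400.0), [0.0, 0.0], max_step=0.5)
for tq in (99.0, 120.0, 400.0):
    i = np.argmin(np.abs(sol.t - tq))
    print(f"t={sol.t[i]:5.1f}  ROS={sol.y[0, i]:5.2f}  alterations={sol.y[1, i]:5.2f}")
# The normal-glucose steady state has ROS = 5; the slowly decaying alterations
# keep ROS above it long after normalization (the "memory").
```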

  6. Progressive IRP Models for Power Resources Including EPP

    Directory of Open Access Journals (Sweden)

    Yiping Zhu

    2017-01-01

    Full Text Available With a view to optimizing regional power supply and demand, this paper develops planning schedules for supply- and demand-side resources, including energy efficiency power plants (EPPs), to meet benefit, cost, and environmental constraints. To highlight the characteristics of the different supply- and demand-side resources under economic, environmental, and carbon constraints, three planning models with progressive constraints are constructed. Applying the three models to the same example shows that their optimal solutions differ. The planning model including EPP has obvious advantages when pollutant and carbon emission constraints are considered, confirming the low cost and low emissions of EPP. The construction of progressive IRP models for power resources considering EPP offers a useful reference for guiding the planning and layout of EPP among other power resources and for achieving cost and environmental objectives.

  7. Evaluation of subject contrast and normalized average glandular dose by semi-analytical models

    International Nuclear Information System (INIS)

    Tomal, A.; Poletti, M.E.; Caldas, L.V.E.

    2010-01-01

    In this work, two semi-analytical models are described to evaluate the subject contrast of nodules and the normalized average glandular dose in mammography. Both models were used to study the influence of some parameters, such as breast characteristics (thickness and composition) and incident spectra (kVp and target-filter combination) on the subject contrast of a nodule and on the normalized average glandular dose. From the subject contrast results, detection limits of nodules were also determined. Our results are in good agreement with those reported by other authors, who had used Monte Carlo simulation, showing the robustness of our semi-analytical method.

  8. An investigation of FLUENT's fan model including the effect of swirl velocity

    International Nuclear Information System (INIS)

    El Saheli, A.; Barron, R.M.

    2002-01-01

    The purpose of this paper is to investigate and discuss the reliability of simplified models for the computational fluid dynamics (CFD) simulation of air flow through automotive engine cooling fans. One of the most widely used simplified fan models in industry is a variant of the actuator disk model which is available in most commercial CFD software, such as FLUENT. In this model, the fan is replaced by an infinitely thin surface on which pressure rise across the fan is specified as a polynomial function of normal velocity or flow rate. The advantages of this model are that it is simple, it accurately predicts the pressure rise through the fan and the axial velocity, and it is robust

  9. Normal tissue complication probability modeling of radiation-induced hypothyroidism after head-and-neck radiation therapy.

    Science.gov (United States)

    Bakhshandeh, Mohsen; Hashemi, Bijan; Mahdavi, Seied Rabi Mehdi; Nikoofar, Alireza; Vasheghani, Maryam; Kazemnejad, Anoshirvan

    2013-02-01

    To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D(50) estimated from the models was approximately 44 Gy. The implemented normal tissue complication probability models showed a parallel architecture for the
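
    The linear-quadratic conversion quoted above is a standard step; a minimal sketch, assuming the total dose of each DVH bin is delivered in a fixed number of fractions:

```python
# LQ conversion of a DVH dose bin to 2 Gy/fraction equivalent dose (EQD2),
# with alpha/beta = 3 Gy as in the study: EQD2 = D * (d + a/b) / (2 + a/b),
# where d = D / n_fractions is the dose per fraction for that bin.
import numpy as np

def eqd2(total_dose, n_fractions, alpha_beta=3.0):
    d = total_dose / n_fractions
    return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

doses = np.array([10.0, 30.0, 50.0, 70.0])  # Gy, example DVH bin doses
print(np.round(eqd2(doses, n_fractions=35), 2))
```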

  10. Normal Tissue Complication Probability Modeling of Radiation-Induced Hypothyroidism After Head-and-Neck Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Bakhshandeh, Mohsen [Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Hashemi, Bijan, E-mail: bhashemi@modares.ac.ir [Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Mahdavi, Seied Rabi Mehdi [Department of Medical Physics, Faculty of Medical Sciences, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Nikoofar, Alireza; Vasheghani, Maryam [Department of Radiation Oncology, Hafte-Tir Hospital, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Kazemnejad, Anoshirvan [Department of Biostatistics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran (Iran, Islamic Republic of)

    2013-02-01

    Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D₅₀ estimated from the models was approximately 44 Gy. Conclusions: The implemented

  11. Blood Vessel Normalization in the Hamster Oral Cancer Model for Experimental Cancer Therapy Studies

    Energy Technology Data Exchange (ETDEWEB)

    Ana J. Molinari; Romina F. Aromando; Maria E. Itoiz; Marcela A. Garabalino; Andrea Monti Hughes; Elisa M. Heber; Emiliano C. C. Pozzi; David W. Nigg; Veronica A. Trivillin; Amanda E. Schwint

    2012-07-01

Normalization of tumor blood vessels improves drug and oxygen delivery to cancer cells. The aim of this study was to develop a technique to normalize blood vessels in the hamster cheek pouch model of oral cancer. Materials and Methods: Tumor-bearing hamsters were treated with thalidomide and were compared with controls. Results: Twenty-eight hours after treatment with thalidomide, the blood vessels of premalignant tissue observable in vivo became narrower and less tortuous than those of controls; Evans Blue dye extravasation in tumor was significantly reduced (indicating a reduction in the aberrant tumor vascular hyperpermeability that compromises blood flow), and tumor blood vessel morphology in histological sections, labeled for Factor VIII, revealed a significant reduction in compressive forces. These findings indicated blood vessel normalization within a window of 48 h. Conclusion: The technique developed herein has rendered the hamster oral cancer model amenable to vascular normalization research, with the potential benefit of vascular normalization in head and neck cancer therapy.

  12. Modeling pore corrosion in normally open gold-plated copper connectors.

    Energy Technology Data Exchange (ETDEWEB)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien; Enos, David George; Serna, Lysle M.; Sorensen, Neil Robert

    2008-09-01

The goal of this study is to model the electrical response of gold-plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.
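
    The two statistical ingredients named above (initiation probability proportional to surface area, growth-extinction probability proportional to bloom volume) can be illustrated with a toy Monte Carlo. This is only a sketch under invented rate constants, not the authors' numerical model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_blooms(area=1.0, k_init=3.0, k_ext=0.2, dv=0.05, steps=200):
    """Toy dynamics: new sites ~ Poisson(k_init * area) per step; each active
    bloom grows by dv and stops growing with probability min(1, k_ext * v)."""
    active, frozen = [], []
    for _ in range(steps):
        active.extend([0.0] * rng.poisson(k_init * area))   # initiation
        surviving = []
        for v in active:
            v += dv                                         # bloom growth
            if rng.random() < min(1.0, k_ext * v):
                frozen.append(v)                            # growth extinguished
            else:
                surviving.append(v)
        active = surviving
    return active, frozen

active, frozen = simulate_blooms()
# Site density keeps rising with time, while the size distribution of
# extinguished blooms stays roughly stationary, mirroring the observations.
print(len(active) + len(frozen), np.mean(frozen))
```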

  13. Bayesian Option Pricing using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen; Stentoft, Lars

    2014-01-01

Option pricing using mixed normal heteroskedasticity models is considered. It is explained how to perform inference and price options in a Bayesian framework. The approach allows one to easily compute risk-neutral predictive price densities which take into account parameter uncertainty....... In an application to the S&P 500 index, classical and Bayesian inference is performed on the mixture model using the available return data. Comparing the ML estimates and posterior moments, small differences are found. When pricing a rich sample of options on the index, both methods yield similar pricing errors...... measured in dollar and implied standard deviation losses, and it turns out that the impact of parameter uncertainty is minor. Therefore, when it comes to option pricing where large amounts of data are available, the choice of the inference method is unimportant. The results are robust to different...

  14. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...... can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach, we examined how uncertainty in demand and variable costs affects the optimal choice...
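
    The iterative coupling sketched in the abstract (equilibrium model proposing a technology mix, risk-adjustment module repricing each technology) can be caricatured as a fixed-point loop. The cost/variance numbers and the inverse-cost dispatch rule below are invented purely to show the control flow, not the paper's actual formulation:

```python
def solve_mix(techs, demand=100.0, risk_aversion=0.5, tol=1e-4, max_iter=50):
    """Alternate between a toy 'equilibrium' step and a risk-adjustment step
    until the capacity shares stop changing."""
    share = {t: 1.0 / len(techs) for t in techs}
    for _ in range(max_iter):
        # risk-adjustment module: penalise revenue variance at current share
        adj = {t: p["cost"] + risk_aversion * p["var"] * share[t]
               for t, p in techs.items()}
        # equilibrium step (placeholder rule): shares ~ 1 / adjusted cost
        inv = {t: 1.0 / c for t, c in adj.items()}
        total = sum(inv.values())
        new_share = {t: v / total for t, v in inv.items()}
        if max(abs(new_share[t] - share[t]) for t in techs) < tol:
            break
        share = new_share
    return {t: round(s * demand, 1) for t, s in share.items()}

print(solve_mix({"wind": {"cost": 30.0, "var": 40.0},
                 "gas":  {"cost": 45.0, "var": 5.0}}))
```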

  15. Parton recombination model including resonance production. RL-78-040

    International Nuclear Information System (INIS)

    Roberts, R.G.; Hwa, R.C.; Matsuda, S.

    1978-05-01

Possible effects of resonance production on the meson inclusive distribution in the fragmentation region are investigated in the framework of the parton recombination model. From a detailed study of the data on vector-meson production, a reliable ratio of the vector-to-pseudoscalar rates is determined. Then the influence of the decay of the vector mesons on the pseudoscalar spectrum is examined, and the effect found to be no more than 25% for x > 0.5. The normalizations of the non-strange antiquark distributions are still higher than those in a quiescent proton. The agreement between the calculated results and the data remains very good. 36 references.

  16. Parton recombination model including resonance production. RL-78-040

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. G.; Hwa, R. C.; Matsuda, S.

    1978-05-01

Possible effects of resonance production on the meson inclusive distribution in the fragmentation region are investigated in the framework of the parton recombination model. From a detailed study of the data on vector-meson production, a reliable ratio of the vector-to-pseudoscalar rates is determined. Then the influence of the decay of the vector mesons on the pseudoscalar spectrum is examined, and the effect found to be no more than 25% for x > 0.5. The normalizations of the non-strange antiquark distributions are still higher than those in a quiescent proton. The agreement between the calculated results and the data remains very good. 36 references.

  17. Mathematical model and computer code for coated particles performance at normal operating conditions

    International Nuclear Information System (INIS)

    Golubev, I.; Kadarmetov, I.; Makarov, V.

    2002-01-01

Computer modeling of the thermo-mechanical behavior of coated particles during operation at both normal and off-normal conditions plays a very significant role, particularly at the stage of new reactor development. In Russia, extensive experience has been accumulated on the fabrication and reactor testing of coated particles (CP) and fuel elements with UO2 kernels. However, this experience cannot be used in full for the development of the new reactor installation GT-MHR, because of the very deep burn-up of the fuel based on plutonium oxide (up to 70% FIMA). Therefore the mathematical modeling of CP thermal-mechanical behavior and failure prediction becomes particularly important. The authors have a clear understanding that the serviceability of fuel with high burn-ups is defined not only by thermo-mechanics, but also by structural changes in coating materials, the thermodynamics of chemical processes, the 'amoeba effect', CO formation, etc. In the report, the first steps in the development of an integrated code for numerical modeling of coated particle behavior are presented, together with some calculated results concerning the influence of various design parameters on fuel coated particle endurance under GT-MHR normal operating conditions. A failure model is developed to predict the failed fraction of TRISO-coated particles. In this model it is assumed that the failure of a CP depends not only on the probability of SiC-layer fracture but also on PyC-layer damage. The coated particle is considered as a single integral structure. (author)
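
    For concreteness, coated-particle performance codes commonly express SiC-layer fracture with a Weibull weakest-link law and combine layer failure probabilities multiplicatively. The snippet below is a generic sketch of that idea with placeholder parameters, not the integrated code described in the report:

```python
import math

def sic_failure_probability(sigma_mpa, sigma0_mpa=350.0, m=6.0):
    """Weibull fracture probability for a tensile hoop stress sigma (MPa);
    sigma0 and m are illustrative material parameters."""
    if sigma_mpa <= 0.0:
        return 0.0                      # no fracture under compression
    return 1.0 - math.exp(-((sigma_mpa / sigma0_mpa) ** m))

def particle_failure(p_sic, p_pyc):
    """One simple way to let both SiC fracture and PyC damage contribute,
    treating the layer failures as independent events."""
    return 1.0 - (1.0 - p_sic) * (1.0 - p_pyc)

print(particle_failure(sic_failure_probability(300.0), 0.01))
```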

  18. A direct comparison of popular models of normal memory loss and Alzheimer's disease in samples of African Americans, Mexican Americans, and refugees and immigrants from the former Soviet Union.

    Science.gov (United States)

    Schrauf, Robert W; Iris, Madelyn

    2011-04-01

    To understand how people differentiate normal memory loss from Alzheimer's disease (AD) by investigating cultural models of these conditions. Ethnographic interviews followed by a survey. Cultural consensus analysis was used to test for the presence of group models, derive the "culturally correct" set of beliefs, and compare models of normal memory loss and AD. Chicago, Illinois. One hundred eight individuals from local neighborhoods: African Americans, Mexican Americans, and refugees and immigrants from the former Soviet Union. Participants responded to yes-or-no questions about the nature and causes of normal memory loss and AD and provided information on ethnicity, age, sex, acculturation, and experience with AD. Groups held a common model of AD as a brain-based disease reflecting irreversible cognitive decline. Higher levels of acculturation predicted greater knowledge of AD. Russian speakers favored biological over psychological models of the disease. Groups also held a common model of normal memory loss, including the important belief that "normal" forgetting involves eventual recall of the forgotten material. Popular models of memory loss and AD confirm that patients and clinicians are speaking the same "language" in their discussions of memory loss and AD. Nevertheless, the presence of coherent models of memory loss and AD, and the unequal distribution of that knowledge across groups, suggests that clinicians should include wider circles of patients' families and friends in their consultations. These results frame knowledge as distributed across social groups rather than simply the possession of individual minds. © 2011, Copyright the Authors. Journal compilation © 2011, The American Geriatrics Society.

  19. Publicly available models to predict normal boiling point of organic compounds

    International Nuclear Information System (INIS)

    Oprisiu, Ioana; Marcou, Gilles; Horvath, Dragos; Brunel, Damien Bernard; Rivollet, Fabien; Varnek, Alexandre

    2013-01-01

Quantitative structure–property models to predict the normal boiling point (Tb) of organic compounds were developed using non-linear ASNNs (associative neural networks) as well as multiple linear regression – ISIDA-MLR and SQS (stochastic QSAR sampler). Models were built on a diverse set of 2098 organic compounds with Tb varying in the range of 185–491 K. In ISIDA-MLR and ASNN calculations, fragment descriptors were used, whereas fragment, FPTs (fuzzy pharmacophore triplets), and ChemAxon descriptors were employed in SQS models. Prediction quality of the models has been assessed in 5-fold cross validation. Obtained models were implemented in the on-line ISIDA predictor at http://infochim.u-strasbg.fr/webserv/VSEngine.html
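
    The validation protocol (5-fold cross validation of a multiple linear regression on fragment-count descriptors) is easy to reproduce in outline. The matrix below is a random stand-in for the 2098-compound descriptor table, so only the workflow, not the reported accuracy, is meaningful:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(2098, 50)).astype(float)   # fake fragment counts
y = 185.0 + 306.0 * rng.random(2098)                  # Tb spread over 185-491 K

scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print("5-fold CV R^2:", scores.mean())
```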

  20. Application of a Brittle Damage Model to Normal Plate-on-Plate Impact

    National Research Council Canada - National Science Library

    Raftenberg, Martin N

    2005-01-01

    A brittle damage model presented by Grinfeld and Wright of the U.S. Army Research Laboratory was implemented in the LS-DYNA finite element code and applied to the simulation of normal plate-on-plate impact...

  1. A Note on the Equivalence between the Normal and the Lognormal Implied Volatility: A Model-Free Approach

    OpenAIRE

    Grunspan, Cyril

    2011-01-01

First, we show that implied normal volatility is intimately linked with the incomplete Gamma function. Then, we deduce an expansion of implied normal volatility in terms of the time-value of a European call option. Next, we formulate an equivalence between the implied normal volatility and the lognormal implied volatility with any strike and any model. This generalizes a known result for the SABR model. Finally, we address the issue of the "breakeven move" of a delta-hedged portfolio.
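
    The normal/lognormal equivalence can be checked numerically: price a call under the normal (Bachelier) model, then solve for the lognormal (Black) volatility reproducing the same price. A minimal sketch, assuming zero rates and using SciPy's root bracketing:

```python
import math
from scipy.optimize import brentq
from scipy.stats import norm

def bachelier_call(F, K, sigma_n, T):
    d = (F - K) / (sigma_n * math.sqrt(T))
    return (F - K) * norm.cdf(d) + sigma_n * math.sqrt(T) * norm.pdf(d)

def black_call(F, K, sigma_ln, T):
    sd = sigma_ln * math.sqrt(T)
    d1 = (math.log(F / K) + 0.5 * sd * sd) / sd
    return F * norm.cdf(d1) - K * norm.cdf(d1 - sd)

def lognormal_implied_vol(F, K, price, T):
    return brentq(lambda s: black_call(F, K, s, T) - price, 1e-8, 5.0)

F, K, T, sigma_n = 100.0, 105.0, 1.0, 15.0
price = bachelier_call(F, K, sigma_n, T)
print("equivalent lognormal vol:", lognormal_implied_vol(F, K, price, T))
```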

  2. gamma-Hadron family description by quasi-scaling model at normal nuclear composition of primary cosmic rays

    CERN Document Server

    Kalmakhelidze, M; Svanidze, M

    2002-01-01

The nuclear composition of primary cosmic rays was investigated in the energy region 10^15-10^16 eV. The study is based on a comparison of gamma-hadron families observed by the Pamir and Pamir-Chacaltaya collaborations with those generated by means of the quasi-scaling model MC0 at different nuclear compositions. It was shown that all characteristics of the observed families (including their intensity) are in very good agreement with the properties of events simulated with MC0 at normal composition, and in disagreement at heavy-dominant compositions.

  3. Wind turbine condition monitoring based on SCADA data using normal behavior models

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar; Achiche, Sofiane

    2013-01-01

This paper proposes a system for wind turbine condition monitoring using Adaptive Neuro-Fuzzy Inference Systems (ANFIS). For this purpose: (1) ANFIS normal behavior models for common Supervisory Control And Data Acquisition (SCADA) data are developed in order to detect abnormal behavior...... the applicability of ANFIS models for monitoring wind turbine SCADA signals. The computational time needed for model training is compared to Neural Network (NN) models, showing the strength of ANFIS in training speed. (2) For automation of fault diagnosis, Fuzzy Inference Systems (FIS) are used to analyze...

  4. Multivariate Normal Tissue Complication Probability Modeling of Heart Valve Dysfunction in Hodgkin Lymphoma Survivors

    International Nuclear Information System (INIS)

    Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto

    2013-01-01

    Purpose: To establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced asymptomatic heart valvular defects (RVD). Methods and Materials: Fifty-six patients treated with sequential chemoradiation therapy for Hodgkin lymphoma (HL) were retrospectively reviewed for RVD events. Clinical information along with whole heart, cardiac chambers, and lung dose distribution parameters was collected, and the correlations to RVD were analyzed by means of Spearman's rank correlation coefficient (Rs). For the selection of the model order and parameters for NTCP modeling, a multivariate logistic regression method using resampling techniques (bootstrapping) was applied. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). Results: When we analyzed the whole heart, a 3-variable NTCP model including the maximum dose, whole heart volume, and lung volume was shown to be the optimal predictive model for RVD (Rs = 0.573, P<.001, AUC = 0.83). When we analyzed the cardiac chambers individually, for the left atrium and for the left ventricle, an NTCP model based on 3 variables including the percentage volume exceeding 30 Gy (V30), cardiac chamber volume, and lung volume was selected as the most predictive model (Rs = 0.539, P<.001, AUC = 0.83; and Rs = 0.557, P<.001, AUC = 0.82, respectively). The NTCP values increase as heart maximum dose or cardiac chambers V30 increase. They also increase with larger volumes of the heart or cardiac chambers and decrease when lung volume is larger. Conclusions: We propose logistic NTCP models for RVD considering not only heart irradiation dose but also the combined effects of lung and heart volumes. Our study establishes the statistical evidence of the indirect effect of lung size on radio-induced heart toxicity
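
    The model-building recipe (multivariate logistic regression with bootstrap resampling) has a compact skeleton. The covariates and outcomes below are simulated stand-ins for the 56-patient cohort; only the procedure is illustrated:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 56
X = np.column_stack([30 + 15 * rng.random(n),      # heart maximum dose (Gy)
                     600 + 200 * rng.random(n),    # chamber volume (cm^3)
                     2500 + 800 * rng.random(n)])  # lung volume (cm^3)
logit = -5 + 0.12 * X[:, 0] + 0.004 * X[:, 1] - 0.001 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

boot_coefs = []
for _ in range(1000):                              # bootstrap replicates
    idx = rng.integers(0, n, n)
    boot_coefs.append(LogisticRegression(max_iter=1000)
                      .fit(X[idx], y[idx]).coef_[0])
model = LogisticRegression(max_iter=1000).fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
print("coefficient spread over bootstraps:", np.std(boot_coefs, axis=0))
```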

  5. Nitroglycerin provocation in normal subjects is not a useful human migraine model?

    DEFF Research Database (Denmark)

    Tvedskov, J F; Iversen, Helle Klingenberg; Olesen, J

    2010-01-01

Provoking delayed migraine with nitroglycerin in migraine sufferers is a cumbersome model. Patients are difficult to recruit, migraine comes on late and variably, and only 50-80% of patients develop an attack. A model using normal volunteers would be much more useful, but it should be validated...... aspirin 1000 mg, zolmitriptan 5 mg or placebo to normal healthy volunteers. The design was a double-blind, placebo-controlled, three-way crossover. Our hypothesis was that these drugs would be effective in the treatment of the mild constant headache induced by long-lasting GTN infusion. The headaches did...... experiment suggests that headache caused by direct nitric oxide (NO) action in the continued presence of NO is very resistant to analgesics and to specific acute migraine treatments. This suggests that NO works very deep in the cascade of events associated with vascular headache, whereas the tested drugs work...

  6. Options and pitfalls of normal tissues complication probability models

    International Nuclear Information System (INIS)

    Dorr, Wolfgang

    2011-01-01

Full text: Technological improvements in the physical administration of radiotherapy have led to increasing conformation of the treatment volume (TV) with the planning target volume (PTV) and of the irradiated volume (IV) with the TV. In this process of improvement of the physical quality of radiotherapy, the total volumes of organs at risk exposed to significant doses have significantly decreased, resulting in increased inhomogeneities in the dose distributions within these organs. This has resulted in a need to identify and quantify volume effects in different normal tissues. Today, irradiated volume must be considered a 6th 'R' of radiotherapy, in addition to the 5 'Rs' defined by Withers and Steel in the mid/late 1980s. The current status of knowledge of these volume effects has recently been summarized for many organs and tissues by the QUANTEC (Quantitative Analysis of Normal Tissue Effects in the Clinic) initiative [Int. J. Radiat. Oncol. Biol. Phys. 76 (3) Suppl., 2010]. However, the concept of using dose-volume histogram parameters as a basis for dose constraints, even without applying any models for normal tissue complication probabilities (NTCP), is based on (some) assumptions that are not met in clinical routine treatment planning. First, and most important, dose-volume histogram (DVH) parameters are usually derived from a single 'snap-shot' CT scan, without considering physiological (urinary bladder, intestine) or radiation-induced (edema, patient weight loss) changes during radiotherapy. Also, individual variations, or different institutional strategies for delineating organs at risk, are rarely considered. Moreover, the reduction of the 3-dimensional dose distribution into a '2-dimensional' DVH parameter implies that the localization of the dose within an organ is irrelevant; there are ample examples that this assumption is not justified. Routinely used dose constraints also do not take into account that the residual function of an organ may be

  7. Improved Discovery of Molecular Interactions in Genome-Scale Data with Adaptive Model-Based Normalization

    Science.gov (United States)

    Brown, Patrick O.

    2013-01-01

    Background High throughput molecular-interaction studies using immunoprecipitations (IP) or affinity purifications are powerful and widely used in biology research. One of many important applications of this method is to identify the set of RNAs that interact with a particular RNA-binding protein (RBP). Here, the unique statistical challenge presented is to delineate a specific set of RNAs that are enriched in one sample relative to another, typically a specific IP compared to a non-specific control to model background. The choice of normalization procedure critically impacts the number of RNAs that will be identified as interacting with an RBP at a given significance threshold – yet existing normalization methods make assumptions that are often fundamentally inaccurate when applied to IP enrichment data. Methods In this paper, we present a new normalization methodology that is specifically designed for identifying enriched RNA or DNA sequences in an IP. The normalization (called adaptive or AD normalization) uses a basic model of the IP experiment and is not a variant of mean, quantile, or other methodology previously proposed. The approach is evaluated statistically and tested with simulated and empirical data. Results and Conclusions The adaptive (AD) normalization method results in a greatly increased range in the number of enriched RNAs identified, fewer false positives, and overall better concordance with independent biological evidence, for the RBPs we analyzed, compared to median normalization. The approach is also applicable to the study of pairwise RNA, DNA and protein interactions such as the analysis of transcription factors via chromatin immunoprecipitation (ChIP) or any other experiments where samples from two conditions, one of which contains an enriched subset of the other, are studied. PMID:23349766

  8. Modelling of real area of contact between tool and workpiece in metal forming processes including the influence of subsurface deformation

    DEFF Research Database (Denmark)

    Nielsen, Chris Valentin; Martins, Paulo A. F.; Bay, Niels Oluf

    2016-01-01

New equipment for testing asperity deformation at various normal loads and subsurface elongations is presented. Resulting real contact area ratios increase heavily with increasing subsurface expansion due to lowered yield pressure on the asperities when imposing subsurface normal stress parallel to the surface. Finite element modelling supports the presentation and contributes by extrapolation of results to complete the mapping of contact area as a function of normal pressure and one-directional subsurface strain parallel to the surface. Improved modelling of the real contact area is the basis for estimating friction in the numerical modelling of metal forming processes.

  9. Deformation associated with continental normal faults

    Science.gov (United States)

    Resor, Phillip G.

Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by satellite radar interferometry (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ~20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes, including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master

  10. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

Implementations of the DLM are, however, not very versatile in terms of the geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...

  11. Sildenafil normalizes bowel transit in preclinical models of constipation.

    Directory of Open Access Journals (Sweden)

    Sarah K Sharman

Guanylyl cyclase-C (GC-C) agonists increase cGMP levels in the intestinal epithelium to promote secretion. This process underlies the utility of exogenous GC-C agonists such as linaclotide for the treatment of chronic idiopathic constipation (CIC) and irritable bowel syndrome with constipation (IBS-C). Because GC-C agonists have limited use in pediatric patients, there is a need for alternative cGMP-elevating agents that are effective in the intestine. The present study aimed to determine whether the PDE-5 inhibitor sildenafil has similar effects as linaclotide on preclinical models of constipation. Oral administration of sildenafil caused increased cGMP levels in mouse intestinal epithelium, demonstrating that blocking cGMP breakdown is an alternative approach to increase cGMP in the gut. Both linaclotide and sildenafil reduced proliferation and increased differentiation in colon mucosa, indicating common target pathways. The homeostatic effects of cGMP required gut turnover, since maximal effects were observed after 3 days of treatment. Neither linaclotide nor sildenafil treatment affected intestinal transit or water content of fecal pellets in healthy mice. To test the effectiveness of cGMP elevation in a functional motility disorder model, mice were treated with dextran sulfate sodium (DSS) to induce colitis and were allowed to recover for several weeks. The recovered animals exhibited slower transit but increased fecal water content. An acute dose of sildenafil was able to normalize transit and fecal water content in the DSS-recovery animal model, and also in loperamide-induced constipation. The higher fecal water content in the recovered animals was due to a compromised epithelial barrier, which was normalized by sildenafil treatment. Taken together, our results show that sildenafil can have similar effects as linaclotide on the intestine, and may have therapeutic benefit to patients with CIC, IBS-C, and post-infectious IBS.

  12. Development and Implementation of Mechanistic Terry Turbine Models in RELAP-7 to Simulate RCIC Normal Operation Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zou, Ling [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Hongbin [Idaho National Lab. (INL), Idaho Falls, ID (United States); O' Brien, James Edward [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC (Reactor Core Isolation Cooling) systems in the Fukushima accidents and to extend the BWR RCIC and PWR AFW (Auxiliary Feed Water) operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia’s original work [1], have been developed and implemented in the RELAP-7 code to simulate the RCIC system. In 2016, our effort was focused on normal working conditions of the RCIC system. More complex off-design conditions will be pursued in later years when more data are available. In the Sandia model, the turbine stator inlet velocity is provided according to a reduced-order model which was obtained from a large number of CFD (computational fluid dynamics) simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine stator inlet. The models include both an adiabatic expansion process inside the nozzle and a free expansion process outside of the nozzle to ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary input information for the Terry turbine rotor model. The analytical models for the nozzle were validated with experimental data and benchmarked with CFD simulations. The analytical models generally agree well with the experimental data and CFD simulations. The analytical models are suitable for implementation into a reactor system analysis code or severe accident code as part of mechanistic and dynamical models to understand the RCIC behaviors. The newly developed nozzle models and the turbine rotor model modified from Sandia’s original work have been implemented into RELAP-7, along with the original Sandia Terry turbine model. A new pump model has also been developed and implemented to couple with the Terry turbine model. An input
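
    The adiabatic (isentropic) expansion step inside the nozzle fixes the jet velocity through the stagnation enthalpy drop. As a back-of-envelope sketch only, treating steam as an ideal gas with constant properties (which the actual RELAP-7 models need not assume):

```python
import math

def exit_velocity(T0_K, p0_Pa, p_amb_Pa, gamma=1.30, cp=1996.0):
    """v = sqrt(2 * cp * T0 * (1 - (p/p0)^((gamma-1)/gamma))), in m/s."""
    pr = (p_amb_Pa / p0_Pa) ** ((gamma - 1.0) / gamma)
    return math.sqrt(max(0.0, 2.0 * cp * T0_K * (1.0 - pr)))

# Example: high-pressure steam expanding to about 1 bar gives a supersonic
# jet, consistent with the supersonic bucket-entrance velocities noted above.
print(exit_velocity(T0_K=560.0, p0_Pa=70e5, p_amb_Pa=1e5))
```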

  13. Modification of the biologic dose to normal tissue by daily fraction

    Energy Technology Data Exchange (ETDEWEB)

    Wollin, M; Kagan, A R [Southern California Permanente Medical Group, Los Angeles Calif. (USA). Dep. of Radiation Therapy

    1976-12-01

A method to predict normal tissue injury is proposed that successfully handles high daily doses and unusual treatment times by calculating a new value called BIR (Biologic Index of Reaction). BIR and NSD were calculated for various normal tissue reactions. With the aid of statistical correlation techniques it is found that the BIR model is better than the NSD model in predicting radiation myelopathy and vocal edema, and as good as NSD in predicting rib fracture. Neither model predicts pericardial effusion. In no case were the results of BIR inferior to those of NSD.

  14. Deconstructing Interocular Suppression: Attention and Divisive Normalization.

    Directory of Open Access Journals (Sweden)

    Hsin-Hung Li

    2015-10-01

In interocular suppression, a suprathreshold monocular target can be rendered invisible by a salient competitor stimulus presented in the other eye. Despite decades of research on interocular suppression and related phenomena (e.g., binocular rivalry, flash suppression, continuous flash suppression), the neural processing underlying interocular suppression is still unknown. We developed and tested a computational model of interocular suppression. The model included two processes that contributed to the strength of interocular suppression: divisive normalization and attentional modulation. According to the model, the salient competitor induced a stimulus-driven attentional modulation selective for the location and orientation of the competitor, thereby increasing the gain of neural responses to the competitor and reducing the gain of neural responses to the target. Additional suppression was induced by divisive normalization in the model, similar to other forms of visual masking. To test the model, we conducted psychophysics experiments in which both the size and the eye-of-origin of the competitor were manipulated. For small and medium competitors, behavioral performance was consonant with a change in the response gain of neurons that responded to the target. But large competitors induced a contrast-gain change, even when the competitor was split between the two eyes. The model correctly predicted these results and outperformed an alternative model in which the attentional modulation was eye specific. We conclude that both stimulus-driven attention (selective for location and feature) and divisive normalization contribute to interocular suppression.

  15. Deconstructing Interocular Suppression: Attention and Divisive Normalization.

    Science.gov (United States)

    Li, Hsin-Hung; Carrasco, Marisa; Heeger, David J

    2015-10-01

    In interocular suppression, a suprathreshold monocular target can be rendered invisible by a salient competitor stimulus presented in the other eye. Despite decades of research on interocular suppression and related phenomena (e.g., binocular rivalry, flash suppression, continuous flash suppression), the neural processing underlying interocular suppression is still unknown. We developed and tested a computational model of interocular suppression. The model included two processes that contributed to the strength of interocular suppression: divisive normalization and attentional modulation. According to the model, the salient competitor induced a stimulus-driven attentional modulation selective for the location and orientation of the competitor, thereby increasing the gain of neural responses to the competitor and reducing the gain of neural responses to the target. Additional suppression was induced by divisive normalization in the model, similar to other forms of visual masking. To test the model, we conducted psychophysics experiments in which both the size and the eye-of-origin of the competitor were manipulated. For small and medium competitors, behavioral performance was consonant with a change in the response gain of neurons that responded to the target. But large competitors induced a contrast-gain change, even when the competitor was split between the two eyes. The model correctly predicted these results and outperformed an alternative model in which the attentional modulation was eye specific. We conclude that both stimulus-driven attention (selective for location and feature) and divisive normalization contribute to interocular suppression.
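
    The two model ingredients named in the abstract, an attentional gain applied to each stimulus and divisive normalization across the pooled responses, fit in a few lines. This is a generic Reynolds-Heeger-style stand-in with arbitrary constants, not the authors' fitted model:

```python
import numpy as np

def population_response(drive, attn_gain, sigma=0.1, n=2.0):
    """Attention-scaled drives, raised to an exponent and divided by the
    normalization pool (semisaturation constant sigma plus summed drive)."""
    excited = (attn_gain * drive) ** n
    return excited / (sigma ** n + excited.sum())

drive = np.array([0.5, 1.0])    # [monocular target, salient competitor]
attn = np.array([0.8, 1.3])     # stimulus-driven attention favours competitor
print(population_response(drive, attn))   # target response is suppressed
```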

  16. Hydrodynamic Cucker-Smale model with normalized communication weights and time delay

    KAUST Repository

    Choi, Young-Pil

    2017-07-17

    We study a hydrodynamic Cucker-Smale-type model with time delay in communication and information processing, in which agents interact with each other through normalized communication weights. The model consists of a pressureless Euler system with time delayed non-local alignment forces. We resort to its Lagrangian formulation and prove the existence of its global in time classical solutions. Moreover, we derive a sufficient condition for the asymptotic flocking behavior of the solutions. Finally, we show the presence of a critical phenomenon for the Eulerian system posed in the spatially one-dimensional setting.
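
    For orientation, one standard way to write such a system, following the Cucker-Smale literature on normalized weights with delay (the paper's exact formulation may differ in details), is:

```latex
% Pressureless Euler system with delayed, normalized nonlocal alignment:
% \tau > 0 is the time delay and \psi \ge 0 the communication weight.
\begin{align*}
  &\partial_t \rho + \nabla \cdot (\rho u) = 0, \\
  &\partial_t u + (u \cdot \nabla) u
    = \int \tilde{\psi}(x,y)\,\bigl(u(y,\,t-\tau) - u(x,\,t)\bigr)\,
      \rho(y,\,t-\tau)\,\mathrm{d}y, \\
  &\tilde{\psi}(x,y) :=
    \frac{\psi(x-y)}{\int \psi(x-z)\,\rho(z,\,t-\tau)\,\mathrm{d}z}.
\end{align*}
```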

  17. Experimental Modeling of VHTR Plenum Flows during Normal Operation and Pressurized Conduction Cooldown

    Energy Technology Data Exchange (ETDEWEB)

    Glenn E McCreery; Keith G Condie

    2006-09-01

The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Plant (NGNP) Project in the U.S., which has the goal of demonstrating the production of emissions-free electricity and hydrogen by 2015. The present document addresses experimental modeling of flow and thermal mixing phenomena of importance during normal or reduced power operation and during a loss of forced reactor cooling (pressurized conduction cooldown) scenario. The objectives of the experiments are (1) to provide benchmark data for assessment and improvement of codes proposed for NGNP designs and safety studies, and (2) to obtain a better understanding of related phenomena, behavior and needs. Physical models of VHTR vessel upper and lower plenums which use various working fluids to scale phenomena of interest are described. The models may be used both to simulate natural convection conditions during pressurized conduction cooldown and to simulate turbulent lower plenum flow during normal or reduced power operation.

  18. Stochastic modelling of two-phase flows including phase change

    International Nuclear Information System (INIS)

    Hurisse, O.; Minier, J.P.

    2011-01-01

    Stochastic modelling has already been developed and applied for single-phase flows and incompressible two-phase flows. In this article, we propose an extension of this modelling approach to two-phase flows including phase change (e.g. for steam-water flows). Two aspects are emphasised: a stochastic model accounting for phase transition and a modelling constraint which arises from volume conservation. To illustrate the whole approach, some remarks are eventually proposed for two-fluid models. (authors)

  19. Prediction of consonant recognition in quiet for listeners with normal and impaired hearing using an auditory model.

    Science.gov (United States)

    Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas

    2014-03-01

    Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.

  20. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    Science.gov (United States)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP
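
    A stripped-down skeleton of the pipeline (PCA of the DVHs to at least 95% variance, bootstrap resampling, BIC-scored logistic models) is shown below. To stay short it replaces the genetic-algorithm search and VIF filter with exhaustive search over column pairs, uses plain binary rather than ordinal logistic regression, and runs on simulated placeholder data:

```python
from itertools import combinations
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
latent = rng.random((345, 5))
dvh = latent @ rng.random((5, 100))                # low-rank fake DVH matrix
pcs = PCA(n_components=0.95).fit_transform(dvh)    # PCs covering >=95% variance
X_all = np.hstack([pcs, rng.random((345, 3))])     # PCs + fake clinical factors
y = (rng.random(345) < 0.3).astype(int)

def bic(model, X, y):
    """Bayesian information criterion of a fitted binary logistic model."""
    p = model.predict_proba(X)[:, 1]
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return -2 * ll + (X.shape[1] + 1) * np.log(len(y))

idx = rng.integers(0, len(y), len(y))              # one bootstrap sample
best = min(combinations(range(X_all.shape[1]), 2),
           key=lambda c: bic(LogisticRegression(max_iter=500)
                             .fit(X_all[idx][:, np.array(c)], y[idx]),
                             X_all[idx][:, np.array(c)], y[idx]))
print("columns selected on this bootstrap:", best)
```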

  1. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon; Liang, Faming

    2014-01-01

    Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo

  2. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MR have become more and more useful. The finite element method (FEM) is a widely used technique to estimate the mechanical behavior of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to developments in both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by some authors. In this study, the geometries (used in modeling) of a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complex. Except for the spaces between the adjacent surfaces of the phalanges, which were fused, the metatarsals, cuneiforms, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were developed independently to form the foot and ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other. There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones, and a decrease at the intermediate cuneiform and calcaneus bones, in

  3. Molecular dynamics study of lipid bilayers modeling the plasma membranes of normal murine thymocytes and leukemic GRSL cells.

    Science.gov (United States)

    Andoh, Yoshimichi; Okazaki, Susumu; Ueoka, Ryuichi

    2013-04-01

Molecular dynamics (MD) calculations for the plasma membranes of normal murine thymocytes and thymus-derived leukemic GRSL cells in water have been performed under physiological isothermal-isobaric conditions (310.15 K and 1 atm) to investigate changes in membrane properties induced by canceration. The model membranes used in our calculations for normal and leukemic thymocytes comprised 23 and 25 kinds of lipids, respectively, including phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine, phosphatidylinositol, sphingomyelin, lysophospholipids, and cholesterol. The mole fractions of the lipids adopted here were based on previously published experimental values. Our calculations clearly showed that the membrane area was increased in leukemic cells, and that the isothermal area compressibility of the leukemic plasma membranes was double that of normal cells. The calculated membranes of leukemic cells were thus considerably bulkier and softer in the lateral direction compared with those of normal cells. The tilt angle of the cholesterol and the conformation of the phospholipid fatty acid tails both showed a lower level of order in leukemic cell membranes compared with normal cell membranes. The lateral radial distribution function of the lipids also showed a more disordered structure in leukemic cell membranes than in normal cell membranes. These observations all show that, for the present thymocytes, the lateral structure of the membrane is considerably disordered by canceration. Furthermore, the calculated lateral self-diffusion coefficient of the lipid molecules in leukemic cell membranes was almost double that in normal cell membranes. The calculated rotational and wobbling autocorrelation functions also indicated that the molecular motion of the lipids was enhanced in leukemic cell membranes. Thus, here we have demonstrated that the membranes of thymocyte leukemic cells are more disordered and more fluid than normal cell membranes.

  4. Normal mode analysis and applications in biological physics.

    Science.gov (United States)

    Dykeman, Eric C; Sankey, Otto F

    2010-10-27

    Normal mode analysis has become a popular and often used theoretical tool in the study of functional motions in enzymes, viruses, and large protein assemblies. The use of normal modes in the study of these motions is often extremely fruitful since many of the functional motions of large proteins can be described using just a few normal modes which are intimately related to the overall structure of the protein. In this review, we present a broad overview of several popular methods used in the study of normal modes in biological physics including continuum elastic theory, the elastic network model, and a new all-atom method, recently developed, which is capable of computing a subset of the low frequency vibrational modes exactly. After a review of the various methods, we present several examples of applications of normal modes in the study of functional motions, with an emphasis on viral capsids.
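
    As a concrete illustration of the elastic network model mentioned above, the anisotropic-network variant builds a Hessian from uniform springs between residues within a cutoff and diagonalizes it; the low-frequency eigenvectors are the functional modes. Coordinates here are random stand-ins for real C-alpha positions:

```python
import numpy as np

def enm_modes(coords, cutoff=10.0, k=1.0):
    """Anisotropic network model: 3N x 3N Hessian from pairwise springs,
    eigendecomposition gives normal modes (rigid-body modes near zero)."""
    n = len(coords)
    hess = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            block = -k * np.outer(d, d) / r2          # off-diagonal 3x3 block
            hess[3*i:3*i+3, 3*j:3*j+3] = block
            hess[3*j:3*j+3, 3*i:3*i+3] = block
            hess[3*i:3*i+3, 3*i:3*i+3] -= block       # diagonal accumulates
            hess[3*j:3*j+3, 3*j:3*j+3] -= block
    return np.linalg.eigh(hess)

coords = np.random.default_rng(3).random((50, 3)) * 30.0
evals, evecs = enm_modes(coords)
print(evals[:8])    # at least six near-zero rigid-body eigenvalues first
```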

  5. Statistical Theory of Normal Grain Growth Revisited

    International Nuclear Information System (INIS)

    Gadomski, A.; Luczka, J.

    2002-01-01

In this paper, we discuss three physically relevant problems concerning the normal grain growth process. These are: infinite vs finite size of the system under study (a step towards more realistic modeling); conditions of fine-grained structure formation, with possible applications to thin films and biomembranes, and interesting relations to the superplasticity of materials; and the approach to log-normality, a ubiquitous natural phenomenon frequently reported in the literature. It turns out that all three important points mentioned can be included in a Mulheran-Harding type behavior of evolving grain-containing systems that we have studied previously. (author)

  6. American Option Pricing using GARCH models and the Normal Inverse Gaussian distribution

    DEFF Research Database (Denmark)

    Stentoft, Lars Peter

    In this paper we propose a feasible way to price American options in a model with time varying volatility and conditional skewness and leptokurtosis using GARCH processes and the Normal Inverse Gaussian distribution. We show how the risk neutral dynamics can be obtained in this model, we interpret...... properties shows that there are important option pricing differences compared to the Gaussian case as well as to the symmetric special case. A large scale empirical examination shows that our model outperforms the Gaussian case for pricing options on three large US stocks as well as a major index...

  7. Modeling Electric Double-Layers Including Chemical Reaction Effects

    DEFF Research Database (Denmark)

    Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.

    2014-01-01

A physicochemical and numerical model for the transient formation of an electric double-layer between an electrolyte and a chemically-active flat surface is presented, based on a finite elements integration of the nonlinear Nernst-Planck-Poisson model including chemical reactions. The model works...... for symmetric and asymmetric multi-species electrolytes and is not limited to a range of surface potentials. Numerical simulations are presented for the case of a CaCO3 electrolyte solution in contact with a surface with rate-controlled protonation/deprotonation reactions. The surface charge and potential...... are determined by the surface reactions, and therefore they depend on the bulk solution composition and concentration...

  8. A model for the analysis of a normal evolution scenarios for a deep geological granite repository for high-level radioactive waste

    International Nuclear Information System (INIS)

    Cormenzana Lopez, J.L.; Cunado, M.A.; Lopez, M.T.

    1996-01-01

The methodology usually used to evaluate the behaviour of deep geological repositories for high-level radioactive waste comprises three phases: identification of factors (processes, characteristics and events) that can affect the repository; generation of scenarios (in general, a normal evolution scenario (Reference Scenario) and various disruptive scenarios (earthquake, human intrusion, etc.) are considered); and evaluation of the behaviour of the repository in each scenario. The normal evolution scenario, taking into account all factors with a high probability of occurrence, is the first to be analysed. The performance assessment being carried out by ENRESA for the AGP Granite has led to the identification of 63 such factors. To analyse repository behaviour in the normal evolution scenario, it is necessary first of all to create an integrated model of the global system. This is a qualitative model including the 63 factors identified. For a global view of such a complex system, it is very useful to display the relationships between factors graphically in an Influence Diagram. This paper shows the Influence Diagram used in the analysis of the AGP Granite Reference Scenario. (Author)

  9. MODEL OF THE TOKAMAK EDGE DENSITY PEDESTAL INCLUDING DIFFUSIVE NEUTRALS

    International Nuclear Information System (INIS)

    BURRELL, K.H.

    2003-01-01

Several previous analytic models of the tokamak edge density pedestal have been based on diffusive transport of plasma plus free-streaming of neutrals. This latter neutral model includes only the effect of ionization and neglects charge exchange. The present work models the edge density pedestal using diffusive transport for both the plasma and the neutrals. In contrast to the free-streaming model, a diffusion model for the neutrals includes the effect of both charge exchange and ionization and is valid when charge exchange is the dominant interaction. Surprisingly, the functional forms for the electron and neutral density profiles from the present calculation are identical to the results of the previous analytic models. There are some differences in the detailed definition of various parameters in the solution. For experimentally relevant cases where ionization and charge exchange rate are comparable, both models predict approximately the same width for the edge density pedestal

  10. The COBE normalization for standard cold dark matter

    Science.gov (United States)

    Bunn, Emory F.; Scott, Douglas; White, Martin

    1995-01-01

The Cosmic Background Explorer Satellite (COBE) detection of microwave anisotropies provides the best way of fixing the amplitude of cosmological fluctuations on the largest scales. This normalization is usually given for an n = 1 spectrum, including only the anisotropy caused by the Sachs-Wolfe effect. This is certainly not a good approximation for a model containing any reasonable amount of baryonic matter. In fact, even tilted Sachs-Wolfe spectra are not a good fit to models like cold dark matter (CDM). Here, we normalize standard CDM (sCDM) to the two-year COBE data and quote the best amplitude in terms of the conventionally used measures of power. We also give normalizations for some specific variants of this standard model, and we indicate how the normalization depends on the assumed values of n, Omega_B and H_0. For sCDM we find <Q> = 19.9 ± 1.5 μK, corresponding to σ_8 = 1.34 ± 0.10, with the normalization at large scales being B = (8.16 ± 1.04) × 10^5 (Mpc/h)^4, and other numbers given in the table. The measured rms temperature fluctuation smoothed on 10° is a little low relative to this normalization. This is mainly due to the low quadrupole in the data: when the quadrupole is removed, the measured value of σ(10°) is quite consistent with the best-fitting <Q>. The use of <Q> should be preferred over σ(10°) when its value can be determined for a particular theory, since it makes full use of the data.

  11. Pharmacokinetic-pharmacodynamic modeling of diclofenac in normal and Freund's complete adjuvant-induced arthritic rats

    Science.gov (United States)

    Zhang, Jing; Li, Pei; Guo, Hai-fang; Liu, Li; Liu, Xiao-dong

    2012-01-01

    Aim: To characterize pharmacokinetic-pharmacodynamic modeling of diclofenac in Freund's complete adjuvant (FCA)-induced arthritic rats using prostaglandin E2 (PGE2) as a biomarker. Methods: The pharmacokinetics of diclofenac was investigated using 20-day-old arthritic rats. PGE2 level in the rats was measured using an enzyme immunoassay. A pharmacokinetic-pharmacodynamic (PK-PD) model was developed to illustrate the relationship between the plasma concentration of diclofenac and the inhibition of PGE2 production. The inhibition of diclofenac on lipopolysaccharide (LPS)-induced PGE2 production in blood cells was investigated in vitro. Results: Similar pharmacokinetic behavior of diclofenac was found both in normal and FCA-induced arthritic rats. Diclofenac significantly decreased the plasma levels of PGE2 in both normal and arthritic rats. The inhibitory effect on PGE2 levels in the plasma was in proportion to the plasma concentration of diclofenac. No delay in the onset of inhibition was observed, suggesting that the effect compartment was located in the central compartment. An inhibitory effect sigmoid Imax model was selected to characterize the relationship between the plasma concentration of diclofenac and the inhibition of PGE2 production in vivo. The Imax model was also used to illustrate the inhibition of diclofenac on LPS-induced PGE2 production in blood cells in vitro. Conclusion: Arthritis induced by FCA does not alter the pharmacokinetic behaviors of diclofenac in rats, but the pharmacodynamics of diclofenac is slightly affected. A PK-PD model characterizing an inhibitory effect sigmoid Imax can be used to fit the relationship between the plasma PGE2 and diclofenac levels in both normal rats and FCA-induced arthritic rats. PMID:22842736
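
    The inhibitory sigmoid Imax relationship selected by the authors has a direct code form; the parameter values below are illustrative placeholders rather than the fitted estimates from the study:

```python
import numpy as np

def pge2_level(conc, e0=1.0, imax=0.95, ic50=0.5, gamma=1.5):
    """E = E0 * (1 - Imax * C^gamma / (IC50^gamma + C^gamma));
    at C = IC50 the inhibition is exactly Imax / 2."""
    c = np.asarray(conc, dtype=float)
    inhibition = imax * c ** gamma / (ic50 ** gamma + c ** gamma)
    return e0 * (1.0 - inhibition)

print(pge2_level([0.0, 0.5, 5.0]))   # baseline, half-maximal, near-maximal
```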

  12. Time domain contact model for tyre/road interaction including nonlinear contact stiffness due to small-scale roughness

    Science.gov (United States)

    Andersson, P. B. U.; Kropp, W.

    2008-11-01

    Rolling resistance, traction, wear, excitation of vibrations, and noise generation are all attributes to consider in optimising the interaction between automotive tyres and the wearing courses of roads. The key to understanding and describing the interaction is to include a wide range of length scales in the description of the contact geometry. This means including scales on the order of micrometres that have been neglected in previous tyre/road interaction models. A time domain contact model for the tyre/road interaction that includes interfacial details is presented. The contact geometry is discretised into multiple elements forming pairs of matching points. The dynamic response of the tyre is calculated by convolving the contact forces with pre-calculated Green's functions. The smaller length scales are included by using constitutive interfacial relations, i.e. by using nonlinear contact springs, for each pair of contact elements. The method is presented for normal (out-of-plane) contact, and a method for assessing the stiffness of the nonlinear springs based on detailed geometry and elastic data of the tread is suggested. The governing equations of the nonlinear contact problem are solved with the Newton-Raphson iterative scheme. Relations between force, indentation, and contact stiffness are calculated for a single tread block in contact with a road surface. The calculated results have the same character as results from measurements found in the literature. Comparison to traditional contact formulations shows that the effect of the small-scale roughness is large; the contact stiffness is only up to half of the stiffness that would result if contact were made over the whole element directly to the bulk of the tread. It is concluded that the suggested contact formulation is a suitable model to include more details of the contact interface. Further, the presented result for the tread block in contact with the road is a suitable input for a global tyre/road interaction model
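
    A toy illustration of the Newton-Raphson step used to solve such a nonlinear contact problem; the power-law spring (Hertz-like exponent) stands in for the paper's geometry-derived contact stiffness, and all constants are placeholders:

```python
def newton_contact(force, k=1.0e6, p=1.5, d0=1e-4, tol=1e-10, max_iter=50):
    """Solve k*d**p = force for the indentation d with Newton-Raphson.

    A power-law spring (p = 1.5, Hertz-like) is an assumed stand-in for the
    geometry-based nonlinear contact springs of the model; k, p illustrative.
    """
    d = d0
    for _ in range(max_iter):
        residual = k * d**p - force          # unbalanced force at current guess
        if abs(residual) < tol:
            break
        d -= residual / (k * p * d**(p - 1.0))  # Newton update with analytic slope
    return d

print(newton_contact(100.0))  # indentation of one contact element under 100 N
```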

  13. Evaluation of normalization methods in mammalian microRNA-Seq data

    Science.gov (United States)

    Garmire, Lana Xia; Subramaniam, Shankar

    2012-01-01

    Simple total tag count normalization is inadequate for microRNA sequencing data generated by next-generation sequencing technology. However, systematic evaluation of normalization methods on microRNA sequencing data has so far been lacking. We comprehensively evaluate seven commonly used normalization methods, including global normalization, Lowess normalization, the Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and the invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and the Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods against results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform best, whereas TMM, a method applied to RNA-Seq normalization, performs worst. The poor performance of TMM normalization is further evidenced by abnormal results from the test of differential expression (DE) of microRNA-Seq data. Compared with the models used for DE, the choice of normalization method is the primary factor that affects the results of DE. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
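
    For reference, quantile normalization (one of the two recommended methods) can be sketched as follows; this is the generic procedure, not the authors' exact implementation:

```python
import numpy as np

def quantile_normalize(counts):
    """Quantile-normalize a (miRNAs x samples) count matrix.

    Every column is forced onto the same distribution: the mean of the
    sorted columns. Ties are broken by argsort order, for brevity.
    """
    counts = np.asarray(counts, dtype=float)
    order = np.argsort(counts, axis=0)            # per-sample ranks
    ref = np.sort(counts, axis=0).mean(axis=1)    # reference distribution
    out = np.empty_like(counts)
    for j in range(counts.shape[1]):
        out[order[:, j], j] = ref                 # map ranks onto the reference
    return out

x = np.array([[5., 4., 3.], [2., 1., 4.], [3., 4., 6.], [4., 2., 8.]])
print(quantile_normalize(x))
```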

  14. BALANCED SCORECARDS EVALUATION MODEL THAT INCLUDES ELEMENTS OF ENVIRONMENTAL MANAGEMENT SYSTEM USING AHP MODEL

    Directory of Open Access Journals (Sweden)

    Jelena Jovanović

    2010-03-01

    Full Text Available The research is oriented on improvement of the environmental management system (EMS) using the BSC (Balanced Scorecard) model, which presents a strategic model for measurement and improvement of organisational performance. The research presents an approach for involving environmental management objectives and metrics (proposed by the literature review) in the conventional BSC of the "AD Barska plovidba" organisation. Further, we test the creation of an ECO-BSC model based on the business activities of non-profit organisations in order to improve the environmental management system in parallel with other management systems. Using this approach we may obtain four models of BSC that include elements of the environmental management system for AD "Barska plovidba". Taking into account that implementation and evaluation need a long period of time in AD "Barska plovidba", the final choice will be based on ISO/IEC 14598 (Information technology - Software product evaluation) and ISO 9126 (Software engineering - Product quality) using the AHP method. Those standards are usually used for evaluation of the quality of software products and computer programs that serve in an organisation as support and factors for development. So, the AHP model will be based on evaluation criteria following the suggestions of the ISO 9126 standard and on types of evaluation from two evaluation teams. Members of team 1 will be experts in BSC and environmental management systems who are not employed in the AD "Barska Plovidba" organisation. The members of team 2 will be managers of the AD "Barska Plovidba" organisation (including managers from the environmental department). Merging the results of the two previously created AHP models, one can obtain the most appropriate BSC that includes elements of the environmental management system. The chosen model will at the same time present a suggested approach for including ecological metrics in the conventional BSC model for a firm that has at least one ECO strategic orientation.
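
    The AHP step described above rests on extracting priority weights from a pairwise-comparison matrix; a minimal sketch using Saaty's principal-eigenvector method, with an illustrative 3x3 matrix (not the study's actual criteria):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix.

    Uses the principal eigenvector (Saaty's method), normalized to sum to 1.
    """
    a = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(a)
    w = np.real(vecs[:, np.argmax(np.real(vals))])  # Perron eigenvector
    return w / w.sum()

# Illustrative reciprocal matrix: criterion 1 moderately dominates 2 and 3.
criteria = [[1.0,  3.0, 5.0],
            [1/3., 1.0, 2.0],
            [1/5., 1/2., 1.0]]
print(ahp_weights(criteria))
```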

  15. Genetic Analysis of Somatic Cell Score in Danish Holsteins Using a Liability-Normal Mixture Model

    DEFF Research Database (Denmark)

    Madsen, P; Shariati, M M; Ødegård, J

    2008-01-01

    Mixture models are appealing for identifying hidden structures affecting somatic cell score (SCS) data, such as unrecorded cases of subclinical mastitis. Thus, liability-normal mixture (LNM) models were used for genetic analysis of SCS data, with the aim of predicting breeding values for such cas...

  16. Double-gate junctionless transistor model including short-channel effects

    International Nuclear Information System (INIS)

    Paz, B C; Pavanello, M A; Ávila-Herrera, F; Cerdeira, A

    2015-01-01

    This work presents a physically based model for double-gate junctionless transistors (JLTs), continuous in all operation regimes. To describe short-channel transistors, short-channel effects (SCEs), such as the increase of the channel potential due to drain bias, carrier velocity saturation, and mobility degradation due to vertical and longitudinal electric fields, are included in a previous model developed for long-channel double-gate JLTs. To validate the model, an analysis is made using three-dimensional numerical simulations performed with the Sentaurus device simulator from Synopsys. Different doping concentrations, channel widths and channel lengths are considered in this work. In addition, the influence of series resistance is numerically included and validated for a wide range of source and drain extensions. In order to check whether the SCEs are appropriately described, besides drain current, transconductance and output conductance characteristics, the following parameters are analyzed to demonstrate the good agreement between model and simulation and the occurrence of SCEs in this technology: threshold voltage (VTH), subthreshold slope (S) and drain-induced barrier lowering. (paper)

  17. Model for safety reports including descriptive examples

    International Nuclear Information System (INIS)

    1995-12-01

    Several safety reports will be produced in the process of planning and constructing the system for disposal of high-level radioactive waste in Sweden. The present report gives a model, with detailed examples, of how these reports should be organized and what steps they should include. In the near future safety reports will deal with the encapsulation plant and the repository. Later reports will treat operation of the handling systems and the repository

  18. Modelling a linear PM motor including magnetic saturation

    NARCIS (Netherlands)

    Polinder, H.; Slootweg, J.G.; Compter, J.C.; Hoeijmakers, M.J.

    2002-01-01

    The use of linear permanent-magnet (PM) actuators increases in a wide variety of applications because of the high force density, robustness and accuracy. The paper describes the modelling of a linear PM motor applied in, for example, wafer steppers, including magnetic saturation. This is important

  19. Time-domain simulation of constitutive relations for nonlinear acoustics including relaxation for frequency power law attenuation media modeling

    Science.gov (United States)

    Jiménez, Noé; Camarena, Francisco; Redondo, Javier; Sánchez-Morcillo, Víctor; Konofagou, Elisa E.

    2015-10-01

    We report a numerical method for solving the constitutive relations of nonlinear acoustics, where multiple relaxation processes are included in a generalized formulation that allows time-domain numerical solution by an explicit finite-difference scheme. Thus, the proposed physical model overcomes the limitations of the one-way Khokhlov-Zabolotskaya-Kuznetsov (KZK) type models and, because the Lagrangian density is implicitly included in the calculation, the proposed method also overcomes the limitations of the Westervelt equation in complex configurations for medical ultrasound. In order to model frequency power law attenuation and dispersion, such as observed in biological media, the relaxation parameters are fitted both to exact frequency power law attenuation/dispersion media and to empirically measured attenuation of a variety of tissues that does not fit an exact power law. Finally, a computational technique based on artificial relaxation is included to correct the non-negligible numerical dispersion of the finite-difference scheme and, on the other hand, to improve stability through artificial attenuation when shock waves are present. This technique avoids the use of high-order finite-difference schemes, leading to fast calculations. The present algorithm is especially suited for practical configurations where spatial discontinuities are present in the domain (e.g. axisymmetric domains or zero normal velocity boundary conditions in general). The accuracy of the method is discussed by comparing the proposed simulation solutions to one-dimensional analytical and k-space numerical solutions.

  20. Sub-cellular force microscopy in single normal and cancer cells

    Energy Technology Data Exchange (ETDEWEB)

    Babahosseini, H. [VT MEMS Laboratory, The Bradley Department of Electrical and Computer Engineering, Blacksburg, VA 24061 (United States); Carmichael, B. [Nonlinear Intelligent Structures Laboratory, Department of Mechanical Engineering, University of Alabama, Tuscaloosa, AL 35487-0276 (United States); Strobl, J.S. [VT MEMS Laboratory, The Bradley Department of Electrical and Computer Engineering, Blacksburg, VA 24061 (United States); Mahmoodi, S.N., E-mail: nmahmoodi@eng.ua.edu [Nonlinear Intelligent Structures Laboratory, Department of Mechanical Engineering, University of Alabama, Tuscaloosa, AL 35487-0276 (United States); Agah, M., E-mail: agah@vt.edu [VT MEMS Laboratory, The Bradley Department of Electrical and Computer Engineering, Blacksburg, VA 24061 (United States)

    2015-08-07

    This work investigates the biomechanical properties of sub-cellular structures of breast cells using atomic force microscopy (AFM). The cells are modeled as a triple-layered structure where the Generalized Maxwell model is applied to experimental data from AFM stress-relaxation tests to extract the elastic modulus, the apparent viscosity, and the relaxation time of sub-cellular structures. The triple-layered modeling results allow for determination and comparison of the biomechanical properties of the three major sub-cellular structures between normal and cancerous cells: the upper plasma membrane/actin cortex, the middle cytoplasm/nucleus, and the lower nuclear/integrin sub-domains. The results reveal that the sub-domains become stiffer and significantly more viscous with depth, regardless of cell type. In addition, there is a decreasing trend in the average elastic modulus and apparent viscosity of all the corresponding sub-cellular structures from normal to cancerous cells, which becomes most remarkable in the deeper sub-domain. The presented modeling in this work constitutes a unique AFM-based experimental framework to study the biomechanics of sub-cellular structures. - Highlights: • The cells are modeled as a triple-layered structure using the Generalized Maxwell model. • The sub-domains include membrane/cortex, cytoplasm/nucleus, and nuclear/integrin. • Biomechanics of corresponding sub-domains are compared between normal and cancer cells. • Viscoelasticity of sub-domains shows a decreasing trend from normal to cancer cells. • The decreasing trend becomes most significant in the deeper sub-domain.
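
    The Generalized Maxwell model mentioned above is commonly expressed as a Prony series and fitted to a stress-relaxation trace; a sketch of that fit on synthetic data, where the two-branch form and all parameter values are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

def prony2(t, e_inf, e1, tau1, e2, tau2):
    """Two-branch Generalized Maxwell (Prony series) relaxation modulus."""
    return e_inf + e1 * np.exp(-t / tau1) + e2 * np.exp(-t / tau2)

# Synthetic stress-relaxation trace standing in for an AFM measurement.
t = np.linspace(0.01, 10.0, 200)
truth = prony2(t, 0.5, 1.0, 0.1, 0.8, 2.0)
noisy = truth + 0.01 * np.random.default_rng(0).normal(size=t.size)

popt, _ = curve_fit(prony2, t, noisy, p0=[0.4, 0.9, 0.2, 0.7, 1.5])
print(popt)  # recovered [e_inf, e1, tau1, e2, tau2]
```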

  1. Sub-cellular force microscopy in single normal and cancer cells

    International Nuclear Information System (INIS)

    Babahosseini, H.; Carmichael, B.; Strobl, J.S.; Mahmoodi, S.N.; Agah, M.

    2015-01-01

    This work investigates the biomechanical properties of sub-cellular structures of breast cells using atomic force microscopy (AFM). The cells are modeled as a triple-layered structure where the Generalized Maxwell model is applied to experimental data from AFM stress-relaxation tests to extract the elastic modulus, the apparent viscosity, and the relaxation time of sub-cellular structures. The triple-layered modeling results allow for determination and comparison of the biomechanical properties of the three major sub-cellular structures between normal and cancerous cells: the upper plasma membrane/actin cortex, the middle cytoplasm/nucleus, and the lower nuclear/integrin sub-domains. The results reveal that the sub-domains become stiffer and significantly more viscous with depth, regardless of cell type. In addition, there is a decreasing trend in the average elastic modulus and apparent viscosity of all the corresponding sub-cellular structures from normal to cancerous cells, which becomes most remarkable in the deeper sub-domain. The presented modeling in this work constitutes a unique AFM-based experimental framework to study the biomechanics of sub-cellular structures. - Highlights: • The cells are modeled as a triple-layered structure using the Generalized Maxwell model. • The sub-domains include membrane/cortex, cytoplasm/nucleus, and nuclear/integrin. • Biomechanics of corresponding sub-domains are compared between normal and cancer cells. • Viscoelasticity of sub-domains shows a decreasing trend from normal to cancer cells. • The decreasing trend becomes most significant in the deeper sub-domain

  2. Olkiluoto surface hydrological modelling: Update 2012 including salt transport modelling

    International Nuclear Information System (INIS)

    Karvonen, T.

    2013-11-01

    Posiva Oy is responsible for implementing a final disposal program for spent nuclear fuel of its owners Teollisuuden Voima Oyj and Fortum Power and Heat Oy. The spent nuclear fuel is planned to be disposed at a depth of about 400-450 meters in the crystalline bedrock at the Olkiluoto site. Leakages located at or close to the spent fuel repository may give rise to the upconing of deep highly saline groundwater, and this is a concern with regard to the performance of the tunnel backfill material after the closure of the tunnels. Therefore a salt transport sub-model was added to the Olkiluoto surface hydrological model (SHYD). The other improvements include an update of the particle tracking algorithm and the possibility to estimate the influence of open drillholes in a case where overpressure in inflatable packers decreases, causing a hydraulic short-circuit between hydrogeological zones HZ19 and HZ20 along the drillhole. Four new hydrogeological zones HZ056, HZ146, BFZ100 and HZ039 were added to the model. In addition, zones HZ20A and HZ20B intersect with each other in the new structure model, which influences salinity upconing caused by leakages in shafts. The aim of the modelling of the long-term influence of ONKALO, shafts and repository tunnels is to provide computational results that can be used to suggest limits for allowed leakages. The model input data included all the existing leakages into ONKALO (35-38 l/min) and shafts in the present-day conditions. The influence of shafts was computed using eight different values for total shaft leakage: 5, 11, 20, 30, 40, 50, 60 and 70 l/min. The selection of the leakage criteria for shafts was influenced by the fact that upconing of saline water increases TDS-values close to the repository areas although HZ20B does not intersect any deposition tunnels. The total limit for all leakages was suggested to be 120 l/min. The limit for HZ20 zones was proposed to be 40 l/min: about 5 l/min the present day leakages to access tunnel, 25 l/min from

  3. Olkiluoto surface hydrological modelling: Update 2012 including salt transport modelling

    Energy Technology Data Exchange (ETDEWEB)

    Karvonen, T. [WaterHope, Helsinki (Finland)

    2013-11-15

    Posiva Oy is responsible for implementing a final disposal program for spent nuclear fuel of its owners Teollisuuden Voima Oyj and Fortum Power and Heat Oy. The spent nuclear fuel is planned to be disposed at a depth of about 400-450 meters in the crystalline bedrock at the Olkiluoto site. Leakages located at or close to the spent fuel repository may give rise to the upconing of deep highly saline groundwater, and this is a concern with regard to the performance of the tunnel backfill material after the closure of the tunnels. Therefore a salt transport sub-model was added to the Olkiluoto surface hydrological model (SHYD). The other improvements include an update of the particle tracking algorithm and the possibility to estimate the influence of open drillholes in a case where overpressure in inflatable packers decreases, causing a hydraulic short-circuit between hydrogeological zones HZ19 and HZ20 along the drillhole. Four new hydrogeological zones HZ056, HZ146, BFZ100 and HZ039 were added to the model. In addition, zones HZ20A and HZ20B intersect with each other in the new structure model, which influences salinity upconing caused by leakages in shafts. The aim of the modelling of the long-term influence of ONKALO, shafts and repository tunnels is to provide computational results that can be used to suggest limits for allowed leakages. The model input data included all the existing leakages into ONKALO (35-38 l/min) and shafts in the present-day conditions. The influence of shafts was computed using eight different values for total shaft leakage: 5, 11, 20, 30, 40, 50, 60 and 70 l/min. The selection of the leakage criteria for shafts was influenced by the fact that upconing of saline water increases TDS-values close to the repository areas although HZ20B does not intersect any deposition tunnels. The total limit for all leakages was suggested to be 120 l/min. The limit for HZ20 zones was proposed to be 40 l/min: about 5 l/min the present day leakages to access tunnel, 25 l/min from

  4. Visual attention and flexible normalization pools

    Science.gov (United States)

    Schwartz, Odelia; Coen-Cagli, Ruben

    2013-01-01

    Attention to a spatial location or feature in a visual scene can modulate the responses of cortical neurons and affect perceptual biases in illusions. We add attention to a cortical model of spatial context based on a well-founded account of natural scene statistics. The cortical model amounts to a generalized form of divisive normalization, in which the surround is in the normalization pool of the center target only if they are considered statistically dependent. Here we propose that attention influences this computation by accentuating the neural unit activations at the attended location, and that the amount of attentional influence of the surround on the center thus depends on whether center and surround are deemed in the same normalization pool. The resulting form of model extends a recent divisive normalization model of attention (Reynolds & Heeger, 2009). We simulate cortical surround orientation experiments with attention and show that the flexible model is suitable for capturing additional data and makes nontrivial testable predictions. PMID:23345413
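
    A minimal sketch of the divisive-normalization-with-attention computation in the spirit of Reynolds & Heeger (2009), where attention multiplicatively scales the drive before it is normalized by the pooled activity; the uniform pooling and constants are simplified assumptions, not the paper's flexible, statistics-dependent pool:

```python
import numpy as np

def attended_response(drive, attn_gain, sigma=1.0):
    """Divisive normalization with an attention field.

    drive     -- stimulus drive of each unit in the pool
    attn_gain -- multiplicative attention field (same shape as drive)
    sigma     -- semi-saturation constant
    """
    e = np.asarray(drive, dtype=float) * np.asarray(attn_gain, dtype=float)
    return e / (sigma + e.sum())  # normalize by the summed pool activity

drive = np.array([10.0, 8.0, 2.0])
print(attended_response(drive, attn_gain=[1.0, 1.0, 1.0]))   # neutral attention
print(attended_response(drive, attn_gain=[2.0, 1.0, 1.0]))   # attend unit 0
```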

  5. Comparison of spectrum normalization techniques for univariate ...

    Indian Academy of Sciences (India)

    Laser-induced breakdown spectroscopy; univariate study; normalization models; stainless steel; standard error of prediction. Abstract. Analytical performance of six different spectrum normalization techniques, namely internal normalization, normalization with total light, normalization with background along with their ...

  6. Target normal sheath acceleration analytical modeling, comparative study and developments

    International Nuclear Information System (INIS)

    Perego, C.; Batani, D.; Zani, A.; Passoni, M.

    2012-01-01

    Ultra-intense laser interaction with solid targets appears to be an extremely promising technique to accelerate ions up to several MeV, producing beams that exhibit interesting properties for many foreseen applications. Nowadays, most of the published experimental results can be theoretically explained in the framework of the target normal sheath acceleration (TNSA) mechanism proposed by Wilks et al. [Phys. Plasmas 8(2), 542 (2001)]. As an alternative to numerical simulation, various analytical or semi-analytical TNSA models have been published in recent years, each of them trying to provide predictions for some of the ion beam features, given the initial laser and target parameters. However, the problem of developing a reliable model for the TNSA process is still open, which is why the purpose of this work is to clarify the present situation of TNSA modeling and experimental results, by means of a quantitative comparison between measurements and theoretical predictions of the maximum ion energy. Moreover, in the light of such an analysis, some indications for the future development of the model proposed by Passoni and Lontano [Phys. Plasmas 13(4), 042102 (2006)] are presented.

  7. Multicompartmental model for iodide, thyroxine, and triiodothyronine metabolism in normal and spontaneously hyperthyroid cats

    Energy Technology Data Exchange (ETDEWEB)

    Hays, M.T.; Broome, M.R.; Turrel, J.M.

    1988-06-01

    A comprehensive multicompartmental kinetic model was developed to account for the distribution and metabolism of simultaneously injected radioactive iodide (iodide*), T3 (T3*), and T4 (T4*) in six normal and seven spontaneously hyperthyroid cats. Data from plasma samples (analyzed by HPLC), urine, feces, and thyroid accumulation were incorporated into the model. The submodels for iodide*, T3*, and T4* all included both a fast and a slow exchange compartment connecting with the plasma compartment. The best-fit iodide* model also included a delay compartment, presumed to be pooling of gastrosalivary secretions. This delay was 62% longer in the hyperthyroid cats than in the euthyroid cats. Unexpectedly, all of the exchange parameters for both T4 and T3 were significantly slowed in hyperthyroidism, possibly because the hyperthyroid cats were older. None of the plasma equivalent volumes of the exchange compartments of iodide*, T3*, or T4* was significantly different in the hyperthyroid cats, although the plasma equivalent volume of the fast T4 exchange compartments were reduced. Secretion of recycled T4* from the thyroid into the plasma T4* compartment was essential to model fit, but its quantity could not be uniquely identified in the absence of multiple thyroid data points. Thyroid secretion of T3* was not detectable. Comparing the fast and slow compartments, there was a shift of T4* deiodination into the fast exchange compartment in hyperthyroidism. Total body mean residence times (MRTs) of iodide* and T3* were not affected by hyperthyroidism, but mean T4* MRT was decreased 23%. Total fractional T4 to T3 conversion was unchanged in hyperthyroidism, although the amount of T3 produced by this route was increased nearly 5-fold because of higher concentrations of donor stable T4.
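
    The fast/slow exchange structure described above corresponds to a linear compartmental ODE system; a minimal sketch for a single tracer, with illustrative rate constants rather than the paper's fitted feline values:

```python
from scipy.integrate import solve_ivp

def tracer_odes(t, y, k_pf, k_fp, k_ps, k_sp, k_el):
    """Plasma (p) exchanging with fast (f) and slow (s) compartments,
    with first-order elimination from plasma. All rate constants are
    illustrative placeholders."""
    p, f, s = y
    dp = k_fp * f + k_sp * s - (k_pf + k_ps + k_el) * p
    df = k_pf * p - k_fp * f
    ds = k_ps * p - k_sp * s
    return [dp, df, ds]

# Unit bolus into plasma at t = 0, followed for 48 h.
sol = solve_ivp(tracer_odes, [0.0, 48.0], [1.0, 0.0, 0.0],
                args=(0.5, 0.2, 0.1, 0.02, 0.3))
print(sol.y[:, -1])  # compartment contents at t = 48 h
```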

  8. Multicompartmental model for iodide, thyroxine, and triiodothyronine metabolism in normal and spontaneously hyperthyroid cats

    International Nuclear Information System (INIS)

    Hays, M.T.; Broome, M.R.; Turrel, J.M.

    1988-01-01

    A comprehensive multicompartmental kinetic model was developed to account for the distribution and metabolism of simultaneously injected radioactive iodide (iodide*), T3 (T3*), and T4 (T4*) in six normal and seven spontaneously hyperthyroid cats. Data from plasma samples (analyzed by HPLC), urine, feces, and thyroid accumulation were incorporated into the model. The submodels for iodide*, T3*, and T4* all included both a fast and a slow exchange compartment connecting with the plasma compartment. The best-fit iodide* model also included a delay compartment, presumed to be pooling of gastrosalivary secretions. This delay was 62% longer in the hyperthyroid cats than in the euthyroid cats. Unexpectedly, all of the exchange parameters for both T4 and T3 were significantly slowed in hyperthyroidism, possibly because the hyperthyroid cats were older. None of the plasma equivalent volumes of the exchange compartments of iodide*, T3*, or T4* was significantly different in the hyperthyroid cats, although the plasma equivalent volume of the fast T4 exchange compartments were reduced. Secretion of recycled T4* from the thyroid into the plasma T4* compartment was essential to model fit, but its quantity could not be uniquely identified in the absence of multiple thyroid data points. Thyroid secretion of T3* was not detectable. Comparing the fast and slow compartments, there was a shift of T4* deiodination into the fast exchange compartment in hyperthyroidism. Total body mean residence times (MRTs) of iodide* and T3* were not affected by hyperthyroidism, but mean T4* MRT was decreased 23%. Total fractional T4 to T3 conversion was unchanged in hyperthyroidism, although the amount of T3 produced by this route was increased nearly 5-fold because of higher concentrations of donor stable T4

  9. Normal and Pathological NCAT Image and Phantom Data Based on Physiologically Realistic Left Ventricle Finite-Element Models

    Energy Technology Data Exchange (ETDEWEB)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui, Benjamin M.W.; Gullberg, Grant T.

    2006-08-02

    The 4D NURBS-based Cardiac-Torso (NCAT) phantom, which provides a realistic model of the normal human anatomy and cardiac and respiratory motions, is used in medical imaging research to evaluate and improve imaging devices and techniques, especially dynamic cardiac applications. One limitation of the phantom is that it lacks the ability to accurately simulate altered functions of the heart that result from cardiac pathologies such as coronary artery disease (CAD). The goal of this work was to enhance the 4D NCAT phantom by incorporating a physiologically based, finite-element (FE) mechanical model of the left ventricle (LV) to simulate both normal and abnormal cardiac motions. The geometry of the FE mechanical model was based on gated high-resolution x-ray multi-slice computed tomography (MSCT) data of a healthy male subject. The myocardial wall was represented as transversely isotropic hyperelastic material, with the fiber angle varying from -90 degrees at the epicardial surface, through 0 degrees at the mid-wall, to 90 degrees at the endocardial surface. A time varying elastance model was used to simulate fiber contraction, and physiological intraventricular systolic pressure-time curves were applied to simulate the cardiac motion over the entire cardiac cycle. To demonstrate the ability of the FE mechanical model to accurately simulate the normal cardiac motion as well as abnormal motions indicative of CAD, a normal case and two pathologic cases were simulated and analyzed. In the first pathologic model, a subendocardial anterior ischemic region was defined. A second model was created with a transmural ischemic region defined in the same location. The FE based deformations were incorporated into the 4D NCAT cardiac model through the control points that define the cardiac structures in the phantom, which were set to move according to the predictions of the mechanical model. A simulation study was performed using the FE-NCAT combination to investigate how the differences in contractile function

  10. Mathematical model of normal tissue injury in telegammatherapy

    International Nuclear Information System (INIS)

    Belov, S.A.; Lyass, F.M.; Mamin, R.G.; Minakova, E.I.; Raevskaya, S.A.

    1983-01-01

    A model of normal tissue injury as a result of exposure to ionizing radiation is based on the assumption that the degree of tissue injury is determined by the degree of destruction of certain critical cells. The dependence of the number of lethal injuries on a single dose is expressed by a trinomial - linear and quadratic parts and a constant - obtained as a result of the processing of experimental data. Quantitative correlations have been obtained for the skin and brain. They have been tested using clinical and experimental material. The results of the testing indicate the absence of a time dependence for single irradiation courses of up to 6 weeks. A correlation with the irradiation field has been obtained for the skin. It is concluded that the concept of isoeffective irradiation courses is conditional. Spatio-temporal fractionation is a promising direction in the development of radiation therapy
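
    The trinomial dose dependence described can be written explicitly (a hedged reconstruction from the abstract's wording, using the familiar linear-quadratic convention):

```latex
% Lethal lesions per critical cell after a single dose D:
% a linear term, a quadratic term, and a fitted constant.
N(D) = \alpha D + \beta D^{2} + c
```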

  11. Study of a diffusion flamelet model, with preferential diffusion effects included

    NARCIS (Netherlands)

    Delhaye, S.; Somers, L.M.T.; Bongers, H.; Oijen, van J.A.; Goey, de L.P.H.; Dias, V.

    2005-01-01

    The non-premixed flamelet model of Peters [1] (model1), which does not include preferential diffusion effects, is investigated. Two similar models are presented, but without the assumption of unity Lewis numbers. One of these models was derived by Peters & Pitsch [2] (model2), while the other one was

  12. Normal and Fibrotic Rat Livers Demonstrate Shear Strain Softening and Compression Stiffening: A Model for Soft Tissue Mechanics.

    Directory of Open Access Journals (Sweden)

    Maryna Perepelyuk

    Full Text Available Tissues including liver stiffen and acquire more extracellular matrix with fibrosis. The relationship between matrix content and stiffness, however, is non-linear, and stiffness is only one component of tissue mechanics. The mechanical response of tissues such as liver to physiological stresses is not well described, and models of tissue mechanics are limited. To better understand the mechanics of the normal and fibrotic rat liver, we carried out a series of studies using parallel plate rheometry, measuring the response to compressive, extensional, and shear strains. We found that the shear storage and loss moduli G' and G" and the apparent Young's moduli measured by uniaxial strain orthogonal to the shear direction increased markedly with both progressive fibrosis and increasing compression, that livers exhibited shear strain softening, and that significant increases in shear modulus with compressional stress occurred within a range consistent with increased sinusoidal pressures in liver disease. Proteoglycan content and integrin-matrix interactions were significant determinants of liver mechanics, particularly in compression. We propose a new non-linear constitutive model of the liver. A key feature of this model is that, while it assumes overall liver incompressibility, it takes into account water flow and solid-phase compressibility. In sum, we report a detailed study of non-linear liver mechanics under physiological strains in the normal state, early fibrosis, and late fibrosis. We propose a constitutive model that captures compression stiffening, tension softening, and shear softening, and can be understood in terms of the cellular and matrix components of the liver.

  13. 76 FR 36864 - Special Conditions: Gulfstream Model GVI Airplane; Operation Without Normal Electric Power

    Science.gov (United States)

    2011-06-23

    ... Normal Electric Power AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final special... Interface Branch, ANM-111, Transport Standards Staff, Transport Airplane Directorate, Aircraft Certification... Model GVI airplane will be an all-new, two- engine jet transport airplane. The maximum takeoff weight...

  14. SEEPAGE MODEL FOR PA INCLUDING DRIFT COLLAPSE

    International Nuclear Information System (INIS)

    C. Tsang

    2004-01-01

    The purpose of this report is to document the predictions and analyses performed using the seepage model for performance assessment (SMPA) for both the Topopah Spring middle nonlithophysal (Tptpmn) and lower lithophysal (Tptpll) lithostratigraphic units at Yucca Mountain, Nevada. Look-up tables of seepage flow rates into a drift (and their uncertainty) are generated by performing numerical simulations with the seepage model for many combinations of the three most important seepage-relevant parameters: the fracture permeability, the capillary-strength parameter 1/a, and the percolation flux. The percolation flux values chosen take into account flow focusing effects, which are evaluated based on a flow-focusing model. Moreover, multiple realizations of the underlying stochastic permeability field are conducted. Selected sensitivity studies are performed, including the effects of an alternative drift geometry representing a partially collapsed drift from an independent drift-degradation analysis (BSC 2004 [DIRS 166107]). The intended purpose of the seepage model is to provide results of drift-scale seepage rates under a series of parameters and scenarios in support of the Total System Performance Assessment for License Application (TSPA-LA). The SMPA is intended for the evaluation of drift-scale seepage rates under the full range of parameter values for three parameters found to be key (fracture permeability, the van Genuchten 1/a parameter, and percolation flux) and drift degradation shape scenarios in support of the TSPA-LA during the period of compliance for postclosure performance [Technical Work Plan for: Performance Assessment Unsaturated Zone (BSC 2002 [DIRS 160819], Section I-4-2-1)]. The flow-focusing model in the Topopah Spring welded (TSw) unit is intended to provide an estimate of flow focusing factors (FFFs) that (1) bridge the gap between the mountain-scale and drift-scale models, and (2) account for variability in local percolation flux due to
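
    Downstream use of such look-up tables typically amounts to multilinear interpolation over the three seepage-relevant parameters; a sketch with a placeholder grid and placeholder values (not the report's actual tables or units):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical look-up axes: log10 fracture permeability, capillary-strength
# parameter 1/a, and percolation flux. Table entries are random placeholders.
log_k = np.array([-14.0, -13.0, -12.0])
inv_alpha = np.array([100.0, 400.0, 800.0])
flux = np.array([1.0, 10.0, 50.0])
table = np.random.default_rng(1).uniform(0.0, 5.0, (3, 3, 3))

seepage = RegularGridInterpolator((log_k, inv_alpha, flux), table)
print(seepage([[-13.5, 250.0, 20.0]]))  # interpolated seepage rate at one point
```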

  15. A numerical insight into elastomer normally closed micro valve actuation with cohesive interfacial cracking modelling

    Science.gov (United States)

    Wang, Dongyang; Ba, Dechun; Hao, Ming; Duan, Qihui; Liu, Kun; Mei, Qi

    2018-05-01

    Pneumatic NC (normally closed) valves are widely used in high-density microfluidic systems. To improve actuation reliability, the actuation pressure needs to be reduced. In this work, we utilize 3D FEM (finite element method) modelling to gain numerical insight into the valve actuation process. Specifically, the progressive debonding process at the elastomer interface is simulated with the CZM (cohesive zone model) method. To minimize the actuation pressure, a V-shape design has been investigated and compared with a normal straight design. The geometrical effects of valve shape have been elaborated in terms of valve actuation pressure. Based on our simulated results, we formulate the main concerns for micro valve design and fabrication, which is significant for minimizing actuation pressures and ensuring reliable operation.
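
    The CZM approach represents the debonding interface with a traction-separation law; a bilinear law is one common choice (the abstract does not specify the exact form, so this shape and all parameters are assumptions), sketched below:

```python
def bilinear_traction(delta, delta0=1e-6, delta_f=1e-5, t_max=1.0e6):
    """Bilinear cohesive traction-separation law: linear rise to peak
    traction t_max at opening delta0, then linear softening to zero at
    delta_f (full debonding). All parameter values are illustrative."""
    if delta <= 0.0:
        return 0.0
    if delta < delta0:
        return t_max * delta / delta0                       # elastic branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening branch
    return 0.0  # fully debonded, no traction transferred

for d in (0.0, 5e-7, 1e-6, 5e-6, 2e-5):
    print(d, bilinear_traction(d))
```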

  16. Atmosphere-soil-vegetation model including CO2 exchange processes: SOLVEG2

    International Nuclear Information System (INIS)

    Nagai, Haruyasu

    2004-11-01

    A new atmosphere-soil-vegetation model named SOLVEG2 (SOLVEG version 2) was developed to study the heat, water, and CO2 exchanges between the atmosphere and the land surface. The model consists of one-dimensional multilayer sub-models for the atmosphere, soil, and vegetation. It also includes sophisticated processes for solar and long-wave radiation transmission in the vegetation canopy and CO2 exchanges among the atmosphere, soil, and vegetation. Although the model usually simulates only the vertical variation of variables in the surface-layer atmosphere, soil, and vegetation canopy by using meteorological data as top boundary conditions, it can also be coupled with a three-dimensional atmosphere model. In this paper, details of SOLVEG2, which includes the function of coupling with the atmosphere model MM5, are described. (author)

  17. Exclusive queueing model including the choice of service windows

    Science.gov (United States)

    Tanaka, Masahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro

    2018-01-01

    In a queueing system involving multiple service windows, choice behavior is a significant concern. This paper incorporates the choice of service windows into a queueing model with a floor represented by discrete cells. We devised a logit-based choice algorithm for agents that considers the numbers of agents at, and the distances to, all service windows. Simulations were conducted with various parameters of agent choice preference for these two elements and for different floor configurations, including the floor length and the number of service windows. We investigated the model from the viewpoint of transit times and entrance block rates. The influences of the parameters on these factors were surveyed in detail, and we determined that there are optimum floor lengths that minimize the transit times. In addition, we observed that the transit times were determined almost entirely by the entrance block rates. The results of the presented model are relevant to understanding queueing systems that include the choice of service windows and can be employed to optimize facility design and floor management.
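
    The logit-based window choice can be sketched as a softmax over utilities that penalize queue length and walking distance; the functional form and utility weights below are illustrative, not the paper's calibrated preferences:

```python
import numpy as np

def window_choice_probs(queue_lengths, distances, beta_q=1.0, beta_d=0.5):
    """Logit (softmax) choice over service windows: utility decreases with
    the number of queued agents and the distance to each window."""
    u = -beta_q * np.asarray(queue_lengths, dtype=float) \
        - beta_d * np.asarray(distances, dtype=float)
    expu = np.exp(u - u.max())          # subtract max for numerical stability
    return expu / expu.sum()

# Three windows: agents trade off short queues against short walks.
print(window_choice_probs(queue_lengths=[3, 1, 4], distances=[2.0, 6.0, 1.0]))
```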

  18. Normal and Pathological NCAT Image and Phantom Data Based on Physiologically Realistic Left Ventricle Finite-Element Models

    International Nuclear Information System (INIS)

    Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.; Tsui, Benjamin M.W.; Gullberg, Grant T.

    2006-01-01

    The 4D NURBS-based Cardiac-Torso (NCAT) phantom, which provides a realistic model of the normal human anatomy and cardiac and respiratory motions, is used in medical imaging research to evaluate and improve imaging devices and techniques, especially dynamic cardiac applications. One limitation of the phantom is that it lacks the ability to accurately simulate altered functions of the heart that result from cardiac pathologies such as coronary artery disease (CAD). The goal of this work was to enhance the 4D NCAT phantom by incorporating a physiologically based, finite-element (FE) mechanical model of the left ventricle (LV) to simulate both normal and abnormal cardiac motions. The geometry of the FE mechanical model was based on gated high-resolution x-ray multi-slice computed tomography (MSCT) data of a healthy male subject. The myocardial wall was represented as transversely isotropic hyperelastic material, with the fiber angle varying from -90 degrees at the epicardial surface, through 0 degrees at the mid-wall, to 90 degrees at the endocardial surface. A time varying elastance model was used to simulate fiber contraction, and physiological intraventricular systolic pressure-time curves were applied to simulate the cardiac motion over the entire cardiac cycle. To demonstrate the ability of the FE mechanical model to accurately simulate the normal cardiac motion as well as abnormal motions indicative of CAD, a normal case and two pathologic cases were simulated and analyzed. In the first pathologic model, a subendocardial anterior ischemic region was defined. A second model was created with a transmural ischemic region defined in the same location. The FE based deformations were incorporated into the 4D NCAT cardiac model through the control points that define the cardiac structures in the phantom, which were set to move according to the predictions of the mechanical model. A simulation study was performed using the FE-NCAT combination to investigate how the

  19. Attention and normalization circuits in macaque V1

    Science.gov (United States)

    Sanayei, M; Herrero, J L; Distler, C; Thiele, A

    2015-01-01

    Attention affects neuronal processing and improves behavioural performance. In extrastriate visual cortex these effects have been explained by normalization models, which assume that attention influences the circuit that mediates surround suppression. While normalization models have been able to explain attentional effects, their validity has rarely been tested against alternative models. Here we investigate how attention and surround/mask stimuli affect neuronal firing rates and orientation tuning in macaque V1. Surround/mask stimuli provide an estimate to what extent V1 neurons are affected by normalization, which was compared against effects of spatial top down attention. For some attention/surround effect comparisons, the strength of attentional modulation was correlated with the strength of surround modulation, suggesting that attention and surround/mask stimulation (i.e. normalization) might use a common mechanism. To explore this in detail, we fitted multiplicative and additive models of attention to our data. In one class of models, attention contributed to normalization mechanisms, whereas in a different class of models it did not. Model selection based on Akaike's and on Bayesian information criteria demonstrated that in most cells the effects of attention were best described by models where attention did not contribute to normalization mechanisms. This demonstrates that attentional influences on neuronal responses in primary visual cortex often bypass normalization mechanisms. PMID:25757941

  20. Normal modified stable processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    This paper discusses two classes of distributions, and stochastic processes derived from them: modified stable (MS) laws and normal modified stable (NMS) laws. This extends corresponding results for the generalised inverse Gaussian (GIG) and generalised hyperbolic (GH) or normal generalised inverse Gaussian (NGIG) laws. The wider framework thus established provides, in particular, for added flexibility in the modelling of the dynamics of financial time series, of importance especially as regards OU based stochastic volatility models for equities. In the special case of the tempered stable OU process ...

  1. Normalization of time-series satellite reflectance data to a standard sun-target-sensor geometry using a semi-empirical model

    Science.gov (United States)

    Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang

    2017-10-01

    Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics and study climate change. However, several sensors with wide spatial coverage and high observation frequency are usually designed to have a large field of view (FOV), which causes variations in the sun-target-sensor geometry in time-series reflectance data. In this study, on the basis of the semi-empirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectance under different canopy growth conditions simulated by the Discrete Anisotropic Radiative Transfer (DART) model was used. The semi-empirical model was first fitted by using all simulated bidirectional reflectance. Experimental results showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated values. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry by the proposed model. The experimental results showed that the proposed model yielded good fits between the observed and estimated values. The noise-like fluctuations in the time-series reflectance data were also reduced by the sun-target-sensor normalization process.
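
    A sketch of the normalization idea for a linear kernel-driven BRDF model: fit the kernel weights to the time series, then predict reflectance at one standard sun-target-sensor geometry. The kernel values are assumed precomputed (e.g. RossThick/LiSparse kernels), and all numbers are placeholders; this is the generic kernel-driven scheme, not the paper's proposed semi-empirical model itself:

```python
import numpy as np

def normalize_reflectance(obs, k_vol, k_geo, k_vol_std, k_geo_std):
    """Least-squares fit of R = f_iso + f_vol*K_vol + f_geo*K_geo to a
    reflectance time series, then prediction at a standard geometry."""
    a = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    f, *_ = np.linalg.lstsq(a, obs, rcond=None)   # kernel weights f_iso, f_vol, f_geo
    return f[0] + f[1] * k_vol_std + f[2] * k_geo_std

# Placeholder observations and kernel values for four acquisition geometries.
obs = np.array([0.21, 0.25, 0.19, 0.23])
k_vol = np.array([0.10, 0.30, -0.05, 0.20])
k_geo = np.array([-1.2, -0.8, -1.5, -1.0])
print(normalize_reflectance(obs, k_vol, k_geo, k_vol_std=0.0, k_geo_std=-1.1))
```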

  2. On the normal-state properties of the two-dimensional Hubbard model

    Science.gov (United States)

    Lemay, Francois

    Since their discovery, experimental studies have shown that high-temperature superconductors have a very strange normal phase. The properties of these materials are not well described by Fermi liquid theory. The two-dimensional Hubbard model, although not yet solved, is still considered a candidate for explaining the physics of these compounds. In this work, we highlight several electronic properties of the model that are incompatible with the existence of quasiparticles. We show in particular that the susceptibility of free electrons on a lattice contains logarithmic singularities that decisively influence the low-frequency properties of the self-energy. These singularities are responsible for the destruction of quasiparticles. In the absence of antiferromagnetic fluctuations, they are also responsible for the existence of a small pseudogap in the spectral weight at the Fermi level. The properties of the model are also studied for a Fermi surface similar to that of the high-temperature superconductors. A parallel is drawn between certain characteristics of the model and those of these materials.

  3. A micromechanical study of porous composites under longitudinal shear and transverse normal loading

    DEFF Research Database (Denmark)

    Ashouri Vajari, Danial

    2015-01-01

    The mechanical response of porous unidirectional composites under transverse normal and longitudinal shear loading is studied using finite element analysis. The 3D model includes a discrete and random distribution of fibers and voids. The micromechanical failure mechanisms are taken into account.... Finally, the computational prediction of the porous composite in the transverse normal-longitudinal shear stress space is obtained and compared with Puck's model. The results show that both interfaces with low fracture toughness and microvoids with even a small void volume fraction can significantly reduce

  4. Attention and normalization circuits in macaque V1.

    Science.gov (United States)

    Sanayei, M; Herrero, J L; Distler, C; Thiele, A

    2015-04-01

    Attention affects neuronal processing and improves behavioural performance. In extrastriate visual cortex these effects have been explained by normalization models, which assume that attention influences the circuit that mediates surround suppression. While normalization models have been able to explain attentional effects, their validity has rarely been tested against alternative models. Here we investigate how attention and surround/mask stimuli affect neuronal firing rates and orientation tuning in macaque V1. Surround/mask stimuli provide an estimate to what extent V1 neurons are affected by normalization, which was compared against effects of spatial top down attention. For some attention/surround effect comparisons, the strength of attentional modulation was correlated with the strength of surround modulation, suggesting that attention and surround/mask stimulation (i.e. normalization) might use a common mechanism. To explore this in detail, we fitted multiplicative and additive models of attention to our data. In one class of models, attention contributed to normalization mechanisms, whereas in a different class of models it did not. Model selection based on Akaike's and on Bayesian information criteria demonstrated that in most cells the effects of attention were best described by models where attention did not contribute to normalization mechanisms. This demonstrates that attentional influences on neuronal responses in primary visual cortex often bypass normalization mechanisms. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
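
    Model selection as described relies on information criteria; a minimal sketch of the AIC/BIC comparison, with hypothetical log-likelihoods and parameter counts standing in for actual per-cell fits:

```python
import numpy as np

def aic(log_lik, n_params):
    """Akaike information criterion; lower values indicate a better model."""
    return 2 * n_params - 2 * log_lik

def bic(log_lik, n_params, n_obs):
    """Bayesian information criterion; penalizes extra parameters more
    strongly than AIC once the sample size exceeds about 7."""
    return n_params * np.log(n_obs) - 2 * log_lik

# Hypothetical fits of the two model classes to one cell's responses:
# attention inside the normalization pool (4 params) vs outside it (5 params).
print(aic(-120.3, 4), aic(-119.8, 5))
print(bic(-120.3, 4, 60), bic(-119.8, 5, 60))
```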

  5. Key Characteristics of Combined Accident including TLOFW accident for PSA Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Bo Gyung; Kang, Hyun Gook [KAIST, Daejeon (Korea, Republic of); Yoon, Ho Joon [Khalifa University of Science, Technology and Research, Abu Dhabi (United Arab Emirates)

    2015-05-15

    The conventional PSA techniques cannot adequately evaluate all events. The conventional PSA models usually focus on single internal events such as DBAs, the external hazards such as fire, seismic. However, the Fukushima accident of Japan in 2011 reveals that very rare event is necessary to be considered in the PSA model to prevent the radioactive release to environment caused by poor treatment based on lack of the information, and to improve the emergency operation procedure. Especially, the results from PSA can be used to decision making for regulators. Moreover, designers can consider the weakness of plant safety based on the quantified results and understand accident sequence based on human actions and system availability. This study is for PSA modeling of combined accidents including total loss of feedwater (TLOFW) accident. The TLOFW accident is a representative accident involving the failure of cooling through secondary side. If the amount of heat transfer is not enough due to the failure of secondary side, the heat will be accumulated to the primary side by continuous core decay heat. Transients with loss of feedwater include total loss of feedwater accident, loss of condenser vacuum accident, and closure of all MSIVs. When residual heat removal by the secondary side is terminated, the safety injection into the RCS with direct primary depressurization would provide alternative heat removal. This operation is called feed and bleed (F and B) operation. Combined accidents including TLOFW accident are very rare event and partially considered in conventional PSA model. Since the necessity of F and B operation is related to plant conditions, the PSA modeling for combined accidents including TLOFW accident is necessary to identify the design and operational vulnerabilities.The PSA is significant to assess the risk of NPPs, and to identify the design and operational vulnerabilities. Even though the combined accident is very rare event, the consequence of combined

  6. Statistical mechanics of normal grain growth in one dimension: A partial integro-differential equation model

    International Nuclear Information System (INIS)

    Ng, Felix S.L.

    2016-01-01

    We develop a statistical-mechanical model of one-dimensional normal grain growth that does not require any drift-velocity parameterization for grain size, such as used in the continuity equation of traditional mean-field theories. The model tracks the population by considering grain sizes in neighbour pairs; the probability of a pair having neighbours of certain sizes is determined by the size-frequency distribution of all pairs. Accordingly, the evolution obeys a partial integro-differential equation (PIDE) over ‘grain size versus neighbour grain size’ space, so that the grain-size distribution is a projection of the PIDE's solution. This model, which is applicable before as well as after statistically self-similar grain growth has been reached, shows that the traditional continuity equation is invalid outside this state. During statistically self-similar growth, the PIDE correctly predicts the coarsening rate, invariant grain-size distribution and spatial grain size correlations observed in direct simulations. The PIDE is then reducible to the standard continuity equation, and we derive an explicit expression for the drift velocity. It should be possible to formulate similar parameterization-free models of normal grain growth in two and three dimensions.
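
    For reference, the traditional mean-field continuity equation that the PIDE reduces to during statistically self-similar growth has the standard form below, where f is the grain-size frequency distribution and v the drift velocity for which the paper derives an explicit expression:

```latex
\frac{\partial f(R,t)}{\partial t}
  + \frac{\partial}{\partial R}\left[ v(R,t)\, f(R,t) \right] = 0
```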

  7. Viscous flow features in scaled-up physical models of normal and pathological vocal phonation

    Energy Technology Data Exchange (ETDEWEB)

    Erath, Byron D., E-mail: berath@purdue.ed [School of Mechanical Engineering, Purdue University, 585 Purdue Mall, West Lafayette, IN 47907 (United States); Plesniak, Michael W., E-mail: plesniak@gwu.ed [Department of Mechanical and Aerospace Engineering, George Washington University, 801 22nd Street NW, Suite 739, Washington, DC 20052 (United States)

    2010-06-15

    Unilateral vocal fold paralysis results when the recurrent laryngeal nerve, which innervates the muscles of the vocal folds, becomes damaged. The loss of muscle and tension control to the damaged vocal fold renders it ineffectual. The mucosal wave disappears during phonation, and the vocal fold becomes largely immobile. The influence of unilateral vocal fold paralysis on the viscous flow development, which impacts speech quality within the glottis during phonation, was investigated. Driven, scaled-up vocal fold models were employed to replicate both normal and pathological patterns of vocal fold motion. Spatial and temporal velocity fields were captured using particle image velocimetry and laser Doppler velocimetry. Flow parameters were scaled to match the physiological values associated with human speech. Loss of motion in one vocal fold resulted in a suppression of typical glottal flow fields, including decreased spatial variability in the location of the flow separation point throughout the phonatory cycle, as well as a decrease in the vorticity magnitude.

  8. Viscous flow features in scaled-up physical models of normal and pathological vocal phonation

    International Nuclear Information System (INIS)

    Erath, Byron D.; Plesniak, Michael W.

    2010-01-01

    Unilateral vocal fold paralysis results when the recurrent laryngeal nerve, which innervates the muscles of the vocal folds, becomes damaged. The loss of muscle and tension control to the damaged vocal fold renders it ineffectual. The mucosal wave disappears during phonation, and the vocal fold becomes largely immobile. The influence of unilateral vocal fold paralysis on the viscous flow development, which impacts speech quality within the glottis during phonation, was investigated. Driven, scaled-up vocal fold models were employed to replicate both normal and pathological patterns of vocal fold motion. Spatial and temporal velocity fields were captured using particle image velocimetry and laser Doppler velocimetry. Flow parameters were scaled to match the physiological values associated with human speech. Loss of motion in one vocal fold resulted in a suppression of typical glottal flow fields, including decreased spatial variability in the location of the flow separation point throughout the phonatory cycle, as well as a decrease in the vorticity magnitude.

  9. Longitudinal evaluation of an N-ethyl-N-nitrosourea-created murine model with normal pressure hydrocephalus.

    Directory of Open Access Journals (Sweden)

    Ming-Jen Lee

    Full Text Available BACKGROUND: Normal-pressure hydrocephalus (NPH) is a neurodegenerative disorder that usually occurs late in adult life. Clinically, the cardinal features include gait disturbances, urinary incontinence, and cognitive decline. METHODOLOGY/PRINCIPAL FINDINGS: Herein we report the characterization of a novel mouse model of NPH (designated p23-ST1), created by N-ethyl-N-nitrosourea (ENU)-induced mutagenesis. The ventricular size in the brain was measured by 3-dimensional micro-magnetic resonance imaging (3D-MRI) and was found to be enlarged. Intracranial pressure was measured and was found to fall within a normal range. A histological assessment and tracer flow study revealed that the cerebrospinal fluid (CSF) pathway of p23-ST1 mice was normal without obstruction. Motor functions were assessed using a rotarod apparatus and a CatWalk gait automatic analyzer. Mutant mice showed poor rotarod performance and gait disturbances. Cognitive function was evaluated using auditory fear-conditioned responses, with the mutant displaying both short- and long-term memory deficits. With an increase in urination frequency and volume, the mutant showed features of incontinence. Nissl substance staining and cell-type-specific markers were used to examine the brain pathology. These studies revealed concurrent glial activation and neuronal loss in the periventricular regions of mutant animals. In particular, chronically activated microglia were found in septal areas at a relatively young age, implying that microglial activation might contribute to the pathogenesis of NPH. These defects were transmitted in an autosomal dominant mode with reduced penetrance. Using a whole-genome scan employing 287 single-nucleotide polymorphic (SNP) markers and further refinement using six additional SNP markers and four microsatellite markers, the causative mutation was mapped to a 5.3-cM region on chromosome 4. CONCLUSIONS/SIGNIFICANCE: Our results collectively demonstrate that the p23-ST1

  10. STOCHASTIC PRICING MODEL FOR THE REAL ESTATE MARKET: FORMATION OF LOG-NORMAL GENERAL POPULATION

    Directory of Open Access Journals (Sweden)

    Oleg V. Rusakov

    2015-01-01

    We construct a stochastic model of real estate pricing. The method of price construction is based on a sequential comparison of the supply prices. We prove that, under standard assumptions imposed upon the comparison coefficients, there exists a unique non-degenerate limit in distribution, and this limit has the log-normal law of distribution. We verify the accordance of empirical price distributions with the theoretically obtained log-normal distribution using extensive statistical data on real estate prices from Saint Petersburg (Russia). For establishing this accordance we apply the efficient and sensitive Kolmogorov-Smirnov goodness-of-fit test. Based on "The Russian Federal Estimation Standard N2", we conclude that the most probable price, i.e. the mode of the distribution, is correctly and uniquely defined under the log-normal approximation. Since the mean value of a log-normal distribution exceeds the mode (the most probable value), it follows that prices valued by the mathematical expectation are systematically overstated.
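
    A minimal numerical sketch of the distributional claims above, on synthetic data (the sample prices, parameter values and scipy parameterization are assumptions, not the paper's): fit a log-normal law, check the fit with the Kolmogorov-Smirnov test, and confirm that the mean exceeds the mode.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      prices = rng.lognormal(mean=15.0, sigma=0.4, size=5000)  # hypothetical supply prices

      # Fit a log-normal law; fixing loc=0 makes log(price) ~ Normal(mu, sigma)
      shape, loc, scale = stats.lognorm.fit(prices, floc=0)
      mu, sigma = np.log(scale), shape

      # Kolmogorov-Smirnov goodness-of-fit test against the fitted law
      ks_stat, p_value = stats.kstest(prices, "lognorm", args=(shape, loc, scale))

      mode = np.exp(mu - sigma**2)        # most probable price
      mean = np.exp(mu + sigma**2 / 2.0)  # expected price; always exceeds the mode
      print(f"KS p = {p_value:.3f}, mode = {mode:.4g}, mean = {mean:.4g}")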

  11. Corticocortical feedback increases the spatial extent of normalization.

    Science.gov (United States)

    Nassi, Jonathan J; Gómez-Laberge, Camille; Kreiman, Gabriel; Born, Richard T

    2014-01-01

    Normalization has been proposed as a canonical computation operating across different brain regions, sensory modalities, and species. It provides a good phenomenological description of non-linear response properties in primary visual cortex (V1), including the contrast response function and surround suppression. Despite its widespread application throughout the visual system, the underlying neural mechanisms remain largely unknown. We recently observed that corticocortical feedback contributes to surround suppression in V1, raising the possibility that feedback acts through normalization. To test this idea, we characterized area summation and contrast response properties in V1 with and without feedback from V2 and V3 in alert macaques and applied a standard normalization model to the data. Area summation properties were well explained by a form of divisive normalization, which computes the ratio between a neuron's driving input and the spatially integrated activity of a "normalization pool." Feedback inactivation reduced surround suppression by shrinking the spatial extent of the normalization pool. This effect was independent of the gain modulation thought to mediate the influence of contrast on area summation, which remained intact during feedback inactivation. Contrast sensitivity within the receptive field center was also unaffected by feedback inactivation, providing further evidence that feedback participates in normalization independent of the circuit mechanisms involved in modulating contrast gain and saturation. These results suggest that corticocortical feedback contributes to surround suppression by increasing the visuotopic extent of normalization and, via this mechanism, feedback can play a critical role in contextual information processing.
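
    A minimal one-dimensional sketch of the divisive normalization computation described above, with a hypothetical Gaussian drive profile; the semi-saturation constant and pool radius are illustrative stand-ins, not the paper's fitted model.

      import numpy as np

      def divisive_normalization(drive, sigma=1.0, pool_radius=3):
          # Response = driving input divided by (semi-saturation constant plus the
          # summed activity of a spatial "normalization pool" centered on the unit)
          out = np.empty_like(drive)
          for i in range(len(drive)):
              lo, hi = max(0, i - pool_radius), min(len(drive), i + pool_radius + 1)
              out[i] = drive[i] / (sigma + drive[lo:hi].sum())
          return out

      drive = np.exp(-0.5 * (np.arange(21) - 10) ** 2 / 4.0)  # hypothetical drive profile
      wide = divisive_normalization(drive, pool_radius=5)     # intact feedback: large pool
      narrow = divisive_normalization(drive, pool_radius=1)   # inactivated: shrunken pool
      print(wide[10] < narrow[10])  # True: a smaller pool means weaker surround suppression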

  12. A probabilistic approach using deformable organ models for automatic definition of normal anatomical structures for 3D treatment planning

    International Nuclear Information System (INIS)

    Fritsch, Daniel; Yu Liyun; Johnson, Valen; McAuliffe, Matthew; Pizer, Stephen; Chaney, Edward

    1996-01-01

    Purpose/Objective: Current clinical methods for defining normal anatomical structures on tomographic images are time consuming and subject to intra- and inter-user variability. With the widespread implementation of 3D RTP, conformal radiotherapy, and dose escalation the implications of imprecise object definition have assumed a much higher level of importance. Object definition and volume-weighted metrics for normal anatomy, such as DVHs and NTCPs, play critical roles in aiming, shaping, and weighting beams. Improvements in object definition, including computer automation, are essential to yield reliable volume-weighted metrics and gains in human efficiency. The purpose of this study was to investigate a probabilistic approach using deformable models to automatically recognize and extract normal anatomy in tomographic images. Materials and Methods: Object models were created from normal organs that were segmented by an interactive method which involved placing a cursor near the center of the object on a slice and clicking a mouse button to initiate computation of structures called cores. Cores describe the skeletal and boundary shape of image objects in a manner that, in 2D, associates a location on the skeleton with the width of the object at that location. A significant advantage of cores is stability against image disturbances such as noise and blur. The model was composed of a relatively small set of extracted points on the skeleton and boundary. The points were carefully chosen to summarize the shape information captured by the cores. Neighborhood relationships between points were represented mathematically by energy functions that penalize, due to warping of the model, the 'goodness' of match between the model and the image data at any stage during the segmentation process. The model was matched against the image data using a probabilistic approach based on Bayes theorem, which provides a means for computing a posteriori (posterior) probability from 1) a

  13. Pharmacokinetics of Active Components From Guhong Injection in Normal and Pathological Rat Models of Cerebral Ischemia: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Li Yu

    2018-05-01

    Background and Objectives: Guhong Injection (GHI) is usually administered for the treatment of stroke in clinics. Aceglutamide and hydroxyl safflower yellow A (HSYA) are its key ingredients for brain-protective effect. To investigate the pharmacokinetics of aceglutamide and HSYA under pathological and normal conditions, the pharmacokinetic parameters and characteristics of middle cerebral artery occlusion (MCAO) and normal rats given the same dosage of GHI were studied and compared. Methods: Twelve SD rats were divided into two groups, namely, MCAO and normal groups. Both groups were treated with GHI at the same dosage. Plasma samples were collected from the jaw vein at different time points and subsequently tested by high-performance liquid chromatography (HPLC). Results: After administration of GHI, both aceglutamide and HSYA were immediately detected in the plasma. Ninety percent of aceglutamide and HSYA was eliminated within 3 h. For aceglutamide, statistically significant differences were observed in the parameters AUC(0−t), AUC(0−∞), AUMC(0−t), AUMC(0−∞), Cmax (P < 0.01), and Vz (P < 0.05). Meanwhile, compared with the MCAO group, the values of AUC(0−t), AUMC(0−t), VRT(0−t), and Cmax (P < 0.01) for HSYA were significantly higher in the normal group, whereas the value of MRT(0−t) was significantly lower. Conclusions: The in vivo trials based on the different models showed that the pharmacokinetic behaviors and parameters of aceglutamide and HSYA in GHI were completely different. These results suggest that the pathological damage of ischemia-reperfusion has a significant impact on the pharmacokinetic traits of aceglutamide and HSYA.
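
    A minimal sketch of how non-compartmental parameters of the kind reported above (AUC, AUMC, Cmax, MRT) are computed from plasma concentration-time points via the trapezoidal rule; the sampling times and concentrations are hypothetical, not the study's data.

      import numpy as np

      t = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0])  # h, hypothetical sampling times
      c = np.array([0.0, 12.0, 9.5, 5.1, 1.3, 0.4])  # ug/mL, hypothetical concentrations

      auc_0_t = np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t))                     # AUC(0-t)
      aumc_0_t = np.sum((c[1:] * t[1:] + c[:-1] * t[:-1]) / 2.0 * np.diff(t))   # AUMC(0-t)
      cmax, tmax = c.max(), t[c.argmax()]
      mrt = aumc_0_t / auc_0_t                                                  # MRT(0-t)
      print(auc_0_t, cmax, tmax, mrt)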

  14. Cracked rotors. A survey on static and dynamic behaviour including modelling and diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Bachschmid, Nicolo; Pennacchi, Paolo; Tanzi, Ezio [Politecnico di Milano (Italy). Dept. of Mechanical Engineering

    2010-07-01

    Cracks can develop in rotating shafts and can propagate to considerable depths without consistently affecting the normal operating conditions of the shaft. In order to avoid catastrophic failures, accurate vibration analyses have to be performed for crack detection. The identification of the crack location and depth is possible by means of a model-based diagnostic approach, provided that the model of the crack and the model of the cracked shaft's dynamical behavior are accurate and reliable. This monograph shows the typical dynamical behavior of cracked shafts and presents tests for detecting cracks. The book describes how to model cracks and how to simulate the dynamical behavior of a cracked shaft, and compares the corresponding numerical results with experimental ones. All effects of cracks on the vibrations of rotating shafts are analyzed, and some results of a numerical sensitivity analysis of the vibrations to the presence and severity of the crack are shown. Finally, the book describes some crack identification procedures and shows some results of model-based identification of crack position and depth. The book is useful for advanced university courses in mechanical and energy engineering, but also for skilled technical people employed in the power generation industry. (orig.)

  15. A probit/log-skew-normal mixture model for repeated measures data with excess zeros, with application to a cohort study of paediatric respiratory symptoms

    Directory of Open Access Journals (Sweden)

    Johnston Neil W

    2010-06-01

    Abstract Background A zero-inflated continuous outcome is characterized by the occurrence of "excess" zeros, more than a single distribution can explain, with the positive observations forming a skewed distribution. Mixture models are employed for regression analysis of zero-inflated data. Moreover, for repeated-measures zero-inflated data, the clustering structure should also be modeled for an adequate analysis. Methods The Diary of Asthma and Viral Infections Study (DAVIS) was a one-year (2004) cohort study conducted at McMaster University to monitor viral infection and respiratory symptoms in children aged 5-11 years with and without asthma. Respiratory symptoms were recorded daily using either an Internet or paper-based diary. Changes in symptoms were assessed by study staff and led to collection of nasal fluid specimens for virological testing. The study objectives included investigating the response of respiratory symptoms to respiratory viral infection in children with and without asthma over a one-year period. Due to sparse data, daily respiratory symptom scores were aggregated into weekly average scores. More than 70% of the weekly average scores were zero, with the positive scores forming a skewed distribution. We propose a random-effects probit/log-skew-normal mixture model to analyze the DAVIS data. The model parameters were estimated using a maximum marginal likelihood approach. A simulation study was conducted to assess the performance of the proposed mixture model if the underlying distribution of the positive response is different from log-skew-normal. Results Viral infection status was highly significant in both the probit and log-skew-normal model components. The probability of being symptom free was much lower for the week a child was viral positive relative to the week he/she was viral negative. The severity of the symptoms was also greater for the week a child was viral positive. The probability of being symptom free was
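
    A minimal simulation sketch of the two-part structure described above (all parameter values hypothetical): a binary symptom-free component, with the positive scores generated as the exponential of a skew-normal variate, and each component then fitted separately.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n = 1000
      p_zero = 0.7                                   # P(symptom-free week), probit part
      zero = rng.random(n) < p_zero
      log_pos = stats.skewnorm.rvs(a=3.0, loc=-1.0, scale=0.8, size=n, random_state=rng)
      scores = np.where(zero, 0.0, np.exp(log_pos))  # positives are log-skew-normal

      p_zero_hat = (scores == 0).mean()                               # zero component
      a, loc, scale = stats.skewnorm.fit(np.log(scores[scores > 0]))  # positive component
      print(p_zero_hat, a, loc, scale)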

  16. Extravascular transport in normal and tumor tissues.

    Science.gov (United States)

    Jain, R K; Gerlowski, L E

    1986-01-01

    The transport characteristics of the normal and tumor tissue extravascular space provide the basis for the determination of the optimal dosage and schedule regimes of various pharmacological agents in detection and treatment of cancer. In order for the drug to reach the cellular space where most therapeutic action takes place, several transport steps must first occur: (1) tissue perfusion; (2) permeation across the capillary wall; (3) transport through interstitial space; and (4) transport across the cell membrane. Any of these steps including intracellular events such as metabolism can be the rate-limiting step to uptake of the drug, and these rate-limiting steps may be different in normal and tumor tissues. This review examines these transport limitations, first from an experimental point of view and then from a modeling point of view. Various types of experimental tumor models which have been used in animals to represent human tumors are discussed. Then, mathematical models of extravascular transport are discussed from the perspective of two approaches: compartmental and distributed. Compartmental models lump one or more sections of a tissue or body into a "compartment" to describe the time course of disposition of a substance. These models contain "effective" parameters which represent the entire compartment. Distributed models consider the structural and morphological aspects of the tissue to determine the transport properties of that tissue. These distributed models describe both the temporal and spatial distribution of a substance in tissues. Each of these modeling techniques is described in detail with applications for cancer detection and treatment in mind.
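
    A minimal sketch of the compartmental approach reviewed above: plasma and interstitial compartments exchanging drug through "effective" first-order rate constants (the structure and values are illustrative assumptions, not a model from the review).

      import numpy as np
      from scipy.integrate import solve_ivp

      k12, k21, kel = 0.8, 0.3, 0.2   # 1/h, hypothetical effective rate constants

      def two_compartment(t, y):
          c_plasma, c_interstitial = y
          dcp = -(k12 + kel) * c_plasma + k21 * c_interstitial   # plasma: loss + return
          dci = k12 * c_plasma - k21 * c_interstitial            # interstitium: uptake - return
          return [dcp, dci]

      sol = solve_ivp(two_compartment, (0.0, 24.0), y0=[1.0, 0.0],
                      t_eval=np.linspace(0.0, 24.0, 49))
      print(sol.y[1].max())   # peak interstitial concentration of the agent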

  17. RELAP5-3D Code Includes ATHENA Features and Models

    International Nuclear Information System (INIS)

    Riemke, Richard A.; Davis, Cliff B.; Schultz, Richard R.

    2006-01-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF6, xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were already available in previous versions of RELAP5-3D. Recently, four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper. (authors)

  18. Valproic acid prevents retinal degeneration in a murine model of normal tension glaucoma.

    Science.gov (United States)

    Kimura, Atsuko; Guo, Xiaoli; Noro, Takahiko; Harada, Chikako; Tanaka, Kohichi; Namekata, Kazuhiko; Harada, Takayuki

    2015-02-19

    Valproic acid (VPA) is widely used for treatment of epilepsy, mood disorders, migraines and neuropathic pain. It exerts its therapeutic benefits through modulation of multiple mechanisms, including regulation of gamma-aminobutyric acid and glutamate neurotransmission, activation of pro-survival protein kinases and inhibition of histone deacetylase. The evidence for neuroprotective properties associated with VPA is emerging. Herein, we investigated the therapeutic potential of VPA in a mouse model of normal tension glaucoma (NTG). Mice with glutamate/aspartate transporter gene deletion (GLAST KO mice) demonstrate progressive retinal ganglion cell (RGC) loss and optic nerve degeneration without elevated intraocular pressure, and exhibit glaucomatous pathology including glutamate neurotoxicity and oxidative stress in the retina. VPA (300 mg/kg) or vehicle (PBS) was administered via intraperitoneal injection in GLAST KO mice daily for 2 weeks from the age of 3 weeks, which coincides with the onset of glaucomatous retinal degeneration. Following completion of the treatment period, the vehicle-treated GLAST KO mouse retina showed significant RGC death. Meanwhile, VPA treatment prevented RGC death and thinning of the inner retinal layer in GLAST KO mice. In addition, in vivo electrophysiological analyses demonstrated that the visual impairment observed in vehicle-treated GLAST KO mice was ameliorated with VPA treatment, clearly establishing that VPA beneficially affects both histological and functional aspects of the glaucomatous retina. We found that VPA reduces oxidative stress induced in the GLAST KO retina and stimulates the cell survival signalling pathway associated with extracellular-signal-regulated kinases (ERK). This is the first study to report the neuroprotective effects of VPA in an animal model of NTG. Our findings raise the intriguing possibility that the widely prescribed drug VPA may be a novel candidate for the treatment of glaucoma.

  19. Assessment of ANN and SVM models for estimating normal direct irradiation (H_b)

    International Nuclear Information System (INIS)

    Santos, Cícero Manoel dos; Escobedo, João Francisco; Teramoto, Érico Tadao; Modenese Gorla da Silva, Silvia Helena

    2016-01-01

    Highlights: • The performance of SVM and ANN in estimating normal direct irradiation (H_b) was evaluated. • 12 models using different input variables are developed (hourly and daily partitions). • The most relevant input variables for DNI are kt, H_sc and the insolation ratio (r′ = n/N). • Support Vector Machine (SVM) provides accurate estimates and outperforms the Artificial Neural Network (ANN). - Abstract: This study evaluates the estimation of hourly and daily normal direct irradiation (H_b) using machine learning techniques (ML): Artificial Neural Network (ANN) and Support Vector Machine (SVM). Time series of different meteorological variables measured over thirteen years in Botucatu were used for training and validating ANN and SVM. Seven different sets of input variables were tested and evaluated, which were chosen based on statistical models reported in the literature. Relative Mean Bias Error (rMBE), Relative Root Mean Square Error (rRMSE), determination coefficient (R²) and the Willmott index "d" were used to evaluate the ANN and SVM models. When compared to statistical models which use the same set of input variables (R² between 0.22 and 0.78), ANN and SVM show higher values of R² (hourly models between 0.52 and 0.88; daily models between 0.42 and 0.91). Considering the input variables, atmospheric transmissivity of global radiation (kt), integrated solar constant (H_sc) and insolation ratio (n/N, where n is sunshine duration and N is photoperiod) were the most relevant in the ANN and SVM models. The rMBE and rRMSE values in the two time partitions of the SVM models are lower than those obtained with ANN. Hourly ANN and SVM models have higher rRMSE values than daily models. Optimal performance with hourly models was obtained with ANN4h (rMBE = 12.24%, rRMSE = 23.99% and "d" = 0.96) and SVM4h (rMBE = 1.75%, rRMSE = 20.10% and "d" = 0.96). Optimal performance with daily models was obtained with ANN2d (rMBE = −3.09%, rRMSE = 18.95% and "d" = 0
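
    A minimal sketch of the four evaluation statistics named above, applied to hypothetical observed/predicted H_b arrays (the formulas are the standard definitions; the data are made up).

      import numpy as np

      def evaluate(obs, pred):
          rmbe = 100.0 * np.mean(pred - obs) / obs.mean()               # relative mean bias error
          rrmse = 100.0 * np.sqrt(np.mean((pred - obs) ** 2)) / obs.mean()
          r2 = np.corrcoef(obs, pred)[0, 1] ** 2                        # determination coefficient
          d = 1.0 - np.sum((pred - obs) ** 2) / np.sum(                 # Willmott index of agreement
              (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
          return rmbe, rrmse, r2, d

      obs = np.array([310.0, 450.0, 520.0, 610.0, 480.0])    # hypothetical H_b values
      pred = np.array([298.0, 470.0, 505.0, 630.0, 460.0])
      print(evaluate(obs, pred))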

  20. Hypothyroidism after primary radiotherapy for head and neck squamous cell carcinoma: Normal tissue complication probability modeling with latent time correction

    DEFF Research Database (Denmark)

    Rønjom, Marianne Feen; Brink, Carsten; Bentzen, Søren

    2013-01-01

    To develop a normal tissue complication probability (NTCP) model of radiation-induced biochemical hypothyroidism (HT) after primary radiotherapy for head and neck squamous cell carcinoma (HNSCC) with adjustment for latency and clinical risk factors.

  1. Improving weather predictability by including land-surface model parameter uncertainty

    Science.gov (United States)

    Orth, Rene; Dutra, Emanuel; Pappenberger, Florian

    2016-04-01

    The land surface forms an important component of Earth system models and interacts nonlinearly with other parts such as the ocean and atmosphere. To capture the complex and heterogeneous hydrology of the land surface, land surface models include a large number of parameters impacting the coupling to other components of the Earth system model. Focusing on ECMWF's land-surface model HTESSEL, we present in this study a comprehensive parameter sensitivity evaluation using multiple observational datasets in Europe. We select 6 poorly constrained effective parameters (surface runoff effective depth, skin conductivity, minimum stomatal resistance, maximum interception, soil moisture stress function shape, total soil depth) and explore the sensitivity of model outputs such as soil moisture, evapotranspiration and runoff to these parameters, using uncoupled simulations and coupled seasonal forecasts. Additionally, we investigate the possibility of constructing ensembles from the multiple land surface parameters. In the uncoupled runs we find that minimum stomatal resistance and total soil depth have the most influence on model performance. Forecast skill scores are moreover sensitive to the same parameters as HTESSEL performance in the uncoupled analysis. We demonstrate the robustness of our findings by comparing multiple best-performing parameter sets and multiple randomly chosen parameter sets. We find better temperature and precipitation forecast skill with the best-performing parameter perturbations, demonstrating representativeness of model performance across uncoupled (and hence less computationally demanding) and coupled settings. Finally, we construct ensemble forecasts from ensemble members derived with different best-performing parameterizations of HTESSEL. This incorporation of parameter uncertainty in the ensemble generation yields an increase in forecast skill, even beyond the skill of the default system.

  2. Collisional-radiative model including recombination processes for W27+ ion

    Science.gov (United States)

    Murakami, Izumi; Sasaki, Akira; Kato, Daiji; Koike, Fumihiro

    2017-10-01

    We have constructed a collisional-radiative (CR) model for W27+ ions, including 226 configurations with n ≤ 9 and l ≤ 5, for spectroscopic diagnostics. We newly include recombination processes in the model, and this is the first extreme ultraviolet spectrum calculated for the recombining plasma component. Calculated spectra in the 40-70 Å range for the ionizing and recombining plasma components show three similar strong lines (one of which is weak in the recombining component) at 45-50 Å and many weak lines at 50-65 Å for both components. Recombination processes do not contribute much to the spectrum around 60 Å for the W27+ ion. Dielectronic satellite lines also make only a minor contribution to the spectrum of the recombining plasma component. The dielectronic recombination (DR) rate coefficient from W28+ to W27+ ions is also calculated with the same atomic data in the CR model. We found that a larger set of energy levels including many autoionizing states gave larger DR rate coefficients, but our rate agrees within a factor of 6 with other works at electron temperatures around 1 keV, at which W27+ and W28+ ions are usually observed in plasmas. Contribution to the Topical Issue "Atomic and Molecular Data and their Applications", edited by Gordon W.F. Drake, Jung-Sik Yoon, Daiji Kato, and Grzegorz Karwasz.

  3. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    Science.gov (United States)

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
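
    A minimal simulation sketch of the data-generating model above and of the log-transform comparison method (scikit-learn's GraphicalLassoCV stands in for the regularized Gaussian graphical model; the dimensions, precision matrix and settings are assumptions).

      import numpy as np
      from sklearn.covariance import GraphicalLassoCV

      rng = np.random.default_rng(42)
      p, n = 10, 200
      prec = np.eye(p) + np.diag(np.full(p - 1, 0.4), 1) + np.diag(np.full(p - 1, 0.4), -1)
      latent = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec), size=n)
      counts = rng.poisson(np.exp(1.0 + latent))   # Poisson log-normal: overdispersed counts

      model = GraphicalLassoCV().fit(np.log1p(counts))      # Gaussian model on log counts
      edges = (np.abs(model.precision_) > 1e-3).sum() - p   # off-diagonal edges recovered
      print(edges)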

  4. Corticocortical feedback increases the spatial extent of normalization

    Science.gov (United States)

    Nassi, Jonathan J.; Gómez-Laberge, Camille; Kreiman, Gabriel; Born, Richard T.

    2014-01-01

    Normalization has been proposed as a canonical computation operating across different brain regions, sensory modalities, and species. It provides a good phenomenological description of non-linear response properties in primary visual cortex (V1), including the contrast response function and surround suppression. Despite its widespread application throughout the visual system, the underlying neural mechanisms remain largely unknown. We recently observed that corticocortical feedback contributes to surround suppression in V1, raising the possibility that feedback acts through normalization. To test this idea, we characterized area summation and contrast response properties in V1 with and without feedback from V2 and V3 in alert macaques and applied a standard normalization model to the data. Area summation properties were well explained by a form of divisive normalization, which computes the ratio between a neuron's driving input and the spatially integrated activity of a “normalization pool.” Feedback inactivation reduced surround suppression by shrinking the spatial extent of the normalization pool. This effect was independent of the gain modulation thought to mediate the influence of contrast on area summation, which remained intact during feedback inactivation. Contrast sensitivity within the receptive field center was also unaffected by feedback inactivation, providing further evidence that feedback participates in normalization independent of the circuit mechanisms involved in modulating contrast gain and saturation. These results suggest that corticocortical feedback contributes to surround suppression by increasing the visuotopic extent of normalization and, via this mechanism, feedback can play a critical role in contextual information processing. PMID:24910596

  5. A multilayer electro-thermal model of pouch battery during normal discharge and internal short circuit process

    International Nuclear Information System (INIS)

    Chen, Mingbiao; Bai, Fanfei; Song, Wenji; Lv, Jie; Lin, Shili

    2017-01-01

    Highlights: • A 2D network equivalent circuit considers the interplay of cell units. • The temperature non-uniformity Φ of the multilayer model is larger than that of the lumped model. • The temperature non-uniformity is quantified and the reason for it is analyzed. • Increasing the thermal conductivity of the separator can effectively relieve the heat-spot effect of an ISC. - Abstract: As the electrical and thermal characteristics affect a battery's safety, performance, calendar life and capacity fading, a coupled electro-thermal model of a LiFePO_4/C pouch battery is developed for normal discharge and the internal short circuit (ISC) process. The battery is discretized into many cell elements, which are united as a 2D network equivalent circuit. The electro-thermal model is solved with a finite difference method. Non-uniformity of the current and temperature distributions is simulated, and the result is validated with experimental data at various discharge rates. Comparison of the lumped model and the multilayer structure model shows that the temperature non-uniformity Φ of the multilayer model is larger than that of the lumped model, and that the multilayer model is more precise. The temperature non-uniformity is quantified and the reason for it is analyzed. The electro-thermal model can also be used to guide the safety design of batteries. The temperature of the ISC element near the tabs is the highest because the equivalent resistance of the external circuit (not including the ISC element) is smallest when the resistance of the cell units is small. It is found that increasing the thermal conductivity of the integrated layer can effectively relieve the heat-spot effect of an ISC.
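
    A minimal sketch of the finite-difference idea used above: explicit 1-D heat conduction across a row of cell elements, with a local Joule-heating source standing in for an internal short circuit (all material values and dimensions are hypothetical, and the paper's actual model is a 2-D electro-thermal network).

      import numpy as np

      nx, dx, dt = 50, 1e-3, 0.01        # elements, m, s
      k, rho, cp = 1.0, 2000.0, 1100.0   # W/m/K, kg/m^3, J/kg/K (hypothetical)
      alpha = k / (rho * cp)
      assert alpha * dt / dx**2 < 0.5    # explicit-scheme stability criterion

      T = np.full(nx, 25.0)              # degC, initial temperature
      q = np.zeros(nx)
      q[10] = 5e7                        # W/m^3, Joule heating at the ISC element
      for _ in range(1000):              # 10 s of simulated time
          lap = (np.roll(T, 1) - 2.0 * T + np.roll(T, -1)) / dx**2
          lap[0] = lap[-1] = 0.0         # crude insulated boundaries
          T += dt * (alpha * lap + q / (rho * cp))
      print(T.max(), int(T.argmax()))    # hot-spot temperature and its location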

  6. Parser Adaptation for Social Media by Integrating Normalization

    NARCIS (Netherlands)

    van der Goot, Rob; van Noord, Gerardus

    This work explores normalization for parser adaptation. Traditionally, normalization is used as a separate pre-processing step. We show that integrating the normalization model into the parsing algorithm is beneficial. This way, multiple normalization candidates can be leveraged, which improves parsing performance.

  7. Differentiation of prostate cancer from normal prostate tissue in an animal model: conventional MRI and dynamic contrast-enhanced MRI

    International Nuclear Information System (INIS)

    Gemeinhardt, O.; Prochnow, D.; Taupitz, M.; Hamm, B.; Beyersdorff, D.; Luedemann, L.; Abramjuk, C.

    2005-01-01

    Purpose: To differentiate orthotopically implanted prostate cancer from normal prostate tissue using magnetic resonance imaging (MRI) and Gd-DTPA-BMA-enhanced dynamic MRI in the rat model. Material and methods: Tumors were induced in 15 rats by orthotopic implantation of G subline Dunning rat prostatic tumor cells. MRI was performed 56 to 60 days after tumor cell implantation using T1-weighted spin-echo and T2-weighted turbo SE sequences, and a 2D FLASH sequence for the contrast-medium-based dynamic study. The interstitial leakage volume, normalized permeability and the permeability surface area product of tumor and healthy prostate were determined quantitatively using a pharmacokinetic model. The results were confirmed by histologic examination. Results: Axial T2-weighted TSE images depicted low-intensity areas suspicious for tumor in all 15 animals. The mean tumor volume was 46.5 mm³. In the dynamic study, the suspicious areas in all animals displayed faster and more pronounced signal enhancement than the surrounding prostate tissue. The interstitial volume and the permeability surface area product of the tumors increased significantly, by 420% (p<0.001) and 424% (p<0.001), respectively, compared to normal prostate tissue, while no significant difference was seen for normalized permeability alone. Conclusion: The results of the present study demonstrate that quantitative analysis of contrast-enhanced dynamic MRI data enables differentiation of small, slowly growing orthotopic prostate cancer from normal prostate tissue in the rat model. (orig.)

  8. A compendium of canine normal tissue gene expression.

    Directory of Open Access Journals (Sweden)

    Joseph Briggs

    BACKGROUND: Our understanding of disease is increasingly informed by changes in gene expression between normal and abnormal tissues. The release of the canine genome sequence in 2005 provided an opportunity to better understand human health and disease using the dog as a clinically relevant model. Accordingly, we now present the first genome-wide, canine normal tissue gene expression compendium with corresponding human cross-species analysis. METHODOLOGY/PRINCIPAL FINDINGS: The Affymetrix platform was utilized to catalogue gene expression signatures of 10 normal canine tissues, including liver, kidney, heart, lung, cerebrum, lymph node, spleen, jejunum, pancreas and skeletal muscle. The quality of the database was assessed in several ways. Organ-defining gene sets were identified for each tissue, and functional enrichment analysis revealed themes consistent with the known physio-anatomic functions of each organ. In addition, a comparison of orthologous gene expression between matched canine and human normal tissues uncovered remarkable similarity. To demonstrate the utility of this dataset, novel canine gene annotations were established based on comparative analysis of dog and human tissue-selective gene expression and manual curation of canine probeset mapping. Public access, using infrastructure identical to that currently in use for human normal tissues, has been established and allows for additional comparisons across species. CONCLUSIONS/SIGNIFICANCE: These data advance our understanding of the canine genome through a comprehensive analysis of gene expression in a diverse set of tissues, contributing to improved functional annotation that has been lacking. Importantly, it will be used to inform future studies of disease in the dog as a model for human translational research and provides a novel resource to the community at large.

  9. Environmental dose-assessment methods for normal operations at DOE nuclear sites

    International Nuclear Information System (INIS)

    Strenge, D.L.; Kennedy, W.E. Jr.; Corley, J.P.

    1982-09-01

    Methods for assessing public exposure to radiation from normal operations at DOE facilities are reviewed in this report. The report includes a discussion of environmental doses to be calculated, a review of currently available environmental pathway models and a set of recommended models for use when environmental pathway modeling is necessary. Currently available models reviewed include those used by DOE contractors, the Environmental Protection Agency (EPA), the Nuclear Regulatory Commission (NRC), and other organizations involved in environmental assessments. General modeling areas considered for routine releases are atmospheric transport, airborne pathways, waterborne pathways, direct exposure to penetrating radiation, and internal dosimetry. The pathway models discussed in this report are applicable to long-term (annual) uniform releases to the environment: they do not apply to acute releases resulting from accidents or emergency situations

  10. Methods and data for HTGR fuel performance and radionuclide release modeling during normal operation and accidents for safety analysis

    International Nuclear Information System (INIS)

    Verfondern, K.; Martin, R.C.; Moormann, R.

    1993-01-01

    The previous status report released in 1987 on reference data and calculation models for fission product transport in High-Temperature, Gas-Cooled Reactor (HTGR) safety analyses has been updated to reflect the current state of knowledge in the German HTGR program. The content of the status report has been expanded to include information from other national programs in HTGRs to provide comparative information on methods of analysis and the underlying database for fuel performance and fission product transport. The release and transport of fission products during normal operating conditions and during the accident scenarios of core heatup, water and air ingress, and depressurization are discussed. (orig.)

  11. A review of shear strength models for rock joints subjected to constant normal stiffness

    Directory of Open Access Journals (Sweden)

    Sivanathan Thirukumaran

    2016-06-01

    The typical shear behaviour of rough joints has been studied under constant normal load/stress (CNL) boundary conditions, but recent studies have shown that this boundary condition may not replicate true practical situations. Constant normal stiffness (CNS) is more appropriate for describing the stress–strain response of field joints, since the CNS boundary condition is more realistic than CNL. The practical implications of CNS include movements of unstable blocks in the roof or walls of an underground excavation, reinforced rock wedges sliding in a rock slope or foundation, and the vertical movement of rock-socketed concrete piles. In this paper, the highlights and limitations of the existing models used to predict the shear strength/behaviour of joints under CNS conditions are discussed in depth.

  12. A finite element propagation model for extracting normal incidence impedance in nonprogressive acoustic wave fields

    Science.gov (United States)

    Watson, Willie R.; Jones, Michael G.; Tanner, Sharon E.; Parrott, Tony L.

    1995-01-01

    A propagation model method for extracting the normal incidence impedance of an acoustic material installed as a finite length segment in a wall of a duct carrying a nonprogressive wave field is presented. The method recasts the determination of the unknown impedance as the minimization of the normalized wall pressure error function. A finite element propagation model is combined with a coarse/fine grid impedance plane search technique to extract the impedance of the material. Results are presented for three different materials for which the impedance is known. For each material, the input data required for the prediction scheme was computed from modal theory and then contaminated by random error. The finite element method reproduces the known impedance of each material almost exactly for random errors typical of those found in many measurement environments. Thus, the method developed here provides a means for determining the impedance of materials in a nonprogressive wave environment such as that usually encountered in a commercial aircraft engine and most laboratory settings.
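
    A minimal sketch of the coarse/fine impedance-plane search paired with the propagation model above; the quadratic error function below is a hypothetical stand-in for the normalized wall pressure error that the finite element model would compute.

      import numpy as np

      def wall_pressure_error(zeta):
          # Hypothetical stand-in: the real method would run the finite element
          # propagation model and compare predicted to measured wall pressures
          return np.abs(zeta - (0.7 - 1.2j)) ** 2

      def grid_search(center, half_width, n=21):
          re = np.linspace(center.real - half_width, center.real + half_width, n)
          im = np.linspace(center.imag - half_width, center.imag + half_width, n)
          grid = re[:, None] + 1j * im[None, :]
          err = wall_pressure_error(grid)          # evaluate on the whole grid
          i, j = np.unravel_index(np.argmin(err), err.shape)
          return grid[i, j]

      zeta = 1.0 + 0.0j                            # initial guess
      for half_width in (2.0, 0.2, 0.02):          # coarse grid, then refinements
          zeta = grid_search(zeta, half_width)
      print(zeta)                                  # converges near 0.7 - 1.2j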

  13. Cardiovascular cast model fabrication and casting effectiveness evaluation in fetus with severe congenital heart disease or normal heart.

    Science.gov (United States)

    Wang, Yu; Cao, Hai-yan; Xie, Ming-xing; He, Lin; Han, Wei; Hong, Liu; Peng, Yuan; Hu, Yun-fei; Song, Ben-cai; Wang, Jing; Wang, Bin; Deng, Cheng

    2016-04-01

    To investigate the application and effectiveness of the vascular corrosion technique in preparing fetal cardiovascular cast models, 10 normal fetal heart specimens with other congenital disease (control group) and 18 specimens with severe congenital heart disease (case group) from induced abortions were enrolled in this study from March 2013 to June 2015 in our hospital. Cast models were prepared by injecting casting material into the vascular lumen to demonstrate the real geometry of the fetal cardiovascular system. Casting effectiveness was analyzed in terms of local anatomic structures and different anatomical levels (including overall level, atrioventricular and great vascular systems, and left-sided and right-sided heart), as well as different trimesters of pregnancy. In our study, all specimens were successfully cast. Casting effectiveness analysis of local anatomic structures showed a mean score from 1.90±1.45 to 3.60±0.52, without significant differences between the case and control groups in most local anatomic structures except the left ventricle, which had a higher score in the control group (P=0.027). Inter-group comparison of casting effectiveness at different anatomical levels showed no significant differences between the two groups. Intra-group comparison also revealed undifferentiated casting effectiveness between the atrioventricular and great vascular systems, or the left-sided and right-sided heart, in the corresponding group. The third-trimester group had a significantly higher perfusion score in the great vascular system than the second-trimester group (P=0.046), while the other anatomical levels displayed no such difference. The vascular corrosion technique can be successfully used in the fabrication of fetal cardiovascular cast models. It is also a reliable method to demonstrate the three-dimensional anatomy of severe congenital heart disease and of the normal heart in the fetus.

  14. Metabolomics data normalization with EigenMS.

    Directory of Open Access Journals (Sweden)

    Yuliya V Karpievitch

    Liquid chromatography mass spectrometry (LC-MS) has become one of the analytical platforms of choice for metabolomics studies. However, LC-MS metabolomics data can suffer from the effects of various systematic biases. These include batch effects, day-to-day variations in instrument performance, signal intensity loss due to time-dependent effects of LC column performance, accumulation of contaminants in the MS ion source, and MS sensitivity, among others. In this study we aimed to test a singular value decomposition-based method, called EigenMS, for normalization of metabolomics data. We analyzed a clinical human dataset in which LC-MS serum metabolomics data and physiological measurements were collected from thirty-nine healthy subjects and forty with type 2 diabetes, and applied EigenMS to detect and correct for any systematic bias. EigenMS works in several stages. First, EigenMS preserves the treatment group differences in the metabolomics data by estimating treatment effects with an ANOVA model (multiple fixed effects can be estimated). Singular value decomposition of the residuals matrix is then used to determine bias trends in the data. The number of bias trends is then estimated via a permutation test and the effects of the bias trends are eliminated. EigenMS removed bias of unknown complexity from the LC-MS metabolomics data, allowing for increased sensitivity in differential analysis. Moreover, normalized samples correlated better both with other normalized samples and with corresponding physiological data, such as blood glucose level, glycated haemoglobin, exercise central augmentation pressure normalized to a heart rate of 75, and total cholesterol. We were able to report 2578 discriminatory metabolite peaks in the normalized data (p<0.05), as compared to only 1840 metabolite signals in the raw data. Our results support the use of singular value decomposition-based normalization for metabolomics data.
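
    A minimal numpy sketch of the EigenMS stages described above, on synthetic data: estimate group effects, take the SVD of the residual matrix, and remove the leading bias trend (here the number of trends is fixed at one rather than chosen by a permutation test).

      import numpy as np

      rng = np.random.default_rng(7)
      n_feat, n_samp = 200, 40
      group = np.repeat([0, 1], n_samp // 2)            # e.g., healthy vs type 2 diabetes
      X = rng.normal(size=(n_feat, n_samp))
      X += np.outer(rng.normal(size=n_feat), np.linspace(-1.0, 1.0, n_samp))  # injected bias

      # Stage 1: preserve group differences by fitting per-group means (ANOVA fit)
      fitted = np.zeros_like(X)
      for g in (0, 1):
          fitted[:, group == g] = X[:, group == g].mean(axis=1, keepdims=True)
      resid = X - fitted

      # Stage 2: SVD of the residuals reveals systematic bias trends
      U, s, Vt = np.linalg.svd(resid, full_matrices=False)
      k = 1                                             # number of bias trends (assumed)
      bias = (U[:, :k] * s[:k]) @ Vt[:k, :]
      X_norm = fitted + resid - bias                    # bias-corrected data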

  15. Simplification and Validation of a Spectral-Tensor Model for Turbulence Including Atmospheric Stability

    Science.gov (United States)

    Chougule, Abhijit; Mann, Jakob; Kelly, Mark; Larsen, Gunner C.

    2018-02-01

    A spectral-tensor model of non-neutral, atmospheric-boundary-layer turbulence is evaluated using Eulerian statistics from single-point measurements of the wind speed and temperature at heights up to 100 m, assuming constant vertical gradients of mean wind speed and temperature. The model has been previously described in terms of the dissipation rate ɛ, the length scale of energy-containing eddies L, a turbulence anisotropy parameter Γ, the Richardson number Ri, and the normalized rate of destruction of temperature variance η_θ ≡ ɛ_θ/ɛ. Here, the latter two parameters are collapsed into a single atmospheric stability parameter z/L using Monin-Obukhov similarity theory, where z is the height above the Earth's surface, and L is the Obukhov length corresponding to Ri, η_θ. Model outputs of the one-dimensional velocity spectra, as well as cospectra of the streamwise and/or vertical velocity components, and/or temperature, and cross-spectra for the spatial separation of all three velocity components and temperature, are compared with measurements. As a function of the four model parameters, spectra and cospectra are reproduced quite well, but horizontal temperature fluxes are slightly underestimated in stable conditions. In moderately unstable stratification, our model reproduces spectra only up to a scale of ~1 km. The model also overestimates coherences for vertical separations, but the effect is less severe in unstable than in stable cases.
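
    For context, the standard Monin-Obukhov definition behind the stability parameter z/L (textbook notation, assumed here rather than quoted from the paper) is

        L = -\frac{u_*^{3}\,\overline{\theta_v}}{\kappa\, g\, \overline{w'\theta_v'}}

    where u_* is the friction velocity, κ ≈ 0.4 the von Kármán constant, g the gravitational acceleration, θ_v the virtual potential temperature and w'θ_v' the surface buoyancy flux; z/L < 0 corresponds to unstable and z/L > 0 to stable stratification.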

  16. Phenotype of normal spirometry in an aging population.

    Science.gov (United States)

    Vaz Fragoso, Carlos A; McAvay, Gail; Van Ness, Peter H; Casaburi, Richard; Jensen, Robert L; MacIntyre, Neil; Gill, Thomas M; Yaggi, H Klar; Concato, John

    2015-10-01

    In aging populations, the commonly used Global Initiative for Chronic Obstructive Lung Disease (GOLD) may misclassify normal spirometry as respiratory impairment (airflow obstruction and restrictive pattern), including the presumption of respiratory disease (chronic obstructive pulmonary disease [COPD]). To evaluate the phenotype of normal spirometry as defined by a new approach from the Global Lung Initiative (GLI), overall and across GOLD spirometric categories. Using data from COPDGene (n = 10,131; ages 45-81; smoking history, ≥10 pack-years), we evaluated spirometry and multiple phenotypes, including dyspnea severity (Modified Medical Research Council grade 0-4), health-related quality of life (St. George's Respiratory Questionnaire total score), 6-minute-walk distance, bronchodilator reversibility (FEV1 % change), computed tomography-measured percentage of lung with emphysema (% emphysema) and gas trapping (% gas trapping), and small airway dimensions (square root of the wall area for a standardized airway with an internal perimeter of 10 mm). Among 5,100 participants with GLI-defined normal spirometry, GOLD identified respiratory impairment in 1,146 (22.5%), including a restrictive pattern in 464 (9.1%), mild COPD in 380 (7.5%), moderate COPD in 302 (5.9%), and severe COPD in none. Overall, the phenotype of GLI-defined normal spirometry included normal adjusted mean values for dyspnea grade (0.8), St. George's Respiratory Questionnaire (15.9), 6-minute-walk distance (1,424 ft [434 m]), bronchodilator reversibility (2.7%), % emphysema (0.9%), % gas trapping (10.7%), and square root of the wall area for a standardized airway with an internal perimeter of 10 mm (3.65 mm); corresponding 95% confidence intervals were similarly normal. These phenotypes remained normal for GLI-defined normal spirometry across GOLD spirometric categories. GLI-defined normal spirometry, even when classified as respiratory impairment by GOLD, included adjusted mean values in the

  17. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.

    2012-01-01

    PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator

  18. Group normalization for genomic data.

    Science.gov (United States)

    Ghandi, Mahmoud; Beer, Michael A

    2012-01-01

    Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as Quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets.
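
    A minimal sketch of quantile normalization, one of the conventional baselines named above (synthetic data; the Group Normalization algorithm itself, which matches each probe against a set of similarly responding reference probes, is not reproduced here).

      import numpy as np

      def quantile_normalize(X):
          # Force every column (sample) of X to share the same empirical distribution
          ranks = X.argsort(axis=0).argsort(axis=0)     # per-column rank of each entry
          reference = np.sort(X, axis=0).mean(axis=1)   # mean of the sorted columns
          return reference[ranks]

      X = np.random.default_rng(3).lognormal(size=(1000, 4))   # probes x samples
      Xn = quantile_normalize(X)
      print(np.allclose(np.sort(Xn, axis=0)[:, 0], np.sort(Xn, axis=0)[:, 1]))  # True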

  19. Modeling of normal contact of elastic bodies with surface relief taken into account

    Science.gov (United States)

    Goryacheva, I. G.; Tsukanov, I. Yu

    2018-04-01

    An approach to accounting for surface relief in normal contact problems for rough bodies, based on an additional displacement function for the asperities, is considered. The method and analytic expressions for calculating the additional displacement function for a one-scale and a two-scale wavy relief are presented. The influence of the microrelief geometric parameters, including the number of scales and the asperity density, on the additional displacements of the rough layer is analyzed.

  20. Modal analysis of inter-area oscillations using the theory of normal modes

    Energy Technology Data Exchange (ETDEWEB)

    Betancourt, R.J. [School of Electromechanical Engineering, University of Colima, Manzanillo, Col. 28860 (Mexico); Barocio, E. [CUCEI, University of Guadalajara, Guadalajara, Jal. 44480 (Mexico); Messina, A.R. [Graduate Program in Electrical Engineering, Cinvestav, Guadalajara, Jal. 45015 (Mexico); Martinez, I. [State Autonomous University of Mexico, Toluca, Edo. Mex. 50110 (Mexico)

    2009-04-15

    Based on the notion of normal modes in mechanical systems, a method is proposed for the analysis and characterization of oscillatory processes in power systems. The method is based on the property of invariance of modal subspaces and can be used to represent complex power system modal behavior by a set of decoupled, two-degree-of-freedom nonlinear oscillator equations. Using techniques from nonlinear mechanics, a new approach is outlined, for determining the normal modes (NMs) of motion of a general n-degree-of-freedom nonlinear system. Equations relating the normal modes and the physical velocities and displacements are developed from the linearized system model and numerical issues associated with the application of the technique are discussed. In addition to qualitative insight, this method can be utilized in the study of nonlinear behavior and bifurcation analyses. The application of these procedures is illustrated on a planning model of the Mexican interconnected system using a quadratic nonlinear model. Specifically, the use of normal mode analysis as a basis for identifying modal parameters, including natural frequencies and damping ratios of general, linear systems with n degrees of freedom is discussed. Comparisons to conventional linear analysis techniques demonstrate the ability of the proposed technique to extract the different oscillation modes embedded in the oscillation. (author)
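
    For context, the linearized step underlying the construction (standard structural-dynamics notation, an assumption rather than the paper's exact formulation): small oscillations M\ddot{q} + Kq = 0 decouple through the generalized eigenproblem

        K\,\phi_i = \omega_i^{2}\,M\,\phi_i, \qquad q(t) = \sum_i \phi_i\,\eta_i(t), \qquad \ddot{\eta}_i + \omega_i^{2}\,\eta_i = 0,

    so each normal mode evolves as an independent oscillator; the nonlinear normal-mode construction generalizes these invariant modal subspaces to the quadratic power system model.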

  1. TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis

    International Nuclear Information System (INIS)

    Krafft, S; Briere, T; Court, L; Martel, M

    2015-01-01

    Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data was collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade≥3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC∼0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC=0.648). Analysis of the full cohort of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC=0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP
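
    A minimal sketch of the modeling framework described above: an L1-penalized (LASSO) logistic regression over a wide candidate feature matrix, scored by AUC on held-out data (the data below are synthetic stand-ins, not the 198-patient cohort).

      import numpy as np
      from sklearn.linear_model import LogisticRegressionCV
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.normal(size=(198, 300))                 # patients x candidate predictors
      w = np.zeros(300)
      w[:5] = 1.0                                     # only a few features carry signal
      y = (X @ w + rng.normal(size=198) > 0).astype(int)   # stand-in for RP grade >= 3

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      scaler = StandardScaler().fit(X_tr)
      model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5)
      model.fit(scaler.transform(X_tr), y_tr)
      auc = roc_auc_score(y_te, model.predict_proba(scaler.transform(X_te))[:, 1])
      print(auc, int((model.coef_ != 0).sum()), "features selected")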

  2. Extending Primitive Spatial Data Models to Include Semantics

    Science.gov (United States)

    Reitsma, F.; Batcheller, J.

    2009-04-01

    Our traditional geospatial data model involves associating some measurable quality, such as temperature, or observable feature, such as a tree, with a point or region in space and time. When capturing data we implicitly subscribe to some kind of conceptualisation. If we can make this explicit in an ontology and associate it with the captured data, we can leverage formal semantics to reason with the concepts represented in our spatial data sets. To do so, we extend our fundamental representation of geospatial data by including in the basic data model a URI that links it to the ontology defining our conceptualisation. We thus extend Goodchild et al.'s geo-atom [1] with the addition of a URI: (x, Z, z(x), URI). This provides us with pixel- or feature-level knowledge and the ability to create layers of data from a set of pixels or features that might be drawn from a database based on their semantics. Using open source tools, we present a prototype that involves simple reasoning as a proof of concept. References [1] M.F. Goodchild, M. Yuan, and T.J. Cova. Towards a general theory of geographic representation in GIS. International Journal of Geographical Information Science, 21(3):239-260, 2007.
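
    A minimal sketch of the extended geo-atom (x, Z, z(x), URI) as a data structure; the field names and example URI are illustrative assumptions, not the authors' prototype.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class GeoAtom:
          x: tuple            # point or region in space and time
          z_name: str         # Z: the measured property or observed feature type
          z_value: float      # z(x): the property's value at x
          concept_uri: str    # URI linking the atom to its defining ontology concept

      atom = GeoAtom(
          x=(-3.19, 55.95, 2009.25),                                  # lon, lat, time
          z_name="air_temperature",
          z_value=281.4,
          concept_uri="http://example.org/ontology#AirTemperature",   # hypothetical URI
      )
      print(atom)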

  3. Normal Pressure Hydrocephalus

    Science.gov (United States)

    ... improves the chance of a good recovery. Without treatment, symptoms may worsen and cause death. What research is being done? The NINDS conducts and supports research on neurological disorders, including normal pressure hydrocephalus. Research on disorders such ...

  4. Making nuclear 'normal'

    International Nuclear Information System (INIS)

    Haehlen, Peter; Elmiger, Bruno

    2000-01-01

    The mechanics of the Swiss NPPs' 'come and see' programme 1995-1999 were illustrated in our contributions to all PIME workshops since 1996. Now, after four annual 'waves', all the country has been covered by the NPPs' invitation to dialogue. This makes PIME 2000 the right time to shed some light on one particular objective of this initiative: making nuclear 'normal'. The principal aim of the 'come and see' programme, namely to give the Swiss NPPs 'a voice of their own' by the end of the nuclear moratorium 1990-2000, has clearly been attained and was commented on during earlier PIMEs. It is, however, equally important that Swiss nuclear energy not only made progress in terms of public 'presence', but also in terms of being perceived as a normal part of industry, as a normal branch of the economy. The message that Swiss nuclear energy is nothing but a normal business involving normal people, was stressed by several components of the multi-prong campaign: - The speakers in the TV ads were real - 'normal' - visitors' guides and not actors; - The testimonials in the print ads were all real NPP visitors - 'normal' people - and not models; - The mailings inviting a very large number of associations to 'come and see' activated a typical channel of 'normal' Swiss social life; - Spending money on ads (a new activity for Swiss NPPs) appears to have resulted in being perceived by the media as a normal branch of the economy. Today we feel that the 'normality' message has well been received by the media. In the controversy dealing with antinuclear arguments brought forward by environmental organisations journalists nowadays as a rule give nuclear energy a voice - a normal right to be heard. As in a 'normal' controversy, the media again actively ask themselves questions about specific antinuclear claims, much more than before 1990 when the moratorium started. The result is that in many cases such arguments are discarded by journalists, because they are, e.g., found to be

  5. Toward a normalized clinical drug knowledge base in China-applying the RxNorm model to Chinese clinical drugs.

    Science.gov (United States)

    Wang, Li; Zhang, Yaoyun; Jiang, Min; Wang, Jingqi; Dong, Jiancheng; Liu, Yun; Tao, Cui; Jiang, Guoqian; Zhou, Yi; Xu, Hua

    2018-04-04

    In recent years, electronic health record systems have been widely implemented in China, making clinical data available electronically. However, little effort has been devoted to making drug information exchangeable among these systems. This study aimed to build a Normalized Chinese Clinical Drug (NCCD) knowledge base, by applying and extending the information model of RxNorm to Chinese clinical drugs. Chinese drugs were collected from 4 major resources-China Food and Drug Administration, China Health Insurance Systems, Hospital Pharmacy Systems, and China Pharmacopoeia-for integration and normalization in NCCD. Chemical drugs were normalized using the information model in RxNorm without much change. Chinese patent drugs (i.e., Chinese herbal extracts), however, were represented using an expanded RxNorm model to incorporate the unique characteristics of these drugs. A hybrid approach combining automated natural language processing technologies and manual review by domain experts was then applied to drug attribute extraction, normalization, and further generation of drug names at different specification levels. Lastly, we reported the statistics of NCCD, as well as the evaluation results using several sets of randomly selected Chinese drugs. The current version of NCCD contains 16 976 chemical drugs and 2663 Chinese patent medicines, resulting in 19 639 clinical drugs, 250 267 unique concepts, and 2 602 760 relations. By manual review of 1700 chemical drugs and 250 Chinese patent drugs randomly selected from NCCD (about 10%), we showed that the hybrid approach could achieve an accuracy of 98.60% for drug name extraction and normalization. Using a collection of 500 chemical drugs and 500 Chinese patent drugs from other resources, we showed that NCCD achieved coverages of 97.0% and 90.0% for chemical drugs and Chinese patent drugs, respectively. Evaluation results demonstrated the potential to improve interoperability across various electronic drug systems
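
    As a concrete, purely illustrative sketch of the kind of normalized representation RxNorm-style models use, the snippet below composes a clinical-drug name from ingredient, strength, and dose form attributes; the field names and the concept identifier are hypothetical, not the actual NCCD schema.

    ```python
    # Hypothetical normalized drug record in the spirit of RxNorm/NCCD;
    # the schema below is invented for illustration.
    drug = {
        "concept_id": "NCCD000123",   # made-up normalized concept identifier
        "ingredient": "amoxicillin",
        "strength": "250 mg",
        "dose_form": "oral capsule",
    }

    # A fully specified clinical-drug name at the ingredient + strength +
    # dose form level, analogous to RxNorm's semantic clinical drug (SCD).
    scd_name = "{ingredient} {strength} {dose_form}".format(**drug)
    print(scd_name)  # -> amoxicillin 250 mg oral capsule
    ```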

  6. Improving normal tissue complication probability models: the need to adopt a "data-pooling" culture.

    Science.gov (United States)

    Deasy, Joseph O; Bentzen, Søren M; Jackson, Andrew; Ten Haken, Randall K; Yorke, Ellen D; Constine, Louis S; Sharma, Ashish; Marks, Lawrence B

    2010-03-01

    Clinical studies of the dependence of normal tissue response on dose-volume factors are often confusingly inconsistent, as the QUANTEC reviews demonstrate. A key opportunity to accelerate progress is to begin storing high-quality datasets in repositories. Using available technology, multiple repositories could be conveniently queried, without divulging protected health information, to identify relevant sources of data for further analysis. After obtaining institutional approvals, data could then be pooled, greatly enhancing the capability to construct predictive models that are more widely applicable and better powered to accurately identify key predictive factors (whether dosimetric, image-based, clinical, socioeconomic, or biological). Data pooling has already been carried out effectively in a few normal tissue complication probability studies and should become a common strategy.

  7. One Dimension Analytical Model of Normal Ballistic Impact on Ceramic/Metal Gradient Armor

    International Nuclear Information System (INIS)

    Liu Lisheng; Zhang Qingjie; Zhai Pengcheng; Cao Dongfeng

    2008-01-01

    An analytical model of normal ballistic impact on ceramic/metal gradient armor, based on modified Alekseevskii-Tate equations, has been developed. In this model the process of the gradient armor being impacted by a long rod is divided into four stages: the first stage comprises the projectile's mass-erosion (flowing), mushrooming, and rigid phases; the second is the formation of the comminuted ceramic conoid; the third is the penetration of the gradient layer; and the last is the penetration of the metal back-up plate. The equations of the third stage have been advanced by assuming rigid-plastic behavior of the gradient layer and considering the effect of strain rate on the dynamic yield strength.
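
    For readers unfamiliar with the base model, the sketch below integrates the classical (unmodified) Alekseevskii-Tate equations for a long rod; all material parameters are illustrative, and the gradient-layer and strain-rate extensions described in the abstract are not reproduced.

    ```python
    import math

    # Classical Alekseevskii-Tate long-rod penetration, explicit Euler.
    # Parameters are illustrative (rod denser than target, as assumed by
    # the root choice below), not values from the paper.
    rho_p, rho_t = 17600.0, 3800.0  # projectile / target density (kg/m^3)
    Yp, Rt = 1.0e9, 4.0e9           # rod strength, target resistance (Pa)
    v, L, P, dt = 1500.0, 0.1, 0.0, 1e-7  # tail velocity, rod length, depth, step

    while v > 0.0 and L > 0.0:
        if 0.5 * rho_p * v * v + Yp <= Rt:
            break  # rod can no longer defeat the target resistance
        # Solve 0.5*rho_p*(v-u)^2 + Yp = 0.5*rho_t*u^2 + Rt for the
        # interface velocity u (this root is valid for rho_p > rho_t).
        a = rho_t - rho_p
        b = 2.0 * rho_p * v
        c = 2.0 * (Rt - Yp) - rho_p * v * v
        u = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        v -= (Yp / (rho_p * L)) * dt  # rigid tail decelerates under Yp
        L -= (v - u) * dt             # rod erodes at the interface
        P += u * dt                   # crater deepens at rate u

    print(f"penetration depth ~ {P * 1000:.0f} mm")
    ```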

  8. One Dimension Analytical Model of Normal Ballistic Impact on Ceramic/Metal Gradient Armor

    Science.gov (United States)

    Liu, Lisheng; Zhang, Qingjie; Zhai, Pengcheng; Cao, Dongfeng

    2008-02-01

    An analytical model of normal ballistic impact on ceramic/metal gradient armor, based on modified Alekseevskii-Tate equations, has been developed. In this model the process of the gradient armor being impacted by a long rod is divided into four stages: the first stage comprises the projectile's mass-erosion (flowing), mushrooming, and rigid phases; the second is the formation of the comminuted ceramic conoid; the third is the penetration of the gradient layer; and the last is the penetration of the metal back-up plate. The equations of the third stage have been advanced by assuming rigid-plastic behavior of the gradient layer and considering the effect of strain rate on the dynamic yield strength.

  9. Conceptualizing a Dynamic Fall Risk Model Including Intrinsic Risks and Exposures.

    Science.gov (United States)

    Klenk, Jochen; Becker, Clemens; Palumbo, Pierpaolo; Schwickert, Lars; Rapp, Kilan; Helbostad, Jorunn L; Todd, Chris; Lord, Stephen R; Kerse, Ngaire

    2017-11-01

    Falls are a major cause of injury and disability in older people, leading to serious health and social consequences including fractures, poor quality of life, loss of independence, and institutionalization. To design and provide adequate prevention measures, accurate understanding and identification of a person's individual fall risk are important. However, to date, the performance of fall risk models is weak compared with models estimating, for example, cardiovascular risk. This deficiency may result from 2 factors. First, current models consider risk factors to be stable for each person and not to change over time, an assumption that does not reflect real-life experience. Second, current models do not consider the interplay of individual exposure, including type of activity (eg, walking, undertaking transfers), and the environmental risks (eg, lighting, floor conditions) in which the activity is performed. Therefore, we posit a dynamic fall risk model consisting of intrinsic risk factors that vary over time and exposure (activity in context). eHealth sensor technology (eg, smartphones) begins to enable the continuous measurement of both of the above factors. We illustrate our model with examples of real-world falls from the FARSEEING database. This dynamic framework for fall risk adds important aspects that may improve understanding of fall mechanisms, fall risk models, and the development of fall prevention interventions.

  10. Hypothyroidism after primary radiotherapy for head and neck squamous cell carcinoma: Normal tissue complication probability modeling with latent time correction

    International Nuclear Information System (INIS)

    Rønjom, Marianne Feen; Brink, Carsten; Bentzen, Søren M.; Hegedüs, Laszlo; Overgaard, Jens; Johansen, Jørgen

    2013-01-01

    Background and purpose: To develop a normal tissue complication probability (NTCP) model of radiation-induced biochemical hypothyroidism (HT) after primary radiotherapy for head and neck squamous cell carcinoma (HNSCC) with adjustment for latency and clinical risk factors. Patients and methods: Patients with HNSCC receiving definitive radiotherapy with 66–68 Gy without surgery were followed up with serial post-treatment thyrotropin (TSH) assessment. HT was defined as TSH >4.0 mU/l. Data were analyzed with both a logistic and a mixture model (correcting for latency) to determine risk factors for HT and develop an NTCP model based on mean thyroid dose (MTD) and thyroid volume. Results: 203 patients were included. Median follow-up: 25.1 months. Five-year estimated risk of HT was 25.6%. In the mixture model, the independent risk factors for HT were thyroid volume (cm³) (OR = 0.75 [95% CI: 0.64–0.85]) and mean thyroid dose. Conclusions: Comparing the logistic and mixture models demonstrates the importance of latent-time correction in NTCP modeling. Thyroid dose constraints in treatment planning should be individualized based on thyroid volume.
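
    The logistic form referred to above can be sketched as follows; the coefficients are invented for illustration only and are not the fitted values of this study, which modeled risk as increasing with mean thyroid dose and decreasing with thyroid volume.

    ```python
    import math

    # Hedged sketch of a two-covariate logistic NTCP model for radiation-
    # induced hypothyroidism; b0, b_dose and b_vol are made-up numbers.
    def ntcp(mean_thyroid_dose_gy, thyroid_volume_cm3,
             b0=-1.0, b_dose=0.15, b_vol=-0.30):
        s = b0 + b_dose * mean_thyroid_dose_gy + b_vol * thyroid_volume_cm3
        return 1.0 / (1.0 + math.exp(-s))

    # Larger thyroid volume lowers the predicted risk at the same mean dose,
    # mirroring the direction of the volume effect reported above.
    print(f"{ntcp(40.0, 10.0):.2f}")  # small thyroid, higher risk
    print(f"{ntcp(40.0, 25.0):.2f}")  # large thyroid, lower risk
    ```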

  11. It is time to abandon "expected bladder capacity." Systematic review and new models for children's normal maximum voided volumes.

    Science.gov (United States)

    Martínez-García, Roberto; Ubeda-Sansano, Maria Isabel; Díez-Domingo, Javier; Pérez-Hoyos, Santiago; Gil-Salom, Manuel

    2014-09-01

    There is an agreement to use simple formulae (expected bladder capacity and other age-based linear formulae) as the bladder capacity benchmark. But the real bladder capacity of normal children is unknown. To offer a systematic review of children's normal bladder capacity, to measure children's normal maximum voided volumes (MVVs), to construct models of MVVs and to compare them with the usual formulae. Computerized, manual and grey literature were reviewed until February 2013. Epidemiological, observational, transversal, multicenter study. A consecutive sample of healthy children aged 5-14 years, with no urologic abnormality, attending Primary Care centres was selected. Participants filled in a 3-day frequency-volume chart. Variables were the MVVs (maximum of 24 hr, nocturnal, and daytime maximum voided volumes); diuresis and its daytime and nighttime fractions; body-measure data; and gender. The consecutive steps method was used in a multivariate regression model. Twelve articles met the systematic review's criteria. Five hundred and fourteen cases were analysed. Three models, one for each of the MVVs, were built. All were best fitted by exponential equations. Diuresis (not age) was the most significant factor. There was poor agreement between MVVs and the usual formulae. Nocturnal and daytime maximum voided volumes depend on several factors and are different. Nocturnal and daytime maximum voided volumes should be used with different meanings in the clinical setting. Diuresis is the main factor for bladder capacity. This is the first model for benchmarking normal MVVs with diuresis as its main factor. Current formulae are not suitable for clinical use.

  12. A Platoon Dispersion Model Based on a Truncated Normal Distribution of Speed

    Directory of Open Access Journals (Sweden)

    Ming Wei

    2012-01-01

    Understanding platoon dispersion is critical for the coordination of traffic signal control in an urban traffic network. Assuming that platoon speed follows a truncated normal distribution, ranging from minimum speed to maximum speed, this paper develops a piecewise density function that describes platoon dispersion characteristics as the platoon moves from an upstream to a downstream intersection. Based on this density function, the expected number of cars in the platoon that pass the downstream intersection, and the expected number of cars in the platoon that do not pass the downstream point are calculated. To facilitate coordination in a traffic signal control system, dispersion models for the front and the rear of the platoon are also derived. Finally, a numeric computation for the coordination of successive signals is presented to illustrate the validity of the proposed model.
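
    The core calculation lends itself to a few lines of code: with speeds following a normal distribution truncated to [v_min, v_max], the expected number of platoon cars past the downstream stop line at time t is N·P(speed ≥ d/t). The numbers below are illustrative, not from the paper.

    ```python
    from scipy.stats import truncnorm

    mu, sigma = 12.0, 3.0       # mean / std of platoon speed (m/s), illustrative
    v_min, v_max = 5.0, 20.0    # truncation bounds (m/s)
    a, b = (v_min - mu) / sigma, (v_max - mu) / sigma
    speed = truncnorm(a, b, loc=mu, scale=sigma)

    N, d, t = 25, 600.0, 40.0   # platoon size, link length (m), elapsed time (s)
    p_passed = speed.sf(d / t)  # P(speed >= 15 m/s)
    print(f"expected cars past the stop line: {N * p_passed:.1f}")
    print(f"expected cars still upstream:     {N * (1 - p_passed):.1f}")
    ```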

  13. Analysis of normal-appearing white matter of multiple sclerosis by tensor-based two-compartment model of water diffusion

    International Nuclear Information System (INIS)

    Tachibana, Yasuhiko; Obata, Takayuki; Yoshida, Mariko; Hori, Masaaki; Kamagata, Koji; Suzuki, Michimasa; Fukunaga, Issei; Kamiya, Kouhei; Aoki, Shigeki; Yokoyama, Kazumasa; Hattori, Nobutaka; Inoue, Tomio

    2015-01-01

    To compare the two-compartment model of water diffusion, which considers diffusional anisotropy, with conventional diffusion analysis methods for the detection of occult changes in normal-appearing white matter (NAWM) of multiple sclerosis (MS). Diffusion-weighted images (nine b-values with six directions) were acquired from 12 healthy female volunteers (22-52 years old, median 33 years) and 13 female MS patients (24-48 years old, median 37 years). Diffusion parameters based on the two-compartment model of water diffusion considering diffusional anisotropy were calculated by the proposed method. Other parameters, including diffusion tensor imaging and the conventional apparent diffusion coefficient (ADC), were also obtained. They were compared statistically between the control and MS groups. Diffusion of the slow diffusion compartment in the radial direction of neuron fibers was elevated in MS patients (0.121 × 10⁻³ mm²/s) in comparison to controls (0.100 × 10⁻³ mm²/s), the difference being significant (P = 0.001). The difference between the groups was not significant in other comparisons, including conventional ADC and fractional anisotropy (FA) of diffusion tensor imaging. The proposed method was applicable to clinically acceptable small data. The parameters obtained by this method improved the detectability of occult changes in NAWM compared to the conventional methods. (orig.)
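
    As a simplified illustration of the two-compartment idea (without the anisotropy extension developed in the paper), the sketch below fits a biexponential signal model to synthetic multi-b-value data; all values are made up.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # S(b)/S0 = f*exp(-b*D_fast) + (1-f)*exp(-b*D_slow): a two-compartment
    # (biexponential) diffusion model, here isotropic and synthetic.
    def two_compartment(b, f_fast, d_fast, d_slow):
        return f_fast * np.exp(-b * d_fast) + (1 - f_fast) * np.exp(-b * d_slow)

    b_values = np.array([0, 200, 500, 1000, 1500, 2000, 2500, 3000, 4000.0])
    true_params = (0.7, 1.2e-3, 0.12e-3)  # fraction, D_fast, D_slow (mm^2/s)
    signal = two_compartment(b_values, *true_params)
    signal += np.random.default_rng(0).normal(0.0, 0.002, signal.size)  # noise

    popt, _ = curve_fit(two_compartment, b_values, signal,
                        p0=(0.5, 1e-3, 1e-4),
                        bounds=([0, 0, 0], [1, 5e-3, 1e-3]))
    print("fitted slow diffusivity: %.3e mm^2/s" % popt[2])
    ```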

  14. Analysis of normal-appearing white matter of multiple sclerosis by tensor-based two-compartment model of water diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Tachibana, Yasuhiko [National Institute of Radiological Sciences, Research Center for Charged Particle Therapy, Chiba (Japan); Yokohama City University Graduate School of Medicine, Department of Radiology, Yokohama (Japan); Juntendo University School of Medicine, Department of Radiology, Tokyo (Japan); Obata, Takayuki [National Institute of Radiological Sciences, Research Center for Charged Particle Therapy, Chiba (Japan); Yoshida, Mariko; Hori, Masaaki; Kamagata, Koji; Suzuki, Michimasa; Fukunaga, Issei; Kamiya, Kouhei; Aoki, Shigeki [Juntendo University School of Medicine, Department of Radiology, Tokyo (Japan); Yokoyama, Kazumasa; Hattori, Nobutaka [Juntendo University School of Medicine, Department of Neurology, Tokyo (Japan); Inoue, Tomio [Yokohama City University Graduate School of Medicine, Department of Radiology, Yokohama (Japan)

    2015-06-01

    To compare the two-compartment model of water diffusion, which considers diffusional anisotropy, with conventional diffusion analysis methods for the detection of occult changes in normal-appearing white matter (NAWM) of multiple sclerosis (MS). Diffusion-weighted images (nine b-values with six directions) were acquired from 12 healthy female volunteers (22-52 years old, median 33 years) and 13 female MS patients (24-48 years old, median 37 years). Diffusion parameters based on the two-compartment model of water diffusion considering diffusional anisotropy were calculated by the proposed method. Other parameters, including diffusion tensor imaging and the conventional apparent diffusion coefficient (ADC), were also obtained. They were compared statistically between the control and MS groups. Diffusion of the slow diffusion compartment in the radial direction of neuron fibers was elevated in MS patients (0.121 × 10⁻³ mm²/s) in comparison to controls (0.100 × 10⁻³ mm²/s), the difference being significant (P = 0.001). The difference between the groups was not significant in other comparisons, including conventional ADC and fractional anisotropy (FA) of diffusion tensor imaging. The proposed method was applicable to clinically acceptable small data. The parameters obtained by this method improved the detectability of occult changes in NAWM compared to the conventional methods. (orig.)

  15. Mitochondrial base excision repair in mouse synaptosomes during normal aging and in a model of Alzheimer's disease

    DEFF Research Database (Denmark)

    Diaz, Ricardo Gredilla; Weissman, Lior; Yang, JL

    2012-01-01

    Brain aging is associated with synaptic decline and synaptic function is highly dependent on mitochondria. Increased levels of oxidative DNA base damage and accumulation of mitochondrial DNA (mtDNA) mutations or deletions lead to mitochondrial dysfunction, playing an important role in the aging process and the pathogenesis of several neurodegenerative diseases. Here we have investigated the repair of oxidative base damage in synaptosomes of mouse brain during normal aging and in an AD model. During normal aging, a reduction in the base excision repair (BER) capacity was observed ... The results suggest that the age-related reduction in BER capacity in the synaptosomal fraction might contribute to mitochondrial and synaptic dysfunction during aging. The development of AD-like pathology in the 3xTgAD mouse model was, however, not associated with deficiencies of the BER mechanisms ...

  16. Group normalization for genomic data.

    Directory of Open Access Journals (Sweden)

    Mahmoud Ghandi

    Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets.
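
    The principle behind Group Normalization can be sketched in a few lines: normalize each probe against a group of reference probes that behave similarly across a control dataset. This toy version is only a conceptual illustration, not the authors' algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    control = rng.normal(10.0, 2.0, size=(500, 8))  # 500 probes x 8 control arrays
    treatment = control + rng.normal(0.0, 0.5, size=control.shape)
    treatment[:25] += 3.0                           # a few truly changed probes

    k = 50
    score = np.empty(control.shape[0])
    for i in range(control.shape[0]):
        # Reference group: the k probes whose control profiles are closest
        # to probe i (i.e. probes with similar response characteristics).
        dist = np.linalg.norm(control - control[i], axis=1)
        ref = np.argsort(dist)[1:k + 1]             # skip the probe itself
        # Normalized signal: deviation from the reference group's response.
        score[i] = treatment[i].mean() - treatment[ref].mean()

    print("top-scoring probes:", np.argsort(score)[::-1][:5])
    ```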

  17. The normal zone propagation in ATLAS B00 model coil

    CERN Document Server

    Boxman, E W; ten Kate, H H J

    2002-01-01

    The B00 model coil has been successfully tested in the ATLAS Magnet Test Facility at CERN. The coil consists of two double pancakes wound with aluminum stabilized cables of the barrel- and end-cap toroid conductors for the ATLAS detector. The magnet current is applied up to 24 kA and quenches are induced by firing point heaters. The normal zone velocity is measured over a wide range of currents by using pickup coils, voltage taps and superconducting quench detectors. The signals coming from the various sensors are presented and analyzed. The results extracted from the various detection methods are in good agreement. It is found that the characteristic velocities vary from 5 to 20 m/s at 15 and 24 kA, respectively. In addition, the minimum quench energies at different applied magnet currents are presented. (6 refs).

  18. Normal gravity field in relativistic geodesy

    Science.gov (United States)

    Kopeikin, Sergei; Vlasov, Igor; Han, Wen-Biao

    2018-02-01

    Modern geodesy is subject to a dramatic change from the Newtonian paradigm to Einstein's theory of general relativity. This is motivated by the ongoing advance in development of quantum sensors for applications in geodesy including quantum gravimeters and gradientometers, atomic clocks and fiber optics for making ultra-precise measurements of the geoid and multipolar structure of the Earth's gravitational field. At the same time, very long baseline interferometry, satellite laser ranging, and global navigation satellite systems have achieved an unprecedented level of accuracy in measuring 3-d coordinates of the reference points of the International Terrestrial Reference Frame and the world height system. The main geodetic reference standard to which gravimetric measurements of the Earth's gravitational field are referred is a normal gravity field represented in the Newtonian gravity by the field of a uniformly rotating, homogeneous Maclaurin ellipsoid whose mass and quadrupole moment are equal to the total mass and (tide-free) quadrupole moment of the Earth's gravitational field. The present paper extends the concept of the normal gravity field from the Newtonian theory to the realm of general relativity. We focus our attention on the calculation of the post-Newtonian approximation of the normal field that is sufficient for current and near-future practical applications. We show that in general relativity the level surface of a homogeneous and uniformly rotating fluid is no longer described by the Maclaurin ellipsoid in the most general case but represents an axisymmetric spheroid of the fourth order with respect to the geodetic Cartesian coordinates. At the same time, admitting a post-Newtonian inhomogeneity of the mass density in the form of concentric elliptical shells allows one to preserve the level surface of the fluid as an exact ellipsoid of rotation. We parametrize the mass density distribution and the level surface with two parameters which are

  19. Normal mode approach to modelling of feedback stabilization of the resistive wall mode

    International Nuclear Information System (INIS)

    Chu, M.S.; Chance, M.S.; Okabayashi, M.; Glasser, A.H.

    2003-01-01

    Feedback stabilization of the resistive wall mode (RWM) of a plasma in a general feedback configuration is formulated in terms of the normal modes of the plasma-resistive wall system. The growth/damping rates and the eigenfunctions of the normal modes are determined by an extended energy principle for the plasma during its open (feedback) loop operation. A set of equations is derived for the time evolution of these normal modes with currents in the feedback coils. The dynamics of the feedback system is completed by the prescription of the feedback logic. The feasibility of the feedback is evaluated by using the Nyquist diagram method or by solving the characteristic equations. The elements of the characteristic equations are formed from the growth and damping rates of the normal modes, the sensor matrix of the perturbation fluxes detected by the sensor loops, the excitation matrix of the energy input to the normal modes by the external feedback coils, and the feedback logic. (The RWM is also predicted to be excited by an external error field to a large amplitude when it is close to marginal stability.) This formulation has been implemented numerically and applied to the DIII-D tokamak. It is found that feedback with poloidal sensors is much more effective than feedback with radial sensors. Using radial sensors, increasing the number of feedback coils from a central band on the outboard side to include an upper and a lower band can substantially increase the effectiveness of the feedback system. The strength of the RWM that can be stabilized is increased from γτ_w = 1 to 30 (γ is the growth rate of the RWM in the absence of feedback and τ_w is the resistive wall time constant). Using poloidal sensors, just one central band of feedback coils is sufficient for the stabilization of the RWM with γτ_w = 30. (author)

  20. Migration using a transversely isotropic medium with symmetry normal to the reflector dip

    KAUST Repository

    Alkhalifah, Tariq Ali; Sava, P.

    2011-01-01

    A transversely isotropic (TI) model in which the tilt is constrained to be normal to the dip (DTI model) allows for simplifications in the imaging and velocity model building efforts as compared to a general TI (TTI) model. Although this model cannot be represented physically in all situations, for example, in the case of conflicting dips, it handles arbitrary reflector orientations under the assumption of symmetry axis normal to the dip. Using this assumption, we obtain efficient downward continuation algorithms compared to the general TTI ones, by utilizing the reflection features of such a model. Phase-shift migration can be easily extended to approximately handle lateral inhomogeneity using, for example, the split-step approach. This is possible because, unlike the general TTI case, the DTI model reduces to VTI for zero dip. These features enable a process in which we can extract velocity information by including tools that expose inaccuracies in the velocity model in the downward continuation process. We test this model on synthetic data corresponding to a general TTI medium and show its resilience.

  1. Grand unified models including extra Z bosons

    International Nuclear Information System (INIS)

    Li Tiezhong

    1989-01-01

    The grand unified theories (GUTs) of simple Lie groups including extra Z bosons are discussed. Under the author's hypothesis there are only the SU(5+m), SO(6+4n) and E6 groups. A general discussion of SU(5+m) is given, then SU(6) and SU(7) are considered. In SU(6) the 15+6*+6* fermion representations are used, which are not the same as others in fermion content, Yukawa coupling and broken scales. A conception of clans of particles, which are not families, is suggested. These clans consist of extra Z bosons and the corresponding fermions at that scale. All of the fermions in the clans are down quarks, except for the standard-model clan, which consists of Z bosons and 15 fermions; therefore, the spectrum of the hadrons composed of these down quarks differs from that of the presently known hadrons

  2. Modeling of cylindrical surrounding gate MOSFETs including the fringing field effects

    International Nuclear Information System (INIS)

    Gupta, Santosh K.; Baishya, Srimanta

    2013-01-01

    A physically based analytical model for surface potential and threshold voltage including the fringing gate capacitances in cylindrical surround gate (CSG) MOSFETs has been developed. Based on this, a subthreshold drain current model has also been derived. This model first computes the charge induced in the drain/source region due to the fringing capacitances and considers an effective charge distribution in the cylindrically extended source/drain region for the development of a simple and compact model. The fringing gate capacitances taken into account are the outer fringe capacitance, inner fringe capacitance, overlap capacitance, and sidewall capacitance. The model has been verified with data extracted from 3D TCAD simulations of CSG MOSFETs and was found to work satisfactorily. (semiconductor devices)

  3. Used Nuclear Fuel Loading and Structural Performance Under Normal Conditions of Transport - Modeling, Simulation and Experimental Integration RD&D Plan

    Energy Technology Data Exchange (ETDEWEB)

    Adkins, Harold E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-04-01

    Under current U.S. Nuclear Regulatory Commission regulation, it is not sufficient for used nuclear fuel (UNF) to simply maintain its integrity during the storage period; it must maintain its integrity in such a way that it can withstand the physical forces of handling and transportation associated with restaging the fuel and moving it to treatment or recycling facilities, or a geologic repository. Hence it is necessary to understand the performance characteristics of aged UNF cladding and ancillary components under loadings stemming from transport initiatives. Researchers would like to demonstrate that enough information, including experimental support and modeling and simulation capabilities, exists to establish a preliminary determination of UNF structural performance under normal conditions of transport (NCT). This research, development and demonstration (RD&D) plan describes a methodology, including development and use of analytical models, to evaluate loading and associated mechanical responses of UNF rods and key structural components. This methodology will be used to provide a preliminary assessment of the performance characteristics of UNF cladding and ancillary components under rail-related NCT loading. The methodology couples modeling and simulation and experimental efforts currently under way within the Used Fuel Disposition Campaign (UFDC). The methodology will involve limited uncertainty quantification in the form of sensitivity evaluations focused around available fuel and ancillary fuel structure properties exclusively. The work includes collecting information via literature review, soliciting input/guidance from subject matter experts, performing computational analyses, planning experimental measurement and possible execution (depending on timing), and preparing a variety of supporting documents that will feed into and provide the basis for future initiatives. The methodology demonstration will focus on structural performance evaluation of

  4. Prevalence, Risk Factors, and Outcome of Myocardial Infarction with Angiographically Normal and Near-Normal Coronary Arteries: A Systematic Review and Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Samad Ghaffari

    2016-12-01

    Context: Coronary artery diseases are mostly detected using angiographic methods demonstrating the status of the arteries. Nevertheless, Myocardial Infarction (MI) may occur in the presence of angiographically normal coronary arteries. Therefore, this study aimed to investigate the prevalence of MI with normal angiography and its possible etiologies in a systematic review. Evidence Acquisition: In this meta-analysis, the required data were collected from PubMed, Science Direct, Google Scholar, Scopus, Magiran, Scientific Information Database, and Medlib databases using the following keywords: “coronary angiography”, “normal coronary arteries”, “near-normal coronary arteries”, “heart diseases”, “coronary artery disease”, “coronary disease”, “cardiac troponin I”, “myocardial infarction”, “risk factor”, “prevalence”, “outcome”, and their Persian equivalents. Then, Comprehensive Meta-Analysis software, version 2, using a random-effects model, was employed to determine the prevalence of each complication and perform the meta-analysis. P values less than 0.05 were considered to be statistically significant. Results: In total, 20 studies including 139,957 patients were entered into the analysis. The patients’ mean age was 47.62 ± 6.63 years and 64.4% of the patients were male. The prevalence of MI with normal or near-normal coronary arteries was 3.5% (95% CI: 2.2%-5.7%). Additionally, smoking and a family history of cardiovascular diseases were the most important risk factors. The results showed no significant difference between MIs with normal angiography and 1- or 2-vessel involvement regarding the frequency of major adverse cardiac events (5.4% vs. 7.3%, P = 0.32). However, a significant difference was found between the patients with normal angiography and those with 3-vessel involvement in this regard (5.4% vs. 20.2%, P < 0.001). Conclusions: Although angiographic studies are required to assess the underlying

  5. Normal modes of weak colloidal gels

    Science.gov (United States)

    Varga, Zsigmond; Swan, James W.

    2018-01-01

    The normal modes and relaxation rates of weak colloidal gels are investigated in calculations using different models of the hydrodynamic interactions between suspended particles. The relaxation spectrum is computed for freely draining, Rotne-Prager-Yamakawa, and accelerated Stokesian dynamics approximations of the hydrodynamic mobility in a normal mode analysis of a harmonic network representing several colloidal gels. We find that the density of states and spatial structure of the normal modes are fundamentally altered by long-ranged hydrodynamic coupling among the particles. Short-ranged coupling due to hydrodynamic lubrication affects only the relaxation rates of short-wavelength modes. Hydrodynamic models accounting for long-ranged coupling exhibit a microscopic relaxation rate for each normal mode, λ, that scales as l^(-2), where l is the spatial correlation length of the normal mode. For the freely draining approximation, which neglects long-ranged coupling, the microscopic relaxation rate scales as l^(-γ), where γ varies between three and two with increasing particle volume fraction. A simple phenomenological model of the internal elastic response to normal mode fluctuations is developed, which shows that long-ranged hydrodynamic interactions play a central role in the viscoelasticity of the gel network. Dynamic simulations of hard spheres that gel in response to short-ranged depletion attractions are used to test the applicability of the density of states predictions. For particle concentrations up to 30% by volume, the power law decay of the relaxation modulus in simulations accounting for long-ranged hydrodynamic interactions agrees with predictions generated by the density of states of the corresponding harmonic networks as well as experimental measurements. For higher volume fractions, excluded volume interactions dominate the stress response, and the prediction from the harmonic network density of states fails. Analogous to the Zimm model in polymer

  6. A roller chain drive model including contact with guide-bars

    DEFF Research Database (Denmark)

    Pedersen, Sine Leergaard; Hansen, John Michael; Ambrósio, J. A. C.

    2004-01-01

    A model of a roller chain drive is developed and applied to the simulation and analysis of roller chain drives of large marine diesel engines. The model includes the impact with guide-bars that are the motion delimiter components on the chain strands between the sprockets. The main components ... and the sprocket centre, i.e. a constraint is added when such distance is less than the pitch radius. The unilateral kinematic constraint is removed when its associated constraint reaction force, applied on the roller, is in the direction of the root of the sprocket teeth. In order to improve the numerical ...

  7. A constitutive model for the forces of a magnetic bearing including eddy currents

    Science.gov (United States)

    Taylor, D. L.; Hebbale, K. V.

    1993-01-01

    A multiple magnet bearing can be developed from N individual electromagnets. The constitutive relationships for a single magnet in such a bearing are presented. Analytical expressions are developed for a magnet with poles arranged circumferentially. Maxwell's field equations are used so the model easily includes the effects of induced eddy currents due to the rotation of the journal. Eddy currents must be included in any dynamic model because they are the only speed dependent parameter and may lead to a critical speed for the bearing. The model is applicable to bearings using attraction or repulsion.

  8. Including policy and management in socio-hydrology models: initial conceptualizations

    Science.gov (United States)

    Hermans, Leon; Korbee, Dorien

    2017-04-01

    Socio-hydrology studies the interactions in coupled human-water systems. So far, the use of dynamic models that capture the direct feedback between societal and hydrological systems has been dominant. What has not yet been included with any particular emphasis is the policy or management layer, which is a central element in, for instance, integrated water resources management (IWRM) or adaptive delta management (ADM). Studying the direct interactions between human-water systems generates knowledge that eventually helps influence these interactions in ways that may ensure better outcomes - for society and for the health and sustainability of water systems. This influence sometimes occurs through spontaneous emergence, uncoordinated by societal agents - private sector, citizens, consumers, water users. However, the term 'management' in IWRM and ADM also implies an additional coordinated attempt through various public actors. This contribution is a call to include the policy and management dimension more prominently in the research focus of the socio-hydrology field, and offers first conceptual variables that should be considered in attempts to include this policy or management layer in socio-hydrology models. This is done by drawing on existing frameworks for studying policy processes throughout both planning and implementation phases. These include frameworks such as the advocacy coalition framework, collective learning and policy arrangements, which all emphasise longer-term dynamics and feedbacks between actor coalitions in strategic planning and implementation processes. A case about longer-term dynamics in the management of the Haringvliet in the Netherlands is used to illustrate the paper.

  9. Normalized Mini-Mental State Examination for assessing cognitive change in population-based brain aging studies.

    Science.gov (United States)

    Philipps, Viviane; Amieva, Hélène; Andrieu, Sandrine; Dufouil, Carole; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène; Proust-Lima, Cécile

    2014-01-01

    The Mini-Mental State Examination (MMSE) is widely used in population-based longitudinal studies to quantify cognitive change. However, its poor metrological properties, mainly ceiling/floor effects and varying sensitivity to change, have largely restricted its usefulness. We propose a normalizing transformation that corrects these properties and makes possible the use of standard statistical methods to analyze change in MMSE scores. The normalizing transformation, designed to best correct the metrological properties of the MMSE, was estimated and validated on two population-based studies (n = 4,889, 20-year follow-up) by cross-validation. The transformation was also validated on two external studies with heterogeneous samples mixing normal and pathological aging, and samples including only demented subjects. The normalizing transformation provided correct inference, in contrast with models analyzing change in the crude MMSE, which most often lead to biased estimates of risk factors and incorrect conclusions. Cognitive change can be easily and properly assessed with the normalized MMSE using standard statistical methods such as linear (mixed) models.
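
    The general recipe can be sketched with a simple rank-based normal-scores map standing in for the authors' fitted transformation (the published transformation was estimated from cohort data, not from ranks):

    ```python
    import numpy as np
    from scipy.stats import norm, rankdata

    # Toy stand-in for a normalizing transformation of MMSE scores; the
    # published transformation is a fitted parametric curve, not this map.
    mmse = np.array([30, 29, 29, 28, 30, 27, 25, 30, 26, 24, 30, 28])

    ranks = rankdata(mmse)                            # ties get average ranks
    normalized = norm.ppf(ranks / (len(mmse) + 1.0))  # map to normal quantiles
    print(np.round(normalized, 2))
    # On the transformed scale, scores no longer pile up at the ceiling of 30,
    # so change over time can be analyzed with ordinary linear mixed models.
    ```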

  10. A hydrodynamic model for granular material flows including segregation effects

    Science.gov (United States)

    Gilberg, Dominik; Klar, Axel; Steiner, Konrad

    2017-06-01

    The simulation of granular flows including segregation effects in large industrial processes using particle methods is accurate, but very time-consuming. To overcome the long computation times a macroscopic model is a natural choice. Therefore, we couple a mixture theory based segregation model to a hydrodynamic model of Navier-Stokes type, describing the flow behavior of the granular material. The granular flow model is a hybrid model derived from kinetic theory and a soil mechanical approach to cover the regime of fast dilute flow as well as slow dense flow, where the density of the granular material is close to the maximum packing density. Originally, the segregation model was formulated by Thornton and Gray for idealized avalanches. It is modified and adapted to be in the preferred form for the coupling. In the final coupled model the segregation process depends on the local state of the granular system. On the other hand, the granular system changes as differently mixed regions of the granular material differ, e.g., in the packing density. The modeling focuses on dry granular material flows of two particle types differing only in size, but it can easily be extended to arbitrary granular mixtures of different particle size and density. To solve the coupled system a finite volume approach is used. To test the model, the rotational mixing of small and large particles in a tumbler is simulated.

  11. Characteristic analysis on UAV-MIMO channel based on normalized correlation matrix.

    Science.gov (United States)

    Gao, Xi jun; Chen, Zi li; Hu, Yong Jiang

    2014-01-01

    Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicles (UAVs), the simple form of the UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By the methods of channel matrix decomposition and coefficient normalization, the analytic formula of the UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of the UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can be applied to describe the changes of UAV-MIMO channel characteristics under different parameter settings comprehensively. This analysis method provides a theoretical basis for improving the transmission performance of the UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication.
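
    A minimal numerical companion to the analysis described above: build a correlation matrix, apply coefficient normalization, and inspect the conditioning of the channel. The matrix entries are illustrative, not derived from the GBSBCM model.

    ```python
    import numpy as np

    R = np.array([[1.00, 0.62, 0.35, 0.18],
                  [0.62, 1.00, 0.62, 0.35],
                  [0.35, 0.62, 1.00, 0.62],
                  [0.18, 0.35, 0.62, 1.00]])  # illustrative 4x4 correlation

    d = np.sqrt(np.diag(R))
    R_norm = R / np.outer(d, d)               # coefficient normalization

    print("eigenvalues:     ", np.round(np.linalg.eigvalsh(R_norm), 3))
    print("condition number:", round(np.linalg.cond(R_norm), 1))
    # A growing condition number signals stronger spatial correlation and
    # hence reduced effective rank (and capacity) of the MIMO channel.
    ```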

  12. A new normalization method based on electrical field lines for electrical capacitance tomography

    International Nuclear Information System (INIS)

    Zhang, L F; Wang, H X

    2009-01-01

    Electrical capacitance tomography (ECT) is considered to be one of the most promising process tomography techniques. The image reconstruction for ECT is an inverse problem to find the spatially distributed permittivities in a pipe. Usually, the capacitance measurements obtained from the ECT system are normalized at the high and low permittivity for image reconstruction. The parallel normalization model is commonly used during the normalization process, which assumes that the materials are distributed in parallel. Thus, the normalized capacitance is a linear function of the measured capacitance. A more recently used model is the series normalization model, which results in the normalized capacitance being a nonlinear function of the measured capacitance. The most recently presented model is based on electrical field centre lines (EFCL), and is a mixture of the two normalization models. The multi-threshold method of this model is presented in this paper. The sensitivity matrices based on the different normalization models were obtained, and image reconstruction was carried out accordingly. Simulation results indicate that reconstructed images with higher quality can be obtained based on the presented model.
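
    The two standard normalization models contrasted above are easy to state in code; C_l and C_h denote the calibration capacitances with the pipe full of low- and high-permittivity material, and the numbers are illustrative.

    ```python
    def normalize_parallel(c_m, c_l, c_h):
        # Parallel model: normalized capacitance is linear in C_m.
        return (c_m - c_l) / (c_h - c_l)

    def normalize_series(c_m, c_l, c_h):
        # Series model: normalized capacitance is nonlinear in C_m.
        return (1 / c_m - 1 / c_l) / (1 / c_h - 1 / c_l)

    c_l, c_h = 1.0, 3.0   # illustrative calibration capacitances (pF)
    for c_m in (1.5, 2.0, 2.5):
        print(f"C_m = {c_m}: parallel = {normalize_parallel(c_m, c_l, c_h):.3f}, "
              f"series = {normalize_series(c_m, c_l, c_h):.3f}")
    ```

    Both maps send C_l to 0 and C_h to 1 but disagree in between, which is why the choice of normalization model changes the sensitivity matrix and, ultimately, the reconstructed image.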

  13. Mathematical Model of Thyristor Inverter Including a Series-parallel Resonant Circuit

    OpenAIRE

    Miroslaw Luft; Elzbieta Szychta

    2008-01-01

    The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit with the aid of the state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  14. Quasi-normal modes from non-commutative matrix dynamics

    Science.gov (United States)

    Aprile, Francesco; Sanfilippo, Francesco

    2017-09-01

    We explore similarities between the process of relaxation in the BMN matrix model and the physics of black holes in AdS/CFT. Focusing on Dyson-fluid solutions of the matrix model, we perform numerical simulations of the real time dynamics of the system. By quenching the equilibrium distribution we study quasi-normal oscillations of scalar single trace observables, we isolate the lowest quasi-normal mode, and we determine its frequencies as a function of the energy. Considering the BMN matrix model as a truncation of N=4 SYM, we also compute the frequencies of the quasi-normal modes of the dual scalar fields in the AdS5-Schwarzschild background. We compare the results, and we find a surprising similarity.

  15. Mathematical model of thyristor inverter including a series-parallel resonant circuit

    OpenAIRE

    Luft, M.; Szychta, E.

    2008-01-01

    The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit with the aid of the state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  16. U.S. Monthly Climate Normals (1971-2000)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — U.S. Monthly Climate Normals (1971-2000) (DSI-9641C) include climatological normals based on monthly maximum, minimum, and mean temperature and monthly total...

  17. A generalized estimating equations approach to quantitative trait locus detection of non-normal traits

    Directory of Open Access Journals (Sweden)

    Thomson Peter C

    2003-05-01

    To date, most statistical developments in QTL detection methodology have been directed at continuous traits with an underlying normal distribution. This paper presents a method for QTL analysis of non-normal traits using a generalized linear mixed model approach. Development of this method has been motivated by a backcross experiment involving two inbred lines of mice that was conducted in order to locate a QTL for litter size. A Poisson regression form is used to model litter size, with allowances made for under- as well as over-dispersion, as suggested by the experimental data. In addition to fixed parity effects, random animal effects have also been included in the model. However, the method is not fully parametric as the model is specified only in terms of means, variances and covariances, and not as a full probability model. Consequently, a generalized estimating equations (GEE) approach is used to fit the model. For statistical inferences, permutation tests and bootstrap procedures are used. This method is illustrated with simulated as well as experimental mouse data. Overall, the method is found to be quite reliable, and with modification, can be used for QTL detection for a range of other non-normally distributed traits.
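
    A hedged sketch of this modeling strategy using statsmodels follows (simulated data; the original analysis used permutation and bootstrap inference, which is omitted here):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulated backcross-like data: litter size ~ Poisson, fixed parity
    # effects, a marker genotype, and within-animal correlation via GEE.
    rng = np.random.default_rng(2)
    n_animals, n_parities = 60, 3
    df = pd.DataFrame({
        "animal": np.repeat(np.arange(n_animals), n_parities),
        "parity": np.tile(np.arange(1, n_parities + 1), n_animals),
        "qtl": np.repeat(rng.integers(0, 2, n_animals), n_parities),
    })
    mu = np.exp(1.8 + 0.25 * df["qtl"] + 0.05 * df["parity"])
    df["litter_size"] = rng.poisson(mu)

    model = smf.gee("litter_size ~ C(parity) + qtl", groups="animal", data=df,
                    family=sm.families.Poisson(),
                    cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())
    ```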

  18. Launch Lock Assemblies Including Axial Gap Amplification Devices and Spacecraft Isolation Systems Including the Same

    Science.gov (United States)

    Barber, Tim Daniel (Inventor); Hindle, Timothy (Inventor); Young, Ken (Inventor); Davis, Torey (Inventor)

    2014-01-01

    Embodiments of a launch lock assembly are provided, as are embodiments of a spacecraft isolation system including one or more launch lock assemblies. In one embodiment, the launch lock assembly includes first and second mount pieces, a releasable clamp device, and an axial gap amplification device. The releasable clamp device normally maintains the first and second mount pieces in clamped engagement and, when actuated, releases the first and second mount pieces from clamped engagement to allow relative axial motion between them. The axial gap amplification device normally resides in a blocking position, wherein the gap amplification device obstructs relative axial motion between the first and second mount pieces. The axial gap amplification device moves into a non-blocking position when the first and second mount pieces are released from clamped engagement, to increase the range of axial motion between the first and second mount pieces.

  19. Accurate SHAPE-directed RNA secondary structure modeling, including pseudoknots.

    Science.gov (United States)

    Hajdin, Christine E; Bellaousov, Stanislav; Huggins, Wayne; Leonard, Christopher W; Mathews, David H; Weeks, Kevin M

    2013-04-02

    A pseudoknot forms in an RNA when nucleotides in a loop pair with a region outside the helices that close the loop. Pseudoknots occur relatively rarely in RNA but are highly overrepresented in functionally critical motifs in large catalytic RNAs, in riboswitches, and in regulatory elements of viruses. Pseudoknots are usually excluded from RNA structure prediction algorithms. When included, these pairings are difficult to model accurately, especially in large RNAs, because allowing this structure dramatically increases the number of possible incorrect folds and because it is difficult to search the fold space for an optimal structure. We have developed a concise secondary structure modeling approach that combines SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) experimental chemical probing information and a simple, but robust, energy model for the entropic cost of single pseudoknot formation. Structures are predicted with iterative refinement, using a dynamic programming algorithm. This melded experimental and thermodynamic energy function predicted the secondary structures and the pseudoknots for a set of 21 challenging RNAs of known structure ranging in size from 34 to 530 nt. On average, 93% of known base pairs were predicted, and all pseudoknots in well-folded RNAs were identified.
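
    A widely used way to meld SHAPE reactivities into the thermodynamic model is a per-nucleotide pseudo-free-energy term of the form ΔG_SHAPE(i) = m·ln(S_i + 1) + b; the slope and intercept below are the commonly quoted defaults (m = 2.6, b = -0.8 kcal/mol) and should be treated as illustrative here, since the paper's exact parameterization is not reproduced.

    ```python
    import math

    # Pseudo-free-energy bonus/penalty added to a nucleotide's pairing
    # energy based on its SHAPE reactivity S (low S -> pairing rewarded).
    def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
        return m * math.log(reactivity + 1.0) + b

    for s in (0.0, 0.2, 1.0, 2.5):
        print(f"SHAPE = {s:3.1f} -> dG = {shape_pseudo_energy(s):+.2f} kcal/mol")
    ```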

  20. Progress Towards an LES Wall Model Including Unresolved Roughness

    Science.gov (United States)

    Craft, Kyle; Redman, Andrew; Aikens, Kurt

    2015-11-01

    Wall models used in large eddy simulations (LES) are often based on theories for hydraulically smooth walls. While this is reasonable for many applications, there are also many where the impact of surface roughness is important. A previously developed wall model has been used primarily for jet engine aeroacoustics. However, jet simulations have not accurately captured thick initial shear layers found in some experimental data. This may partly be due to nozzle wall roughness used in the experiments to promote turbulent boundary layers. As a result, the wall model is extended to include the effects of unresolved wall roughness through appropriate alterations to the log-law. The methodology is tested for incompressible flat plate boundary layers with different surface roughness. Correct trends are noted for the impact of surface roughness on the velocity profile. However, velocity deficit profiles and the Reynolds stresses do not collapse as well as expected. Possible reasons for the discrepancies as well as future work will be presented. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.
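
    The kind of log-law modification involved can be illustrated by comparing a smooth-wall profile with the fully rough form u+ = ln(y/k_s)/κ + 8.5 (textbook constants); the wall model's actual blending across the transitionally rough regime is not reproduced here, and all numbers are illustrative.

    ```python
    import numpy as np

    kappa, B = 0.41, 5.0
    u_tau, nu = 1.0, 1.5e-5  # friction velocity (m/s), kinematic viscosity (m^2/s)
    k_s = 1.5e-3             # equivalent sand-grain roughness (m); k_s+ ~ 100

    y = np.geomspace(5e-3, 1e-1, 5)  # wall distances in the log region (m)
    u_smooth = u_tau * (np.log(y * u_tau / nu) / kappa + B)  # smooth wall
    u_rough = u_tau * (np.log(y / k_s) / kappa + 8.5)        # fully rough wall

    for yi, us, ur in zip(y, u_smooth, u_rough):
        print(f"y = {yi:.4f} m   smooth: {us:5.2f} m/s   rough: {ur:5.2f} m/s")
    # The roughness shift lowers the velocity at a given wall distance,
    # consistent with roughness promoting thicker turbulent boundary layers.
    ```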

  1. Towards an Explanation of Overeating Patterns Among Normal Weight College Women: Development and Validation of a Structural Equation Model

    OpenAIRE

    Russ, Christine Runyan II

    1998-01-01

    Although research describing relationships between psychosocial factors and various eating patterns is growing, a model which explains the mechanisms through which these factors may operate is lacking. A model to explain overeating patterns among normal weight college females was developed and tested. The model contained the following variables: global adjustment, eating and weight cognitions, emotional eating, and self-efficacy. Three hundred ninety-o...

  2. Mathematical Model of Thyristor Inverter Including a Series-parallel Resonant Circuit

    Directory of Open Access Journals (Sweden)

    Miroslaw Luft

    2008-01-01

    The article presents a mathematical model of a thyristor inverter including a series-parallel resonant circuit with the aid of the state variable method. Maple procedures are used to compute current and voltage waveforms in the inverter.

  3. About normal distribution on SO(3) group in texture analysis

    Science.gov (United States)

    Savyolova, T. I.; Filatov, S. V.

    2017-12-01

    This article studies and compares different normal distributions (NDs) on the SO(3) group, which are used in texture analysis. Those NDs are: the Fisher normal distribution (FND), the Bunge normal distribution (BND), the central normal distribution (CND) and the wrapped normal distribution (WND). All of the previously mentioned NDs are central functions on the SO(3) group. CND is a subcase of normal CLT-motivated distributions on SO(3) (CLT here is Parthasarathy's central limit theorem). WND is motivated by the CLT in R^3 and mapped to the SO(3) group. A Monte Carlo method for modeling normally distributed values was studied for both CND and WND. All of the NDs mentioned above are used for modeling different components of the crystallite orientation distribution function in texture analysis.
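
    The Monte Carlo construction of the wrapped normal distribution mentioned above can be sketched directly: draw a vector from a zero-mean normal in R^3 and wrap it onto SO(3) through the exponential (axis-angle) map. The spread parameter is illustrative.

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation

    rng = np.random.default_rng(3)
    sigma = 0.3  # spread of the R^3 normal, in radians (illustrative)

    rotvecs = rng.normal(0.0, sigma, size=(10000, 3))
    rotations = Rotation.from_rotvec(rotvecs)  # wraps R^3 samples onto SO(3)

    angles = np.linalg.norm(rotvecs, axis=1)   # rotation angle of each sample
    print(f"mean rotation angle: {np.degrees(angles.mean()):.1f} deg")
    print(rotations[0].as_matrix().round(3))   # one sampled orientation
    ```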

  4. Assessment of risks of accidents and normal operation at nuclear power plants

    International Nuclear Information System (INIS)

    Savolainen, Ilkka; Vuori, Seppo.

    1977-01-01

    A probabilistic assessment model for the analysis of risks involved in the operation of nuclear power plants is described. With the computer code ARANO it is possible to estimate the health and economic consequences of reactor accidents in both a probabilistic and a deterministic sense. In addition, the code is applicable to the calculation of individual and collective doses caused by releases during normal operation. The estimation of release probabilities and magnitudes is not included in the model. (author)

  5. Quantitative Analysis of Torso FDG-PET Scans by Using Anatomical Standardization of Normal Cases from Thorough Physical Examinations.

    Directory of Open Access Journals (Sweden)

    Takeshi Hara

    Understanding of the standardized uptake value (SUV) of 2-deoxy-2-[18F]fluoro-d-glucose positron emission tomography (FDG-PET) depends on the background accumulation of glucose, because the SUV often varies with the status of the patient. The purpose of this study was to develop a new method for quantitative analysis of SUV of FDG-PET scan images. The method included an anatomical standardization and a statistical comparison with normal cases using Z-scores, as often used in the SPM or 3D-SSP approaches for brain function analysis. Our scheme consisted of two approaches: the construction of a normal model, and the determination of SUV scores as a Z-score index measuring the abnormality of an FDG-PET scan image. To construct the normal torso model, all of the normal images were registered into one shape, which indicated the normal range of SUV at all voxels. The image deformation process consisted of a whole-body rigid registration of the shoulder-to-bladder region, a liver registration, and a non-linear registration of the body surface using the thin-plate spline technique. In order to validate the usefulness of our method, we segmented suspicious regions on FDG-PET images manually and obtained the Z-scores of the regions based on the corresponding voxels, which store the means and standard deviations of the normal model. We collected 243 normal cases (143 males and 100 females) to construct the normal model. We also extracted 432 abnormal spots from 63 abnormal cases (73 cancer lesions) to validate the Z-scores. The Z-scores of 417 out of 432 abnormal spots were higher than 2.0, which statistically indicated the severity of the spots. In conclusion, the Z-scores obtained by our computerized scheme with anatomical standardization of the torso region would be useful for the visualization and detection of subtle lesions on FDG-PET scan images, even when the SUV may not clearly show an abnormality.
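
    The scoring step reduces to a voxelwise Z-score against the normal model; the sketch below uses tiny synthetic arrays in place of registered whole-torso SUV maps (the study's normal model was built from 243 cases).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    normals = rng.normal(2.0, 0.3, size=(243, 16, 16, 8))  # normal SUV maps

    model_mean = normals.mean(axis=0)  # voxelwise mean of the normal model
    model_std = normals.std(axis=0)    # voxelwise standard deviation

    patient = rng.normal(2.0, 0.3, size=(16, 16, 8))
    patient[8, 8, 4] = 6.0             # a focal high-uptake voxel

    z = (patient - model_mean) / model_std
    print("max Z =", round(float(z.max()), 1),
          "at voxel", np.unravel_index(int(z.argmax()), z.shape))
    ```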

  6. Spatially tuned normalization explains attention modulation variance within neurons.

    Science.gov (United States)

    Ni, Amy M; Maunsell, John H R

    2017-09-01

    Spatial attention improves perception of attended parts of a scene, a behavioral enhancement accompanied by modulations of neuronal firing rates. These modulations vary in size across neurons in the same brain area. Models of normalization explain much of this variance in attention modulation with differences in tuned normalization across neurons (Lee J, Maunsell JHR. PLoS One 4: e4651, 2009; Ni AM, Ray S, Maunsell JHR. Neuron 73: 803-813, 2012). However, recent studies suggest that normalization tuning varies with spatial location both across and within neurons (Ruff DA, Alberts JJ, Cohen MR. J Neurophysiol 116: 1375-1386, 2016; Verhoef BE, Maunsell JHR. eLife 5: e17256, 2016). Here we show directly that attention modulation and normalization tuning do in fact covary within individual neurons, in addition to across neurons as previously demonstrated. We recorded the activity of isolated neurons in the middle temporal area of two rhesus monkeys as they performed a change-detection task that controlled the focus of spatial attention. Using the same two drifting Gabor stimuli and the same two receptive field locations for each neuron, we found that switching which stimulus was presented at which location affected both attention modulation and normalization in a correlated way within neurons. We present an equal-maximum-suppression spatially tuned normalization model that explains this covariance both across and within neurons: each stimulus generates equally strong suppression of its own excitatory drive, but its suppression of distant stimuli is typically less. This new model specifies how the tuned normalization associated with each stimulus location varies across space both within and across neurons, changing our understanding of the normalization mechanism and how attention modulations depend on this mechanism. NEW & NOTEWORTHY Tuned normalization studies have demonstrated that the variance in attention modulation size seen across neurons from the same cortical

  7. Multi-model Analysis of Diffusion-weighted Imaging of Normal Testes at 3.0 T: Preliminary Findings.

    Science.gov (United States)

    Min, Xiangde; Feng, Zhaoyan; Wang, Liang; Cai, Jie; Li, Basen; Ke, Zan; Zhang, Peipei; You, Huijuan; Yan, Xu

    2018-04-01

    This study aimed to establish quantitative diffusion parameters (apparent diffusion coefficient [ADC], DDC, α, Dapp, and Kapp) in normal testes at 3.0 T. Sixty-four healthy volunteers in two age groups (A: 10-39 years; B: ≥40 years) underwent diffusion-weighted imaging at 3.0 T. ADC1000, ADC2000, ADC3000, DDC, α, Dapp, and Kapp were calculated using the mono-exponential, stretched-exponential, and kurtosis models. The correlations between the parameters and age were analyzed, and the parameters were compared between the age groups and between the right and left testes. The average ADC1000, ADC2000, ADC3000, DDC, α, Dapp, and Kapp values did not differ significantly between the right and left testes (P > .05 for all). The following significant correlations were found: positive correlations between age and testicular ADC1000, ADC2000, ADC3000, DDC, and Dapp (r = 0.516, 0.518, 0.518, 0.521, and 0.516, respectively; P < .01 for all) and negative correlations between age and testicular α and Kapp (r = -0.363 and -0.427, respectively; P < .01 for both). Compared to group B, group A had significantly lower ADC1000, ADC2000, ADC3000, DDC, and Dapp values (P < .05 for all) but significantly higher α and Kapp values (P < .05 for both). Our study demonstrated the applicability of the testicular mono-exponential, stretched-exponential, and kurtosis models. Our results can help establish a baseline for normal testicular parameters in these diffusion models. The contralateral normal testis can serve as a suitable reference for evaluating abnormalities of the other side. The effect of age on these parameters requires further attention. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
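
    The three signal models named here have standard closed forms: mono-exponential S(b) = S0·exp(-b·ADC), stretched-exponential S(b) = S0·exp(-(b·DDC)^α), and kurtosis S(b) = S0·exp(-b·Dapp + b²·Dapp²·Kapp/6). A curve-fitting sketch on synthetic data (the b-values and parameter guesses are illustrative, not the study's protocol):

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0., 250., 500., 1000., 1500., 2000., 3000.])  # s/mm^2, illustrative

def mono(b, s0, adc):
    return s0 * np.exp(-b * adc)

def stretched(b, s0, ddc, alpha):
    return s0 * np.exp(-(b * ddc) ** alpha)

def kurtosis(b, s0, d_app, k_app):
    return s0 * np.exp(-b * d_app + (b ** 2) * (d_app ** 2) * k_app / 6.0)

# Synthetic testis-like signal for demonstration only.
rng = np.random.default_rng(0)
signal = kurtosis(b, 1.0, 1.1e-3, 0.8) + 0.005 * rng.standard_normal(b.size)

p_mono, _ = curve_fit(mono, b, signal, p0=[1.0, 1e-3])
p_str, _  = curve_fit(stretched, b, signal, p0=[1.0, 1e-3, 0.9])
p_kurt, _ = curve_fit(kurtosis, b, signal, p0=[1.0, 1e-3, 0.5])
print(p_mono, p_str, p_kurt)
```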

  8. Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS

    Science.gov (United States)

    Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.

    2010-12-01

    With 4 million ha currently grown for ethanol in Brazil alone, approximately half of global bioethanol production in 2005 (Smeets 2008), and a devoted land area expected to expand globally in the years to come, sugar cane is at the heart of the biofuel debate. Indeed, ethanol made from biomass is currently the most widespread option for alternative transportation fuels. It was originally promoted as a carbon-neutral energy resource that could bring energy independence to countries and local opportunities to farmers, until attention was drawn to its environmental and socio-economic drawbacks. It is still not clear to what extent it is a solution for climate change mitigation or a contributor to the problem. Dynamic Global Vegetation Models can help address these issues and quantify the potential impacts of biofuels on ecosystems, at scales ranging from a single site to the globe. The global agro-ecosystem model ORCHIDEE describes water, carbon and energy exchanges at the soil-atmosphere interface for a limited number of natural and agricultural vegetation types. In order to integrate agricultural management into the simulations and to capture the specific phenology of crops more accurately, ORCHIDEE has been coupled with the agronomical model STICS. The resulting crop-oriented vegetation model, ORCHIDEE-STICS, has so far been used to simulate temperate crops such as wheat, corn and soybean. As a generic ecosystem model, it allows each grid cell to include several vegetation types with their own phenology and management practices, making it suitable for spatial simulations. Here, ORCHIDEE-STICS is extended to include sugar cane as a new agricultural plant functional type, implemented and parametrized using the STICS approach. An on-site calibration and validation is then performed based on biomass and flux chamber measurements at several sites in Australia, and variables such as LAI, dry weight, heat fluxes and respiration are used to evaluate the ability of the model to simulate the specific

  9. Normal people working in normal organizations with normal equipment: system safety and cognition in a mid-air collision.

    Science.gov (United States)

    de Carvalho, Paulo Victor Rodrigues; Gomes, José Orlando; Huber, Gilbert Jacob; Vidal, Mario Cesar

    2009-05-01

    A fundamental challenge in improving the safety of complex systems is to understand how accidents emerge in normal working situations, with equipment functioning normally in normally structured organizations. We present a field study of the en-route mid-air collision between a commercial carrier and an executive jet in the clear afternoon Amazon sky, in which 154 people lost their lives, that illustrates one response to this challenge. Our focus was on how and why the several safety barriers of a well-structured air traffic system melted down, enabling this tragedy to occur without any catastrophic component failure, in a situation where everything was functioning normally. We identify strong consistencies and feedbacks among factors of day-to-day system functioning that made monitoring and awareness difficult, and the cognitive strategies that operators developed to deal with overall system behavior. These findings emphasize the active problem-solving behavior needed in air traffic control work and highlight how the day-to-day functioning of the system can jeopardize such behavior. An immediate consequence is that safety managers and engineers should review their traditional safety approaches and accident models, based on equipment failure probability, linear combinations of failures, rules and procedures, and human errors, in order to deal with complex patterns of coincidence possibilities, unexpected links, resonance among system functions and activities, and system cognition.

  10. Advancing Normal Birth: Organizations, Goals, and Research

    OpenAIRE

    Hotelling, Barbara A.; Humenick, Sharron S.

    2005-01-01

    In this column, the support for advancing normal birth is summarized, based on a comparison of the goals of Healthy People 2010, Lamaze International, the Coalition for Improving Maternity Services, and the midwifery model of care. Research abstracts are presented to provide evidence that the midwifery model of care safely and economically advances normal birth. Rates of intervention experienced, as reported in the Listening to Mothers survey, are compared to the forms of care recommended by ...

  11. Nonlinear dynamics exploration through normal forms

    CERN Document Server

    Kahn, Peter B

    2014-01-01

    Geared toward advanced undergraduates and graduate students, this exposition covers the method of normal forms and its application to ordinary differential equations through perturbation analysis. In addition to its emphasis on the freedom inherent in the normal form expansion, the text features numerous examples of equations of the kind encountered in many areas of science and engineering. The treatment begins with an introduction to the basic concepts underlying normal forms. Coverage then shifts to an investigation of systems with one degree of freedom that model oscillations

  12. MR images of optic nerve compression by the intracranial carotid artery. Including the patients with normal tension glaucoma

    International Nuclear Information System (INIS)

    Kurokawa, Hiroaki; Kin, Kiyonori; Arichi, Miwa; Ogata, Nahoko; Shimizu, Ken; Akai, Mikio; Ikeda, Koshi; Sawada, Satoshi; Matsumura, Miyo

    2003-01-01

    Twenty-one eyes of 12 patients with MRI-defined optic nerve compression by the intracranial carotid artery were examined to investigate whether their visual field defects resulted from optic nerve compression or from other causes. In 4 affected eyes of 2 patients, we could not distinguish whether the visual field defects were due to optic nerve compression or to normal-tension glaucoma; these patients had evidence of glaucoma-like cupping of the optic disc and visual field defects. Nine affected eyes of 7 patients were diagnosed as having compressive optic neuropathy, based on unilateral optic nerve compression associated with visual field defects or on non-glaucomatous visual field defects. Four of the 9 affected eyes showed optic disc cupping of various degrees. We suggest that glaucoma-like visual field defects and optic disc cupping may result from a compressive lesion of the anterior visual pathway. This feature frequently caused confusion in the differential diagnosis between optic nerve compression by the carotid artery and normal-tension glaucoma. (author)

  13. Clarifying the use of aggregated exposures in multilevel models: self-included vs. self-excluded measures.

    Directory of Open Access Journals (Sweden)

    Etsuji Suzuki

    Full Text Available Multilevel analyses are ideally suited to assess the effects of ecological (higher-level) and individual (lower-level) exposure variables simultaneously. In applying such analyses to measures of ecologies in epidemiological studies, individual variables are usually aggregated into the higher-level unit. Typically, the aggregated measure includes the responses of every individual belonging to that group (i.e. it constitutes a self-included measure). More recently, researchers have developed an aggregate measure which excludes the response of the individual to whom the aggregate measure is linked (i.e. a self-excluded measure). In this study, we clarify the substantive and technical properties of these two measures when they are used as exposures in multilevel models. Although the differences between the two aggregated measures are mathematically subtle, distinguishing between them is important in terms of the specific scientific questions to be addressed. We then show how these measures can be used in two distinct types of multilevel models, the self-included model and the self-excluded model, and interpret the parameters in each model by imposing hypothetical interventions. The concept is tested on empirical data of workplace social capital and employees' systolic blood pressure. Researchers assume group-level interventions when using a self-included model, and individual-level interventions when using a self-excluded model. Analytical re-parameterizations of these two models highlight their differences in parameter interpretation. Cluster-mean-centered self-included models enable researchers to decompose the collective effect into its within- and between-group components. The benefit of the cluster-mean centering procedure is further discussed in terms of hypothetical interventions. When investigating the potential roles of aggregated variables, researchers should carefully explore which type of model (self-included or self-excluded) is suitable for a given situation

  14. Are your covariates under control? How normalization can re-introduce covariate effects.

    Science.gov (United States)

    Pain, Oliver; Dudbridge, Frank; Ronald, Angelica

    2018-04-30

    Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
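
    The recommended order of operations is easy to get wrong in practice. A minimal sketch of rank-based INT (with the Blom offset, one common choice) applied to the dependent variable before covariate adjustment; the data here are simulated for illustration:

```python
import numpy as np
from scipy import stats

def rank_based_int(x, c=3.0 / 8.0):
    """Rank-based inverse normal transform with the Blom offset."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf((ranks - c) / (len(x) - 2.0 * c + 1.0))

rng = np.random.default_rng(42)
covariate = rng.normal(size=500)
y = np.exp(rng.normal(size=500)) + 0.5 * covariate   # skewed outcome

# Recommended: transform y first, then adjust for the covariate.
y_int = rank_based_int(y)
slope, intercept = np.polyfit(covariate, y_int, 1)
residuals = y_int - (slope * covariate + intercept)
# Residuals are now approximately normal and uncorrelated with the covariate.
print(np.corrcoef(residuals, covariate)[0, 1])
```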

  15. Study of normal and shear material properties for viscoelastic model of asphalt mixture by discrete element method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2015-01-01

    In this paper, the viscoelastic behavior of asphalt mixture was studied using the discrete element method. The dynamic properties of asphalt mixture were captured by implementing Burger’s contact model. Different ways of taking into account the normal and shear material properties of asphalt mi...

  16. Normal IQ is possible in Smith-Lemli-Opitz syndrome.

    Science.gov (United States)

    Eroglu, Yasemen; Nguyen-Driver, Mina; Steiner, Robert D; Merkens, Louise; Merkens, Mark; Roullet, Jean-Baptiste; Elias, Ellen; Sarphare, Geeta; Porter, Forbes D; Li, Chumei; Tierney, Elaine; Nowaczyk, Małgorzata J; Freeman, Kurt A

    2017-08-01

    Children with Smith-Lemli-Opitz syndrome (SLOS) are typically reported to have moderate to severe intellectual disability. This study aims to determine whether normal cognitive function is possible in this population and to describe the clinical, biochemical and molecular characteristics of children with SLOS and a normal intelligence quotient (IQ). The study included children with SLOS who underwent cognitive testing in four centers. All children with at least one IQ composite score above 80 were included in the study. Six girls and three boys with SLOS were found to have normal or low-normal IQ in a cohort of 145 children with SLOS. Major/multiple organ anomalies and low serum cholesterol levels were uncommon. No correlation between IQ and genotype was evident, and no specific developmental profile was observed. Thus, normal or low-normal cognitive function is possible in SLOS. Further studies are needed to elucidate the factors contributing to normal or low-normal cognitive function in children with SLOS. © 2017 Wiley Periodicals, Inc.

  17. Ex vivo 2D and 3D HSV-2 infection model using human normal vaginal epithelial cells.

    Science.gov (United States)

    Zhu, Yaqi; Yang, Yan; Guo, Juanjuan; Dai, Ying; Ye, Lina; Qiu, Jianbin; Zeng, Zhihong; Wu, Xiaoting; Xing, Yanmei; Long, Xiang; Wu, Xufeng; Ye, Lin; Wang, Shubin; Li, Hui

    2017-02-28

    Herpes simplex virus type 2 (HSV-2) infects human genital mucosa and establishes life-long latent infection. There is an unmet need for a human cell-based microphysiological system for virus biology and anti-viral drug discovery; one barrier has been the decades-long lack of an in vitro culture system for normal epithelial cells. In this study, we established a human normal vaginal epithelial cell (HNVEC) culture using a co-culture system. HNVEC cells were then propagated rapidly and stably under defined culture conditions. HNVEC cells exhibited a normal diploid karyotype and formed well-defined, polarized spheres in Matrigel three-dimensional (3D) culture, whereas malignant cells (HeLa) formed disorganized, nonpolar solid spheres. HNVEC cells had a normal cellular response to DNA damage and showed no transforming properties in soft agar assays. HNVEC cells expressed the epithelial markers cytokeratin 14 (CK14) and p63, but not cytokeratin 18 (CK18). Next, we reconstructed an HNVEC-derived 3D vaginal epithelium using air-liquid interface (ALI) culture. This 3D vaginal epithelium has basal and apical layers expressing the epithelial markers of the human vaginal tissue from which it originated. Finally, we established an HSV-2 infection model based on the reconstructed 3D vaginal epithelium. After inoculation of HSV-2 (G strain) at the apical layer of the reconstructed 3D vaginal epithelium, we observed obvious pathological effects gradually spreading from the apical layer to the basal layer, accompanied by expression of a viral protein. Thus, we established ex vivo 2D and 3D HSV-2 infection models that can be used for HSV-2 virology and anti-viral drug discovery.

  18. A series of N-terminal epitope tagged Hdh knock-in alleles expressing normal and mutant huntingtin: their application to understanding the effect of increasing the length of normal huntingtin’s polyglutamine stretch on CAG140 mouse model pathogenesis

    Directory of Open Access Journals (Sweden)

    Zheng Shuqiu

    2012-08-01

    Full Text Available Abstract. Background: Huntington’s disease (HD) is an autosomal dominant neurodegenerative disease that is caused by the expansion of a polyglutamine (polyQ) stretch within Huntingtin (htt), the protein product of the HD gene. Although studies in vitro have suggested that mutant htt can act in a potentially dominant-negative fashion by sequestering wild-type htt into insoluble protein aggregates, the role of the length of the normal htt polyQ stretch, and of the adjacent proline-rich region (PRR), in modulating HD mouse model pathogenesis is currently unknown. Results: We describe the generation and characterization of a series of knock-in HD mouse models that express versions of the mouse HD gene (Hdh) encoding N-terminal hemagglutinin (HA) or 3xFlag epitope-tagged full-length htt with different polyQ lengths (HA7Q-, 3xFlag7Q-, 3xFlag20Q-, and 3xFlag140Q-htt) and substitution of the adjacent mouse PRR with the human PRR (3xFlag20Q- and 3xFlag140Q-htt). Using co-immunoprecipitation and immunohistochemistry analyses, we detect no significant interaction between soluble full-length normal 7Q-htt and mutant (140Q) htt, but we do observe N-terminal fragments of epitope-tagged normal htt in mutant htt aggregates. When the sequences encoding normal mouse htt’s polyQ stretch and PRR are replaced with non-pathogenic human sequence in mice also expressing 140Q-htt, aggregation foci within the striatum and the mean size of htt inclusions are increased, along with an increase in striatal lipofuscin and gliosis. Conclusion: In mice, soluble full-length normal and mutant htt are predominantly monomeric. In heterozygous knock-in HD mouse models, substituting the normal mouse polyQ and PRR with normal human sequence can exacerbate some neuropathological phenotypes.

  19. Simple suggestions for including vertical physics in oil spill models

    International Nuclear Information System (INIS)

    D'Asaro, Eric; University of Washington, Seatle, WA

    2001-01-01

    Current models of oil spills include no vertical physics. They neglect the effect of vertical water motions on the transport and concentration of floating oil. Some simple ways to introduce vertical physics are suggested here. The major suggestion is to routinely measure the density stratification of the upper ocean during oil spills in order to develop a database on the effect of stratification. (Author)

  20. An anisotropic shear velocity model of the Earth's mantle using normal modes, body waves, surface waves and long-period waveforms

    Science.gov (United States)

    Moulik, P.; Ekström, G.

    2014-12-01

    We use normal-mode splitting functions in addition to surface wave phase anomalies, body wave traveltimes and long-period waveforms to construct a 3-D model of anisotropic shear wave velocity in the Earth's mantle. Our modelling approach inverts for mantle velocity and anisotropy as well as transition-zone discontinuity topographies, and incorporates new crustal corrections for the splitting functions that are consistent with the non-linear corrections we employ for the waveforms. Our preferred anisotropic model, S362ANI+M, is an update to the earlier model S362ANI, which did not include normal-mode splitting functions in its derivation. The new model has stronger isotropic velocity anomalies in the transition zone and slightly smaller anomalies in the lowermost mantle, as compared with S362ANI. The differences in the mid- to lowermost mantle are primarily restricted to features in the Southern Hemisphere. We compare the isotropic part of S362ANI+M with other recent global tomographic models and show that the level of agreement is higher now than in the earlier generation of models, especially in the transition zone and the lower mantle. The anisotropic part of S362ANI+M is restricted to the upper 300 km in the mantle and is similar to S362ANI. When radial anisotropy is allowed throughout the mantle, large-scale anisotropic patterns are observed in the lowermost mantle, with vSV > vSH beneath Africa and the South Pacific and vSH > vSV beneath several circum-Pacific regions. The transition zone exhibits localized anisotropic anomalies of ~3 per cent vSH > vSV beneath North America and the Northwest Pacific and ~2 per cent vSV > vSH beneath South America. However, small improvements in fits to the data on adding anisotropy at depth leave open the question of whether large-scale radial anisotropy is required in the transition zone and in the lower mantle. We demonstrate the potential of mode-splitting data in reducing the trade-offs between isotropic velocity and

  1. Pricing Asian-Type Option Contracts Using the Normal Inverse Gaussian (NIG) Simulation Model [Penentuan Harga Kontrak Opsi Tipe Asia Menggunakan Model Simulasi Normal Inverse Gaussian (NIG)]

    Directory of Open Access Journals (Sweden)

    I PUTU OKA PARAMARTHA

    2015-02-01

    Full Text Available The aim of this work was to simulate stock prices and to calculate the price of an Asian option using the Normal Inverse Gaussian (NIG) model and the Monte Carlo method in a MATLAB program. The results of the two models are compared and a fair price is selected. In addition, to assess the accuracy of the simulated stock prices, the MATLAB execution time is measured for both models to compare time efficiency. In the first part, the variables used to calculate the stock-price trajectory at time t are set. In the second part, the stock price is simulated with the NIG model, and in the third part with the Monte Carlo model. After simulating the stock price, the pay-off of the Asian option is calculated, and the price of the Asian option is estimated by averaging the pay-off values over all iterations. In the last part, the results of both models are compared. The outcome of this research is the price of an Asian option calculated by both Monte Carlo and NIG simulation. The prices calculated with NIG are fairer, because NIG prices the contract using four parameters (α, β, δ, and μ), whereas Monte Carlo uses only two parameters (μ and σ). In terms of execution time, the Monte Carlo model is better in all iterations.
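
    For reference, the Monte Carlo leg of such a comparison reduces to averaging discounted payoffs over simulated price paths. A minimal sketch under two-parameter geometric Brownian motion (using the risk-neutral drift r in place of μ); an NIG variant would replace the Gaussian increments with draws from an NIG(α, β, δ, μ) distribution. All numbers are illustrative:

```python
import numpy as np

def asian_call_mc(s0, k, r, sigma, t, n_steps=252, n_paths=20_000, seed=0):
    """Arithmetic-average Asian call priced by Monte Carlo under GBM."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    avg_price = (s0 * np.exp(log_paths)).mean(axis=1)   # average over the path
    return np.exp(-r * t) * np.maximum(avg_price - k, 0.0).mean()

print(asian_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0))
```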

  3. Immediate Effect of 3% Diquafosol Ophthalmic Solution on Tear MUC5AC Concentration and Corneal Wetting Ability in Normal and Experimental Keratoconjunctivitis Sicca Rat Models.

    Science.gov (United States)

    Choi, Kwang-Eon; Song, Jong-Suk; Kang, Boram; Eom, Youngsub; Kim, Hyo-Myung

    2017-05-01

    To evaluate the immediate effect of 3% diquafosol ophthalmic solution on tear MUC5AC concentration, periodic acid-Schiff (PAS)-positive goblet cells, and tear film stability in normal and keratoconjunctivitis sicca (KCS) rat models. Rats were divided into normal and KCS groups. Diquafosol solution (3%) was instilled into the right eye and normal saline into the left eye in both groups. To determine the peak time of tear MUC5AC concentration, tears were collected every 5 min up to 20 min after 3% diquafosol instillation. Tear film stability and the numbers of PAS-positive goblet cells were compared between the two models. After diquafosol instillation, tear MUC5AC concentration increased steadily for 15 min, at which point it reached its peak. In both the normal and KCS groups, the MUC5AC concentration at 15 min was higher after instillation of 3% diquafosol solution (17.77 ± 2.09 ng/ml in the normal group, 9.65 ± 3.51 ng/ml in the KCS group) than after saline instillation (13.74 ± 2.87 ng/ml in the normal group, 8.19 ± 3.99 ng/ml in the KCS group) (p = 0.018 for both). Corneal wetting time was significantly longer after instillation of 3% diquafosol solution than after instillation of normal saline in the normal group (p = 0.018). The percentage of PAS-positive goblet cells after instillation of 3% diquafosol solution was significantly lower than after instillation of normal saline in both models (p = 0.018 for both). Diquafosol ophthalmic solution was effective in stimulating mucin secretion in both normal and KCS rat models, and the peak tear MUC5AC concentration occurred 15 min after diquafosol instillation. The increased tear MUC5AC concentration was accompanied by improved tear film stability and a decreased percentage of PAS-positive goblet cells.

  4. Multivariate relationships between international normalized ratio and vitamin K-dependent coagulation-derived parameters in normal healthy donors and oral anticoagulant therapy patients

    Directory of Open Access Journals (Sweden)

    Golanski Jacek

    2003-11-01

    Full Text Available Abstract. Background and objectives: The International Normalized Ratio (INR) is a factor routinely used worldwide in the monitoring of oral anticoagulation treatment (OAT). However, it has been reported that other factors, e.g. factor II, may reflect the therapeutic efficacy of OAT even better and may therefore be potentially useful for OAT monitoring. The primary purpose of this study was to characterize the associations of INR with other vitamin K-dependent plasma proteins in a heterogeneous group of individuals, including healthy donors, patients on OAT and patients not receiving OAT. The study also aimed at establishing the influence of co-morbid conditions (incl. accompanying diseases) and co-medications (incl. different intensity of OAT) on INR. Design and Methods: Two hundred and three subjects were involved in the study. Of these, 35 were normal healthy donors (group I), 73 were patients on medication other than OAT (group II) and 95 were patients on stable oral anticoagulant (acenocoumarol) therapy lasting for at least half a year prior to the study. The values of INR and activated partial thromboplastin time (APTT) ratio, as well as the activities of FII, FVII, FX and protein C, and the concentrations of prothrombin F1+2 fragments and fibrinogen, were obtained for all subjects. In the statistical evaluation, uni- and multivariate analyses were employed and regression equations describing the obtained associations were estimated. Results: Of the studied parameters, three (factors II, VII and X) appeared as very strong modulators of INR, protein C and prothrombin fragments F1+2 had a moderate influence, whereas neither the APTT ratio nor fibrinogen had a significant impact on INR variability. Due to collinearity and low tolerance of the independent variables included in the multiple regression models, we routinely employed a ridge multiple regression model, which compromises between the minimal number of independent variables and the maximal overall determination coefficient. The best
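
    The ridge regression the authors settle on shrinks coefficients to cope with the collinearity among clotting factors noted in the results. A generic sketch with scikit-learn on simulated, deliberately collinear data (variable names and values are hypothetical, not the study's):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 203
f2 = rng.normal(100, 20, n)                    # factor II activity, %
f7 = 0.8 * f2 + rng.normal(0, 10, n)           # deliberately collinear with f2
f10 = 0.7 * f2 + rng.normal(0, 10, n)
inr = 5.0 - 0.03 * f2 + rng.normal(0, 0.2, n)  # toy relationship only

X = np.column_stack([f2, f7, f10])
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X, inr)
print(model.named_steps["ridge"].coef_)        # shrunken, stabilized coefficients
```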

  5. Predicting glucose intolerance with normal fasting plasma glucose by the components of the metabolic syndrome

    International Nuclear Information System (INIS)

    Pei, D.; Lin, J.; Kuo, S.; Wu, D.; Li, J.; Hsieh, C.; Wu, C.; Hung, Y.; Kuo, K.

    2007-01-01

    Surprisingly, it is estimated that about half of type 2 diabetics remain undetected. This may be partly attributable to people who have normal fasting plasma glucose (FPG) but postprandial hyperglycemia. We attempted to develop an effective predictive model, using the metabolic syndrome (MeS) components as parameters, to identify such persons. All participants received a standard 75-g oral glucose tolerance test, which showed that 106 had normal glucose tolerance, 61 had impaired glucose tolerance and 6 had diabetes with isolated postchallenge hyperglycemia. We tested five models that included various MeS components. Model 0: FPG. Model 1 (clinical history model): family history (FH), FPG, age and sex. Model 2 (MeS model): Model 1 plus triglycerides, high-density lipoprotein cholesterol, body mass index, systolic blood pressure and diastolic blood pressure. Model 3: Model 2 plus fasting plasma insulin (FPI). Model 4: Model 3 plus homeostasis model assessment of insulin resistance. A receiver-operating characteristic (ROC) curve was used to determine the predictive discrimination of these models. The area under the ROC curve of Model 0 was significantly larger than the area under the diagonal reference line. All the other four models had a larger area under the ROC curve than Model 0. Considering its simplicity and lower cost, Model 2 would be the best model to use; nevertheless, Model 3 had the largest area under the ROC curve. We demonstrated that Models 2 and 3 have significantly better predictive discrimination for identifying persons with normal FPG at high risk for glucose intolerance. (author)
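
    The comparison described here is a standard nested-logistic-regression ROC analysis. A sketch on synthetic data (the feature groupings mirror Models 0-2; nothing below reproduces the study's dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 173                            # matches the study's participant count
X_all = rng.normal(size=(n, 9))    # FPG, FH, age, sex, TG, HDL, BMI, SBP, DBP
y = (X_all[:, :5].sum(axis=1) + rng.normal(size=n) > 0).astype(int)

models = {"Model 0 (FPG)": [0],
          "Model 1 (history)": [0, 1, 2, 3],
          "Model 2 (MeS)": list(range(9))}

X_tr, X_te, y_tr, y_te = train_test_split(X_all, y, random_state=0)
for name, cols in models.items():
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, cols])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```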

  6. Protection from intracellular oxidative stress by cytoglobin in normal and cancerous oesophageal cells.

    Directory of Open Access Journals (Sweden)

    Fiona E McRonald

    Full Text Available Cytoglobin is an intracellular globin of unknown function that is expressed mostly in cells of a myofibroblast lineage. Possible functions of cytoglobin include buffering of intracellular oxygen and detoxification of reactive oxygen species. Previous work in our laboratory demonstrated that cytoglobin affords protection from oxidant-induced DNA damage when overexpressed in vitro, but the importance of this in more physiologically relevant models of disease is unknown. Cytoglobin is a candidate for the tylosis with oesophageal cancer (TOC) gene, and its expression is strongly down-regulated in non-cancerous oesophageal biopsies from patients with TOC compared with normal biopsies. Therefore, oesophageal cells provide an ideal experimental model to test our hypothesis that downregulation of cytoglobin expression sensitises cells to the damaging effects of reactive oxygen species, particularly oxidative DNA damage, and that this could potentially contribute to the TOC phenotype. In the current study, we tested this hypothesis by manipulating cytoglobin expression in both normal and cancerous oesophageal cell lines, which have normal physiological expression and no expression of cytoglobin, respectively. Our results show that, in agreement with previous findings, overexpression of cytoglobin in cancer cell lines afforded protection from chemically induced oxidative stress, but this was only observed at non-physiological concentrations of cytoglobin. In addition, downregulation of cytoglobin in normal oesophageal cells had no effect on their sensitivity to oxidative stress as assessed by a number of endpoints. We therefore conclude that normal physiological concentrations of cytoglobin do not offer cytoprotection from reactive oxygen species, at least in the current experimental model.

  7. Random Generators and Normal Numbers

    OpenAIRE

    Bailey, David H.; Crandall, Richard E.

    2002-01-01

    Pursuant to the authors' previous chaotic-dynamical model for random digits of fundamental constants, we investigate a complementary, statistical picture in which pseudorandom number generators (PRNGs) are central. Some rigorous results are achieved: We establish b-normality for constants of the form $\sum_i 1/(b^{m_i} c^{n_i})$ for certain sequences $(m_i), (n_i)$ of integers. This work unifies and extends previously known classes of explicit normal numbers. We prove that for coprime $b,c>1$ the
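
    One concrete member of the family $\sum_i 1/(b^{m_i} c^{n_i})$ is the Stoneham-type constant with b = 2, c = 3, m_i = 3^i and n_i = i, which results of this kind show to be 2-normal. A small exact-arithmetic sketch that computes its leading binary digits and tallies the digit frequencies (term and digit counts are illustrative):

```python
from collections import Counter
from fractions import Fraction

N_BITS = 4096

# alpha = sum_{i>=1} 1/(2**(3**i) * 3**i); terms with 3**i > N_BITS lie
# below the 2**-N_BITS cutoff and cannot affect the digits we keep.
alpha = sum(Fraction(1, 2 ** (3 ** i) * 3 ** i) for i in range(1, 9))

bits = bin(int(alpha * 2 ** N_BITS))[2:].zfill(N_BITS)
print(Counter(bits))  # roughly balanced '0'/'1' counts expected for a 2-normal number
```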

  8. Anomalous normal mode oscillations in semiconductor microcavities

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H. [Univ. of Oregon, Eugene, OR (United States). Dept. of Physics; Hou, H.Q.; Hammons, B.E. [Sandia National Labs., Albuquerque, NM (United States)

    1997-04-01

    Semiconductor microcavities as a composite exciton-cavity system can be characterized by two normal modes. Under impulsive excitation by a short laser pulse, the optical polarizations associated with the two normal modes have a π phase difference. The total induced optical polarization is then expected to exhibit a sin²(Ωt)-like oscillation, where 2Ω is the normal-mode splitting, reflecting a coherent energy exchange between the exciton and cavity. In this paper the authors present experimental studies of normal mode oscillations using three-pulse transient four-wave mixing (FWM). The results reveal, surprisingly, that when the cavity is tuned far below the exciton resonance, the normal mode oscillation in the polarization is cos²(Ωt)-like, in contrast to what is expected from the simple normal mode model. This anomalous normal mode oscillation reflects the important role of virtual excitation of electronic states in semiconductor microcavities.

  9. Hydromechanical modeling of clay rock including fracture damage

    Science.gov (United States)

    Asahina, D.; Houseworth, J. E.; Birkholzer, J. T.

    2012-12-01

    Argillaceous rock typically acts as a flow barrier, but under certain conditions significant and potentially conductive fractures may be present. Fracture formation is well known to occur in the vicinity of underground excavations, in a region known as the excavation disturbed zone. Such problems are of particular importance for low-permeability, mechanically weak rock such as clays and shales, because fractures can be relatively transient as a result of fracture self-sealing processes. Perhaps not as well appreciated is the fact that natural fractures can form in argillaceous rock as a result of hydraulic overpressure caused by phenomena such as disequilibrium compaction, changes in tectonic stress, and mineral dehydration. Overpressure conditions can cause hydraulic fracturing if the fluid pressure leads to tensile effective stresses that exceed the tensile strength of the material. Quantitative modeling of this type of process requires coupling between hydrogeologic and geomechanical processes, including fracture initiation and propagation. Here we present a computational method for three-dimensional, coupled hydromechanical processes including fracture damage. Fractures are represented as discrete features in a fracture network that interact with a porous rock matrix. Fracture configurations are mapped onto an unstructured, three-dimensional Voronoi grid, which is based on a random set of spatial points. Discrete fracture networks (DFN) are represented by the connections of the edges of Voronoi cells. This methodology has the advantage that fractures can be easily introduced in response to coupled hydro-mechanical processes, and it generally eliminates several potential issues associated with the geometry of DFN and numerical gridding. A geomechanical and fracture-damage model is developed here using the Rigid-Body-Spring-Network (RBSN) numerical method. The hydrogeologic and geomechanical models share the same geometrical information from a 3D Voronoi
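
    The Voronoi discretization at the heart of this method is straightforward to prototype: the polygonal faces ("ridges") shared by neighboring cells become candidate fracture facets of the DFN. A minimal sketch with SciPy (point count and domain are arbitrary):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)
points = rng.random((200, 3))          # random cell seeds in a unit cube
vor = Voronoi(points)

# Each ridge is the polygonal face shared by two neighboring cells;
# in a fracture-damage model such a face can be flagged as broken.
finite_faces = [
    (tuple(pair), verts)
    for pair, verts in zip(vor.ridge_points, vor.ridge_vertices)
    if -1 not in verts                  # skip faces extending to infinity
]
print(len(finite_faces), "candidate fracture facets")
```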

  10. Practices to stimulate normal childbirth (Prácticas para estimular el parto normal)

    Directory of Open Access Journals (Sweden)

    Flora Maria Barbosa da Silva

    2011-09-01

    Full Text Available This article leads to a reflection about the practices of encouraging normal childbirth, with the theoretical foundation for each one of them. The practices included in this study were fasting, enemas, shower and immersion baths, walking, pelvic movements and massage. In a context of revaluation of normal birth, providing evidence-based comfort options for women during childbirth can be a way to preserve the physiological course of labour.

  11. Direct-phase-variable model of a synchronous reluctance motor including all slot and winding harmonics

    International Nuclear Information System (INIS)

    Obe, Emeka S.; Binder, A.

    2011-01-01

    A detailed model in direct-phase variables of a synchronous reluctance motor operating at mains voltage and frequency is presented. The model includes the stator and rotor slot openings, the actual winding layout and the reluctance rotor geometry; hence, all mmf and permeance harmonics are taken into account. It is seen that non-negligible harmonics introduced by slots are present in the inductances computed by the winding function procedure. These harmonics are usually ignored in d-q models. The machine performance is simulated in the stator reference frame to depict the difference between this new direct-phase model including all harmonics and the conventional rotor-reference-frame d-q model. Saturation is included by using a polynomial fit of the variation of d-axis inductance with stator current, obtained with the finite-element software FEMAG DC®. The detailed phase-variable model can yield torque pulsations comparable to those obtained from finite elements, while the d-q model cannot.

  12. Effect of different BNCT protocols on DNA synthesis in precancerous and normal tissues in an experimental model of oral cancer

    International Nuclear Information System (INIS)

    Heber, Elisa M.; Aromando, Romina; Trivillin, Veronica A.; Itoiz, Maria E.; Kreimann, Erica L.; Schwint, Amanda E.; Nigg, David W.

    2006-01-01

    We previously reported the therapeutic success of different BNCT protocols in the treatment of oral cancer, employing the hamster cheek pouch model. The aim of the present study was to evaluate the effect of these BNCT protocols on DNA synthesis in precancerous and normal tissue in this model and to assess the potential lag in the development of second primary tumors in precancerous tissue. The data are relevant to the potential control of field-cancerized tissue and the tolerance of normal tissue. We evaluated DNA synthesis in precancerous and normal pouch tissue 1-30 days post-BNCT mediated by BPA, GB-10 or BPA + GB-10, employing incorporation of bromodeoxyuridine as an end-point. The BNCT-induced potential lag in the development of second primary tumors in precancerous tissue was monitored. A drastic, statistically significant reduction in DNA synthesis occurred in precancerous tissue as early as 1 day post-BNCT and was sustained at virtually all time points until 30 days post-BNCT for all protocols. The histological categories evaluated individually within precancerous tissue (dysplasia, hyperplasia and NUMF [no unusual microscopic features]) responded similarly. DNA synthesis in normal tissue treated with BNCT oscillated around the very low pre-treatment values. A BNCT-induced lag in the development of second primary tumors was observed. BNCT induced a drastic fall in DNA synthesis in precancerous tissue that would be associated with the observed lag in the development of second primary tumors. The minimal variations in DNA synthesis in BNCT-treated normal tissue would correlate with the absence of normal tissue radiotoxicity. The present data would contribute to optimizing therapeutic efficacy in the treatment of field-cancerized areas. (author)

  13. A model of synovial fluid lubricant composition in normal and injured joints

    Directory of Open Access Journals (Sweden)

    M E Blewis

    2007-03-01

    Full Text Available The synovial fluid (SF) of joints normally functions as a biological lubricant, providing low-friction and low-wear properties to articulating cartilage surfaces through the putative contributions of proteoglycan 4 (PRG4), hyaluronic acid (HA), and surface-active phospholipids (SAPL). These lubricants are secreted by chondrocytes in articular cartilage and synoviocytes in synovium, and concentrated in the synovial space by the semi-permeable synovial lining. A deficiency in this lubricating system may contribute to the erosion of articulating cartilage surfaces in conditions of arthritis. A quantitative intercompartmental model was developed to predict in vivo SF lubricant concentration in the human knee joint. The model consists of a SF compartment that (a) is lined by cells of appropriate types, (b) is bound by a semi-permeable membrane, and (c) contains factors that regulate lubricant secretion. Lubricant concentration was predicted with different chemical regulators of chondrocyte and synoviocyte secretion, and also with the therapeutic interventions of joint lavage and HA injection. The model predicted steady-state lubricant concentrations that were within physiologically observed ranges, and which were markedly altered with chemical regulation. The model also predicted that, when starting from a zero lubricant concentration after joint lavage, PRG4 reaches steady-state concentration ~10-40 times faster than HA. Additionally, analysis of the clearance rate of HA after therapeutic injection into SF predicted that the majority of HA leaves the joint after ~1-2 days. This quantitative intercompartmental model allows integration of biophysical processes to identify both environmental factors and clinical therapies that affect SF lubricant composition in whole joints.
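
    The compartmental balance described here can be illustrated with a single well-mixed SF compartment: secretion into the joint space at rate S and first-order trans-synovial loss with rate constant k give dC/dt = S/V - k·C, with steady state C* = S/(k·V). A sketch with placeholder rates (not the paper's fitted values):

```python
from scipy.integrate import solve_ivp

V = 1.0     # synovial fluid volume, mL (placeholder)
S = 10.0    # lubricant secretion rate, ug/h (placeholder)
k = 0.05    # trans-synovial clearance rate constant, 1/h (placeholder)

def dcdt(t, c):
    # Well-mixed compartment: secretion in, first-order clearance out.
    return S / V - k * c[0]

# Start from zero concentration, as after joint lavage.
sol = solve_ivp(dcdt, t_span=(0.0, 200.0), y0=[0.0])
print("steady state ~", S / (k * V), "; model at t=200 h:", sol.y[0, -1])
```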

  14. Normal tissue complication probability modeling for cochlea constraints to avoid causing tinnitus after head-and-neck intensity-modulated radiation therapy

    International Nuclear Information System (INIS)

    Lee, Tsair-Fwu; Yeh, Shyh-An; Chao, Pei-Ju; Chang, Liyun; Chiu, Chien-Liang; Ting, Hui-Min; Wang, Hung-Yu; Huang, Yu-Jie

    2015-01-01

    Radiation-induced tinnitus is a side effect of radiotherapy to the inner ear for cancers of the head and neck. Effective dose constraints for protecting the cochlea are under-reported. The aim of this study is to determine the cochlea dose limitation to avoid causing tinnitus after head-and-neck cancer (HNC) intensity-modulated radiation therapy (IMRT). In total, 211 patients with HNC were included; the side effects of radiotherapy were investigated for the 422 inner ears in the cohort. Forty-nine of the 422 samples (11.6%) developed grade 2+ tinnitus symptoms after IMRT, as diagnosed by a clinician. The Late Effects of Normal Tissues-Subjective, Objective, Management, Analytic (LENT-SOMA) criteria were used for tinnitus evaluation. The logistic and Lyman-Kutcher-Burman (LKB) normal tissue complication probability (NTCP) models were used for the analyses. The fitted NTCP parameters were TD50 = 46.31 Gy (95% CI, 41.46-52.50) and γ50 = 1.27 (95% CI, 1.02-1.55) for the logistic model, and TD50 = 46.52 Gy (95% CI, 41.91-53.43) and m = 0.35 (95% CI, 0.30-0.42) for the LKB model. The suggested guideline TD20, the tolerance dose producing a 20% complication rate within a specific period of time, was TD20 = 33.62 Gy (95% CI, 30.15-38.27) for the logistic model and TD20 = 32.82 Gy (95% CI, 29.58-37.69) for the LKB model. To keep the incidence of grade 2+ tinnitus toxicity below 20% in IMRT, we suggest that the mean dose to the cochlea should be kept below 32 Gy. However, models should not be extrapolated to other patient populations without further verification and should first be confirmed before clinical implementation
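
    Both fitted models have simple closed forms: the LKB probit NTCP = Φ((D - TD50)/(m·TD50)) and a logistic NTCP = 1/(1 + exp(-4γ50(D - TD50)/TD50)). With the abstract's parameters these reproduce the reported TD20 values of roughly 32.8 and 33.7 Gy; treating D as the mean cochlear dose is a simplification assumed here:

```python
import numpy as np
from scipy.stats import norm

def ntcp_lkb(d, td50=46.52, m=0.35):
    """LKB probit model; d is the mean (or generalized equivalent) cochlear dose in Gy."""
    return norm.cdf((d - td50) / (m * td50))

def ntcp_logistic(d, td50=46.31, gamma50=1.27):
    """Logistic model parameterized by TD50 and the normalized slope gamma50."""
    t = 4.0 * gamma50 * (d - td50) / td50
    return 1.0 / (1.0 + np.exp(-t))

for d in (32.82, 33.62, 46.5):
    print(f"{d:5.2f} Gy: LKB {ntcp_lkb(d):.3f}, logistic {ntcp_logistic(d):.3f}")
```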

  15. 10 CFR 71.71 - Normal conditions of transport.

    Science.gov (United States)

    2010-01-01

    § 71.71 Normal conditions of transport. (a) Evaluation. Evaluation of each package design under normal conditions of transport must include a determination of the effect on...

  16. Integrated model of port oil piping transportation system safety including operating environment threats

    Directory of Open Access Journals (Sweden)

    Kołowrocki Krzysztof

    2017-06-01

    Full Text Available The paper presents an integrated general model of a complex technical system, linking its multistate safety model with the model of its operation process, including operating environment threats and considering safety structures and component safety parameters that vary across the operation states. Under the assumption that the system has an exponential safety function, the safety characteristics of the port oil piping transportation system are determined.

  18. Modeling water vapor and heat transfer in the normal and the intubated airways.

    Science.gov (United States)

    Tawhai, Merryn H; Hunter, Peter J

    2004-04-01

    Intubation of the artificially ventilated patient with an endotracheal tube bypasses the usual conditioning regions of the nose and mouth. In this situation any deficit in heat or moisture in the air is compensated for by evaporation and thermal transfer from the pulmonary airway walls. To study the dynamics of heat and water transport in the intubated airway, a coupled system of nonlinear equations is solved in airway models with symmetric geometry and anatomically based geometry. Radial distributions of heat, water vapor, and velocity in the airway are described by power-law equations. Solution of the time-dependent system of equations yields dynamic airstream and mucosal temperatures and air humidity. Comparison of model results with two independent experimental studies in the normal and intubated airway shows a close correlation over a wide range of minute ventilation. Using the anatomically based model, a range of spatially distributed temperature paths is demonstrated, which highlights the model's ability to predict thermal behavior in airway regions currently inaccessible to measurement. Accurate representation of conducting airway geometry is shown to be necessary for simulating mouth-breathing at rates between 15 and 100 L·min⁻¹, but symmetric geometry is adequate for the low minute ventilation and warm inspired air conditions that are generally supplied to the intubated patient.

  19. Model-free methods of analyzing domain motions in proteins from simulation : A comparison of normal mode analysis and molecular dynamics simulation of lysozyme

    NARCIS (Netherlands)

    Hayward, S.; Kitao, A.; Berendsen, H.J.C.

    Model-free methods are introduced to determine quantities pertaining to protein domain motions from normal mode analyses and molecular dynamics simulations. For the normal mode analysis, the methods are based on the assumption that in low-frequency modes, domain motions can be well approximated by

  20. Development of in-situ rock shear test under low compressive to tensile normal stress

    International Nuclear Information System (INIS)

    Nozaki, Takashi; Shin, Koichi

    2003-01-01

    The purpose of this study is to develop an in-situ rock shear testing method to evaluate shear strength under low normal stress conditions, including tensile stress, which is usually ignored in the assessment of the safety factor against sliding of foundations for nuclear power plants. The results are as follows. (1) A new in-situ rock shear testing method is devised, in which tensile normal stress can be applied to the shear plane of a specimen by directly pulling up a steel box bonded to the specimen. By applying a counter shear load to cancel the moment induced by the main shear load, shear strength under low normal stress can be obtained. (2) Model tests on Oya tuff and diatomaceous mudstone have been performed using the developed test method. The shear strength changed smoothly from low values at tensile normal stresses to higher values at compressive normal stresses. The failure criterion was found to be bi-linear in the shear stress versus normal stress plane. (author)

  1. Radiobiology in clinical radiation therapy - Part III: Normal tissue damage

    International Nuclear Information System (INIS)

    Travis, Elizabeth L.

    1996-01-01

    Objective: This is the third part of a course designed for residents in radiation oncology preparing for their boards. This part of the course focuses on the mechanisms underlying damage in normal tissues. Although conventional wisdom long held that killing and depletion of a critical cell type in a tissue was responsible for the later expression of damage, histopathologic changes in normal tissue can now be explained and better understood in terms of the new molecular biology. The concept that depletion of a single cell type is responsible for the observed histopathologic changes in normal tissues has been replaced by the hypothesis that damage results from the interaction of many different cell systems, including epithelial cells, endothelial cells, macrophages and fibroblasts, via the production of specific autocrine, paracrine and endocrine growth factors. A portion of this course will discuss the clinical and experimental data on the production and interaction of those cytokines and cell systems considered critical to tissue damage. It had long been suggested that interindividual differences in radiation-induced normal tissue damage are genetically regulated, at least in part. Both clinical and experimental data supported this hypothesis, but it is the recent advances in human and mouse molecular genetics that have provided the tools to dissect out the genetic component of normal tissue damage. These data will be presented and related to the potential to develop genetic markers to identify sensitive individuals. The impact on clinical outcome of the ability to identify sensitive patients prospectively will be discussed. Clinically, it is well accepted that the volume of tissue irradiated is a critical factor in determining tissue damage. A profusion of mathematical models for estimating dose-volume relationships in a number of organs has been published recently, despite the fact that little data are available to support these models. This course will review the

  2. The Normal Fetal Pancreas.

    Science.gov (United States)

    Kivilevitch, Zvi; Achiron, Reuven; Perlman, Sharon; Gilboa, Yinon

    2017-10-01

    The aim of the study was to assess the sonographic feasibility of measuring the fetal pancreas and its normal development throughout pregnancy. We conducted a cross-sectional prospective study between 19 and 36 weeks' gestation. The study included singleton pregnancies with normal pregnancy follow-up. The pancreas circumference was measured. The first 90 cases were tested to assess feasibility. Two hundred ninety-seven fetuses of nondiabetic mothers were recruited during a 3-year period. The overall satisfactory visualization rate was 61.6%. The intraobserver and interobserver variability had high intraclass correlation coefficients of 0.964 and 0.967, respectively. A cubic polynomial regression best described the correlation of pancreas circumference with gestational age (r = 0.744). Pancreas circumference percentiles for each week of gestation were calculated. During the study period, we detected 2 cases with overgrowth syndrome and 1 case with an annular pancreas. In this study, we assessed the feasibility of sonography for measuring the fetal pancreas and established a normal reference range for the fetal pancreas circumference throughout pregnancy. This database can be helpful when investigating fetomaternal disorders that can affect its normal development. © 2017 by the American Institute of Ultrasound in Medicine.

  3. Application of the Oral Minimal Model to Korean Subjects with Normal Glucose Tolerance and Type 2 Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Min Hyuk Lim

    2016-06-01

    Full Text Available Background: The oral minimal model is a simple, useful tool for the assessment of β-cell function and insulin sensitivity across the spectrum of glucose tolerance, including normal glucose tolerance (NGT), prediabetes, and type 2 diabetes mellitus (T2DM), in humans. Methods: Plasma glucose, insulin, and C-peptide levels were measured during a 180-minute, 75-g oral glucose tolerance test in 24 Korean subjects with NGT (n=10) and T2DM (n=14). The parameters in the computational model were estimated, and the indexes for insulin sensitivity and β-cell function were compared between the NGT and T2DM groups. Results: The insulin sensitivity index was lower in the T2DM group than in the NGT group. The basal index of β-cell responsivity, the basal hepatic insulin extraction ratio, and the post-glucose-challenge hepatic insulin extraction ratio did not differ between the NGT and T2DM groups. The dynamic, static, and total β-cell responsivity indexes were significantly lower in the T2DM group than in the NGT group, as were the dynamic, static, and total disposition indexes. Conclusion: The oral minimal model can be reproducibly applied to evaluate β-cell function and insulin sensitivity in Koreans.
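
    For orientation, the glucose part of the oral minimal model is commonly written as dG/dt = -(SG + X)·G + SG·Gb + Ra(t)/V and dX/dt = -p2·[X - SI·(I(t) - Ib)], where X is remote insulin action and Ra(t) is the rate of oral glucose appearance. A toy integration is sketched below; the parameter values, insulin profile, and Ra profile are invented for illustration and are not the paper's estimates:

```python
import numpy as np
from scipy.integrate import solve_ivp

Gb, Ib = 90.0, 10.0                 # basal glucose (mg/dL), insulin (uU/mL); illustrative
SG, SI, p2, V = 0.02, 7e-4, 0.02, 1.6

def insulin(t):                     # placeholder plasma insulin profile (uU/mL)
    return Ib + 40.0 * np.exp(-((t - 40.0) / 30.0) ** 2)

def ra(t):                          # placeholder oral glucose appearance (mg/kg/min)
    return 6.0 * (t / 30.0) * np.exp(1.0 - t / 30.0) if t > 0 else 0.0

def rhs(t, y):
    g, x = y
    dg = -(SG + x) * g + SG * Gb + ra(t) / V
    dx = -p2 * (x - SI * (insulin(t) - Ib))
    return [dg, dx]

sol = solve_ivp(rhs, (0.0, 180.0), [Gb, 0.0], t_eval=np.linspace(0, 180, 7))
print(np.round(sol.y[0], 1))        # simulated plasma glucose over the 180-min OGTT
```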

  4. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
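
    A minimal version of the recommended approach, L1-penalized (LASSO) logistic regression evaluated by repeated cross-validation, can be sketched with scikit-learn; the dose-volume/clinical features below are synthetic, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(3)
n, p = 300, 20                                # patients, candidate predictors
X = rng.normal(size=(n, p))
logit = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 1.0   # only two truly predictive features
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
auc = cross_val_score(lasso, X, y, scoring="roc_auc", cv=cv)
print(f"cross-validated AUC: {auc.mean():.3f} +/- {auc.std():.3f}")

lasso.fit(X, y)
print("non-zero coefficients:", np.flatnonzero(lasso.coef_))  # sparse, interpretable
```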

  5. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  6. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  7. Bifactor model of WISC-IV: Applicability and measurement invariance in low and normal IQ groups.

    Science.gov (United States)

    Gomez, Rapson; Vance, Alasdair; Watson, Shaun

    2017-07-01

    This study examined the applicability and measurement invariance of the bifactor model of the 10 Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) core subtests in groups of children and adolescents (age range from 6 to 16 years) with low (IQ ≤79; N = 229; % male = 75.9) and normal (IQ ≥80; N = 816; % male = 75.0) IQ scores. Results supported this model in both groups, and there was good support for measurement invariance for this model across these groups. For all participants together, the omega hierarchical and explained common variance (ECV) values were high for the general factor and low to negligible for the specific factors. Together, the findings favor the use of the Full Scale IQ (FSIQ) scores of the WISC-IV, but not the subscale index scores. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
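
    For readers unfamiliar with the two indices, the sketch below computes omega hierarchical and explained common variance (ECV) from a set of standardized bifactor loadings; the loadings and the two-specific-factor layout are invented for illustration, not the WISC-IV estimates.

```python
import numpy as np

# Hypothetical standardized loadings: 10 subtests on one general factor,
# each also loading on one of two specific factors (values are invented).
lam_g = np.array([.70, .65, .72, .60, .68, .66, .71, .63, .59, .70])
lam_s = np.array([.30, .25, .20, .35, .30, .20, .25, .30, .20, .25])
groups = [range(0, 5), range(5, 10)]          # subtests per specific factor

uniq = 1.0 - lam_g**2 - lam_s**2              # unique variances
total_var = (lam_g.sum()**2
             + sum(lam_s[list(g)].sum()**2 for g in groups)
             + uniq.sum())

omega_h = lam_g.sum()**2 / total_var          # general-factor saturation
ecv = (lam_g**2).sum() / ((lam_g**2).sum() + (lam_s**2).sum())
print(f"omega hierarchical = {omega_h:.2f}, ECV = {ecv:.2f}")
```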

  8. Enhanced UWB Radio Channel Model for Short-Range Communication Scenarios Including User Dynamics

    DEFF Research Database (Denmark)

    Kovacs, Istvan Zsolt; Nguyen, Tuan Hung; Eggers, Patrick Claus F.

    2005-01-01

    channel model represents an enhancement of the existing IEEE 802.15.3a/4a PAN channel model, where antenna and user-proximity effects are not included. Our investigations showed that significant variations of the received wideband power and time-delay signal clustering are possible due to the human body

  9. Aggregated Demand Modelling Including Distributed Generation, Storage and Demand Response

    OpenAIRE

    Marzooghi, Hesamoddin; Hill, David J.; Verbic, Gregor

    2014-01-01

    It is anticipated that penetration of renewable energy sources (RESs) in power systems will increase further in the next decades mainly due to environmental issues. In the long term of several decades, which we refer to in terms of the future grid (FG), balancing between supply and demand will become dependent on demand actions including demand response (DR) and energy storage. So far, FG feasibility studies have not considered these new demand-side developments for modelling future demand. I...

  10. Development of a multivariable normal tissue complication probability (NTCP) model for tube feeding dependence after curative radiotherapy/chemo-radiotherapy in head and neck cancer

    International Nuclear Information System (INIS)

    Wopken, Kim; Bijl, Hendrik P.; Schaaf, Arjen van der; Laan, Hans Paul van der; Chouvalova, Olga; Steenbakkers, Roel J.H.M.; Doornaert, Patricia; Slotman, Ben J.; Oosting, Sjoukje F.; Christianen, Miranda E.M.C.; Laan, Bernard F.A.M. van der; Roodenburg, Jan L.N.; René Leemans, C.; Verdonck-de Leeuw, Irma M.; Langendijk, Johannes A.

    2014-01-01

    Background and purpose: Curative radiotherapy/chemo-radiotherapy for head and neck cancer (HNC) may result in severe acute and late side effects, including tube feeding dependence. The purpose of this prospective cohort study was to develop a multivariable normal tissue complication probability (NTCP) model for tube feeding dependence 6 months (TUBE M6) after definitive radiotherapy, radiotherapy plus cetuximab or concurrent chemoradiation based on pre-treatment and treatment characteristics. Materials and methods: The study included 355 patients with HNC. TUBE M6 was scored prospectively in a standard follow-up program. To design the prediction model, the penalized learning method LASSO was used, with TUBE M6 as the endpoint. Results: The prevalence of TUBE M6 was 10.7%. The multivariable model with the best performance consisted of the variables: advanced T-stage, moderate to severe weight loss at baseline, accelerated radiotherapy, chemoradiation, radiotherapy plus cetuximab, and the mean dose to the superior and inferior pharyngeal constrictor muscles, to the contralateral parotid gland and to the cricopharyngeal muscle. Conclusions: We developed a multivariable NTCP model for TUBE M6 to identify patients at risk for tube feeding dependence. The dosimetric variables can be used to optimize radiotherapy treatment planning aimed at the prevention of tube feeding dependence and to estimate the benefit of new radiation technologies

  11. Evaluation of gap heat transfer model in ELESTRES for CANDU fuel element under normal operating conditions

    International Nuclear Information System (INIS)

    Lee, Kang Moon; Ohn, Myung Ryong; Im, Hong Sik; Choi, Jong Hoh; Hwang, Soon Taek

    1995-01-01

    The gap conductance between the fuel and the sheath depends strongly on the gap width and has a significant influence on the amount of initial stored energy. The modified Ross and Stoute gap conductance model in ELESTRES is based on a simplified thermal deformation model for steady-state fuel temperature calculations. A review of a series of experiments reveals that fuel pellets crack, relocate, and are eccentrically positioned within the sheath rather than remaining solid concentric cylinders. In this paper, two recently-proposed gap conductance models (the offset gap model and the relocated gap model) are described and applied to calculate fuel-sheath gap conductances under experimental conditions and normal operating conditions in CANDU reactors. The good agreement between the experimentally-inferred and calculated gap conductance values demonstrates that the modified Ross and Stoute model was implemented correctly in ELESTRES. The predictions of the modified Ross and Stoute model provide conservative values for gap heat transfer and fuel surface temperature compared to the offset gap and relocated gap models for a limiting power envelope. 13 figs., 3 tabs., 16 refs. (Author)
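
    As background, gap conductance models of this family combine gas conduction across an effective gap with a radiative term (and, at contact, a solid-contact term). The sketch below is a textbook-style open-gap version with invented inputs; it is not the ELESTRES implementation, and the roughness and temperature-jump handling are simplified assumptions.

```python
SB = 5.67e-8   # Stefan-Boltzmann constant, W/m^2 K^4

def gap_conductance(k_gas, gap, rough, jump, eps_f, eps_s, Tf, Ts):
    """Open-gap conductance (W/m^2 K): gas conduction plus radiation between
    fuel (f) and sheath (s) surfaces. Ross-Stoute-type models add a solid
    contact term when the surfaces touch; that term is omitted here."""
    h_gas = k_gas / (gap + 1.5 * rough + jump)           # effective conduction path
    h_rad = SB * (Tf**2 + Ts**2) * (Tf + Ts) / (1/eps_f + 1/eps_s - 1)
    return h_gas + h_rad

# Illustrative numbers: 20 um gap, helium-like fill gas, hot pellet surface.
print(gap_conductance(k_gas=0.3, gap=20e-6, rough=2e-6, jump=5e-6,
                      eps_f=0.8, eps_s=0.4, Tf=1100.0, Ts=600.0))
```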

  12. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon

    2014-03-01

    Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo maximum likelihood method and the auxiliary variable Markov chain Monte Carlo methods. The Bayesian stochastic approximation Monte Carlo algorithm specifically addresses this problem: It works by sampling from a sequence of approximate distributions with their average converging to the target posterior distribution, where the approximate distributions can be achieved using the stochastic approximation Monte Carlo algorithm. A strong law of large numbers is established for the Bayesian stochastic approximation Monte Carlo estimator under mild conditions. Compared to the Monte Carlo maximum likelihood method, the Bayesian stochastic approximation Monte Carlo algorithm is more robust to the initial guess of model parameters. Compared to the auxiliary variable MCMC methods, the Bayesian stochastic approximation Monte Carlo algorithm avoids the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is not available or very expensive. The Bayesian stochastic approximation Monte Carlo algorithm also provides a general framework for approximate Bayesian analysis. © 2012 Elsevier B.V. All rights reserved.

  13. Tumor vessel normalization after aerobic exercise enhances chemotherapeutic efficacy.

    Science.gov (United States)

    Schadler, Keri L; Thomas, Nicholas J; Galie, Peter A; Bhang, Dong Ha; Roby, Kerry C; Addai, Prince; Till, Jacob E; Sturgeon, Kathleen; Zaslavsky, Alexander; Chen, Christopher S; Ryeom, Sandra

    2016-10-04

    Targeted therapies aimed at tumor vasculature are utilized in combination with chemotherapy to improve drug delivery and efficacy after tumor vascular normalization. Tumor vessels are highly disorganized with disrupted blood flow impeding drug delivery to cancer cells. Although pharmacologic anti-angiogenic therapy can remodel and normalize tumor vessels, there is a limited window of efficacy and these drugs are associated with severe side effects necessitating alternatives for vascular normalization. Recently, moderate aerobic exercise has been shown to induce vascular normalization in mouse models. Here, we provide a mechanistic explanation for the tumor vascular normalization induced by exercise. Shear stress, the mechanical stimuli exerted on endothelial cells by blood flow, modulates vascular integrity. Increasing vascular shear stress through aerobic exercise can alter and remodel blood vessels in normal tissues. Our data in mouse models indicate that activation of calcineurin-NFAT-TSP1 signaling in endothelial cells plays a critical role in exercise-induced shear stress mediated tumor vessel remodeling. We show that moderate aerobic exercise with chemotherapy caused a significantly greater decrease in tumor growth than chemotherapy alone through improved chemotherapy delivery after tumor vascular normalization. Our work suggests that the vascular normalizing effects of aerobic exercise can be an effective chemotherapy adjuvant.

  14. Stochastic Frontier Models with Dependent Errors based on Normal and Exponential Margins || Modelos de frontera estocástica con errores dependientes basados en márgenes normal y exponencial

    Directory of Open Access Journals (Sweden)

    Gómez-Déniz, Emilio

    2017-06-01

    Full Text Available Following the recent work of Gómez-Déniz and Pérez-Rodríguez (2014), this paper extends the results obtained there to the normal-exponential distribution with dependence. Accordingly, the main aim of the present paper is to enhance stochastic production frontier and stochastic cost frontier modelling by proposing a bivariate distribution for dependent errors which allows us to nest the classical models. Closed-form expressions for the error term and technical efficiency are provided. An illustration using real data from the econometric literature is provided to show the applicability of the model proposed.
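
    For orientation, the classical normal-exponential composed-error structure that the paper generalizes can be written as follows (a sketch with assumed notation, not quoted from the paper):

```latex
% Output y_i, frontier f(x_i; beta), two-sided noise v_i,
% one-sided inefficiency u_i >= 0:
\[
\ln y_i = \ln f(x_i;\beta) + \varepsilon_i, \qquad \varepsilon_i = v_i - u_i,
\]
\[
v_i \sim \mathcal{N}(0,\sigma_v^2), \qquad u_i \sim \operatorname{Exp}(\lambda).
\]
% The classical model takes v_i and u_i independent; the paper's bivariate
% specification allows dependence between them and nests the independent case.
```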

  15. Score Normalization using Logistic Regression with Expected Parameters

    NARCIS (Netherlands)

    Aly, Robin

    State-of-the-art score normalization methods use generative models that rely on sometimes unrealistic assumptions. We propose a novel parameter estimation method for score normalization based on logistic regression. Experiments on the Gov2 and CluewebA collections indicate that our method is
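
    The basic mapping involved is a logistic transform of the raw retrieval score into P(relevant | score); the paper's contribution is estimating the regression parameters as expected values rather than fitting them per query. A minimal sketch of the underlying transform on synthetic scores (all numbers invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic retrieval scores with binary relevance labels.
rng = np.random.default_rng(0)
scores_rel = rng.normal(2.0, 1.0, 200)    # scores of relevant documents
scores_non = rng.normal(0.0, 1.0, 800)    # scores of non-relevant documents
X = np.concatenate([scores_rel, scores_non]).reshape(-1, 1)
y = np.concatenate([np.ones(200), np.zeros(800)])

# Map raw scores to calibrated probabilities P(relevant | score).
model = LogisticRegression().fit(X, y)
print(model.predict_proba([[1.5]])[0, 1])
```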

  16. Reference Priors For Non-Normal Two-Sample Problems

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1997-01-01

    The reference prior algorithm (Berger and Bernardo, 1992) is applied to location-scale models with any regular sampling density. A number of two-sample problems are analyzed in this general context, extending the difference, ratio and product of Normal means problems outside Normality, while explicitly

  17. A New Distribution-Random Limit Normal Distribution

    OpenAIRE

    Gong, Xiaolin; Yang, Shuzhen

    2013-01-01

    This paper introduces a new distribution to improve tail risk modeling. Based on the classical normal distribution, we define a new distribution by a series of heat equations. Then, we use market data to verify our model.

  18. Lateral dynamic flight stability of a model hoverfly in normal and inclined stroke-plane hovering

    International Nuclear Information System (INIS)

    Xu, Na; Sun, Mao

    2014-01-01

    Many insects hover with their wings beating in a horizontal plane (‘normal hovering’), while some insects, e.g., hoverflies and dragonflies, hover with inclined stroke-planes. Here, we investigate the lateral dynamic flight stability of a hovering model hoverfly. The aerodynamic derivatives are computed using the method of computational fluid dynamics, and the equations of motion are solved by the techniques of eigenvalue and eigenvector analysis. The following is shown: The flight of the insect is unstable at normal hovering (stroke-plane angle equals 0) and the instability becomes weaker as the stroke-plane angle increases; the flight becomes stable at a relatively large stroke-plane angle (larger than about 24°). As previously shown, the instability at normal hovering is due to a positive roll-moment/side-velocity derivative produced by the ‘changing-LEV-axial-velocity’ effect. When the stroke-plane angle increases, the wings bend toward the back of the body, and the ‘changing-LEV-axial-velocity’ effect decreases; in addition, another effect, called the ‘changing-relative-velocity’ effect (the ‘lateral wind’, which is due to the side motion of the insect, changes the relative velocity of its wings), becomes increasingly stronger. This causes the roll-moment/side-velocity derivative to first decrease and then become negative, resulting in the above change in stability as a function of the stroke-plane angle. (paper)

  19. Galectin-1 Inhibitor OTX008 Induces Tumor Vessel Normalization and Tumor Growth Inhibition in Human Head and Neck Squamous Cell Carcinoma Models.

    Science.gov (United States)

    Koonce, Nathan A; Griffin, Robert J; Dings, Ruud P M

    2017-12-09

    Galectin-1 is a hypoxia-regulated protein and a prognostic marker in head and neck squamous cell carcinomas (HNSCC). Here we assessed the ability of the non-peptidic galectin-1 inhibitor OTX008 to improve tumor oxygenation levels via tumor vessel normalization as well as tumor growth inhibition in two human HNSCC tumor models, the human laryngeal squamous carcinoma SQ20B and the human epithelial type 2 HEp-2. Tumor-bearing mice were treated with OTX008, Anginex, or Avastin, and oxygen levels were determined by fiber-optics and molecular marker pimonidazole binding. Immunofluorescence was used to determine vessel normalization status. Continued OTX008 treatment caused a transient reoxygenation in SQ20B tumors peaking on day 14, while a steady increase in tumor oxygenation was observed over 21 days in the HEp-2 model. A >50% decrease in immunohistochemical staining for tumor hypoxia verified the oxygenation data measured using a partial pressure of oxygen (pO₂) probe. Additionally, OTX008 induced tumor vessel normalization as tumor pericyte coverage increased by approximately 40% without inducing any toxicity. Moreover, OTX008 inhibited tumor growth as effectively as Anginex and Avastin, except in the HEp-2 model where Avastin was found to suspend tumor growth. Galectin-1 inhibitor OTX008 transiently increased overall tumor oxygenation via vessel normalization to various degrees in both HNSCC models. These findings suggest that targeting galectin-1, e.g., with OTX008, may be an effective approach to treat cancer patients as stand-alone therapy or in combination with other standards of care.

  20. Unique properties associated with normal martensitic transition and strain glass transition – A simulation study

    International Nuclear Information System (INIS)

    Wang, Dong; Ni, Yan; Gao, Jinghui; Zhang, Zhen; Ren, Xiaobing; Wang, Yunzhi

    2013-01-01

    Highlights: We model the unique properties of strain glass, which differ from those of normal martensite. We describe the importance of point defects in the formation of strain glass and related properties. The role of point defects can be attributed to a global transition temperature effect (GTTE) and a local field effect (LFE). Abstract: The transition behavior and unique properties associated with normal martensitic transition and strain glass transition are investigated by computer simulations using the phase field method. The simulations are based on a physical model that assumes that point defects alter the thermodynamic stability of martensite and create local lattice distortion. The simulation results show that strain glass transition exhibits different properties from those found in normal martensitic transformations. These unique properties include a diffuse scattering pattern, a “smear” elastic modulus peak, disappearance of the heat flow peak and non-ergodicity. These simulation predictions agree well with the experimental observations

  1. S5-4: Formal Modeling of Affordance in Human-Included Systems

    Directory of Open Access Journals (Sweden)

    Namhun Kim

    2012-10-01

    Full Text Available Although modeling, analysis, and control of human-included systems are necessary, they have been considered challenging problems because of the critical role of humans in complex systems and humans' capability of executing unanticipated actions, both beneficial and detrimental. Thus, to provide systematic approaches to modeling human actions as a part of system behaviors, a formal modeling framework for human-involved systems, in which humans play a controlling role based on their perceptual information, is presented. The theory of affordance provides definitions of human actions and their associated properties; Finite State Automata (FSA) based modeling is capable of mapping nondeterministic humans into computable components in the system representation. In this talk, we investigate the role of perception in human actions in the system operation and examine the representation of perceptual elements in affordance-based modeling formalism. The proposed framework is expected to capture the natural ways in which humans participate in the system as part of its operation. A human-machine cooperative manufacturing system control example and a human agent simulation example will be introduced for illustrative purposes at the end of the presentation.
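
    As a toy illustration of the FSA view of perception-conditioned human actions (the states, percepts, and transitions below are invented, not taken from the talk):

```python
# A minimal finite-state-automaton sketch: an action (transition) fires only
# if the current state affords it, mirroring the affordance-based framing.
class FSA:
    def __init__(self, transitions, start):
        self.t, self.state = transitions, start

    def step(self, percept):
        # Stay in place when the current state does not afford the action.
        self.state = self.t.get((self.state, percept), self.state)
        return self.state

machine = FSA({("idle", "part_arrives"): "inspecting",
               ("inspecting", "part_ok"): "loading",
               ("inspecting", "part_bad"): "idle",
               ("loading", "machine_ready"): "idle"}, "idle")

for p in ["part_arrives", "part_ok", "machine_ready"]:
    print(p, "->", machine.step(p))
```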

  2. Contrast normalization contributes to a biologically-plausible model of receptive-field development in primary visual cortex (V1)

    Science.gov (United States)

    Willmore, Ben D.B.; Bulstrode, Harry; Tolhurst, David J.

    2012-01-01

    Neuronal populations in the primary visual cortex (V1) of mammals exhibit contrast normalization. Neurons that respond strongly to simple visual stimuli – such as sinusoidal gratings – respond less well to the same stimuli when they are presented as part of a more complex stimulus which also excites other, neighboring neurons. This phenomenon is generally attributed to generalized patterns of inhibitory connections between nearby V1 neurons. The Bienenstock, Cooper and Munro (BCM) rule is a neural network learning rule that, when trained on natural images, produces model neurons which, individually, have many tuning properties in common with real V1 neurons. However, when viewed as a population, a BCM network is very different from V1 – each member of the BCM population tends to respond to the same dominant features of visual input, producing an incomplete, highly redundant code for visual information. Here, we demonstrate that, by adding contrast normalization into the BCM rule, we arrive at a neurally-plausible Hebbian learning rule that can learn an efficient sparse, overcomplete representation that is a better model for stimulus selectivity in V1. This suggests that one role of contrast normalization in V1 is to guide the neonatal development of receptive fields, so that neurons respond to different features of visual input. PMID:22230381
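
    A minimal sketch of a BCM-type update with a contrast-normalization step, run on random inputs. The placement of normalization (applied here to a single unit's input rather than across a population of model responses, as in the paper) and all constants are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image patch" inputs; the paper uses natural-image patches.
X = rng.normal(size=(10000, 16))

w = rng.normal(scale=0.1, size=16)
theta, eta, tau = 1.0, 1e-3, 0.01

for x in X:
    x = x / (0.1 + np.linalg.norm(x))   # contrast normalization of the input
    y = max(w @ x, 0.0)                  # rectified response
    w += eta * y * (y - theta) * x       # BCM weight update
    theta += tau * (y**2 - theta)        # sliding modification threshold

print(np.round(w, 2))
```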

  3. Description of the new version 4.0 of the tritium model UFOTRI including user guide

    International Nuclear Information System (INIS)

    Raskob, W.

    1993-08-01

    In view of the future operation of fusion reactors, the release of tritium may play a dominant role during normal operation as well as after accidents. Because of its physical and chemical properties, which differ significantly from those of other radionuclides, the model UFOTRI for assessing the radiological consequences of accidental tritium releases has been developed. It describes the behaviour of tritium in the biosphere and calculates the radiological impact on individuals and the population due to direct exposure and via the ingestion pathways. Processes such as the conversion of tritium gas into tritiated water (HTO) in the soil, re-emission after deposition and the conversion of HTO into organically bound tritium are considered. The use of UFOTRI in its probabilistic mode shows the spectrum of the radiological impact together with the associated probability of occurrence. A first model version was established in 1991. As ongoing work on the main processes of tritium behaviour in the environment has produced new results, the model has been improved in several respects. The report describes the changes incorporated into the model since 1991. Additionally, it provides the updated user guide for handling the revised UFOTRI version, which will be distributed to interested organizations. (orig.)

  4. A Mathematical Framework for Critical Transitions: Normal Forms, Variance and Applications

    Science.gov (United States)

    Kuehn, Christian

    2013-06-01

    Critical transitions occur in a wide variety of applications including mathematical biology, climate change, human physiology and economics. Therefore it is highly desirable to find early-warning signs. We show that it is possible to classify critical transitions by using bifurcation theory and normal forms in the singular limit. Based on this elementary classification, we analyze stochastic fluctuations and calculate scaling laws of the variance of stochastic sample paths near critical transitions for fast-subsystem bifurcations up to codimension two. The theory is applied to several models: the Stommel-Cessi box model for the thermohaline circulation from geoscience, an epidemic-spreading model on an adaptive network, an activator-inhibitor switch from systems biology, a predator-prey system from ecology and to the Euler buckling problem from classical mechanics. For the Stommel-Cessi model we compare different detrending techniques to calculate early-warning signs. In the epidemics model we show that link densities could be better variables for prediction than population densities. The activator-inhibitor switch demonstrates effects in three time-scale systems and points out that excitable cells and molecular units have information for subthreshold prediction. In the predator-prey model explosive population growth near a codimension-two bifurcation is investigated and we show that early-warnings from normal forms can be misleading in this context. In the biomechanical model we demonstrate that early-warning signs for buckling depend crucially on the control strategy near the instability which illustrates the effect of multiplicative noise.
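
    As a concrete instance of the variance scaling laws mentioned, consider the one-dimensional fold normal form with additive noise (standard material, sketched here with assumed notation):

```latex
\[
dx = \left(-p - x^{2}\right)dt + \sigma\, dW_t , \qquad p < 0 .
\]
% Linearizing about the stable equilibrium x_*(p) = \sqrt{-p} gives an
% Ornstein-Uhlenbeck process whose stationary variance grows as the fold
% bifurcation at p = 0 is approached:
\[
\operatorname{Var}(x) \;\approx\; \frac{\sigma^{2}}{4\sqrt{-p}}
\;\sim\; (-p)^{-1/2},
\]
% which is the kind of early-warning signature analyzed in the paper.
```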

  5. PREDICTION OF BLOOD PATTERN IN S-SHAPED MODEL OF ARTERY UNDER NORMAL BLOOD PRESSURE

    Directory of Open Access Journals (Sweden)

    Mohd Azrul Hisham Mohd Adib

    2013-06-01

    Full Text Available Athletes are susceptible to a wide variety of traumatic and non-traumatic vascular injuries to the lower limb. This paper aims to predict the three-dimensional flow pattern of blood through an S-shaped geometrical artery model. This model was created using Fluid Structure Interaction (FSI) software. The geometrical S-shaped artery model is suitable for understanding the pattern of blood flow under constant normal blood pressure. In this study, a numerical method is used that works on the assumption that the blood is incompressible and Newtonian; thus, a laminar type of flow can be considered. The authors have compared the results with a previous study for FSI validation. The validation and verification of the simulation studies are performed by comparing the maximum velocity at t = 0.4 s, because at this time the blood accelerates rapidly. In addition, the resulting blood flow at various times, under the same boundary conditions in the S-shaped geometrical artery model, is presented. The graph shows that velocity increases linearly with time. Thus, it can be concluded that the flow of blood increases with respect to the pressure inside the body.

  6. BioModels: expanding horizons to include more modelling approaches and formats.

    Science.gov (United States)

    Glont, Mihai; Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Malik-Sheriff, Rahuman S; Chelliah, Vijayalakshmi; Le Novère, Nicolas; Hermjakob, Henning

    2018-01-04

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Exact scaling solutions in normal and Brans-Dicke models of dark energy

    International Nuclear Information System (INIS)

    Arias, Olga; Gonzalez, Tame; Leyva, Yoelsy; Quiros, Israel

    2003-01-01

    A linear relationship between the Hubble expansion parameter and the time derivative of the scalar field is explored in order to derive exact cosmological, attractor-like solutions, both in Einstein's theory and in Brans-Dicke gravity with two fluids: a background fluid of ordinary matter and a self-interacting scalar-field fluid accounting for the dark energy in the universe. A priori assumptions about the functional form of the self-interaction potential or about the scale factor behaviour are not necessary. These are obtained as outputs of the assumed relationship between the Hubble parameter and the time derivative of the scalar field. A parametric class of scaling quintessence models given by a self-interaction potential of a peculiar form, a combination of exponentials with dependence on the barotropic index of the background fluid, arises. Both normal quintessence described by a self-interacting scalar field minimally coupled to gravity and Brans-Dicke quintessence given by a non-minimally coupled scalar field are then analysed and the relevance of these models for the description of the cosmic evolution is discussed in some detail. The stability of these solutions is also briefly commented on

  8. Continuum modeling of ion-beam eroded surfaces under normal incidence: Impact of stochastic fluctuations

    International Nuclear Information System (INIS)

    Dreimann, Karsten; Linz, Stefan J.

    2010-01-01

    Graphical abstract: deterministic surface pattern (left) and its stochastic counterpart (right) arising in a stochastic damped Kuramoto-Sivashinsky equation that serves as a model equation for ion-beam eroded surfaces and is systematically investigated. Abstract: Using a recently proposed field equation for the surface evolution of ion-beam eroded semiconductor target materials under normal incidence, we systematically explore the impact of additive stochastic fluctuations that are permanently present during the erosion process. Specifically, we investigate the dependence of the surface roughness, the underlying pattern-forming properties and the bifurcation behavior on the strength of the fluctuations.
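
    The field equation in question is commonly written as a damped Kuramoto-Sivashinsky equation with additive noise; a sketch of this standard form follows (coefficients and notation assumed, not quoted from the paper):

```latex
% Surface height h(x, t) under normal-incidence ion erosion:
\[
\partial_t h \;=\; -\alpha h \;-\; \nu \nabla^2 h \;-\; \kappa \nabla^4 h
\;+\; \frac{\lambda}{2}\,(\nabla h)^2 \;+\; \eta(\mathbf{x},t),
\]
\[
\langle \eta(\mathbf{x},t)\,\eta(\mathbf{x}',t')\rangle
= 2D\,\delta(\mathbf{x}-\mathbf{x}')\,\delta(t-t'),
\]
% where alpha > 0 is the damping term that distinguishes the damped KS
% equation from the standard one, and D sets the strength of the additive
% fluctuations whose impact the abstract investigates.
```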

  9. A thermal conductivity model for nanofluids including effect of the temperature-dependent interfacial layer

    International Nuclear Information System (INIS)

    Sitprasert, Chatcharin; Dechaumphai, Pramote; Juntasaro, Varangrat

    2009-01-01

    The interfacial layer of nanoparticles has been recently shown to have an effect on the thermal conductivity of nanofluids. There is, however, still no thermal conductivity model that includes the effects of temperature and nanoparticle size variations on the thickness and consequently on the thermal conductivity of the interfacial layer. In the present work, the stationary model developed by Leong et al. (J Nanopart Res 8:245-254, 2006) is initially modified to include the thermal dispersion effect due to the Brownian motion of nanoparticles. This model is called the 'Leong et al.'s dynamic model'. However, the Leong et al.'s dynamic model over-predicts the thermal conductivity of nanofluids in the case of the flowing fluid. This suggests that the enhancement in the thermal conductivity of the flowing nanofluids due to the increase in temperature does not come from the thermal dispersion effect. It is more likely that the enhancement in heat transfer of the flowing nanofluids comes from the temperature-dependent interfacial layer effect. Therefore, the Leong et al.'s stationary model is again modified to include the effect of temperature variation on the thermal conductivity of the interfacial layer for different sizes of nanoparticles. This present model is then evaluated and compared with the other thermal conductivity models for the turbulent convective heat transfer in nanofluids along a uniformly heated tube. The results show that the present model is more general than the other models in the sense that it can predict both the temperature and the volume fraction dependence of the thermal conductivity of nanofluids for both non-flowing and flowing fluids. Also, it is found to be more accurate than the other models due to the inclusion of the effect of the temperature-dependent interfacial layer. In conclusion, the present model can accurately predict the changes in thermal conductivity of nanofluids due to the changes in volume fraction and temperature for

  10. Quantum arrival times and operator normalization

    International Nuclear Information System (INIS)

    Hegerfeldt, Gerhard C.; Seidel, Dirk; Gonzalo Muga, J.

    2003-01-01

    A recent approach to arrival times used the fluorescence of an atom entering a laser-illuminated region, and the resulting arrival-time distribution was close to the axiomatic distribution of Kijowski, but not exactly equal to it, neither in limiting cases nor after compensation of reflection losses by normalization on the level of expectation values. In this paper we employ a normalization on the level of operators, recently proposed in a slightly different context. We show that in this case the axiomatic arrival-time distribution of Kijowski is recovered as a limiting case. In addition, it is shown that Allcock's complex potential model is also a limit of the physically motivated fluorescence approach and connected to Kijowski's distribution through operator normalization

  11. Normal radiographic findings. 4. act. ed.

    International Nuclear Information System (INIS)

    Moeller, T.B.

    2003-01-01

    This book can serve the reader in three ways: First, it presents normal findings for all radiographic techniques including KM. Important data which are criteria of normal findings are indicated directly in the pictures and are also explained in full text and in summary form. Secondly, it teaches the systematics of interpreting a picture: how to look at it, what structures to regard in what order, and what to look for in particular. Checklists are presented in each case. Thirdly, findings are formulated in accordance with the image analysis procedure. All criteria of normal findings are defined in these formulations, which makes them an important didactic element. (orig.)

  12. Mixed normal inference on multicointegration

    NARCIS (Netherlands)

    Boswijk, H.P.

    2009-01-01

    Asymptotic likelihood analysis of cointegration in I(2) models, see Johansen (1997, 2006), Boswijk (2000) and Paruolo (2000), has shown that inference on most parameters is mixed normal, implying hypothesis test statistics with an asymptotic χ² null distribution. The asymptotic distribution of the

  13. Spinal cord normalization in multiple sclerosis.

    Science.gov (United States)

    Oh, Jiwon; Seigo, Michaela; Saidha, Shiv; Sotirchos, Elias; Zackowski, Kathy; Chen, Min; Prince, Jerry; Diener-West, Marie; Calabresi, Peter A; Reich, Daniel S

    2014-01-01

    Spinal cord (SC) pathology is common in multiple sclerosis (MS), and measures of SC-atrophy are increasingly utilized. Normalization reduces biological variation of structural measurements unrelated to disease, but optimal parameters for SC volume (SCV)-normalization remain unclear. Using a variety of normalization factors and clinical measures, we assessed the effect of SCV normalization on detecting group differences and clarifying clinical-radiological correlations in MS. 3T cervical SC-MRI was performed in 133 MS cases and 11 healthy controls (HC). Clinical assessment included expanded disability status scale (EDSS), MS functional composite (MSFC), quantitative hip-flexion strength ("strength"), and vibration sensation threshold ("vibration"). SCV between C3 and C4 was measured and normalized individually by subject height, SC-length, and intracranial volume (ICV). There were group differences in raw-SCV and after normalization by height and length (MS vs. HC; progressive vs. relapsing MS-subtypes). Clinical correlations were strongest with normalization by length (EDSS: r = -.43; MSFC: r = .33; strength: r = .38; vibration: r = -.40) and height (EDSS: r = -.26; MSFC: r = .28; strength: r = .22; vibration: r = -.29), but diminished with normalization by ICV (EDSS: r = -.23; MSFC: r = -.10; strength: r = .23; vibration: r = -.35). In relapsing MS, normalization by length allowed statistical detection of correlations that were not apparent with raw-SCV. SCV-normalization by length improves the ability to detect group differences, strengthens clinical-radiological correlations, and is particularly relevant in settings of subtle disease-related SC-atrophy in MS. SCV-normalization by length may enhance the clinical utility of measures of SC-atrophy. Copyright © 2014 by the American Society of Neuroimaging.

  14. Cerebellar abnormalities contribute to disability including cognitive impairment in multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Katrin Weier

    Full Text Available The cerebellum is known to be involved not only in motor but also cognitive and affective processes. Structural changes in the cerebellum in relation to cognitive dysfunction are an emerging topic in the field of neuro-psychiatric disorders. In Multiple Sclerosis (MS), cerebellar motor and cognitive dysfunction occur in parallel, early in the onset of the disease, and the cerebellum is one of the predilection sites of atrophy. This study is aimed at determining the relationship between cerebellar volumes, clinical cerebellar signs, cognitive functioning and fatigue in MS. Cerebellar volumetry was conducted using T1-weighted MPRAGE magnetic resonance imaging of 172 MS patients. All patients underwent a clinical and brief neuropsychological assessment (information processing speed, working memory), including fatigue testing. Patients with and without cerebellar signs differed significantly regarding normalized cerebellar total volume (nTCV), normalized brain volume (nBV) and whole brain T2 lesion volume (LV). Patients with cerebellar dysfunction likewise performed worse in cognitive tests. A regression analysis indicated that age and nTCV explained 26.3% of the variance in SDMT (Symbol Digit Modalities Test) performance. However, only age, T2 LV and nBV remained predictors in the full model (r² = 0.36). The full model for the prediction of PASAT (Paced Auditory Serial Addition Test) scores (r² = 0.23) included age, cerebellar and T2 LV. In the case of fatigue, only age and nBV (r² = 0.17) emerged as significant predictors. These data support the view that cerebellar abnormalities contribute to disability, including cognitive impairment in MS. However, this contribution does not seem to be independent of, and may even be dominated by, wider spread MS pathology as reflected by nBV and T2 LV.

  15. The effect of inertia, viscous damping, temperature and normal stress on chaotic behaviour of the rate and state friction model

    Science.gov (United States)

    Sinha, Nitish; Singh, Arun K.; Singh, Trilok N.

    2018-04-01

    A fundamental understanding of frictional sliding at rock surfaces is of practical importance for the nucleation and propagation of earthquakes and for rock slope stability. We investigate numerically the effect of different physical parameters such as inertia, viscous damping, temperature and normal stress on the chaotic behaviour of the two-state-variable rate and state friction (2sRSF) model. In general, a slight variation in any of inertia, viscous damping, temperature and effective normal stress reduces the chaotic behaviour of the sliding system. However, the present study has shown the appearance of chaos for specific values of normal stress before it disappears again as the normal stress varies further. It is also observed that the magnitude of system stiffness at which chaotic motion occurs is less than the corresponding value of critical stiffness determined using linear stability analysis. These results help explain the practical observation, reported in the literature, that chaotic nucleation of an earthquake is a rare phenomenon.
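
    For reference, a standard two-state-variable rate-and-state friction law reads as follows (Dieterich-type state evolution shown; the paper's exact formulation may differ):

```latex
\[
\mu = \mu_0 + a \ln\!\frac{v}{v_0}
      + b_1 \ln\!\frac{v_0\,\theta_1}{L_1}
      + b_2 \ln\!\frac{v_0\,\theta_2}{L_2},
\qquad
\frac{d\theta_i}{dt} = 1 - \frac{v\,\theta_i}{L_i},
\]
% with slip rate v, reference rate v_0, state variables theta_i and
% characteristic slip distances L_i. Inertia, viscous damping, temperature
% and normal stress enter through the spring-slider equation of motion
% that drives v, which is where the parameters studied above act.
```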

  16. Evaluation of factors in development of Vis/NIR spectroscopy models for discriminating PSE, DFD and normal broiler breast meat

    Science.gov (United States)

    1. To evaluate the performance of visible and near-infrared (Vis/NIR) spectroscopic models for discriminating true pale, soft and exudative (PSE), normal and dark, firm and dry (DFD) broiler breast meat in different conditions of preprocessing methods, spectral ranges, characteristic wavelength sele...

  17. CT of Normal Developmental and Variant Anatomy of the Pediatric Skull: Distinguishing Trauma from Normality.

    Science.gov (United States)

    Idriz, Sanjin; Patel, Jaymin H; Ameli Renani, Seyed; Allan, Rosemary; Vlahos, Ioannis

    2015-01-01

    The use of computed tomography (CT) in clinical practice has been increasing rapidly, with the number of CT examinations performed in adults and children rising by 10% per year in England. Because the radiology community strives to reduce the radiation dose associated with pediatric examinations, external factors, including guidelines for pediatric head injury, are raising expectations for use of cranial CT in the pediatric population. Thus, radiologists are increasingly likely to encounter pediatric head CT examinations in daily practice. The variable appearance of cranial sutures at different ages can be confusing for inexperienced readers of radiologic images. The evolution of multidetector CT with thin-section acquisition increases the clarity of some of these sutures, which may be misinterpreted as fractures. Familiarity with the normal anatomy of the pediatric skull, how it changes with age, and normal variants can assist in translating the increased resolution of multidetector CT into more accurate detection of fractures and confident determination of normality, thereby reducing prolonged hospitalization of children with normal developmental structures that have been misinterpreted as fractures. More important, the potential morbidity and mortality related to false-negative interpretation of fractures as normal sutures may be avoided. The authors describe the normal anatomy of all standard pediatric sutures, common variants, and sutural mimics, thereby providing an accurate and safe framework for CT evaluation of skull trauma in pediatric patients. (©)RSNA, 2015.

  18. Crack growth resistance for anisotropic plasticity with non-normality effects

    DEFF Research Database (Denmark)

    Tvergaard, Viggo; Legarth, Brian Nyvang

    2006-01-01

    For a plastically anisotropic solid a plasticity model using a plastic flow rule with non-normality is applied to predict crack growth. The fracture process is modelled in terms of a traction–separation law specified on the crack plane. A phenomenological elastic–viscoplastic material model is applied, using one of two different anisotropic yield criteria to account for the plastic anisotropy, and in each case the effect of the normality flow rule is compared with the effect of non-normality. Conditions of small scale yielding are assumed, with mode I loading conditions far from the crack-tip, and various directions of the crack plane relative to the principal axes of the anisotropy are considered. It is found that the steady-state fracture toughness is significantly reduced when the non-normality flow rule is used. Furthermore, it is shown that the predictions are quite sensitive to the value

  19. Multivariate normal tissue complication probability modeling of gastrointestinal toxicity after external beam radiotherapy for localized prostate cancer

    International Nuclear Information System (INIS)

    Cella, Laura; D’Avino, Vittoria; Liuzzi, Raffaele; Conson, Manuel; Doria, Francesca; Faiella, Adriana; Loffredo, Filomena; Salvatore, Marco; Pacelli, Roberto

    2013-01-01

    The risk of radio-induced gastrointestinal (GI) complications is affected by several factors other than the dose to the rectum, such as patient characteristics, hormonal or antihypertensive therapy, and acute rectal toxicity. The purpose of this work is to study clinical and dosimetric parameters impacting late GI toxicity after prostate external beam radiotherapy (RT) and to establish a multivariate normal tissue complication probability (NTCP) model for radiation-induced GI complications. A total of 57 men who had undergone definitive RT for prostate cancer were evaluated for GI events classified using the RTOG/EORTC scoring system. Their median age was 73 years (range 53–85). The patients were assessed for GI toxicity before, during, and periodically after RT completion. Several clinical variables along with rectum dose-volume parameters (Vx) were collected, and their correlation to GI toxicity was analyzed by Spearman's rank correlation coefficient (Rs). A multivariate logistic regression method using resampling techniques was applied to select model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). At a median follow-up of 30 months, 37% (21/57) of patients developed G1-2 acute GI events, while 33% (19/57) were diagnosed with G1-2 late GI events. An NTCP model for late mild/moderate GI toxicity based on three variables, including V65 (OR = 1.03), antihypertensive and/or anticoagulant (AH/AC) drugs (OR = 0.24), and acute GI toxicity (OR = 4.3), was selected as the most predictive model (Rs = 0.47, p < 0.001; AUC = 0.79). This three-variable model outperforms the logistic model based on V65 only (Rs = 0.28, p < 0.001; AUC = 0.69). We propose a logistic NTCP model for late GI toxicity considering not only rectal irradiation dose but also clinical patient-specific factors. Accordingly, the risk of G1-2 late GI increases as V65 increases, and it is higher for patients experiencing
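
    Written out, the selected three-variable model has the usual logistic form; a sketch using the odds ratios quoted above (the intercept and the exact coding of the binary factors are not given in the record, so they are left symbolic):

```latex
\[
\mathrm{NTCP} = \frac{1}{1 + e^{-S}}, \qquad
S = \beta_0 + \beta_1\,V_{65} + \beta_2\,\mathrm{AH/AC}
    + \beta_3\,(\text{acute GI}),
\]
% with V_65 the rectal volume fraction receiving at least 65 Gy, AH/AC and
% acute GI as binary indicators, and coefficients recovered from the quoted
% odds ratios via beta_k = ln(OR_k):
% beta_1 = ln 1.03, beta_2 = ln 0.24, beta_3 = ln 4.3.
```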

  20. Environmental dosimetry for normal operations at SRP. Revision 1

    International Nuclear Information System (INIS)

    Marter, W.L.

    1984-01-01

    The radiological effect of environmental releases from SRP during normal operations has been assessed annually since 1972 with a dosimetry model developed by SRL in 1971 to 1972, as implemented in the MREM code for atmospheric releases and the RIVDOSE code for liquid releases. In 1978, SRL began using environmental models and dose commitment factors developed by the Nuclear Regulatory Commission (NRC) for all other environmental dose calculations. The NRC models are more flexible than the older SRL models, use more up-to-date methodologies, cover more exposure pathways, and permit more detailed analysis of the effects of normal operations. It is recommended that the NRC models, as implemented in the computer codes X0QD0Q and GASPAR for atmospheric releases and LADTAP for liquid releases, and NRC dose commitment factors be used as the standard method at SRP for assessing offsite dose from normal operations in Health Protection Department annual environmental monitoring reports, and in National Environmental Policy Act documents and Safety Analysis Reports for SRP facilities. 23 references, 3 figures, 9 tables

  1. Modeling of Temperature-Dependent Noise in Silicon Nanowire FETs including Self-Heating Effects

    Directory of Open Access Journals (Sweden)

    P. Anandan

    2014-01-01

    Full Text Available Silicon nanowires are leading the CMOS era toward the downsizing limit, and their nature effectively suppresses short-channel effects. Accurate modeling of thermal noise in nanowires is crucial for RF applications of emerging nano-CMOS technologies. In this work, a temperature-dependent noise model for silicon nanowire FETs including self-heating effects has been derived, and its effects on device parameters have been observed. The power spectral density as a function of thermal resistance shows significant improvement as the channel length decreases. The effects of thermal noise including self-heating of the device are explored. Moreover, the significant reduction in noise with respect to channel thermal resistance, gate length, and biasing is analyzed.
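
    A back-of-the-envelope sketch of the self-heating effect analyzed here: the channel temperature entering the Johnson-noise power spectral density is raised by the dissipated power times the thermal resistance. All values are illustrative; this is generic device physics, not the paper's compact model.

```python
k_B = 1.380649e-23   # Boltzmann constant, J/K

def channel_noise_psd(T_amb, P_diss, R_th, R_ch):
    """Voltage-noise PSD (V^2/Hz) of a resistive channel whose temperature
    is raised by self-heating through thermal resistance R_th (K/W)."""
    T_ch = T_amb + P_diss * R_th     # self-heating raises channel temperature
    return 4 * k_B * T_ch * R_ch     # Johnson-noise PSD at that temperature

for R_th in (1e4, 1e5, 1e6):         # K/W, rising as the wire shrinks
    print(R_th, channel_noise_psd(T_amb=300.0, P_diss=50e-6,
                                  R_th=R_th, R_ch=10e3))
```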

  2. A scan statistic for continuous data based on the normal probability model

    Directory of Open Access Journals (Sweden)

    Huang Lan

    2009-10-01

    Full Text Available Temporal, spatial and space-time scan statistics are commonly used to detect and evaluate the statistical significance of temporal and/or geographical disease clusters, without any prior assumptions on the location, time period or size of those clusters. Scan statistics are mostly used for count data, such as disease incidence or mortality. Sometimes there is an interest in looking for clusters with respect to a continuous variable, such as lead levels in children or low birth weight. For such continuous data, we present a scan statistic where the likelihood is calculated using the normal probability model. It may also be used for other distributions, while still maintaining the correct alpha level. In an application of the new method, we look for geographical clusters of low birth weight in New York City.
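
    A minimal sketch of the normal-model scan statistic in one dimension: for each candidate window, the log-likelihood ratio compares separate inside/outside means (with a common variance) against a single mean. In practice significance is assessed by Monte Carlo permutation, which is omitted here; the window grid and data are invented.

```python
import numpy as np

def normal_scan_llr(x, i, j):
    """Log-likelihood ratio for window x[i:j] under the normal model:
    separate means inside/outside with a common variance, versus one mean."""
    n = x.size
    inside, outside = x[i:j], np.concatenate([x[:i], x[j:]])
    sse1 = (((inside - inside.mean())**2).sum()
            + ((outside - outside.mean())**2).sum())
    sse0 = ((x - x.mean())**2).sum()
    return 0.5 * n * (np.log(sse0 / n) - np.log(sse1 / n))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200)
x[80:110] -= 1.0                     # planted cluster of low values

best = max(((i, j, normal_scan_llr(x, i, j))
            for i in range(0, 180, 5)
            for j in range(i + 10, min(i + 60, 200), 5)),
           key=lambda t: t[2])
print("best window and LLR:", best)
```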

  3. Predicating magnetorheological effect of magnetorheological elastomers under normal pressure

    International Nuclear Information System (INIS)

    Dong, X; Qi, M; Ma, N; Ou, J

    2013-01-01

    Magnetorheological elastomers (MREs) present a reversible change in shear modulus in an applied magnetic field. For applications and tests of MREs, a normal pressure must be applied on the materials. However, little research has paid attention to the effect of the normal pressure on the properties of MREs. In this study, a theoretical model is established based on the effective permeability rule and consideration of the normal pressure. The results indicate that the normal pressure has a great influence on the magnetic field-induced shear modulus. The shear modulus of MREs increases with increasing normal pressure, and this dependence is more significant at high magnetic field levels.

  4. Relating normalization to neuronal populations across cortical areas.

    Science.gov (United States)

    Ruff, Douglas A; Alberts, Joshua J; Cohen, Marlene R

    2016-09-01

    Normalization, which divisively scales neuronal responses to multiple stimuli, is thought to underlie many sensory, motor, and cognitive processes. In every study where it has been investigated, neurons measured in the same brain area under identical conditions exhibit a range of normalization, from suppression by nonpreferred stimuli (strong normalization) to additive responses to combinations of stimuli (no normalization). Normalization has been hypothesized to arise from interactions between neuronal populations, either in the same or different brain areas, but current models of normalization are not mechanistic and focus on trial-averaged responses. To gain insight into the mechanisms underlying normalization, we examined interactions between neurons that exhibit different degrees of normalization. We recorded from multiple neurons in three cortical areas while rhesus monkeys viewed superimposed drifting gratings. We found that neurons showing strong normalization shared less trial-to-trial variability with other neurons in the same cortical area and more variability with neurons in other cortical areas than did units with weak normalization. Furthermore, the cortical organization of normalization was not random: neurons recorded on nearby electrodes tended to exhibit similar amounts of normalization. Together, our results suggest that normalization reflects a neuron's role in its local network and that modulatory factors like normalization share the topographic organization typical of sensory tuning properties. Copyright © 2016 the American Physiological Society.
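
    The divisive operation referred to is usually written in the Heeger-style form below (a sketch with assumed notation, not an equation from this paper):

```latex
\[
R_i \;=\; \frac{g\, D_i^{\,n}}{\sigma^{n} + \sum_{j \in \mathrm{pool}} D_j^{\,n}},
\]
% where D_i is the excitatory drive to neuron i, the sum runs over the
% normalization pool, and g, n, sigma are constants. A large pooled term
% yields suppression by nonpreferred stimuli (strong normalization), while
% a negligible pooled term leaves responses approximately additive.
```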

  5. Seepage Model for PA Including Drift Collapse

    International Nuclear Information System (INIS)

    Li, G.; Tsang, C.

    2000-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the predictions and analysis performed using the Seepage Model for Performance Assessment (PA) and the Disturbed Drift Seepage Submodel for both the Topopah Spring middle nonlithophysal and lower lithophysal lithostratigraphic units at Yucca Mountain. These results will be used by PA to develop the probability distribution of water seepage into waste-emplacement drifts at Yucca Mountain, Nevada, as part of the evaluation of the long-term performance of the potential repository. This AMR is in accordance with the "Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report" (CRWMS M&O 2000 [153447]). This purpose is accomplished by performing numerical simulations with stochastic representations of hydrological properties, using the Seepage Model for PA, and evaluating the effects of an alternative drift geometry representing a partially collapsed drift using the Disturbed Drift Seepage Submodel. Seepage of water into waste-emplacement drifts is considered one of the principal factors having the greatest impact on long-term safety of the repository system (CRWMS M&O 2000 [153225], Table 4-1). This AMR supports the analysis and simulation that are used by PA to develop the probability distribution of water seepage into drifts, and is therefore a model of primary (Level 1) importance (AP-3.15Q, "Managing Technical Product Inputs"). The intended purpose of the Seepage Model for PA is to support: (1) PA; (2) Abstraction of Drift-Scale Seepage; and (3) the Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR). Seepage into drifts is evaluated by applying numerical models with stochastic representations of hydrological properties and performing flow simulations with multiple realizations of the permeability field around the drift. The Seepage Model for PA uses the distribution of permeabilities derived from air injection testing in niches and in the cross drift to

  6. Seepage Model for PA Including Drift Collapse

    Energy Technology Data Exchange (ETDEWEB)

    G. Li; C. Tsang

    2000-12-20

    The purpose of this Analysis/Model Report (AMR) is to document the predictions and analysis performed using the Seepage Model for Performance Assessment (PA) and the Disturbed Drift Seepage Submodel for both the Topopah Spring middle nonlithophysal and lower lithophysal lithostratigraphic units at Yucca Mountain. These results will be used by PA to develop the probability distribution of water seepage into waste-emplacement drifts at Yucca Mountain, Nevada, as part of the evaluation of the long-term performance of the potential repository. This AMR is in accordance with the "Technical Work Plan for Unsaturated Zone (UZ) Flow and Transport Process Model Report" (CRWMS M&O 2000 [153447]). This purpose is accomplished by performing numerical simulations with stochastic representations of hydrological properties, using the Seepage Model for PA, and evaluating the effects of an alternative drift geometry representing a partially collapsed drift using the Disturbed Drift Seepage Submodel. Seepage of water into waste-emplacement drifts is considered one of the principal factors having the greatest impact on long-term safety of the repository system (CRWMS M&O 2000 [153225], Table 4-1). This AMR supports the analysis and simulation that are used by PA to develop the probability distribution of water seepage into drifts, and is therefore a model of primary (Level 1) importance (AP-3.15Q, "Managing Technical Product Inputs"). The intended purpose of the Seepage Model for PA is to support: (1) PA; (2) Abstraction of Drift-Scale Seepage; and (3) the Unsaturated Zone (UZ) Flow and Transport Process Model Report (PMR). Seepage into drifts is evaluated by applying numerical models with stochastic representations of hydrological properties and performing flow simulations with multiple realizations of the permeability field around the drift. The Seepage Model for PA uses the distribution of permeabilities derived from air injection testing in

  7. Microscopic prediction of speech recognition for listeners with normal hearing in noise using an auditory model.

    Science.gov (United States)

    Jürgens, Tim; Brand, Thomas

    2009-11-01

    This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with the performance of normal-hearing listeners. "Microscopic" is defined in terms of this model twofold. First, the speech recognition rate is predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary parts of human auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of a psychoacoustically and physiologically motivated preprocessing and a simple dynamic-time-warp speech recognizer. The model is evaluated by presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training of the recognizer and testing. The best model performance is yielded by distance measures which focus mainly on small perceptual distances and neglect outliers.
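
    The recognizer at the core of this model is a dynamic-time-warp matcher. Below is a minimal DTW sketch with a plain Euclidean frame distance standing in for the paper's perceptual distance measures; the feature dimensions and templates are made up for illustration.

    import numpy as np

    def dtw_distance(a, b):
        """Dynamic-time-warp distance between feature sequences a (n x d) and b (m x d)."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-level distance
                # extend the cheapest of the three allowed warping steps
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # toy closed-set decision: assign the test token to the nearest template
    rng = np.random.default_rng(0)
    templates = {"a": rng.random((30, 12)), "s": rng.random((25, 12))}
    test = rng.random((28, 12))
    best = min(templates, key=lambda p: dtw_distance(templates[p], test))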

  8. Derivation of three closed loop kinematic velocity models using normalized quaternion feedback for an autonomous redundant manipulator with application to inverse kinematics

    Energy Technology Data Exchange (ETDEWEB)

    Unseren, M.A.

    1993-04-01

    The report discusses the orientation tracking control problem for a kinematically redundant, autonomous manipulator moving in a three-dimensional workspace. The orientation error is derived using the normalized quaternion error method of Ickes, the Luh, Walker, and Paul error method, and a method suggested here utilizing the Rodrigues parameters, all of which are expressed in terms of normalized quaternions. The analytical time derivatives of the orientation errors are determined. The latter, along with the translational velocity error, form a closed loop kinematic velocity model of the manipulator using normalized quaternion and translational position feedback. An analysis of the singularities associated with expressing the models in a form suitable for solving the inverse kinematics problem is given. Two redundancy resolution algorithms originally developed using an open loop kinematic velocity model of the manipulator are extended to properly take into account the orientation tracking control problem. This report furnishes the necessary mathematical framework required prior to experimental implementation of the orientation tracking control schemes on the seven-axis CESARm research manipulator or on the seven-axis Robotics Research K1207i dexterous manipulator, the latter of which is to be delivered to the Oak Ridge National Laboratory in 1993.

  9. Derivation of three closed loop kinematic velocity models using normalized quaternion feedback for an autonomous redundant manipulator with application to inverse kinematics

    International Nuclear Information System (INIS)

    Unseren, M.A.

    1993-04-01

    The report discusses the orientation tracking control problem for a kinematically redundant, autonomous manipulator moving in a three-dimensional workspace. The orientation error is derived using the normalized quaternion error method of Ickes, the Luh, Walker, and Paul error method, and a method suggested here utilizing the Rodrigues parameters, all of which are expressed in terms of normalized quaternions. The analytical time derivatives of the orientation errors are determined. The latter, along with the translational velocity error, form a closed loop kinematic velocity model of the manipulator using normalized quaternion and translational position feedback. An analysis of the singularities associated with expressing the models in a form suitable for solving the inverse kinematics problem is given. Two redundancy resolution algorithms originally developed using an open loop kinematic velocity model of the manipulator are extended to properly take into account the orientation tracking control problem. This report furnishes the necessary mathematical framework required prior to experimental implementation of the orientation tracking control schemes on the seven-axis CESARm research manipulator or on the seven-axis Robotics Research K1207i dexterous manipulator, the latter of which is to be delivered to the Oak Ridge National Laboratory in 1993.
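
    As a small illustration of the normalized-quaternion error idea discussed in these two records (a generic sketch, not Unseren's actual derivation), the orientation error between a desired attitude q_d and the current attitude q can be formed from the quaternion product q_e = q_d * q^-1, whose vector part drives the closed-loop controller:

    import numpy as np

    def quat_mul(p, q):
        """Hamilton product of quaternions in [w, x, y, z] order."""
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return np.array([
            pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw,
        ])

    def quat_conj(q):
        """Conjugate equals inverse for unit quaternions."""
        return np.array([q[0], -q[1], -q[2], -q[3]])

    def orientation_error(q_desired, q_current):
        """Vector part of q_d * q^-1: a 3-vector that vanishes when the
        attitudes coincide, suitable as feedback in a kinematic velocity model."""
        q_e = quat_mul(q_desired, quat_conj(q_current))
        return q_e[1:]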

  10. Neuronal Function in Male Sprague Dawley Rats During Normal Ageing.

    Science.gov (United States)

    Idowu, A J; Olatunji-Bello, I I; Olagunju, J A

    2017-03-06

    During normal ageing, there are physiological changes, especially in high-energy-demanding tissues including the brain and skeletal muscles. Ageing may disrupt homeostasis and leave tissues vulnerable to disease. To establish an appropriate animal model which is readily available and useful for testing therapeutic strategies during normal ageing, we applied behavioral approaches to study age-related changes in memory and motor function as a basis for neuronal function in ageing in male Sprague Dawley rats. Rats aged 3 months (n=5), 6 months (n=5) and 18 months (n=5) were tested using the Novel Object Recognition Task (NORT) and the Elevated Plus Maze (EPM) test. Data were analyzed by ANOVA and the Newman-Keuls post hoc test. The results showed a gradual decline in exploratory behavior and locomotor activity with increasing age across the 3-, 6- and 18-month-old rats, although the values were not statistically significant; grooming activity, in contrast, significantly increased with age. Importantly, we established a novel finding: the minimum distance from the novel object differed significantly between 3- and 18-month-old rats, which may serve as an index of age-related memory impairment in the NORT. Altogether, we conclude that the male Sprague Dawley rat shows age-related changes in neuronal function and may be a useful model for investigating the mechanisms involved in normal ageing.

  11. Particle-based modeling of heterogeneous chemical kinetics including mass transfer.

    Science.gov (United States)

    Sengar, A; Kuipers, J A M; van Santen, Rutger A; Padding, J T

    2017-08-01

    Connecting the macroscopic world of continuous fields to the microscopic world of discrete molecular events is important for understanding several phenomena occurring at physical boundaries of systems. An important example is heterogeneous catalysis, where reactions take place at active surfaces, but the effective reaction rates are determined by transport limitations in the bulk fluid and reaction limitations on the catalyst surface. In this work we study the macro-micro connection in a model heterogeneous catalytic reactor by means of stochastic rotation dynamics. The model is able to resolve the convective and diffusive interplay between participating species, while including adsorption, desorption, and reaction processes on the catalytic surface. Here we apply the simulation methodology to a simple straight microchannel with a catalytic strip. Dimensionless Damkohler numbers are used to comment on the spatial concentration profiles of reactants and products near the catalyst strip and in the bulk. We end the discussion with an outlook on more complicated geometries and increasingly complex reactions.

  12. Particle-based modeling of heterogeneous chemical kinetics including mass transfer

    Science.gov (United States)

    Sengar, A.; Kuipers, J. A. M.; van Santen, Rutger A.; Padding, J. T.

    2017-08-01

    Connecting the macroscopic world of continuous fields to the microscopic world of discrete molecular events is important for understanding several phenomena occurring at physical boundaries of systems. An important example is heterogeneous catalysis, where reactions take place at active surfaces, but the effective reaction rates are determined by transport limitations in the bulk fluid and reaction limitations on the catalyst surface. In this work we study the macro-micro connection in a model heterogeneous catalytic reactor by means of stochastic rotation dynamics. The model is able to resolve the convective and diffusive interplay between participating species, while including adsorption, desorption, and reaction processes on the catalytic surface. Here we apply the simulation methodology to a simple straight microchannel with a catalytic strip. Dimensionless Damkohler numbers are used to comment on the spatial concentration profiles of reactants and products near the catalyst strip and in the bulk. We end the discussion with an outlook on more complicated geometries and increasingly complex reactions.
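
    For orientation, the Damköhler numbers mentioned in these two records compare reaction rates with transport rates. A back-of-envelope sketch with generic textbook definitions and invented magnitudes (the paper may normalize differently):

    # Rough dimensionless groups for a catalytic strip in a microchannel.
    k_s = 1e-3   # surface reaction rate constant (m/s), assumed
    D   = 1e-9   # species diffusivity (m^2/s), assumed
    u   = 1e-2   # mean flow velocity (m/s), assumed
    h   = 1e-4   # channel height (m), assumed

    Da_II = k_s * h / D   # reaction vs. diffusive transport to the wall
    Pe    = u * h / D     # convection vs. diffusion (Peclet number)
    Da_I  = k_s / u       # reaction vs. convective throughput

    print(f"Da_II = {Da_II:.0f}, Pe = {Pe:.0f}, Da_I = {Da_I:.2f}")

    Large Da_II with large Pe is the transport-limited corner discussed in the abstract: the surface consumes reactant faster than cross-channel diffusion can resupply it.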

  13. Trajectories of cortical surface area and cortical volume maturation in normal brain development

    Directory of Open Access Journals (Sweden)

    Simon Ducharme

    2015-12-01

    Full Text Available This is a report of developmental trajectories of cortical surface area and cortical volume in the NIH MRI Study of Normal Brain Development. The quality-controlled sample included 384 individual typically-developing subjects with repeated scanning (1–3 scans per subject, total scans n=753) from 4.9 to 22.3 years of age. The best-fit model (cubic, quadratic, or first-order linear) was identified at each vertex using mixed-effects models, with statistical correction for multiple comparisons using random field theory. Analyses were performed with and without controlling for total brain volume. These data are provided for reference and comparison with other databases. Further discussion and interpretation of cortical developmental trajectories can be found in the associated Ducharme et al.'s article “Trajectories of cortical thickness maturation in normal brain development – the importance of quality control procedures” (Ducharme et al., 2015) [1].

  14. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    Directory of Open Access Journals (Sweden)

    Jin Dai

    2014-01-01

    Full Text Available The similarity between objects is a core research area of data mining. In order to reduce the interference from the uncertainty of natural language, a similarity measurement between normal cloud models is adopted for text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative concepts of texts extracted from the same category are jumped up into a whole category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. Comparisons among different text classifiers over different feature selection sets fully demonstrate that CCJU-TC not only adapts well to different text features, but also outperforms traditional classifiers in classification performance.

  15. CT and MRI normal findings

    International Nuclear Information System (INIS)

    Moeller, T.B.; Reif, E.

    1998-01-01

    This book gives answers to questions frequently asked, especially by trainees and doctors not specialising in the field of radiology: Is that a normal finding? How do I decide? What are the objective criteria? The information presented is three-fold. The normal findings of the usual CT and MRI examinations are shown with high-quality pictures serving as a reference, with important additional information on measures, angles and other criteria describing the normal conditions inscribed. These criteria are further explained and evaluated in accompanying texts, which also teach the systematic approach for individual picture analysis and include a check list of major aspects as a didactic guide for learning. The book is primarily intended for students, radiographers, radiology trainees and doctors from other medical fields, but radiology specialists will also find useful details of help in special cases. (orig./CB) [de

  16. On the projective normality of Artin-Schreier curves

    Directory of Open Access Journals (Sweden)

    Alberto Ravagnani

    2013-11-01

    Full Text Available In this paper we study the projective normality of certain Artin-Schreier curves Y_f defined over a field F of characteristic p by the equation y^q + y = f(x), q being a power of p and f in F[x] being a polynomial in x of degree m, with (m, p) = 1. Many Y_f curves are singular and so, to be precise, here we study the projective normality of appropriate projective models of their normalization.

  17. Observational constraint on the interacting dark energy models including the Sandage-Loeb test

    Science.gov (United States)

    Zhang, Ming-Jian; Liu, Wen-Biao

    2014-05-01

    Two types of interacting dark energy models are investigated using type Ia supernova (SNIa) data, observational Hubble data (OHD), the cosmic microwave background shift parameter, and the secular Sandage-Loeb (SL) test. In the investigation, we have used two sets of parameter priors, WMAP-9 and Planck 2013, which show some interesting differences. We find that the inclusion of the SL test provides an obviously more stringent constraint on the parameters in both models. For the constant coupling model, the errors on the interaction term are reduced to about half of their original scale. Comparing with SNIa and OHD alone, we find that the inclusion of the SL test almost reduces the best-fit interaction to zero, which indicates that higher-redshift observations such as the SL test are necessary to track the evolution of the interaction. For the varying coupling model, data with the inclusion of the SL test show that the parameter at C.L. in Planck priors is , where the constant is characteristic of the severity of the coincidence problem. This indicates that the coincidence problem will be less severe. We then reconstruct the interaction , and we find that the best-fit interaction is also negative, similar to the constant coupling model. However, at high redshift the interaction generally vanishes. We also find that phantom-like dark energy is favored over the ΛCDM model.

  18. MEMLS3&a: Microwave Emission Model of Layered Snowpacks adapted to include backscattering

    Directory of Open Access Journals (Sweden)

    M. Proksch

    2015-08-01

    Full Text Available The Microwave Emission Model of Layered Snowpacks (MEMLS was originally developed for microwave emissions of snowpacks in the frequency range 5–100 GHz. It is based on six-flux theory to describe radiative transfer in snow including absorption, multiple volume scattering, radiation trapping due to internal reflection and a combination of coherent and incoherent superposition of reflections between horizontal layer interfaces. Here we introduce MEMLS3&a, an extension of MEMLS, which includes a backscatter model for active microwave remote sensing of snow. The reflectivity is decomposed into diffuse and specular components. Slight undulations of the snow surface are taken into account. The treatment of like- and cross-polarization is accomplished by an empirical splitting parameter q. MEMLS3&a (as well as MEMLS is set up in a way that snow input parameters can be derived by objective measurement methods which avoid fitting procedures of the scattering efficiency of snow, required by several other models. For the validation of the model we have used a combination of active and passive measurements from the NoSREx (Nordic Snow Radar Experiment campaign in Sodankylä, Finland. We find a reasonable agreement between the measurements and simulations, subject to uncertainties in hitherto unmeasured input parameters of the backscatter model. The model is written in Matlab and the code is publicly available for download through the following website: http://www.iapmw.unibe.ch/research/projects/snowtools/memls.html.

  19. Dipole model analysis of highest precision HERA data, including very low Q²'s

    International Nuclear Information System (INIS)

    Luszczak, A.; Kowalski, H.

    2016-12-01

    We analyse, within a dipole model, the final, inclusive HERA DIS cross-section data in the low-x region, using fully correlated errors. We show that these highest-precision data are very well described within the dipole model framework, starting from Q² values of 3.5 GeV² up to the highest values of Q² = 250 GeV². To analyse the saturation effects we evaluated the data including also the very low Q² region, down to Q² = 0.35 GeV². The fits including this region show a preference for the saturation ansatz.

  20. Clues to γ-secretase, huntingtin and Hirano body normal function using the model organism Dictyostelium discoideum

    Directory of Open Access Journals (Sweden)

    Myre Michael A

    2012-04-01

    Full Text Available Abstract Many neurodegenerative disorders, although related by their destruction of brain function, display remarkable cellular and/or regional pathogenic specificity, likely due to a deregulated functionality of the mutant protein. However, neurodegenerative disease genes, for example huntingtin (HTT), the ataxins and the presenilins (PSEN1/PSEN2), are not simply localized to neurons but are ubiquitously expressed throughout peripheral tissues; it is therefore paramount to properly understand the earliest precipitating events leading to neuronal pathogenesis in order to develop effective long-term therapies. This means, in no uncertain terms, that it is crucial to understand each gene's normal function. Unfortunately, many genes are essential for embryogenesis, which precludes their study in whole organisms. This is true for HTT, the β-amyloid precursor protein (APP) and the presenilins, responsible for early-onset Alzheimer's disease (AD). To better understand neurological disease in humans, many lower and higher eukaryotic models have been established. So the question arises: how reasonable is the use of organisms to study neurological disorders when the model of choice does not contain neurons? Here we will review the surprising and novel emerging use of the model organism Dictyostelium discoideum, a species of soil-living amoeba, as a valuable biomedical tool to study the normal function of neurodegenerative genes. Historically, the evidence on the usefulness of simple organisms to understand the etiology of cellular pathology cannot be denied. But using an organism without a central nervous system to understand diseases of the brain? We will first introduce the life cycle of Dictyostelium, the presence of many disease genes in the genome and how it has provided unique opportunities to identify mechanisms of disease involving actin pathologies, mitochondrial disease, human lysosomal and trafficking disorders and host-pathogen interactions. Secondly, I will

  1. An Algorithm for Higher Order Hopf Normal Forms

    Directory of Open Access Journals (Sweden)

    A.Y.T. Leung

    1995-01-01

    Full Text Available Normal form theory is important for studying the qualitative behavior of nonlinear oscillators. In some cases, higher order normal forms are required to understand the dynamic behavior near an equilibrium or a periodic orbit. However, the computation of high-order normal forms is usually quite complicated. This article provides an explicit formula for the normalization of nonlinear differential equations. The higher order normal form is given explicitly. Illustrative examples include a cubic system, a quadratic system and a Duffing–Van der Pol system. We use exact arithmetic and find that the undamped Duffing equation can be represented by an exact polynomial differential amplitude equation in a finite number of terms.

  2. High-frequency ultrasound measurements of the normal ciliary body and iris.

    Science.gov (United States)

    Garcia, Julian P S; Spielberg, Leigh; Finger, Paul T

    2011-01-01

    To determine the normal ultrasonographic thickness of the iris and ciliary body. This prospective 35-MHz ultrasonographic study included 80 normal eyes of 40 healthy volunteers. The images were obtained at the 12-, 3-, 6-, and 9-o'clock radial meridians, measured at three locations along the radial length of the iris and at the thickest section of the ciliary body. A mixed model was used to estimate eye-site-adjusted means and standard errors and to test the statistical significance of differences in the adjusted results. Parameters included mean thickness, standard deviation, and range. Mean thicknesses at the iris root, midway along the radial length of the iris, and at the juxtapupillary margin were 0.4 ± 0.1, 0.5 ± 0.1, and 0.6 ± 0.1 mm, respectively. Those of the ciliary body, ciliary processes, and ciliary body + ciliary processes were 0.7 ± 0.1, 0.6 ± 0.1, and 1.3 ± 0.2 mm, respectively. This study provides standard, normative thickness data for the iris and ciliary body in healthy adults using ultrasonographic imaging. Copyright 2011, SLACK Incorporated.

  3. A speech production model including the nasal cavity

    DEFF Research Database (Denmark)

    Olesen, Morten

    In order to obtain articulatory analysis of speech production, the model is improved. The standard model, as used in LPC analysis, to a large extent only models the acoustic properties of the speech signal, as opposed to articulatory modelling of the speech production. In spite of this, the LPC model... is by far the most widely used model in speech technology....

  4. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    Science.gov (United States)

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
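
    To make the recommended scales concrete, here is a minimal sketch of pooling C-statistics on the logit scale with a standard DerSimonian-Laird random-effects estimator (the study values and standard errors are invented):

    import numpy as np

    # Hypothetical C-statistics and standard errors from 5 validation studies.
    c = np.array([0.72, 0.68, 0.75, 0.70, 0.66])
    se_c = np.array([0.02, 0.03, 0.025, 0.02, 0.04])

    # Logit transform, where between-study normality is more plausible;
    # delta-method standard errors on the transformed scale.
    y = np.log(c / (1 - c))
    v = (se_c / (c * (1 - c))) ** 2

    # DerSimonian-Laird random-effects meta-analysis.
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (v + tau2)
    y_pooled = np.sum(w_star * y) / np.sum(w_star)

    # Back-transform the pooled logit to a summary C-statistic.
    c_pooled = 1 / (1 + np.exp(-y_pooled))
    print(round(c_pooled, 3), round(tau2, 4))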

  5. Earth's Outer Core Properties Estimated Using Bayesian Inversion of Normal Mode Eigenfrequencies

    Science.gov (United States)

    Irving, J. C. E.; Cottaar, S.; Lekic, V.

    2016-12-01

    The outer core is arguably Earth's most dynamic region, and consists of an iron-nickel liquid with an unknown combination of lighter alloying elements. Frequencies of Earth's normal modes provide the strongest constraints on the radial profiles of compressional wavespeed, VΦ, and density, ρ, in the outer core. Recent great earthquakes have yielded new normal mode measurements; however, mineral physics experiments and calculations are often compared to the Preliminary reference Earth model (PREM), which is 35 years old and does not provide uncertainties. Here we investigate the thermo-elastic properties of the outer core using Earth's free oscillations and a Bayesian framework. To estimate radial structure of the outer core and its uncertainties, we choose to exploit recent datasets of normal mode centre frequencies. Under the self-coupling approximation, centre frequencies are unaffected by lateral heterogeneities in the Earth, for example in the mantle. Normal modes are sensitive to both VΦ and ρ in the outer core, with each mode's specific sensitivity depending on its eigenfunctions. We include a priori bounds on outer core models that ensure compatibility with measurements of mass and moment of inertia. We use Bayesian Monte Carlo Markov Chain techniques to explore different choices in parameterizing the outer core, each of which represents different a priori constraints. We test how results vary (1) assuming a smooth polynomial parametrization, (2) allowing for structure close to the outer core's boundaries, (3) assuming an Equation-of-State and adiabaticity and inverting directly for thermo-elastic parameters. In the second approach we recognize that the outer core may have distinct regions close to the core-mantle and inner core boundaries and investigate models which parameterize the well mixed outer core separately from these two layers. In the last approach we seek to map the uncertainties directly into thermo-elastic parameters including the bulk

  6. Multiple imputation in the presence of non-normal data.

    Science.gov (United States)

    Lee, Katherine J; Carlin, John B

    2017-02-20

    Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
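
    As a concrete illustration of predictive mean matching with type 1 matching (a bare-bones single-predictor sketch, not the full mice-style implementation used in practice):

    import numpy as np

    def pmm_type1_impute(x, y, missing, k=5, rng=None):
        """Impute missing y from predictor x by predictive mean matching.

        Type 1 matching: donor predictions use the posterior-mean fit,
        recipient predictions use a drawn parameter vector. The sigma^2
        draw is omitted for brevity."""
        rng = rng or np.random.default_rng()
        obs = ~missing
        X = np.column_stack([np.ones(len(x)), x])
        Xo, yo = X[obs], y[obs]

        # OLS fit on the observed cases.
        beta_hat, *_ = np.linalg.lstsq(Xo, yo, rcond=None)
        resid = yo - Xo @ beta_hat
        sigma2 = resid @ resid / (obs.sum() - 2)
        beta_star = rng.multivariate_normal(beta_hat,
                                            sigma2 * np.linalg.inv(Xo.T @ Xo))

        pred_obs = Xo @ beta_hat           # donors: posterior-mean predictions
        pred_mis = X[missing] @ beta_star  # recipients: drawn parameters

        y_imp = y.copy()
        for i, p in zip(np.flatnonzero(missing), pred_mis):
            donors = np.argsort(np.abs(pred_obs - p))[:k]  # k closest donors
            y_imp[i] = yo[rng.choice(donors)]  # borrow an observed value
        return y_imp

    Because imputed values are always observed donor values, PMM preserves the incomplete variable's possibly non-normal marginal distribution.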

  7. Including an ocean carbon cycle model into iLOVECLIM (v1.0)

    NARCIS (Netherlands)

    Bouttes, N.; Roche, D.M.V.A.P.; Mariotti, V.; Bopp, L.

    2015-01-01

    The atmospheric carbon dioxide concentration plays a crucial role in the radiative balance and as such has a strong influence on the evolution of climate. Because of the numerous interactions between climate and the carbon cycle, it is necessary to include a model of the carbon cycle within a

  8. Deformation around basin scale normal faults

    International Nuclear Information System (INIS)

    Spahic, D.

    2010-01-01

    Faults in the earth crust occur within a large range of scales, from microscale over mesoscopic to large basin scale faults. Frequently, deformation associated with faulting is not limited to the fault plane alone, but rather forms a combination with continuous near-field deformation in the wall rock, a phenomenon that is generally called fault drag. The correct interpretation and recognition of fault drag is fundamental for the reconstruction of the fault history and determination of fault kinematics, as well as prediction in areas of limited exposure or beyond comprehensive seismic resolution. Based on fault analyses derived from 3D visualization of natural examples of fault drag, the importance of fault geometry for the deformation of marker horizons around faults is investigated. The complex 3D structural models presented here are based on a combination of geophysical datasets and geological fieldwork. On an outcrop scale example of fault drag in the hanging wall of a normal fault, located at St. Margarethen, Burgenland, Austria, data from Ground Penetrating Radar (GPR) measurements, detailed mapping and terrestrial laser scanning were used to construct a high-resolution structural model of the fault plane, the deformed marker horizons and associated secondary faults. In order to obtain geometrical information about the largely unexposed master fault surface, a standard listric balancing dip domain technique was employed. The results indicate that for this normal fault a listric shape can be excluded, as the constructed fault has a geologically meaningless shape cutting upsection into the sedimentary strata. This kinematic modeling result is additionally supported by the observation of deformed horizons in the footwall of the structure. Alternatively, a planar fault model with reverse drag of markers in the hanging wall and footwall is proposed. A second part of this thesis investigates a large scale normal fault

  9. CNN-based ranking for biomedical entity normalization.

    Science.gov (United States)

    Li, Haodi; Chen, Qingcai; Tang, Buzhou; Wang, Xiaolong; Xu, Hua; Wang, Baohua; Huang, Dong

    2017-10-03

    Most state-of-the-art biomedical entity normalization systems, such as rule-based systems, merely rely on morphological information of entity mentions, but rarely consider their semantic information. In this paper, we introduce a novel convolutional neural network (CNN) architecture that regards biomedical entity normalization as a ranking problem and benefits from the semantic information of biomedical entities. The CNN-based ranking method first generates candidates using handcrafted rules, and then ranks the candidates according to their semantic information modeled by the CNN as well as their morphological information. Experiments on two benchmark datasets for biomedical entity normalization show that our proposed CNN-based ranking method outperforms the traditional rule-based method, achieving state-of-the-art performance. We propose a CNN architecture that regards biomedical entity normalization as a ranking problem. Comparison results show that semantic information is beneficial to biomedical entity normalization and can be well combined with morphological information in our CNN architecture for further improvement.

  10. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    Science.gov (United States)

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.

  11. High Performance Electrical Modeling and Simulation Software Normal Environment Verification and Validation Plan, Version 1.0; TOPICAL

    International Nuclear Information System (INIS)

    WIX, STEVEN D.; BOGDAN, CAROLYN W.; MARCHIONDO JR., JULIO P.; DEVENEY, MICHAEL F.; NUNEZ, ALBERT V.

    2002-01-01

    The requirements in modeling and simulation are driven by two fundamental changes in the nuclear weapons landscape: (1) The Comprehensive Test Ban Treaty and (2) The Stockpile Life Extension Program which extends weapon lifetimes well beyond their originally anticipated field lifetimes. The move from confidence based on nuclear testing to confidence based on predictive simulation forces a profound change in the performance asked of codes. The scope of this document is to improve the confidence in the computational results by demonstration and documentation of the predictive capability of electrical circuit codes and the underlying conceptual, mathematical and numerical models as applied to a specific stockpile driver. This document describes the High Performance Electrical Modeling and Simulation software normal environment Verification and Validation Plan

  12. TumorBoost: Normalization of allele-specific tumor copy numbers from a single pair of tumor-normal genotyping microarrays

    Directory of Open Access Journals (Sweden)

    Neuvial Pierre

    2010-05-01

    Full Text Available Abstract Background High-throughput genotyping microarrays assess both total DNA copy number and allelic composition, which makes them a tool of choice for copy number studies in cancer, including total copy number and loss of heterozygosity (LOH) analyses. Even after state-of-the-art preprocessing methods, allelic signal estimates from genotyping arrays still suffer from systematic effects that make them difficult to use effectively for such downstream analyses. Results We propose a method, TumorBoost, for normalizing allelic estimates of one tumor sample based on estimates from a single matched normal. The method applies to any paired tumor-normal estimates from any microarray-based technology, combined with any preprocessing method. We demonstrate that it increases the signal-to-noise ratio of allelic signals, making it significantly easier to detect allelic imbalances. Conclusions TumorBoost increases the power to detect somatic copy-number events (including copy-neutral LOH) in the tumor from allelic signals of Affymetrix or Illumina origin. We also conclude that high-precision allelic estimates can be obtained from a single pair of tumor-normal hybridizations, if TumorBoost is combined with single-array preprocessing methods such as (allele-specific) CRMA v2 for Affymetrix or BeadStudio's (proprietary) XY-normalization method for Illumina. A bounded-memory implementation is available in the open-source and cross-platform R package aroma.cn, which is part of the Aroma Project (http://www.aroma-project.org/).

  13. Currents, HF Radio-derived, Ano Nuevo, Normal Model, Meridional, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The data is the meridional component of ocean surface currents derived from High Frequency Radio-derived measurements, with missing values filled in by a normal...

  14. Currents, HF Radio-derived, Monterey Bay, Normal Model, Meridional, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The data is the meridional component of ocean surface currents derived from High Frequency Radio-derived measurements, with missing values filled in by a normal...

  15. COBE DMR-normalized open inflation cold dark matter cosmogony

    Science.gov (United States)

    Gorski, Krzysztof M.; Ratra, Bharat; Sugiyama, Naoshi; Banday, Anthony J.

    1995-01-01

    A cut-sky orthogonal mode analysis of the 2-year COBE DMR 53 and 90 GHz sky maps (in Galactic coordinates) is used to determine the normalization of an open inflation model based on the cold dark matter (CDM) scenario. The normalized model is compared to measures of large-scale structure in the universe. Although the DMR data alone do not provide sufficient discriminative power to prefer a particular value of the mass density parameter, the open model appears to be reasonably consistent with observations when Ω₀ is approximately 0.3-0.4 and merits further study.

  16. Refitting density dependent relativistic model parameters including Center-of-Mass corrections

    International Nuclear Information System (INIS)

    Avancini, Sidney S.; Marinelli, Jose R.; Carlson, Brett Vern

    2011-01-01

    Full text: Relativistic mean field models have become a standard approach for precise nuclear structure calculations. After the seminal work of Serot and Walecka, which introduced a model Lagrangian density where the nucleons interact through the exchange of scalar and vector mesons, several models were obtained through its generalization, including other meson degrees of freedom, non-linear meson interactions, meson-meson interactions, etc. More recently, density dependent coupling constants were incorporated into the Walecka-like models, which are now extensively used. In particular, for these models a connection with density functional theory can be established. Due to the inherent difficulties presented by field theoretical models, only the mean field approximation is used for the solution of these models. In order to calculate finite nuclei properties in the mean field approximation, a reference frame has to be fixed and therefore the translational symmetry is violated. It is well known that in such a case spurious effects due to the center-of-mass (COM) motion are present, which are more pronounced for light nuclei. In a previous work we proposed a technique based on the Peierls-Yoccoz projection operator applied to the mean-field relativistic solution, in order to project out spurious COM contributions. In this work we obtain a new fit of the density dependent parameters of a density dependent hadronic model, taking into account the COM corrections. Our fit is obtained taking into account the charge radii and binding energies of ⁴He, ¹⁶O, ⁴⁰Ca, ⁴⁸Ca, ⁵⁶Ni, ⁶⁸Ni, ¹⁰⁰Sn, ¹³²Sn and ²⁰⁸Pb. We show that the nuclear observables calculated using our fit are of a quality comparable to others that can be found in the literature, with the advantage that now a translationally invariant many-body wave function is at our disposal. (author)

  17. A numerical model including PID control of a multizone crystal growth furnace

    Science.gov (United States)

    Panzarella, Charles H.; Kassemi, Mohammad

    1992-01-01

    This paper presents a 2D axisymmetric combined conduction and radiation model of a multizone crystal growth furnace. The model is based on a programmable multizone furnace (PMZF) designed and built at NASA Lewis Research Center for growing high-quality semiconductor crystals. A novel feature of this model is a control algorithm which automatically adjusts the power in any number of independently controlled heaters to establish the desired crystal temperatures in the furnace model. The control algorithm eliminates the need for the numerous trial-and-error runs previously required to obtain the same results. The finite element code, FIDAP, used to develop the furnace model, was modified to directly incorporate the control algorithm. This algorithm, which presently uses PID control, and the associated heat transfer model are briefly discussed. Together, they have been used to predict the heater power distributions for a variety of furnace configurations and desired temperature profiles. Examples are included to demonstrate the effectiveness of the PID-controlled model in establishing isothermal, Bridgman, and other complicated temperature profiles in the sample. Finally, an example is given to show how the algorithm can be used to change the desired profile with time according to a prescribed temperature-time evolution.
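
    The control loop itself is easy to prototype outside a finite-element code. Below is a toy discrete PID loop driving a first-order lumped thermal model toward a setpoint; the gains and plant constants are illustrative and untuned, and bear no relation to the FIDAP implementation.

    # Toy PID-controlled heater on a first-order thermal plant, Euler stepping.
    Kp, Ki, Kd = 50.0, 5.0, 1.0    # illustrative gains
    tau, gain = 120.0, 0.01        # plant time constant (s) and K/W, assumed
    T_ambient, setpoint = 300.0, 900.0  # kelvin

    T = T_ambient
    integral, prev_err = 0.0, setpoint - T
    dt = 1.0
    for step in range(3600):
        err = setpoint - T
        integral += err * dt
        deriv = (err - prev_err) / dt
        power = max(0.0, Kp * err + Ki * integral + Kd * deriv)  # heater cannot cool
        # first-order plant: relaxes to ambient, driven by heater power
        T += dt * ((T_ambient - T) + gain * power) / tau
        prev_err = err

    print(f"final temperature: {T:.1f} K (setpoint {setpoint:.0f} K)")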

  18. Pricing FX Options in the Heston/CIR Jump-Diffusion Model with Log-Normal and Log-Uniform Jump Amplitudes

    Directory of Open Access Journals (Sweden)

    Rehez Ahlip

    2015-01-01

    model for the exchange rate with log-normal jump amplitudes and the volatility model with log-uniformly distributed jump amplitudes. We assume that the domestic and foreign stochastic interest rates are governed by the CIR dynamics. The instantaneous volatility is correlated with the dynamics of the exchange rate return, whereas the domestic and foreign short-term rates are assumed to be independent of the dynamics of the exchange rate and its volatility. The main result furnishes a semianalytical formula for the price of the foreign exchange European call option.

  19. MR imaging of the ankle: Normal variants

    International Nuclear Information System (INIS)

    Noto, A.M.; Cheung, Y.; Rosenberg, Z.S.; Norman, A.; Leeds, N.E.

    1987-01-01

    Thirty asymptomatic ankles were studied with high-resolution surface coil MR imaging. The thirty ankles were reviewed for identification of normal structures. The MR appearance of the deltoid and posterior talo-fibular ligaments, peroneus brevis and longus tendons, and posterior aspect of the tibial-talar joint demonstrated several normal variants not previously described. These should not be misinterpreted as pathologic processes. The specific findings included (1) cortical irregularity of the posterior tibial-talar joint in 27 of 30 cases, which should not be mistaken for osteonecrosis; (2) a normal posterior talo-fibular ligament with irregular and frayed inhomogeneity, representing a normal variant, in seven of ten cases; and (3) fluid in the shared peroneal tendon sheath, which may be confused with a longitudinal tendon tear, in three of 30 cases. Ankle imaging with the use of MR is still a relatively new procedure. Further investigation is needed to better define normal anatomy as well as normal variants. The authors describe several structures that normally present with variable MR imaging appearances. This is clinically significant in order to maintain high sensitivity and specificity in MR imaging interpretation.

  20. Normalized modes at selected points without normalization

    Science.gov (United States)

    Kausel, Eduardo

    2018-04-01

    As every textbook on linear algebra demonstrates, the eigenvectors for the general eigenvalue problem |K − λM| = 0 involving two real, symmetric, positive definite matrices K, M satisfy some well-defined orthogonality conditions. Equally well-known is the fact that those eigenvectors can be normalized so that their modal mass μ = ϕᵀMϕ is unity: it suffices to divide each unscaled mode by the square root of the modal mass. Thus, the normalization is the result of an explicit calculation applied to the modes after they were obtained by some means. However, we show herein that the normalized modes are not merely convenient forms of scaling, but that they are actually intrinsic properties of the pair of matrices K, M, that is, the matrices already "know" about normalization even before the modes have been obtained. This means that we can obtain individual components of the normalized modes directly from the eigenvalue problem, without needing to obtain either all of the modes or, for that matter, any one complete mode. These results are achieved by means of the residue theorem of operational calculus, a finding that is rather remarkable inasmuch as the residues themselves do not make use of any orthogonality conditions or normalization in the first place. It appears that this obscure property connecting the general eigenvalue problem of modal analysis with the residue theorem of operational calculus may have been overlooked up until now, but it has in turn interesting theoretical implications.
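
    In numerical practice, the mass-normalized modes this paper analyzes are exactly what a standard generalized eigensolver returns. A quick SciPy check of the orthonormality property (a routine illustration only; it does not reproduce Kausel's residue-theorem construction):

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    n = 5
    A = rng.standard_normal((n, n))
    K = A @ A.T + n * np.eye(n)  # symmetric positive definite "stiffness"
    B = rng.standard_normal((n, n))
    M = B @ B.T + n * np.eye(n)  # symmetric positive definite "mass"

    # eigh solves K phi = lambda M phi and returns eigenvectors scaled so
    # that phi.T @ M @ phi = I, i.e. unit modal mass, with no separate
    # normalization step.
    lam, phi = eigh(K, M)

    assert np.allclose(phi.T @ M @ phi, np.eye(n), atol=1e-10)
    assert np.allclose(phi.T @ K @ phi, np.diag(lam), atol=1e-8)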

  1. Safe distance car-following model including backward-looking and its stability analysis

    Science.gov (United States)

    Yang, Da; Jin, Peter Jing; Pu, Yun; Ran, Bin

    2013-03-01

    The focus of this paper is car-following behavior that includes backward-looking, here called bi-directional looking car-following behavior. This study is motivated by the potential changes in the physical properties of traffic flow caused by the fast-developing intelligent transportation system (ITS), especially the new connected vehicle technology. Existing studies on this topic focused on General Motors (GM) models and optimal velocity (OV) models. The safe-distance car-following model, Gipps' model, which is more widely used in practice, has not drawn much attention in the bi-directional looking context. This paper explores the properties of the bi-directional looking extension of Gipps' safe-distance model. The stability condition of the proposed model is derived using linear stability theory and is verified using numerical simulations. The impacts of the driver and vehicle characteristics appearing in the proposed model on traffic flow stability are also investigated. It is found that taking the backward-looking effect into account in car-following has three types of effect on traffic flow: stabilizing, destabilizing and producing non-physical phenomena. This conclusion is more nuanced than the results based on the OV bi-directional looking car-following models. Moreover, drivers who have a smaller reaction time or a larger additional delay, and who assume the other vehicles have larger maximum decelerations, can stabilize traffic flow.
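
    For orientation, the forward-looking core of a Gipps-type rule caps the follower's speed so that it can always stop behind its leader. The sketch below derives that braking branch from the safe-distance inequality v·tau + v²/(2b) ≤ gap + v_lead²/(2·b_lead); it omits Gipps' acceleration branch and the paper's backward-looking term, and all parameter values are illustrative.

    import numpy as np

    def safe_speed(gap, v_lead, tau=1.0, b=3.0, b_lead=3.0):
        """Maximum speed (m/s) such that reaction-time travel plus braking
        distance fits into the gap plus the leader's stopping distance.
        b and b_lead are positive decelerations (m/s^2)."""
        disc = (b * tau) ** 2 + 2.0 * b * gap + b * v_lead**2 / b_lead
        return max(0.0, -b * tau + np.sqrt(disc))

    # a follower 30 m behind a leader travelling at 20 m/s may do about 21 m/s
    print(round(safe_speed(gap=30.0, v_lead=20.0), 1))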

  2. The impact of Roux-en-Y gastric bypass surgery on normal metabolism in a porcine model.

    Directory of Open Access Journals (Sweden)

    Andreas Lindqvist

    Full Text Available A growing body of literature on Roux-en-Y gastric bypass surgery (RYGB) has generated inconclusive results on the mechanism underlying its beneficial effects on weight loss and glycaemia, partially due to the problems of designing clinical studies with the appropriate controls. Moreover, RYGB is only performed in obese individuals, in whom metabolism is perturbed and not completely understood. In an attempt to isolate the effects of RYGB on normal metabolism, we investigated the effect of RYGB in lean pigs, using sham-operated pair-fed pigs as controls. Two weeks post-surgery, pigs were subjected to an intravenous glucose tolerance test (IVGTT) and circulating metabolites, hormones and lipids were measured. Bile acid composition was profiled after extraction from blood, faeces and the gallbladder. A similar weight development in both groups of pigs validated our experimental model. Despite similar changes in fasting insulin, RYGB pigs had lower fasting glucose levels. During an IVGTT, RYGB pigs had higher insulin and lower glucose levels. VLDL and IDL were lower in RYGB than in sham pigs. RYGB pigs had increased levels of most amino acids, including branched-chain amino acids, but these were more efficiently suppressed by glucose. Levels of bile acids in the gallbladder were higher, whereas plasma and faecal bile acid levels were lower, in RYGB than in sham pigs. In a lean model, RYGB caused lower plasma lipid and bile acid levels, which were compensated for by increased plasma amino acids, suggesting a switch from lipid to protein metabolism during fasting in the immediate postoperative period.

  3. SU-F-T-681: Does the Biophysical Modeling for Immunological Aspects in Radiotherapy Precisely Predict Tumor and Normal Tissue Responses?

    Energy Technology Data Exchange (ETDEWEB)

    Oita, M [Graduate School of Health Sciences, Okayama University, Okayama, Okayama (Japan); Nakata, K [Tokyo University of Science, Noda, Chiba (Japan); Sasaki, M [Tokushima University Hospital, Tokushima, Tokushima (Japan); Tominaga, M [Tokushima University Graduate School, Tokushima, Tokushima (Japan); Aoyama, H [Okayama University Hospital, Okayama, Okayama (Japan); Honda, H [Ehime University Hospital, Tohon, Ehime (Japan); Uto, Y [Tokushima University, Tokushima, Tokushima (Japan)

    2016-06-15

    Purpose: Recent advances in immunotherapy make it possible to combine immunotherapy with radiotherapy. The aim of this study was to assess TCP/NTCP models with immunological aspects, including stochastic distributions as intercellular uncertainties. Methods: In the clinical treatment planning system (Eclipse ver. 11.0, Varian Medical Systems, US), biological parameters such as α/β, D50, γ, n, m and TD50, including repair parameters (bi-exponential repair), can be set to any given values to calculate the TCP/NTCP. Using data for a prostate cancer patient with VMAT, commissioned as a 6-MV photon beam of Novalis-Tx (BrainLab, US) in clinical use, the fraction schedules were hypothesized as 70-78 Gy/35-39 fr, 72-81 Gy/40-45 fr, 52.5-66 Gy/16-22 fr and 35-40 Gy/5 fr, with 5-7 fractions in a week. By use of a stochastic biological model applying a Gaussian distribution, the effects on the TCP/NTCP of variation in the repair parameters of the immune system, as well as of the intercellular uncertainty of tumor and normal tissues, were evaluated. Results: With respect to differences in the α/β, the changes in the TCP/NTCP increased in hypo-fractionated regimens. Differences between the values of n and m affected the variation of the NTCP with the fraction schedules independently. Elongation of the repair half-time (long component) increased the TCP/NTCP by a factor of two or more in the hypo-fractionated schemes. For the tumor, repopulation parameters such as Tpot and Tstart, which act immunologically on the tumor, improved the TCP. Conclusion: Compared to the default fixed values, which affect the probability of cell death and cure, hypo-fractionation schemes seemed to have advantages with respect to the variations in the values of m. An increase of the α/β or TD50 and the repair parameters in tumor and normal tissue through immunological effects is highly plausible. For more precise prediction, treatment planning systems should incorporate the complicated biological optimization in clinical

  4. Modelling of the Peltier effect in magnetic multilayers

    NARCIS (Netherlands)

    Juarez-Acosta, I.; Olivares-Robles, M.A.; Bosu, S.; Sakuraba, Y.; Kubota, T.; Takahashi, S.; Takanashi, K.; Bauer, G.E.W.

    2016-01-01

    We model the charge, spin, and heat currents in ferromagnetic metal/normal metal/normal metal trilayer structures in the two-current model, taking into account bulk and interface thermoelectric properties as well as Joule heating. The results include the temperature distribution as well as

  5. SPheno 3.1: extensions including flavour, CP-phases and models beyond the MSSM

    Science.gov (United States)

    Porod, W.; Staub, F.

    2012-11-01

    We describe recent extensions of the program SPheno including flavour aspects, CP-phases, R-parity violation and low energy observables. In the case of flavour mixing, all masses of supersymmetric particles are calculated including the complete flavour structure and all possible CP-phases at the 1-loop level. We give details on implemented seesaw models, low energy observables and the corresponding extension of the SUSY Les Houches Accord. Moreover, we comment on the possibilities to include MSSM extensions in SPheno. Catalogue identifier: ADRV_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRV_v2_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 154062 No. of bytes in distributed program, including test data, etc.: 1336037 Distribution format: tar.gz Programming language: Fortran95. Computer: PC running under Linux, should run in every Unix environment. Operating system: Linux, Unix. Classification: 11.6. Catalogue identifier of previous version: ADRV_v1_0 Journal reference of previous version: Comput. Phys. Comm. 153(2003)275 Does the new version supersede the previous version?: Yes Nature of problem: The first issue is the determination of the masses and couplings of supersymmetric particles in various supersymmetric models, the R-parity conserved MSSM with generation mixing and including CP-violating phases, various seesaw extensions of the MSSM and the MSSM with bilinear R-parity breaking. Low energy data on Standard Model fermion masses, gauge couplings and electroweak gauge boson masses serve as constraints. Radiative corrections from supersymmetric particles to these inputs must be calculated. Theoretical constraints on the soft SUSY breaking parameters from a high scale theory are imposed and the parameters at the electroweak scale are obtained from the

  6. Strength of Gamma Rhythm Depends on Normalization

    Science.gov (United States)

    Ray, Supratim; Ni, Amy M.; Maunsell, John H. R.

    2013-01-01

    Neuronal assemblies often exhibit stimulus-induced rhythmic activity in the gamma range (30–80 Hz), whose magnitude depends on the attentional load. This has led to the suggestion that gamma rhythms form dynamic communication channels across cortical areas processing the features of behaviorally relevant stimuli. Recently, attention has been linked to a normalization mechanism, in which the response of a neuron is suppressed (normalized) by the overall activity of a large pool of neighboring neurons. In this model, attention increases the excitatory drive received by the neuron, which in turn also increases the strength of normalization, thereby changing the balance of excitation and inhibition. Recent studies have shown that gamma power also depends on such excitatory–inhibitory interactions. Could modulation in gamma power during an attention task be a reflection of the changes in the underlying excitation–inhibition interactions? By manipulating the normalization strength independent of attentional load in macaque monkeys, we show that gamma power increases with increasing normalization, even when the attentional load is fixed. Further, manipulations of attention that increase normalization increase gamma power, even when they decrease the firing rate. Thus, gamma rhythms could be a reflection of changes in the relative strengths of excitation and normalization rather than playing a functional role in communication or control. PMID:23393427
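
    The algebraic core of the normalization model referred to here is divisive: a neuron's response is its excitatory drive divided by a pooled drive plus a semisaturation constant. A toy version (generic textbook form; the attentional gain is a hypothetical illustration) shows how scaling one stimulus boosts both the numerator and the shared pool:

    import numpy as np

    def normalized_responses(drives, sigma=1.0, attend=None, gain=2.0):
        """Divisive normalization: R_i = d_i / (sigma + sum_j d_j).
        If attend is an index, that stimulus's drive is scaled by gain,
        which raises its own response while suppressing the others."""
        d = np.asarray(drives, dtype=float).copy()
        if attend is not None:
            d[attend] *= gain
        return d / (sigma + d.sum())

    print(normalized_responses([5.0, 5.0]))            # symmetric baseline
    print(normalized_responses([5.0, 5.0], attend=0))  # attention to stimulus 0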

  7. Cucker-Smale model with normalized communication weights and time delay

    KAUST Repository

    Choi, Young-Pil; Haskovec, Jan

    2017-01-01

    We study a Cucker-Smale-type system with time delay in which agents interact with each other through normalized communication weights. We construct a Lyapunov functional for the system and provide sufficient conditions for asymptotic flocking, i
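
    A minimal numerical sketch of such a system, with a delay buffer and explicit Euler stepping (the influence function, parameters and initial data are illustrative, not the authors'):

    import numpy as np

    def simulate_flocking(N=10, dt=0.01, steps=5000, tau_delay=0.1, beta=0.5):
        """Cucker-Smale-type dynamics with normalized weights and delay:
        dv_i/dt = sum_j psi_ij (v_j(t - tau) - v_i(t)), where
        psi_ij = phi(|x_j - x_i|) / sum_k phi(|x_k - x_i|),
        phi(r) = (1 + r^2)^(-beta)."""
        rng = np.random.default_rng(0)
        lag = int(round(tau_delay / dt))
        x = rng.standard_normal((N, 2))
        v = rng.standard_normal((N, 2))
        buf = [(x.copy(), v.copy())] * (lag + 1)  # states from t - tau to t

        for _ in range(steps):
            xd, vd = buf[0]  # delayed positions and velocities
            r2 = np.sum((xd[:, None, :] - xd[None, :, :]) ** 2, axis=-1)
            phi = (1.0 + r2) ** (-beta)
            np.fill_diagonal(phi, 0.0)
            psi = phi / phi.sum(axis=1, keepdims=True)  # rows sum to one
            dv = psi @ vd - v  # normalized alignment toward delayed velocities
            x, v = x + dt * v, v + dt * dv
            buf = buf[1:] + [(x.copy(), v.copy())]

        return v.std(axis=0)  # near-zero spread indicates velocity consensus

    print(simulate_flocking())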

  8. Normal Tissue Complication Probability Modeling of Acute Hematologic Toxicity in Cervical Cancer Patients Treated With Chemoradiotherapy

    International Nuclear Information System (INIS)

    Rose, Brent S.; Aydogan, Bulent; Liang, Yun; Yeginer, Mete; Hasselle, Michael D.; Dandekar, Virag; Bafana, Rounak; Yashar, Catheryn M.; Mundt, Arno J.; Roeske, John C.; Mell, Loren K.

    2011-01-01

    Purpose: To test the hypothesis that increased pelvic bone marrow (BM) irradiation is associated with increased hematologic toxicity (HT) in cervical cancer patients undergoing chemoradiotherapy and to develop a normal tissue complication probability (NTCP) model for HT. Methods and Materials: We tested associations between hematologic nadirs during chemoradiotherapy and the volume of BM receiving ≥10 and ≥20 Gy (V10 and V20) using a previously developed linear regression model. The validation cohort consisted of 44 cervical cancer patients treated with concurrent cisplatin and pelvic radiotherapy. Subsequently, these data were pooled with data from 37 identically treated patients from a previous study, forming a cohort of 81 patients for normal tissue complication probability analysis. Generalized linear modeling was used to test associations between hematologic nadirs and dosimetric parameters, adjusting for body mass index. Receiver operating characteristic curves were used to derive optimal dosimetric planning constraints. Results: In the validation cohort, significant negative correlations were observed between white blood cell count nadir and V10 (regression coefficient (β) = -0.060, p = 0.009) and V20 (β = -0.044, p = 0.010). In the combined cohort, the (adjusted) β estimates for log (white blood cell) vs. V10 and V20 were as follows: -0.022 (p = 0.025) and -0.021 (p = 0.002), respectively. Patients with V10 ≥ 95% were more likely to experience Grade ≥3 leukopenia (68.8% vs. 24.6%, p < 0.001), as were patients with V20 > 76% (57.7% vs. 21.8%, p = 0.001). Conclusions: These findings support the hypothesis that HT increases with increasing pelvic BM volume irradiated. Efforts to maintain V10 < 95% and V20 < 76% may reduce HT.
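
    The last step of this pipeline, deriving a dosimetric planning constraint from a receiver operating characteristic curve, can be sketched as follows (synthetic data and a hypothetical dose-response; the cohort's actual values are not reproduced here):

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic stand-in: V10 (% of bone marrow receiving >= 10 Gy) versus
# grade >= 3 leukopenia, loosely mimicking the reported trend.
rng = np.random.default_rng(42)
v10 = rng.uniform(70, 100, 200)
p_tox = 1 / (1 + np.exp(-0.15 * (v10 - 90)))   # assumed dose-response curve
tox = rng.random(200) < p_tox

# Use V10 itself as the classifier score and pick the Youden-J cut-point.
fpr, tpr, thresholds = roc_curve(tox, v10)
cut = thresholds[np.argmax(tpr - fpr)]
print(f"candidate V10 planning constraint: {cut:.1f}%")
```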

  9. Enhanced battery model including temperature effects

    NARCIS (Netherlands)

    Rosca, B.; Wilkins, S.

    2013-01-01

    Within electric and hybrid vehicles, batteries are used to provide/buffer the energy required for driving. However, battery performance varies throughout the temperature range specific to automotive applications, and as such, models that describe this behaviour are required. This paper presents a

  10. Dynamic divisive normalization predicts time-varying value coding in decision-related circuits.

    Science.gov (United States)

    Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W

    2014-11-26

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. Copyright © 2014 the authors.
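
    The phasic-sustained profile such a differential-equation model produces can be reproduced with a two-variable sketch (the time constants, semi-saturation term and Euler scheme below are illustrative assumptions, not the authors' published equations):

```python
import numpy as np

def dynamic_normalization(value=1.0, sigma=0.5, tau_r=0.02, tau_g=0.1,
                          dt=0.001, T=1.0):
    """Euler integration of a simple dynamic normalization circuit:
    the response R relaxes toward value/(sigma + G), while the gain
    signal G integrates R. Because G lags R, the response shows a fast
    phasic transient followed by a lower sustained plateau."""
    steps = int(T / dt)
    R = np.zeros(steps)
    G = np.zeros(steps)
    for t in range(steps - 1):
        R[t + 1] = R[t] + dt / tau_r * (-R[t] + value / (sigma + G[t]))
        G[t + 1] = G[t] + dt / tau_g * (-G[t] + R[t])
    return R

r = dynamic_normalization()
print("peak:", r.max().round(2), "sustained:", r[-1].round(2))  # peak > sustained
```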

  11. Models of epidemics: when contact repetition and clustering should be included

    Directory of Open Access Journals (Sweden)

    Scholz Roland W

    2009-06-01

    Background: The spread of infectious disease is determined by biological factors, e.g. the duration of the infectious period, and social factors, e.g. the arrangement of potentially contagious contacts. Repetitiveness and clustering of contacts are known to be relevant factors influencing the transmission of droplet or contact transmitted diseases. However, we do not yet completely know under what conditions repetitiveness and clustering should be included for realistically modelling disease spread. Methods: We compare two different types of individual-based models: one assumes random mixing without repetition of contacts, whereas the other assumes that the same contacts repeat day-by-day. The latter exists in two variants, with and without clustering. We systematically test and compare how the total size of an outbreak differs between these model types depending on the key parameters transmission probability, number of contacts per day, duration of the infectious period, different levels of clustering and varying proportions of repetitive contacts. Results: The simulation runs under different parameter constellations provide the following results: the difference between both model types is highest for low numbers of contacts per day and low transmission probabilities. The number of contacts and the transmission probability have a higher influence on this difference than the duration of the infectious period. Even when only a minor part of the daily contacts is repetitive and clustered, there can be relevant differences compared with a purely random mixing model. Conclusion: We show that random mixing models provide acceptable estimates of the total outbreak size if the number of contacts per day is high or if the per-contact transmission probability is high, as seen in typical childhood diseases such as measles. In the case of very short infectious periods, for instance, as in Norovirus, models assuming repeating contacts will also behave
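
    The core comparison is easy to reproduce in miniature. The sketch below (population size, contact rate and transmission probability are illustrative, and clustering is omitted) contrasts a random-mixing SIR individual-based model with one whose contacts are fixed and repeat every day:

```python
import numpy as np

def outbreak_size(repeat_contacts, N=1000, k=5, p=0.1, inf_days=5, seed=3):
    """SIR individual-based model. Each infectious agent contacts k others
    per day, either redrawn daily (random mixing) or fixed once (repetition);
    contacts are drawn one-directionally for simplicity."""
    rng = np.random.default_rng(seed)
    status = np.zeros(N, dtype=int)          # 0=S, 1=I, 2=R
    days_left = np.zeros(N, dtype=int)
    status[0], days_left[0] = 1, inf_days
    fixed = {i: rng.choice(N, k, replace=False) for i in range(N)}
    while (status == 1).any():
        for i in np.flatnonzero(status == 1):
            contacts = fixed[i] if repeat_contacts else rng.choice(N, k, replace=False)
            for j in contacts:
                if status[j] == 0 and rng.random() < p:
                    status[j], days_left[j] = 1, inf_days
        days_left[status == 1] -= 1
        status[(status == 1) & (days_left == 0)] = 2
    return int((status == 2).sum())

print("random mixing:", outbreak_size(False))      # typically larger outbreak
print("repeated contacts:", outbreak_size(True))   # typically smaller outbreak
```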

  12. Hippocampal proteomics defines pathways associated with memory decline and resilience in normal aging and Alzheimer's disease mouse models.

    Science.gov (United States)

    Neuner, Sarah M; Wilmott, Lynda A; Hoffmann, Brian R; Mozhui, Khyobeni; Kaczorowski, Catherine C

    2017-03-30

    Alzheimer's disease (AD), the most common form of dementia in the elderly, has no cure. Thus, the identification of key molecular mediators of cognitive decline in AD remains a top priority. As aging is the most significant risk factor for AD, the goal of this study was to identify altered proteins and pathways associated with the development of normal aging and AD memory deficits, and identify unique proteins and pathways that may contribute to AD-specific symptoms. We used contextual fear conditioning to diagnose 8-month-old 5XFAD and non-transgenic (Ntg) mice as having either intact or impaired memory, followed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) to quantify hippocampal membrane proteins across groups. Subsequent analysis detected 113 proteins differentially expressed relative to memory status (intact vs impaired) in Ntg mice and 103 proteins in 5XFAD mice. Thirty-six proteins, including several involved in neuronal excitability and synaptic plasticity (e.g., GRIA1, GRM3, and SYN1), were altered in both normal aging and AD. Pathway analysis highlighted HDAC4 as a regulator of observed protein changes in both genotypes and identified the REST epigenetic regulatory pathway and Gi intracellular signaling as AD-specific pathways involved in regulating the onset of memory deficits. Comparing the hippocampal membrane proteome of Ntg versus AD, regardless of cognitive status, identified 138 differentially expressed proteins, including confirmatory proteins APOE and CLU. Overall, we provide a novel list of putative targets and pathways with therapeutic potential, including a set of proteins associated with cognitive status in normal aging mice or gene mutations that cause AD. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  13. Radiogenomics: predicting clinical normal tissue radiosensitivity

    DEFF Research Database (Denmark)

    Alsner, Jan

    2006-01-01

    Studies on the genetic basis of normal tissue radiosensitivity, or 'radiogenomics', aim at predicting clinical radiosensitivity and optimizing treatment from individual genetic profiles. Several studies have now reported links between variations in certain genes related to the biological response...... to radiation injury and risk of normal tissue morbidity in cancer patients treated with radiotherapy. However, after these initial association studies including few genes, we are still far from being able to predict clinical radiosensitivity on an individual level. Recent data from our own studies on risk...

  14. Clarifying Normalization

    Science.gov (United States)

    Carpenter, Donald A.

    2008-01-01

    Confusion exists among database textbooks as to the goal of normalization as well as to which normal form a designer should aspire. This article discusses such discrepancies with the intention of simplifying normalization for both teacher and student. This author's industry and classroom experiences indicate such simplification yields quicker…

  15. Developing Visualization Support System for Teaching/Learning Database Normalization

    Science.gov (United States)

    Folorunso, Olusegun; Akinwale, AdioTaofeek

    2010-01-01

    Purpose: In tertiary institution, some students find it hard to learn database design theory, in particular, database normalization. The purpose of this paper is to develop a visualization tool to give students an interactive hands-on experience in database normalization process. Design/methodology/approach: The model-view-controller architecture…

  16. MATHEMATICAL ANALYSIS OF DENTAL ARCH OF CHILDREN IN NORMAL OCCLUSION: A LITERATURE REVIEW

    Directory of Open Access Journals (Sweden)

    M. Abu-Hussein DDS, MScD, MSc, DPD

    2012-03-01

    AIM. This paper is an attempt to compare and analyze the various mathematical models for defining the dental arch curvature of children in normal occlusion based upon a review of available literature. Background. While various studies have touched upon ways to cure or prevent dental diseases and upon surgical ways for teeth reconstitution to correct teeth anomalies during childhood, a substantial literature also exists attempting to mathematically define the dental arch of children in normal occlusion. This paper reviews these dental studies and compares them analytically. Method. The paper compares the different mathematical approaches, highlights the basic assumptions behind each model, underscores the relevancy and applicability of each, and also lists applicable mathematical formulae. Results. Each model has been found applicable to specific research conditions, as a universal mathematical model for describing the human dental arch still eludes satisfactory definition. The models necessarily need to include the features of the dental arch, such as shape, spacing between teeth and symmetry or asymmetry, but they also need substantial improvement. Conclusions. While the paper shows that the existing models are inadequate in properly defining the human dental arch, it also acknowledges that future research based on modern imaging techniques and computer-aided simulation could well succeed in deriving an all-inclusive definition for the human dental curve that has so far eluded the experts.

  17. Integrating normal and abnormal personality structure: a proposal for DSM-V.

    Science.gov (United States)

    Widiger, Thomas A

    2011-06-01

    The personality disorders section of the American Psychiatric Association's fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) is currently being developed. The purpose of the current paper is to encourage the authors of DSM-V to integrate normal and abnormal personality structure within a common, integrative model, and to suggest that the optimal choice for such an integration would be the five-factor model (FFM) of general personality structure. A proposal for the classification of personality disorder from the perspective of the FFM is provided. Discussed as well are implications and issues associated with an FFM of personality disorder, including validity, coverage, feasibility, clinical utility, and treatment implications.

  18. Lithium control during normal operation

    International Nuclear Information System (INIS)

    Suryanarayan, S.; Jain, D.

    2010-01-01

    Periodic increases in lithium (Li) concentrations in the primary heat transport (PHT) system during normal operation are a generic problem at CANDU® stations. Lithiated mixed-bed ion exchange resins are used at stations for pH control in the PHT system. Typically, tight chemistry controls, including Li concentrations, are maintained in the PHT water. The reason for the Li increases during normal operation at CANDU stations such as Pickering was not fully understood. In order to address this issue a two-pronged approach was employed. Firstly, PNGS-A data and information from other available sources were reviewed in an effort to identify possible factors that may contribute to the observed Li variations. Secondly, experimental studies were carried out to assess the importance of these factors in order to establish reasons for Li increases during normal operation. Based on the results of these studies, plausible mechanisms/reasons for Li increases have been identified and recommendations made for proactive control of Li concentrations in the PHT system. (author)

  19. Thermal modelling of normal distributed nanoparticles through thickness in an inorganic material matrix

    Science.gov (United States)

    Latré, S.; Desplentere, F.; De Pooter, S.; Seveno, D.

    2017-10-01

    Nanoscale materials showing superior thermal properties have raised the interest of the building industry. By adding these materials to conventional construction materials, it is possible to decrease the total thermal conductivity by almost one order of magnitude. This conductivity is mainly influenced by the dispersion quality within the matrix material. At the industrial scale, the main challenge is to control this dispersion to reduce or even eliminate thermal bridges, allowing an industrially relevant process that balances the high material cost against the superior thermal insulation properties. Therefore, a methodology is required to measure and describe these nanoscale distributions within the inorganic matrix material. These distributions are either random or normally distributed through the thickness of the matrix material. We show that the influence of these distributions is significant and modifies the thermal conductivity of the building material. Hence, this strategy will generate a thermal model allowing prediction of the thermal behavior of the nanoscale particles and their distributions. This thermal model will be validated by the hot wire technique. For the moment, a good correlation is found between the numerical results and experimental data for a randomly distributed form of nanoparticles in all directions.

  20. Normal radiographic findings. 4. act. ed.; Roentgennormalbefunde

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, T.B. [Gemeinschaftspraxis fuer Radiologie und Nuklearmedizin, Dillingen (Germany)

    2003-07-01

    This book can serve the reader in three ways: First, it presents normal findings for all radiographic techniques, including contrast media (KM) examinations. Important data which are criteria of normal findings are indicated directly in the pictures and are also explained in full text and in summary form. Secondly, it teaches the systematics of interpreting a picture - how to look at it, what structures to regard in what order, and what to look for in particular. Checklists are presented in each case. Thirdly, findings are formulated in accordance with the image analysis procedure. All criteria of normal findings are defined in these formulations, which make them an important didactic element. (orig.)

  1. Measurement of normal auditory ossicles by high-resolution CT with application of normal criteria to disease cases

    International Nuclear Information System (INIS)

    Hara, Jyoko

    1988-01-01

    The purposes of this study were to define criteria for the normal position of ossicles and to apply them in patients with rhinolaryngologically or pathologically confirmed diseases. Ossicles were measured on high-resolution CT images of 300 middle ears, including 241 normal ears and 59 diseased ears, in a total of 203 subjects. Angles A, B, and C to the baseline between the most lateral margins of the bilateral internal auditory canals, and the distance ratio b/a, were defined as measurement items. Normal angles A, B, and C and distance ratio b/a ranged from 19 deg to 59 deg, 101 deg to 145 deg, 51 deg to 89 deg, and 0.49 to 0.51, respectively. Based on these criteria, all of these items were within the normal range in 30/34 (88.2%) ears with otitis media and mastoiditis. One or more items showed far abnormal values (more than 3 standard deviations) in 5/7 (71.4%) ears with cholesteatoma and 4/4 (100%) ears with external ear anomaly. These normal measurements may aid in evaluating the position of the auditory ossicles, especially in cases of cholesteatoma and auditory ossicle abnormality. (Namekawa, K.)

  3. Identification of support structure damping of a full scale offshore wind turbine in normal operation

    DEFF Research Database (Denmark)

    Koukoura, Christina; Natarajan, Anand; Vesth, Allan

    2015-01-01

    damping from the decaying time series. The Enhanced Frequency Domain Decomposition (EFDD) method was applied to the wind turbine response under ambient excitation, for estimation of the damping in normal operation. The aero-servo-hydro-elastic tool HAWC2 is validated with offshore foundation load...... maxima of an impulse response caused by a boat impact. The result is used in the verification of the non aerodynamic damping in normal operation for low wind speeds. The auto-correlation function technique for damping estimation of a structure under ambient excitation was validated against the identified...... measurements. The model was tuned to the damping values obtained from the boat impact to match the measured loads. Wind turbulence intensity and wave characteristics used in the simulations are based on site measurements. A flexible soil model is included in the analysis. The importance of the correctly...

  4. Study on radioactive release of gaseous and liquid effluents during normal operation of AP1000

    International Nuclear Information System (INIS)

    Gong Quan; Zhou Jing; Liu Yu

    2014-01-01

    The gaseous and liquid radioactive releases of pressurized water reactor plants during normal operation are an important component of environmental impact assessment and play a significant role in the design of nuclear power plants. Based on the design characteristics of the AP1000 radioactive waste management system and a study of the calculation method and the release pathways, calculation models of the gaseous and liquid radioactive releases during normal operation of AP1000 are established. Based on the established calculation models and the design parameters of AP1000, the expected values of the gaseous and liquid radioactive releases of AP1000 are calculated. The results of the calculation are compared with the limits in GB 6249-2011, and the adder that is included to account for anticipated operational occurrences is explained, providing a reference for environmental impact assessment of pressurized water reactors. (authors)

  5. Adaptive Value Normalization in the Prefrontal Cortex Is Reduced by Memory Load

    Science.gov (United States)

    Burke, C. J.; Seifritz, E.; Tobler, P. N.

    2017-01-01

    Abstract Adaptation facilitates neural representation of a wide range of diverse inputs, including reward values. Adaptive value coding typically relies on contextual information either obtained from the environment or retrieved from and maintained in memory. However, it is unknown whether having to retrieve and maintain context information modulates the brain’s capacity for value adaptation. To address this issue, we measured hemodynamic responses of the prefrontal cortex (PFC) in two studies on risky decision-making. In each trial, healthy human subjects chose between a risky and a safe alternative; half of the participants had to remember the risky alternatives, whereas for the other half they were presented visually. The value of safe alternatives varied across trials. PFC responses adapted to contextual risk information, with steeper coding of safe alternative value in lower-risk contexts. Importantly, this adaptation depended on working memory load, such that response functions relating PFC activity to safe values were steeper with presented versus remembered risk. An independent second study replicated the findings of the first study and showed that similar slope reductions also arose when memory maintenance demands were increased with a secondary working memory task. Formal model comparison showed that a divisive normalization model fitted effects of both risk context and working memory demands on PFC activity better than alternative models of value adaptation, and revealed that reduced suppression of background activity was the critical parameter impairing normalization with increased memory maintenance demand. Our findings suggest that mnemonic processes can constrain normalization of neural value representations. PMID:28462394

  6. Helicon normal modes in Proto-MPEX

    Science.gov (United States)

    Piotrowicz, P. A.; Caneses, J. F.; Green, D. L.; Goulding, R. H.; Lau, C.; Caughman, J. B. O.; Rapp, J.; Ruzic, D. N.

    2018-05-01

    The Proto-MPEX helicon source has been operating in a high electron density ‘helicon-mode’. Establishing plasma densities and magnetic field strengths under the antenna that allow for the formation of normal modes of the fast-wave are believed to be responsible for the ‘helicon-mode’. A 2D finite-element full-wave model of the helicon antenna on Proto-MPEX is used to identify the fast-wave normal modes responsible for the steady-state electron density profile produced by the source. We also show through the simulation that in the regions of operation in which core power deposition is maximum, the slow-wave does not deposit significant power except directly under the antenna. In the case of a simulation where a normal mode is not excited, significant edge power is deposited in the mirror region.

  7. Dynamic model of a micro-tubular solid oxide fuel cell stack including an integrated cooling system

    Science.gov (United States)

    Hering, Martin; Brouwer, Jacob; Winkler, Wolfgang

    2017-02-01

    A novel dynamic micro-tubular solid oxide fuel cell (MT-SOFC) and stack model including an integrated cooling system is developed using a quasi-three-dimensional, spatially resolved, transient thermodynamic, physical and electrochemical model that accounts for the complex geometrical relations between the cells and cooling-tubes. The modeling approach includes a simplified tubular geometry and stack design including an integrated cooling structure, detailed pressure drop and gas property calculations, the electrical and physical constraints of the stack design that determine the current, as well as control strategies for the temperature. Moreover, an advanced heat transfer balance with detailed radiative heat transfer between the cells and the integrated cooling-tubes, convective heat transfer between the gas flows and the surrounding structures and conductive heat transfer between the solid structures inside of the stack, is included. The detailed model can be used as a design basis for the novel MT-SOFC stack assembly including an integrated cooling system, as well as for the development of a dynamic system control strategy. The evaluated best-case design achieves very high electrical efficiency, from around 75% down to 55% over the power density range of 50 to 550 mW/cm2, due to the novel stack design comprising an integrated cooling structure.

  8. Finite element modeling of contaminant transport in soils including the effect of chemical reactions.

    Science.gov (United States)

    Javadi, A A; Al-Najjar, M M

    2007-05-17

    The movement of chemicals through soils to the groundwater is a major cause of degradation of water resources. In many cases, serious human and stock health implications are associated with this form of pollution. Recent studies have shown that the current models and methods are not able to adequately describe the leaching of nutrients through soils, often underestimating the risk of groundwater contamination by surface-applied chemicals, and overestimating the concentration of resident solutes. Furthermore, the effect of chemical reactions on the fate and transport of contaminants is not included in many of the existing numerical models for contaminant transport. In this paper a numerical model is presented for simulation of the flow of water and air and contaminant transport through unsaturated soils with the main focus being on the effects of chemical reactions. The governing equations of miscible contaminant transport including advection, dispersion-diffusion and adsorption effects together with the effect of chemical reactions are presented. The mathematical framework and the numerical implementation of the model are described in detail. The model is validated by application to a number of test cases from the literature and is then applied to the simulation of a physical model test involving transport of contaminants in a block of soil with particular reference to the effects of chemical reactions. Comparison of the results of the numerical model with the experimental results shows that the model is capable of predicting the effects of chemical reactions with very high accuracy. The importance of consideration of the effects of chemical reactions is highlighted.
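
    The governing advection-dispersion-reaction equation can be illustrated with a much simpler sketch than the paper's finite-element formulation: a 1D explicit finite-difference solver, with first-order decay standing in for the chemical reactions (all parameters below are illustrative):

```python
import numpy as np

# 1D advection-dispersion with a first-order reaction term (illustrative):
#   R * dc/dt = D * d2c/dx2 - v * dc/dx - lam * c
L, nx = 1.0, 101
dx = L / (nx - 1)
D, v, lam, R = 1e-4, 1e-3, 1e-4, 1.0        # dispersion, velocity, decay, retardation
dt = 0.9 * min(dx * dx / (2 * D), dx / v)   # explicit stability limit
c = np.zeros(nx)
c[0] = 1.0                                  # constant-concentration inlet

for _ in range(1000):
    adv = -v * (c[1:-1] - c[:-2]) / dx                  # upwind advection
    disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # central dispersion
    c[1:-1] += dt / R * (adv + disp - lam * c[1:-1])
    c[0], c[-1] = 1.0, c[-2]                            # boundary conditions

print("plume front (c = 0.5) near x =", np.argmax(c < 0.5) * dx, "m")
```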

  9. Congenital anomalies and normal skeletal variants

    International Nuclear Information System (INIS)

    Guebert, G.M.; Yochum, T.R.; Rowe, L.J.

    1987-01-01

    Congenital anomalies and normal skeletal variants are a common occurrence in clinical practice. In this chapter a large number of skeletal anomalies of the spine and pelvis are reviewed. Some of the more common skeletal anomalies of the extremities are also presented. The second section of this chapter deals with normal skeletal variants. Some of these variants may simulate certain disease processes. In some instances there are no clear-cut distinctions between skeletal variants and anomalies; therefore, there may be some overlap of material. The congenital anomalies are presented initially with accompanying text, photos, and references, beginning with the skull and proceeding caudally through the spine to then include the pelvis and extremities. The normal skeletal variants section is presented in an anatomical atlas format without text or references

  10. Assessment of average of normals (AON) procedure for outlier-free datasets including qualitative values below limit of detection (LoD): an application within tumor markers such as CA 15-3, CA 125, and CA 19-9.

    Science.gov (United States)

    Usta, Murat; Aral, Hale; Mete Çilingirtürk, Ahmet; Kural, Alev; Topaç, Ibrahim; Semerci, Tuna; Hicri Köseoğlu, Mehmet

    2016-11-01

    Average of normals (AON) is a quality control procedure that is sensitive only to systematic errors that can occur in an analytical process in which patient test results are used. The aim of this study was to develop an alternative model in order to apply the AON quality control procedure to datasets that include qualitative values below the limit of detection (LoD). The reported patient test results for tumor markers, such as CA 15-3, CA 125, and CA 19-9, analyzed by two instruments, were retrieved from the information system over a period of 5 months, using the calibrator and control materials with the same lot numbers. The median as a measure of central tendency and the median absolute deviation (MAD) as a measure of dispersion were used for the complementary model of the AON quality control procedure. The u_bias values, which were determined for the bias component of the measurement uncertainty, were partially linked to the percentages of the daily median values of the test results that fall within the control limits. The results for these tumor markers, in which lower limits of reference intervals are not medically important for clinical diagnosis and management, showed that the AON quality control procedure, using the MAD around the median, can be applied to datasets including qualitative values below the LoD.
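
    A minimal sketch of the idea (the distributions, censoring convention and 3-MAD limits are illustrative assumptions, not the authors' protocol): daily medians are robust to results reported as below the LoD, so they can be charted against MAD-based control limits derived from in-control days.

```python
import numpy as np

rng = np.random.default_rng(7)
LOD = 2.0

def daily_median(n, shift=1.0):
    """Median of one day's patient results (lognormal is an illustrative
    choice). Results below the LoD are replaced by the LoD itself, which
    leaves the median unaffected while fewer than half are censored."""
    x = shift * rng.lognormal(2.5, 0.6, n)
    return np.median(np.maximum(x, LOD))

# Control limits from 100 in-control days: median +/- 3 * 1.4826 * MAD.
history = np.array([daily_median(80) for _ in range(100)])
center = np.median(history)
mad = np.median(np.abs(history - center))
lo, hi = center - 3 * 1.4826 * mad, center + 3 * 1.4826 * mad

today = daily_median(80, shift=1.5)     # simulate a +50% systematic error
print(f"limits ({lo:.1f}, {hi:.1f}), today {today:.1f}:",
      "flagged" if not lo <= today <= hi else "in control")
```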

  11. Including Effects of Water Stress on Dead Organic Matter Decay to a Forest Carbon Model

    Science.gov (United States)

    Kim, H.; Lee, J.; Han, S. H.; Kim, S.; Son, Y.

    2017-12-01

    Decay of dead organic matter is a key process of carbon (C) cycling in forest ecosystems. The change in decay rate depends on temperature sensitivity and moisture conditions. The Forest Biomass and Dead organic matter Carbon (FBDC) model includes a decay sub-model considering temperature sensitivity, yet does not consider moisture conditions as a driver of decay rate change. This study aimed to improve the FBDC model by adding a water stress function to the decay sub-model. Soil C sequestration under climate change was then simulated with the FBDC model including the water stress function. The water stress functions were determined with data from a decomposition study on Quercus variabilis forests and Pinus densiflora forests of Korea, and adjustment parameters of the functions were determined for both species. The water stress functions were based on the ratio of precipitation to potential evapotranspiration. Including the water stress function increased the explained variance of the decay rate by 19% for the Q. variabilis forests and 7% for the P. densiflora forests, respectively. The increase in explained variance resulted from the large difference in temperature range and precipitation range across the decomposition study plots. During the period of the experiment, the mean annual temperature range was less than 3°C, while the annual precipitation ranged from 720 mm to 1466 mm. Application of the water stress functions to the FBDC model constrained the increasing trend of temperature sensitivity under climate change, and thus increased the model-estimated soil C sequestration (Mg C ha-1) by 6.6 for the Q. variabilis forests and by 3.1 for the P. densiflora forests, respectively. The addition of water stress functions increased the reliability of the decay rate estimation and could contribute to reducing the bias in estimating soil C sequestration under varying moisture conditions. Acknowledgement: This study was supported by Korea Forest Service (2017044B10-1719-BB01)
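
    A sketch of how such a moisture modifier can enter a first-order decay model (the Q10 temperature response, the saturating form of the water stress function, and all parameter values are illustrative assumptions, not the fitted FBDC parameters):

```python
import numpy as np

def decay_rate(k_ref, temp_c, p_over_pet, q10=2.0, t_ref=10.0, a=1.0, b=0.5):
    """First-order decay constant (1/yr) with a Q10 temperature response
    and a water-stress multiplier driven by the ratio of precipitation to
    potential evapotranspiration; f_water saturates toward 1 as sites get
    wetter (an assumed Hill-type form)."""
    f_temp = q10 ** ((temp_c - t_ref) / 10.0)
    f_water = p_over_pet**a / (b**a + p_over_pet**a)
    return k_ref * f_temp * f_water

for ratio in (0.4, 0.8, 1.5):   # dry -> wet sites at the same temperature
    print(f"P/PET = {ratio}: k = {decay_rate(0.3, 12.0, ratio):.3f} 1/yr")
```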

  12. SU-E-T-630: Predictive Modeling of Mortality, Tumor Control, and Normal Tissue Complications After Stereotactic Body Radiotherapy for Stage I Non-Small Cell Lung Cancer

    International Nuclear Information System (INIS)

    Lindsay, WD; Berlind, CG; Gee, JC; Simone, CB

    2015-01-01

    Purpose: While rates of local control have been well characterized after stereotactic body radiotherapy (SBRT) for stage I non-small cell lung cancer (NSCLC), less data are available characterizing survival and normal tissue toxicities, and no validated models exist assessing these parameters after SBRT. We evaluate the reliability of various machine learning techniques when applied to radiation oncology datasets to create predictive models of mortality, tumor control, and normal tissue complications. Methods: A dataset of 204 consecutive patients with stage I non-small cell lung cancer (NSCLC) treated with stereotactic body radiotherapy (SBRT) at the University of Pennsylvania between 2009 and 2013 was used to create predictive models of tumor control, normal tissue complications, and mortality in this IRB-approved study. Nearly 200 data fields of detailed patient- and tumor-specific information, radiotherapy dosimetric measurements, and clinical outcomes data were collected. Predictive models were created for local tumor control, 1- and 3-year overall survival, and nodal failure using 60% of the data (leaving the remainder as a test set). After applying feature selection and dimensionality reduction, nonlinear support vector classification was applied to the resulting features. Models were evaluated for accuracy and area under ROC curve on the 81-patient test set. Results: Models for common events in the dataset (such as mortality at one year) had the highest predictive power (AUC = .67, p < 0.05). For rare occurrences such as radiation pneumonitis and local failure (each occurring in less than 10% of patients), too few events were present to create reliable models. Conclusion: Although this study demonstrates the validity of predictive analytics using information extracted from patient medical records and can most reliably predict for survival after SBRT, larger sample sizes are needed to develop predictive models for normal tissue toxicities and more advanced
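
    The modeling pipeline described, feature selection and dimensionality reduction followed by a nonlinear support vector classifier evaluated by AUC on a held-out set, can be sketched with scikit-learn (synthetic stand-in data; the actual feature set and tuning are not public):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for ~200 patients x ~200 clinical/dosimetric fields.
X, y = make_classification(n_samples=204, n_features=200, n_informative=10,
                           weights=[0.7], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)

clf = make_pipeline(StandardScaler(),
                    SelectKBest(k=40),                    # feature selection
                    PCA(n_components=10),                 # dimensionality reduction
                    SVC(kernel="rbf", probability=True))  # nonlinear SVC
clf.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]).round(2))
```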

  14. Expression patterns of DLK1 and INSL3 identify stages of Leydig cell differentiation during normal development and in testicular pathologies, including testicular cancer and Klinefelter syndrome

    DEFF Research Database (Denmark)

    Lottrup, G; Nielsen, J E; Maroun, L L

    2014-01-01

    , and in the majority of LCs, it was mutually exclusive of DLK1. LIMITATIONS, REASONS FOR CAUTION: The number of samples was relatively small and no true normal adult controls were available. True stereology was not used for LC counting, instead LCs were counted in three fields of 0.5 µm(2) surface for each sample...... in adult men with testicular pathologies including testis cancer and Klinefelter syndrome. STUDY FUNDING/COMPETING INTERESTS: This work was funded by Rigshospitalet's research funds, the Danish Cancer Society and Kirsten and Freddy Johansen's foundation. The authors have no conflicts of interest....

  15. A general approach to double-moment normalization of drop size distributions

    Science.gov (United States)

    Lee, G. W.; Sempere-Torres, D.; Uijlenhoet, R.; Zawadzki, I.

    2003-04-01

    Normalization of drop size distributions (DSDs) is re-examined here. First, we present an extension of scaling normalization using one moment of the DSD as a parameter (as introduced by Sempere-Torres et al., 1994) to a scaling normalization using two moments as parameters of the normalization. It is shown that the normalization of Testud et al. (2001) is a particular case of the two-moment scaling normalization. Thus, a unified view of the question of DSD normalization and a good model representation of DSDs are given. Data analysis shows that, from the point of view of moment estimation, least-squares regression is slightly more effective than moment estimation from the normalized average DSD.
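
    As a concrete instance of the two-moment case, the sketch below normalizes a DSD with the (M3, M4) moment pair used by Testud et al. (2001), for which the characteristic diameter is Dm = M4/M3 and the scale is N0* = (4^4/Γ(4)) M3^5/M4^4; the sample spectrum is illustrative:

```python
import numpy as np
from math import gamma

def moment(n_d, d, k):
    """k-th moment of a binned drop size distribution N(D) [mm^-1 m^-3]."""
    return np.sum(n_d * d**k) * (d[1] - d[0])

d = np.linspace(0.1, 8.0, 160)              # drop diameter, mm
n_d = 8000.0 * np.exp(-2.3 * d)             # illustrative exponential DSD

m3, m4 = moment(n_d, d, 3), moment(n_d, d, 4)
dm = m4 / m3                                # mass-weighted mean diameter, mm
n0_star = (4.0**4 / gamma(4)) * m3**5 / m4**4

x = d / dm                                  # normalized diameter
h = n_d / n0_star                           # normalized DSD h(x)
print(f"Dm = {dm:.2f} mm, N0* = {n0_star:.0f} mm^-1 m^-3")
# For an exponential DSD, N0* recovers the intercept N0 (here 8000).
```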

  16. Exploring the experiences of older Chinese adults with comorbidities including diabetes: surmounting these challenges in order to live a normal life

    Science.gov (United States)

    Ho, Hsiu-Yu; Chen, Mei-Hui

    2018-01-01

    Background Many people with diabetes have comorbidities, even multimorbidities, which have a far-reaching impact on the older adults, their family, and society. However, little is known of the experience of older adults living with comorbidities that include diabetes. Aim The aim of this study was to explore the experience of older adults living with comorbidities including diabetes. Methods A qualitative approach was employed. Data were collected from a selected field of 12 patients with diabetes mellitus in a medical center in northern Taiwan. The data were analyzed by Colaizzi’s phenomenological methodology, and four criteria of Lincoln and Guba were used to evaluate the rigor of the study. Results The following 5 themes and 14 subthemes were derived: 1) expecting to heal or reduce the symptoms of the disease (trying to alleviate the distress of symptoms and trusting in health practitioners combining the use of Chinese and Western medicines); 2) comparing complex medical treatments (differences in physician practices and presentation, conditionally adhering to medical treatment, and partnering with medical professionals); 3) inconsistent information (inconsistent health information and inconsistent medical advice); 4) impacting on daily life (activities are limited and hobbies cannot be maintained and psychological distress); and 5) weighing the pros and cons (taking the initiative to deal with issues, limiting activity, adjusting mental outlook and pace of life, developing strategies for individual health regimens, and seeking support). Surmounting these challenges in order to live a normal life was explored. Conclusion This study found that the experience of older adults living with comorbidities including diabetes was similar to that of a single disease, but the extent was greater than a single disease. The biggest difference is that the elderly think that their most serious problem is not diabetes, but rather, the comorbidities causing life limitations

  17. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values

    Science.gov (United States)

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-02-01

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normal SUV transformations for when
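
    The optimization described, scanning the Box-Cox parameter λ and keeping the value that maximizes the Shapiro-Wilk P-value, takes only a few lines with SciPy (the skewed sample standing in for SUVmax values and the λ grid are illustrative):

```python
import numpy as np
from scipy import stats

def boxcox(x, lam):
    """Box-Cox transform: log(x) for lam = 0, else (x**lam - 1) / lam."""
    return np.log(x) if lam == 0 else (x**lam - 1) / lam

rng = np.random.default_rng(0)
suv = rng.lognormal(mean=1.2, sigma=0.5, size=57)   # skewed 'SUVmax' sample

lams = np.linspace(-2, 2, 81)
pvals = [stats.shapiro(boxcox(suv, lam))[1] for lam in lams]
best = lams[int(np.argmax(pvals))]
print(f"optimal lambda = {best:.2f}, Shapiro-Wilk P = {max(pvals):.3f}")
# For exactly lognormal data the optimum sits near 0 (the log transform);
# changes in skewness, e.g. post-treatment, shift the optimal lambda.
```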

  18. Escitalopram and NHT normalized stress-induced anhedonia and molecular neuroadaptations in a mouse model of depression.

    Directory of Open Access Journals (Sweden)

    Or Burstein

    Anhedonia is defined as a diminished ability to obtain pleasure from otherwise positive stimuli. Anxiety and mood disorders have been previously associated with dysregulation of the reward system, with anhedonia as a core element of major depressive disorder (MDD). The aim of the present study was to investigate whether stress-induced anhedonia could be prevented by treatments with escitalopram or a novel herbal treatment (NHT) in an animal model of depression. Unpredictable chronic mild stress (UCMS) was administered for 4 weeks to ICR outbred mice. Following stress exposure, animals were randomly assigned to pharmacological treatment groups (i.e., saline, escitalopram or NHT). Treatments were delivered for 3 weeks. Hedonic tone was examined via ethanol and sucrose preferences. Biological indices pertinent to MDD and anhedonia were assessed: namely, hippocampal brain-derived neurotrophic factor (BDNF) and striatal dopamine receptor D2 (Drd2) mRNA expression levels. The results indicate that the UCMS-induced reductions in ethanol or sucrose preferences were normalized by escitalopram or NHT. This implies a resemblance between sucrose and ethanol in their hedonic-eliciting property. On a neurobiological aspect, the UCMS-induced reduction in hippocampal BDNF levels was normalized by escitalopram or NHT, while the UCMS-induced reduction in striatal Drd2 mRNA levels was normalized solely by NHT. The results accentuate the association of stress and anhedonia, and pinpoint a distinct effect for NHT on striatal Drd2 expression.

  19. Probabilistic modeling using bivariate normal distributions for identification of flow and displacement intervals in longwall overburden

    Energy Technology Data Exchange (ETDEWEB)

    Karacan, C.O.; Goodman, G.V.R. [NIOSH, Pittsburgh, PA (United States). Off Mine Safety & Health Research

    2011-01-15

    Gob gas ventholes (GGVs) are used to control methane emissions in longwall mines by capturing methane within the overlying fractured strata before it enters the work environment. In order for GGVs to effectively capture more methane and less mine air, the length of the slotted sections and their proximity to the top of the coal bed should be designed based on the potential gas sources and their locations, as well as on the displacements in the overburden that will create potential flow paths for the gas. In this paper, an approach to determine the conditional probabilities of depth-displacement, depth-flow percentage, depth-formation and depth-gas content of the formations was developed using bivariate normal distributions. The flow percentage, displacement and formation data as a function of distance from the coal bed used in this study were obtained from a series of borehole experiments contracted by the former US Bureau of Mines as part of a research project. Each of these parameters was tested for normality and was modeled using bivariate normal distributions to determine all tail probabilities. In addition, the probability of coal bed gas content as a function of depth was determined using the same techniques. The tail probabilities at various depths were used to calculate conditional probabilities for each of the parameters. The conditional probabilities predicted for various values of the critical parameters can be used with the measurements of flow and methane percentage at gob gas ventholes to optimize their performance.
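
    For a bivariate normal pair, the conditional distribution of one variable given the other is itself normal, which is what makes the tail and conditional probabilities in this approach straightforward to compute; a sketch with illustrative parameters (not the fitted borehole values):

```python
import numpy as np
from scipy.stats import norm

# Illustrative bivariate-normal parameters for (height above coal bed,
# horizontal displacement); not the values fitted in the paper.
mu_x, sd_x = 60.0, 25.0      # height above coal bed, m
mu_y, sd_y = 8.0, 3.0        # displacement, mm
rho = -0.6                   # displacement decreases with height

def conditional_tail(y_crit, x):
    """P(Y > y_crit | X = x) from the conditional normal:
    Y|X=x ~ N(mu_y + rho*sd_y/sd_x*(x - mu_x), sd_y**2 * (1 - rho**2))."""
    m = mu_y + rho * sd_y / sd_x * (x - mu_x)
    s = sd_y * np.sqrt(1 - rho**2)
    return norm.sf(y_crit, loc=m, scale=s)

for height in (30.0, 60.0, 90.0):
    print(f"P(displacement > 10 mm | height = {height} m) ="
          f" {conditional_tail(10.0, height):.3f}")
```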

  20. A unitarized meson model including color Coulomb interaction

    International Nuclear Information System (INIS)

    Metzger, Kees.

    1990-01-01

    Ch. 1 gives a general introduction to the problem field of the thesis. It discusses to what extent the internal structure of mesons is understood theoretically and which models exist. It discusses the problem of confinement from a phenomenological point of view and indicates how quark models of mesons may provide insight into this phenomenon. In ch. 2 the formal theory of scattering in a system with confinement is given. It is shown how a coupled-channel (CC) description and the work of other authors fit into this general framework. Explicit examples and arguments are given to support the CC treatment of such a system. In ch. 3 the full coupled-channel model as employed in this thesis is presented. On the basis of arguments from the former chapters and the observed regularities in the experimental data, the choices underlying the model are supported. In this model confinement is described with a mass-dependent harmonic-oscillator potential and the presence of open (meson-meson) channels plays an essential role. In ch. 4 the unitarized model is applied to light scalar meson resonances. In this regime the contribution of the open channels is considerable. It is demonstrated that the model parameters as used for the description of the pseudo-scalar and vector mesons can be used unchanged for the description of these mesons. Ch. 5 treats the color-Coulomb interaction. There, the effect of the Coulomb interaction is studied in simple models without decay. The results of incorporating the color-Coulomb interaction into the full CC model are given in ch. 6. Ch. 7 discusses the results of the previous chapters and the present status of the model. (author). 182 refs.; 16 figs.; 33 tabs

  1. Development of dose assessment code for release of tritium during normal operation of nuclear power plants

    International Nuclear Information System (INIS)

    Duran, J.; Malatova, I.

    2009-01-01

    A computer code PTM H TO has been developed to assess tritium doses to the general public. The code enables simulation of the behavior of tritium in the environment when released into the atmosphere during normal operation of nuclear power plants. The code can calculate the doses for the three chemical and physical forms: tritium gas (HT), tritiated water vapor, and water drops (HTO). The models in this code consist of a tritium transfer model, including oxidation of HT to HTO and reemission of HTO from soil to the atmosphere, and a dose calculation model.

  2. Single-Phase Bundle Flows Including Macroscopic Turbulence Model

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Jun; Yoon, Han Young [KAERI, Daejeon (Korea, Republic of); Yoon, Seok Jong; Cho, Hyoung Kyu [Seoul National University, Seoul (Korea, Republic of)

    2016-05-15

    To deal with the various thermal hydraulic phenomena caused by rapid changes of fluid properties when an accident happens, securing mechanistic approaches as much as possible may reduce the uncertainty arising from improper applications of experimental models. In this study, the turbulence mixing model, which is well defined by experiments in subchannel analysis codes such as VIPRE, COBRA, and MATRA, is replaced by a macroscopic k-ε turbulence model, which rests on mathematical derivation. The performance of CUPID with the macroscopic turbulence model is validated against several bundle experiments: the CNEN 4x4 and PNL 7x7 rod bundle tests. In this study, the macroscopic k-ε model has been validated for application to subchannel analysis. It has been implemented in the CUPID code and validated against the CNEN 4x4 and PNL 7x7 rod bundle tests. The results showed that the macroscopic k-ε turbulence model can reproduce the experiments properly.

  3. Quasi-particle lifetime broadening in normal-superconductor junctions with UPt3

    International Nuclear Information System (INIS)

    Wilde, T. de; Argonne National Lab., IL; Klapwijk, T.M.; Rijksuniversiteit Groningen; Rijksuniversiteit Groningen; Jansen, A.G.M.; Heil, J.; Wyder, P.

    1996-01-01

    For the Andreev-reflection process of quasi-particles at a normal-metal-superconductor interface, the influence of lifetime broadening of the quasi-particles on the current-voltage characteristics of NS point contacts is analyzed along the lines of the Blonder-Tinkham-Klapwijk model. The anomalous Andreev-reflection spectra obtained for the heavy-fermion compound UPt3 cannot be explained by lifetime broadening alone. Instead, an anisotropic superconducting order parameter has to be assumed, which, when lifetime broadening is also included, leads to fairly good agreement with the data. (orig.)

  4. Low-density lipoprotein concentration in the normal left coronary artery tree

    Directory of Open Access Journals (Sweden)

    Louridas George E

    2008-10-01

    Background: The blood flow and transportation of molecules in the cardiovascular system plays a crucial role in the genesis and progression of atherosclerosis. This computational study elucidates the low-density lipoprotein (LDL) site concentration in the entire normal human 3D tree of the LCA. Methods: A 3D geometry model of the normal human LCA tree is constructed. Angiographic data used for geometry construction correspond to end-diastole. The resulting model includes the LMCA, LAD, LCxA and their main branches. The numerical simulation couples the flow equations with the transport equation, applying realistic boundary conditions at the wall. Results: High LDL concentrations appear at bifurcations opposite the flow dividers in the proximal regions of the left coronary artery (LCA) tree, where atherosclerosis frequently occurs. The area-averaged normalized luminal surface LDL concentrations over the entire LCA tree are 1.0348, 1.054 and 1.23 for the low, median and high water infiltration velocities, respectively. For the high, median and low molecular diffusivities, the peak values of the normalized LDL luminal surface concentration at the LMCA bifurcation reach 1.065, 1.080 and 1.205, respectively. LCA tree walls are exposed to a cholesterolemic environment although the applied mass and flow conditions refer to normal human geometry and normal mass-flow conditions. Conclusion: The relationship between WSS and luminal surface concentration of LDL indicates that LDL is elevated at locations where WSS is low. Concave sides of the LCA tree exhibit higher concentrations of LDL than the convex sides. Decreased molecular diffusivity increases the LDL concentration. Increased water infiltration velocity increases the LDL concentration. The regional area of high luminal surface concentration is increased with increasing water infiltration velocity. Regions of high LDL luminal surface concentration do not necessarily co-locate to the

  5. "Differently normal" and "normally different": negotiations of female embodiment in women's accounts of 'atypical' sex development.

    Science.gov (United States)

    Guntram, Lisa

    2013-12-01

    During recent decades numerous feminist scholars have scrutinized the two-sex model and questioned its status in Western societies and medicine. Along the same line, increased attention has been paid to individuals' experiences of atypical sex development, also known as intersex or 'disorders of sex development' (DSD). Yet research on individuals' experiences of finding out about their atypical sex development in adolescence has been scarce. Against this backdrop, the present article analyses 23 in-depth interviews with women who in their teens found out about their atypical sex development. The interviews were conducted during 2009-2012 and the interviewees were all Swedish. Drawing on feminist research on female embodiment and social scientific studies on diagnosis, I examine how the women make sense of their bodies and situations. First, I aim to explore how the women construe normality as they negotiate female embodiment. Second, I aim to investigate how the divergent manners in which these negotiations are expressed can be further understood via the women's different access to a diagnosis. Through a thematic and interpretative analysis, I outline two negotiation strategies: the "differently normal" and the "normally different" strategy. In the former, the women present themselves as just slightly different from 'normal' women. In the latter, they stress that everyone is different in some manner and thereby claim normalcy. The analysis shows that access to diagnosis corresponds to the ways in which the women present themselves as "differently normal" and "normally different", thus shedding light on the complex role of diagnosis in their negotiations of female embodiment. It also reveals that the women make use of what they do have and how alignments with and work on norms interplay as normality is construed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. A depth-averaged debris-flow model that includes the effects of evolving dilatancy. I. physical basis

    Science.gov (United States)

    Iverson, Richard M.; George, David L.

    2014-01-01

    To simulate debris-flow behaviour from initiation to deposition, we derive a depth-averaged, two-phase model that combines concepts of critical-state soil mechanics, grain-flow mechanics and fluid mechanics. The model's balance equations describe coupled evolution of the solid volume fraction, m, basal pore-fluid pressure, flow thickness and two components of flow velocity. Basal friction is evaluated using a generalized Coulomb rule, and fluid motion is evaluated in a frame of reference that translates with the velocity of the granular phase, v_s. Source terms in each of the depth-averaged balance equations account for the influence of the granular dilation rate, defined as the depth integral of ∇⋅v_s. Calculation of the dilation rate involves the effects of an elastic compressibility and an inelastic dilatancy angle proportional to m − m_eq, where m_eq is the value of m in equilibrium with the ambient stress state and flow rate. Normalization of the model equations shows that predicted debris-flow behaviour depends principally on the initial value of m − m_eq and on the ratio of two fundamental timescales. One of these timescales governs downslope debris-flow motion, and the other governs pore-pressure relaxation that modifies Coulomb friction and regulates evolution of m. A companion paper presents a suite of model predictions and tests.

  7. Normalizations of High Taylor Reynolds Number Power Spectra

    Science.gov (United States)

    Puga, Alejandro; Koster, Timothy; Larue, John C.

    2014-11-01

The velocity power spectrum provides insight into how the turbulent kinetic energy is transferred from larger to smaller scales. Wind tunnel experiments are conducted where high-intensity turbulence is generated by means of an active turbulence grid modeled after Makita's 1991 design (Makita, 1991) as implemented by Mydlarski and Warhaft (M&W, 1998). The goal of this study is to document the evolution of the scaling region and assess the relative collapse of several proposed normalizations over a range of Rλ from 185 to 997. As predicted by Kolmogorov (1963), an asymptotic approach of the slope (n) of the inertial subrange to −5/3 with increasing Rλ is observed. There are three velocity power spectrum normalizations, as presented by Kolmogorov (1963), Von Karman and Howarth (1938) and George (1992). Results show that the Von Karman and Howarth normalization does not collapse the velocity power spectrum as well as the Kolmogorov and George normalizations. The Kolmogorov normalization does a good job of collapsing the velocity power spectrum in the normalized high-wavenumber range of 0.0002 […]. Supported by the University of California, Irvine Research Fund.
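
    A sketch of the Kolmogorov normalization applied to a measured 1-D velocity spectrum, assuming Taylor's frozen-turbulence hypothesis to convert frequency to wavenumber; all variable names and values are placeholders, not the experiment's parameters:

        import numpy as np
        from scipy.signal import welch

        def kolmogorov_normalized_spectrum(u, fs, U, nu, epsilon):
            """Normalize a 1-D velocity power spectrum in Kolmogorov units.

            u       : velocity fluctuation time series [m/s]
            fs      : sampling frequency [Hz]
            U       : mean convection velocity (Taylor's hypothesis) [m/s]
            nu      : kinematic viscosity [m^2/s]
            epsilon : dissipation rate [m^2/s^3]
            """
            f, E_f = welch(u, fs=fs, nperseg=4096)
            k = 2.0 * np.pi * f / U           # frequency -> wavenumber via Taylor's hypothesis
            E_k = E_f * U / (2.0 * np.pi)     # spectral density per unit wavenumber
            eta = (nu**3 / epsilon) ** 0.25   # Kolmogorov length scale
            v_eta = (nu * epsilon) ** 0.25    # Kolmogorov velocity scale
            # Dimensionless (k*eta, E/(v_eta^2 * eta)); note v_eta^2 * eta = (nu^5 * epsilon)^(1/4)
            return k * eta, E_k / (v_eta**2 * eta)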

  8. A High-Rate, Single-Crystal Model including Phase Transformations, Plastic Slip, and Twinning

    Energy Technology Data Exchange (ETDEWEB)

    Addessio, Francis L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Bronkhorst, Curt Allan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Bolme, Cynthia Anne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Explosive Science and Shock Physics Division; Brown, Donald William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Cerreta, Ellen Kathleen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Lebensohn, Ricardo A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Lookman, Turab [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Luscher, Darby Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Mayeur, Jason Rhea [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division; Morrow, Benjamin M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Materials Science and Technology Division; Rigg, Paulo A. [Washington State Univ., Pullman, WA (United States). Dept. of Physics. Inst. for Shock Physics

    2016-08-09

An anisotropic, rate-dependent, single-crystal approach for modeling materials under the conditions of high strain rates and pressures is provided. The model includes the effects of large deformations, nonlinear elasticity, phase transformations, and plastic slip and twinning. It is envisioned that the model may be used to examine these coupled effects on the local deformation of materials that are subjected to ballistic impact or explosive loading. The model is formulated using a multiplicative decomposition of the deformation gradient. A plate impact experiment on a multi-crystal sample of titanium was conducted. The particle velocities at the back surface of three crystal orientations relative to the direction of impact were measured. Molecular dynamics simulations were conducted to investigate the details of the high-rate deformation and pursue issues related to the phase transformation for titanium. Simulations using the single crystal model were conducted and compared to the high-rate experimental data for the impact loaded single crystals. The model was found to capture the features of the experiments.
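
    The kinematic starting point mentioned above can be written compactly; the ordering and grouping of the inelastic factors below are an illustrative assumption, not necessarily the paper's choice:

        F \;=\; F^{\mathrm{e}}\, F^{\mathrm{tr}}\, F^{\mathrm{p}}

    where F^e is the elastic distortion, F^tr collects the phase transformation and twinning contributions, and F^p is the plastic slip part of the deformation gradient.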

  9. Predicting consonant recognition and confusions in normal-hearing listeners

    DEFF Research Database (Denmark)

    Zaar, Johannes; Dau, Torsten

    2017-01-01

The proposed model is based on the auditory signal processing model of Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102, 2892–2905]. The model was evaluated based on the extensive consonant perception data set provided by Zaar and Dau [(2015). J. Acoust. Soc. Am. 138, 1253–1267], which was obtained with normal-hearing listeners using 15 consonant-vowel combinations … confusion groups. The large predictive power of the proposed model suggests that adaptive processes in the auditory preprocessing in combination with a cross-correlation based template-matching back end can account for some of the processes underlying consonant perception in normal-hearing listeners. The proposed model may provide a valuable framework, e.g., for investigating the effects of hearing impairment and hearing-aid signal processing on phoneme recognition.

  10. The interblink interval in normal and dry eye subjects

    Directory of Open Access Journals (Sweden)

    Johnston PR

    2013-02-01

Patrick R Johnston,1 John Rodriguez,1 Keith J Lane,1 George Ousler,1 Mark B Abelson1,2; 1Ora, Inc, Andover, MA, USA; 2Schepens Eye Research Institute and Harvard Medical School, Boston, MA, USA. Purpose: Our aim was to extend the concept of blink patterns from average interblink interval (IBI) to other aspects of the distribution of IBI. We hypothesized that this more comprehensive approach would better discriminate between normal and dry eye subjects. Methods: Blinks were captured over 10 minutes for ten normal and ten dry eye subjects while viewing a standardized televised documentary. Fifty-five blinks were analyzed for each of the 20 subjects. Means, standard deviations, and autocorrelation coefficients were calculated utilizing a single random effects model fit to all data points, and a diagnostic model was subsequently fit to predict the probability of a subject having dry eye based on these parameters. Results: Mean IBI was 5.97 seconds for normal versus 2.56 seconds for dry eye subjects (ratio: 2.33, P = 0.004). IBI variability was 1.56 times higher in normal subjects (P < 0.001), and the autocorrelation was 1.79 times higher in normal subjects (P = 0.044). With regard to the diagnostic power of these measures, mean IBI was the best dry eye versus normal classifier using receiver operating characteristics (0.85 area under curve [AUC]), followed by the standard deviation (0.75 AUC), and lastly, the autocorrelation (0.63 AUC). All three predictors combined had an AUC of 0.89. Based on this analysis, cutoffs of ≤3.05 seconds for median IBI and ≤0.73 for the coefficient of variation were chosen to classify dry eye subjects. Conclusion: (1) IBI was significantly shorter for dry eye patients performing a visual task compared to normals; (2) there was a greater variability of interblink intervals in normal subjects; and (3) these parameters were useful as diagnostic predictors of dry eye disease. The results of this pilot study merit investigation of IBI …
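
    A hedged sketch of the diagnostic-model step (a logistic classifier on per-subject IBI summary statistics, scored by AUC); the data below are synthetic placeholders, not the study's measurements:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        # Columns: mean IBI (s), SD of IBI (s), autocorrelation; 10 normals, 10 dry eye
        normals = np.column_stack([rng.normal(6.0, 1.5, 10),
                                   rng.normal(2.0, 0.5, 10),
                                   rng.normal(0.4, 0.1, 10)])
        dry_eye = np.column_stack([rng.normal(2.6, 0.8, 10),
                                   rng.normal(1.2, 0.4, 10),
                                   rng.normal(0.2, 0.1, 10)])
        X = np.vstack([normals, dry_eye])
        y = np.array([0] * 10 + [1] * 10)          # 1 = dry eye

        clf = LogisticRegression(max_iter=1000).fit(X, y)
        print("combined AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
        # Single-predictor AUCs, analogous to the per-parameter comparison above;
        # lower values of each statistic indicate dry eye, hence the negation.
        for j, name in enumerate(["mean IBI", "SD", "autocorrelation"]):
            print(name, roc_auc_score(y, -X[:, j]))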

  11. The influence of the Al stabilizer layer thickness on the normal zone propagation velocity in high current superconductors

    CERN Document Server

    Shilon, I.; Langeslag, S.A.E.; Martins, L.P.; ten Kate, H.H.J.

    2015-06-19

The stability of high-current superconductors is challenging in the design of superconducting magnets. When the stability requirements are fulfilled, the protection against a quench must still be considered. A main factor in the design of quench protection systems is the resistance growth rate in the magnet following a quench. The usual method for determining the resistance growth in impregnated coils is to calculate the longitudinal velocity with which the normal zone propagates in the conductor along the coil windings. Here, we present a 2D numerical model for predicting the normal zone propagation velocity in Al-stabilized Rutherford NbTi cables with large cross section. By solving two coupled differential equations under adiabatic conditions, the model takes into account the thermal diffusion and the current redistribution process following a quench. Both the temperature and magnetic field dependencies of the material properties of the superconductor and the metal cladding are included. Unlike common normal zone …

  12. SU-E-T-168: Evaluation of Normal Tissue Damage in Head and Neck Cancer Treatments

    International Nuclear Information System (INIS)

    Ai, H; Zhang, H

    2014-01-01

Purpose: To evaluate normal tissue toxicity in patients with head and neck cancer by calculating the average survival fraction (SF) and equivalent uniform dose (EUD) for normal tissue cells. Methods: 20 patients with head and neck cancer were included in this study. IMRT plans were generated in the Eclipse™ treatment planning system by dosimetrists following clinical radiotherapy treatment guidelines. The average SF for three different normal tissue cell types in each structure of concern was calculated from the dose spectrum acquired from the differential dose-volume histogram (DVH) using the linear-quadratic model. The three types of normal tissue are radiosensitive, moderately radiosensitive and radio-resistant, representing 70%, 50% and 30% survival fractions, respectively, for a 2-Gy open field. Finally, EUDs for the three types of normal tissue in each structure were calculated from the average SF. Results: The EUDs of the brainstem, spinal cord, parotid glands, brachial plexus and other structures were calculated. Our analysis indicated that the brainstem can absorb as much as 14.3% of the prescription dose to the tumor if the cell line is radiosensitive. In addition, as much as 16.1% and 18.3% of the prescription dose was absorbed by the brainstem for moderately radiosensitive and radio-resistant cells, respectively. For the spinal cord, the EUDs reached up to 27.6%, 35.0% and 42.9% of the prescribed dose for the three types of radiosensitivity, respectively. The three types of normal cells for the parotid glands received up to 65.6%, 71.2% and 78.4% of the prescription dose, respectively. The maximum EUDs of the brachial plexus were calculated as 75.4%, 76.4% and 76.7% of the prescription dose for the three types of normal cell lines. Conclusion: The results indicated that EUD can be used to quantify and evaluate radiation damage to surrounding normal tissues. The large variation in normal tissue EUDs may come from variation in target volumes and radiation beam orientations among the patients.
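
    A minimal numerical sketch of this pipeline, assuming a simple single-exposure LQ response (a clinical calculation would account for fractionation); the DVH bins and the alpha, beta values below are hypothetical:

        import numpy as np

        def average_sf(dose_bins, volume_fracs, alpha, beta):
            """Volume-weighted mean survival fraction from a differential DVH (LQ model)."""
            sf = np.exp(-alpha * dose_bins - beta * dose_bins**2)
            return np.sum(volume_fracs * sf) / np.sum(volume_fracs)

        def eud_from_sf(sf_avg, alpha, beta):
            """Uniform dose giving the same mean SF: solve beta*D^2 + alpha*D + ln(SF) = 0."""
            return (-alpha + np.sqrt(alpha**2 - 4.0 * beta * np.log(sf_avg))) / (2.0 * beta)

        # Toy 3-bin DVH and hypothetical radiosensitivity parameters (Gy^-1, Gy^-2)
        d = np.array([10.0, 20.0, 30.0])       # bin doses [Gy]
        v = np.array([0.5, 0.3, 0.2])          # volume fractions
        sf = average_sf(d, v, alpha=0.3, beta=0.03)
        print(sf, eud_from_sf(sf, 0.3, 0.03))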

  13. Complete normal ordering 1: Foundations

    Directory of Open Access Journals (Sweden)

    John Ellis

    2016-08-01

    Full Text Available We introduce a new prescription for quantising scalar field theories (in generic spacetime dimension and background perturbatively around a true minimum of the full quantum effective action, which is to ‘complete normal order’ the bare action of interest. When the true vacuum of the theory is located at zero field value, the key property of this prescription is the automatic cancellation, to any finite order in perturbation theory, of all tadpole and, more generally, all ‘cephalopod’ Feynman diagrams. The latter are connected diagrams that can be disconnected into two pieces by cutting one internal vertex, with either one or both pieces free from external lines. In addition, this procedure of ‘complete normal ordering’ (which is an extension of the standard field theory definition of normal ordering reduces by a substantial factor the number of Feynman diagrams to be calculated at any given loop order. We illustrate explicitly the complete normal ordering procedure and the cancellation of cephalopod diagrams in scalar field theories with non-derivative interactions, and by using a point splitting ‘trick’ we extend this result to theories with derivative interactions, such as those appearing as non-linear σ-models in the world-sheet formulation of string theory. We focus here on theories with trivial vacua, generalising the discussion to non-trivial vacua in a follow-up paper.

  14. Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states

    Science.gov (United States)

    Liu, Jianbo; Khalil, Hassan K.; Oweiss, Karim G.

    2011-08-01

Controlling the spatiotemporal firing pattern of an intricately connected network of neurons through microstimulation is highly desirable in many applications. We investigated in this paper the feasibility of using a model-based approach to the analysis and control of a basal ganglia (BG) network model of Hodgkin-Huxley (HH) spiking neurons through microstimulation. Detailed analysis of this network model suggests that it can reproduce the experimentally observed characteristics of BG neurons under a normal and a pathological Parkinsonian state. A simplified neuronal firing rate model, identified from the detailed HH network model, is shown to capture the essential network dynamics. Mathematical analysis of the simplified model reveals the presence of a systematic relationship between the network's structure and its dynamic response to spatiotemporally patterned microstimulation. We show that both the network synaptic organization and the local mechanism of microstimulation can impose tight constraints on the possible spatiotemporal firing patterns that can be generated by the microstimulated network, which may hinder the effectiveness of microstimulation to achieve a desired objective under certain conditions. Finally, we demonstrate that the feedback control design aided by the mathematical analysis of the simplified model is indeed effective in driving the BG network in the normal and Parkinsonian states to follow a prescribed spatiotemporal firing pattern. We further show that the rhythmic/oscillatory patterns that characterize a dopamine-depleted BG network can be suppressed as a direct consequence of controlling the spatiotemporal pattern of a subpopulation of the output Globus Pallidus internalis (GPi) neurons in the network. This work may provide plausible explanations for the mechanisms underlying the therapeutic effects of deep brain stimulation (DBS) in Parkinson's disease and pave the way towards a model-based, network-level analysis and closed-loop control of DBS.
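
    A generic firing-rate network of the kind referred to above can be simulated in a few lines; the connectivity, nonlinearity and pulsed "microstimulation" input below are hypothetical illustrations, not the identified BG model:

        import numpy as np

        def simulate_rate_network(W, u, tau=0.01, dt=0.001, T=1.0):
            """dr/dt = (-r + phi(W r + u(t))) / tau, integrated by forward Euler.

            W : (n, n) synaptic weight matrix; u : function t -> (n,) stimulation input.
            """
            phi = lambda x: np.maximum(x, 0.0)   # rectifying rate nonlinearity
            n = W.shape[0]
            steps = int(T / dt)
            r = np.zeros(n)
            rates = np.empty((steps, n))
            for i in range(steps):
                r = r + dt * (-r + phi(W @ r + u(i * dt))) / tau
                rates[i] = r
            return rates

        # Toy 3-population loop with a pulsed "microstimulation" input to population 0
        W = np.array([[0.0, -0.5, 0.0],
                      [0.6,  0.0, -0.4],
                      [0.0,  0.8,  0.0]])
        u = lambda t: np.array([5.0, 0.0, 0.0]) if (t % 0.2) < 0.02 else np.zeros(3)
        rates = simulate_rate_network(W, u)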

  15. Comparing Epileptiform Behavior of Mesoscale Detailed Models and Population Models of Neocortex

    NARCIS (Netherlands)

    Visser, S.; Meijer, Hil Gaétan Ellart; Lee, Hyong C.; van Drongelen, Wim; van Putten, Michel Johannes Antonius Maria; van Gils, Stephanus A.

    2010-01-01

Two models of the neocortex are developed to study normal and pathologic neuronal activity. One model contains a detailed description of a neocortical microcolumn represented by 656 neurons, including superficial and deep pyramidal cells, four types of inhibitory neurons, and realistic synaptic connectivity …

  16. Objective Scaling of Sound Quality for Normal-Hearing and Hearing-Impaired Listeners

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

A new method for the objective estimation of sound quality for both normal-hearing and hearing-impaired listeners has been presented: OSSQAR (Objective Scaling of Sound Quality and Reproduction). OSSQAR is based on three main parts, which have been carried out and documented separately: 1) Subjective sound quality ratings of clean and distorted speech and music signals, by normal-hearing and hearing-impaired listeners, to provide reference data; 2) An auditory model of the ear, including the effects of hearing loss, based on existing psychoacoustic knowledge, coupled to 3) An artificial neural network, which was trained to predict the sound quality ratings. OSSQAR predicts the perceived sound quality on two independent perceptual rating scales: Clearness and Sharpness. These two scales were shown to be the most relevant for assessment of sound quality, and they were interpreted the same way …

  17. Oxygen delivery in irradiated normal tissue

    Energy Technology Data Exchange (ETDEWEB)

Kiani, M.F.; Ansari, R. [Univ. of Tennessee Health Science Center, Memphis, TN (United States). School of Biomedical Engineering; Gaber, M.W. [St. Jude Children's Research Hospital, Memphis, TN (United States)

    2003-03-01

    Ionizing radiation exposure significantly alters the structure and function of microvascular networks, which regulate delivery of oxygen to tissue. In this study we use a hamster cremaster muscle model to study changes in microvascular network parameters and use a mathematical model to study the effects of these observed structural and microhemodynamic changes in microvascular networks on oxygen delivery to the tissue. Our experimental observations indicate that in microvascular networks while some parameters are significantly affected by irradiation (e.g. red blood cell (RBC) transit time), others remain at the control level (e.g. RBC path length) up to 180 days post-irradiation. The results from our mathematical model indicate that tissue oxygenation patterns are significantly different in irradiated normal tissue as compared to age-matched controls and the differences are apparent as early as 3 days post irradiation. However, oxygen delivery to irradiated tissue was not found to be significantly different from age matched controls at any time between 7 days to 6 months post-irradiation. These findings indicate that microvascular late effects in irradiated normal tissue may be due to factors other than compromised tissue oxygenation. (author)

  18. Topological resilience in non-normal networked systems

    Science.gov (United States)

    Asllani, Malbor; Carletti, Timoteo

    2018-04-01

    The network of interactions in complex systems strongly influences their resilience and the system capability to resist external perturbations or structural damages and to promptly recover thereafter. The phenomenon manifests itself in different domains, e.g., parasitic species invasion in ecosystems or cascade failures in human-made networks. Understanding the topological features of the networks that affect the resilience phenomenon remains a challenging goal for the design of robust complex systems. We hereby introduce the concept of non-normal networks, namely networks whose adjacency matrices are non-normal, propose a generating model, and show that such a feature can drastically change the global dynamics through an amplification of the system response to exogenous disturbances and eventually impact the system resilience. This early stage transient period can induce the formation of inhomogeneous patterns, even in systems involving a single diffusing agent, providing thus a new kind of dynamical instability complementary to the Turing one. We provide, first, an illustrative application of this result to ecology by proposing a mechanism to mute the Allee effect and, second, we propose a model of virus spreading in a population of commuters moving using a non-normal transport network, the London Tube.
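
    A small numpy illustration of the defining property and its dynamical consequence; the matrix is a textbook example, not one of the paper's networks:

        import numpy as np
        from scipy.linalg import expm

        # A matrix A is normal iff it commutes with its transpose: A A^T == A^T A.
        A = np.array([[-1.0, 5.0],
                      [ 0.0, -2.0]])                       # stable but strongly non-normal
        print("normal?", np.allclose(A @ A.T, A.T @ A))    # False
        print("eigenvalues:", np.linalg.eigvals(A))        # all negative: asymptotically stable

        # Despite asymptotic stability, perturbations are transiently amplified:
        for t in [0.0, 0.2, 0.5, 1.0, 3.0]:
            print(t, np.linalg.norm(expm(A * t), 2))       # rises above 1 before decaying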

  19. Including model uncertainty in risk-informed decision making

    International Nuclear Information System (INIS)

    Reinert, Joshua M.; Apostolakis, George E.

    2006-01-01

    Model uncertainties can have a significant impact on decisions regarding licensing basis changes. We present a methodology to identify basic events in the risk assessment that have the potential to change the decision and are known to have significant model uncertainties. Because we work with basic event probabilities, this methodology is not appropriate for analyzing uncertainties that cause a structural change to the model, such as success criteria. We use the risk achievement worth (RAW) importance measure with respect to both the core damage frequency (CDF) and the change in core damage frequency (ΔCDF) to identify potentially important basic events. We cross-check these with generically important model uncertainties. Then, sensitivity analysis is performed on the basic event probabilities, which are used as a proxy for the model parameters, to determine how much error in these probabilities would need to be present in order to impact the decision. A previously submitted licensing basis change is used as a case study. Analysis using the SAPHIRE program identifies 20 basic events as important, four of which have model uncertainties that have been identified in the literature as generally important. The decision is fairly insensitive to uncertainties in these basic events. In three of these cases, one would need to show that model uncertainties would lead to basic event probabilities that would be between two and four orders of magnitude larger than modeled in the risk assessment before they would become important to the decision. More detailed analysis would be required to determine whether these higher probabilities are reasonable. Methods to perform this analysis from the literature are reviewed and an example is demonstrated using the case study
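
    For reference, the RAW importance measure used above has the standard form (notation mine; R stands for the risk metric, here CDF or ΔCDF, and P_i for the probability of basic event i):

        \mathrm{RAW}_i \;=\; \frac{R(P_i = 1)}{R_{\mathrm{base}}}

    that is, the risk metric re-evaluated with the basic event assumed to occur with certainty, divided by its baseline value.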

  20. Hypervascular liver lesions in radiologically normal liver

    Energy Technology Data Exchange (ETDEWEB)

    Amico, Enio Campos; Alves, Jose Roberto; Souza, Dyego Leandro Bezerra de; Salviano, Fellipe Alexandre Macena; Joao, Samir Assi; Liguori, Adriano de Araujo Lima, E-mail: ecamic@uol.com.br [Hospital Universitario Onofre Lopes (HUOL/UFRN), Natal, RN (Brazil). Clinica Gastrocentro e Ambulatorios de Cirurgia do Aparelho Digestivo e de Cirurgia Hepatobiliopancreatica

    2017-09-01

Background: The hypervascular liver lesions represent a diagnostic challenge. Aim: To identify risk factors for cancer in patients with non-hemangiomatous hypervascular hepatic lesions in radiologically normal liver. Method: This prospective study included patients with hypervascular liver lesions in radiologically normal liver. The diagnosis was made by biopsy or was presumed on the basis of radiologic stability over a follow-up period of one year. Patients with cirrhosis or with typical imaging characteristics of haemangioma were excluded. Results: Eighty-eight patients were included. The average age was 42.4 years. The lesions were solitary and between 2-5 cm in size in most cases. Liver biopsy was performed in approximately one third of cases. The lesions were benign or most likely benign in 81.8% of cases, while cancer was diagnosed in 12.5% of cases. Univariate analysis showed that age >45 years (p<0.001), personal history of cancer (p=0.020), presence of >3 nodules (p=0.003) and elevated alkaline phosphatase (p=0.013) were significant risk factors for cancer. Conclusion: It is safe to observe hypervascular liver lesions in normal liver in patients up to 45 years of age with normal alanine aminotransferase, up to three nodules and no personal history of cancer. Lesion biopsies are safe in patients with atypical lesions and define the treatment to be established for most of these patients. (author)

  1. Brightness-normalized Partial Least Squares Regression for hyperspectral data

    International Nuclear Information System (INIS)

    Feilhauer, Hannes; Asner, Gregory P.; Martin, Roberta E.; Schmidtlein, Sebastian

    2010-01-01

    Developed in the field of chemometrics, Partial Least Squares Regression (PLSR) has become an established technique in vegetation remote sensing. PLSR was primarily designed for laboratory analysis of prepared material samples. Under field conditions in vegetation remote sensing, the performance of the technique may be negatively affected by differences in brightness due to amount and orientation of plant tissues in canopies or the observing conditions. To minimize these effects, we introduced brightness normalization to the PLSR approach and tested whether this modification improves the performance under changing canopy and observing conditions. This test was carried out using high-fidelity spectral data (400-2510 nm) to model observed leaf chemistry. The spectral data was combined with a canopy radiative transfer model to simulate effects of varying canopy structure and viewing geometry. Brightness normalization enhanced the performance of PLSR by dampening the effects of canopy shade, thus providing a significant improvement in predictions of leaf chemistry (up to 3.6% additional explained variance in validation) compared to conventional PLSR. Little improvement was made on effects due to variable leaf area index, while minor improvement (mostly not significant) was observed for effects of variable viewing geometry. In general, brightness normalization increased the stability of model fits and regression coefficients for all canopy scenarios. Brightness-normalized PLSR is thus a promising approach for application on airborne and space-based imaging spectrometer data.
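
    A sketch of how brightness normalization can be combined with PLSR, here taking brightness as the L2 norm of each spectrum and using scikit-learn in place of chemometrics software; the arrays and the synthetic target are placeholders, not the study's data:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def brightness_normalize(spectra):
            """Divide each spectrum (row) by its L2 norm to suppress brightness differences."""
            norms = np.linalg.norm(spectra, axis=1, keepdims=True)
            return spectra / norms

        # X: (n_samples, n_bands) reflectances; y: a leaf-chemistry target (synthetic here)
        rng = np.random.default_rng(1)
        X = rng.uniform(0.05, 0.6, size=(40, 200))
        y = X[:, 50] * 2.0 + rng.normal(0, 0.05, 40)

        pls = PLSRegression(n_components=5).fit(brightness_normalize(X), y)
        r2 = pls.score(brightness_normalize(X), y)   # explained variance on training data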

  2. Children and adolescents' internal models of food-sharing behavior include complex evaluations of contextual factors.

    Science.gov (United States)

    Markovits, Henry; Benenson, Joyce F; Kramer, Donald L

    2003-01-01

    This study examined internal representations of food sharing in 589 children and adolescents (8-19 years of age). Questionnaires, depicting a variety of contexts in which one person was asked to share a resource with another, were used to examine participants' expectations of food-sharing behavior. Factors that were varied included the value of the resource, the relation between the two depicted actors, the quality of this relation, and gender. Results indicate that internal models of food-sharing behavior showed systematic patterns of variation, demonstrating that individuals have complex contextually based internal models at all ages, including the youngest. Examination of developmental changes in use of individual patterns is consistent with the idea that internal models reflect age-specific patterns of interactions while undergoing a process of progressive consolidation.

  3. Use of critical pathway models and log-normal frequency distributions for siting nuclear facilities

    International Nuclear Information System (INIS)

    Waite, D.A.; Denham, D.H.

    1975-01-01

The advantages and disadvantages of potential sites for nuclear facilities are evaluated through the use of environmental pathway and log-normal distribution analysis. Environmental considerations of nuclear facility siting are necessarily geared to the identification of media believed to be significant in terms of dose to man or to be potential centres for long-term accumulation of contaminants. To aid in meeting the scope and purpose of this identification, an exposure pathway diagram must be developed. This type of diagram helps to locate pertinent environmental media, points of expected long-term contaminant accumulation, and points of population/contaminant interface for both radioactive and non-radioactive contaminants. Confirmation of facility siting conclusions drawn from pathway considerations must usually be derived from an investigatory environmental surveillance programme. Battelle's experience with environmental surveillance data interpretation using log-normal techniques indicates that this distribution has much to offer in the planning, execution and analysis phases of such a programme. How these basic principles apply to the actual siting of a nuclear facility is demonstrated for a centrifuge-type uranium enrichment facility as an example. A model facility is examined to the extent of available data in terms of potential contaminants and facility general environmental needs. A critical exposure pathway diagram is developed to the point of prescribing the characteristics of an optimum site for such a facility. Possible necessary deviations from climatic constraints are reviewed and reconciled with conclusions drawn from the exposure pathway analysis. Details of log-normal distribution analysis techniques are presented, with examples of environmental surveillance data to illustrate data manipulation techniques and interpretation procedures as they affect the investigatory environmental surveillance programme. Appropriate consideration is given to these …
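
    A hedged scipy sketch of the log-normal analysis step; the concentration data are synthetic, and fixing the location at zero is a common convention for concentration data, not necessarily the report's procedure:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        conc = rng.lognormal(mean=np.log(0.8), sigma=0.6, size=200)   # synthetic measurements

        # Fit a log-normal with location fixed at zero
        shape, loc, scale = stats.lognorm.fit(conc, floc=0)
        gm  = scale                    # geometric mean = exp(mu)
        gsd = np.exp(shape)            # geometric standard deviation = exp(sigma)
        p95 = stats.lognorm.ppf(0.95, shape, loc, scale)   # upper percentile for screening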

  4. A normalized model for the half-bridge series resonant converter

    Science.gov (United States)

    King, R.; Stuart, T. A.

    1981-01-01

    Closed-form steady-state equations are derived for the half-bridge series resonant converter with a rectified (dc) load. Normalized curves for various currents and voltages are then plotted as a function of the circuit parameters. Experimental results based on a 10-kHz converter are presented for comparison with the calculations.

  5. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y W [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Zhang, L F [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Huang, J P [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China)

    2007-07-20

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property.

  6. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    International Nuclear Information System (INIS)

    Chen, Y W; Zhang, L F; Huang, J P

    2007-01-01

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property
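
    For orientation, the original Watts-Strogatz construction and the two statistics being compared can be reproduced with networkx; the degree-distribution extension developed in the paper is not part of networkx and is not sketched here:

        import networkx as nx

        # Ring of n nodes, each joined to k nearest neighbours, edges rewired with prob. p
        G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.05, seed=42)
        C = nx.average_clustering(G)
        L = nx.average_shortest_path_length(G)
        print(f"clustering C = {C:.3f}, characteristic path length L = {L:.2f}")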

  7. Identification of a developmental gene expression signature, including HOX genes, for the normal human colonic crypt stem cell niche: overexpression of the signature parallels stem cell overpopulation during colon tumorigenesis.

    Science.gov (United States)

    Bhatlekar, Seema; Addya, Sankar; Salunek, Moreh; Orr, Christopher R; Surrey, Saul; McKenzie, Steven; Fields, Jeremy Z; Boman, Bruce M

    2014-01-15

    Our goal was to identify a unique gene expression signature for human colonic stem cells (SCs). Accordingly, we determined the gene expression pattern for a known SC-enriched region--the crypt bottom. Colonic crypts and isolated crypt subsections (top, middle, and bottom) were purified from fresh, normal, human, surgical specimens. We then used an innovative strategy that used two-color microarrays (∼18,500 genes) to compare gene expression in the crypt bottom with expression in the other crypt subsections (middle or top). Array results were validated by PCR and immunostaining. About 25% of genes analyzed were expressed in crypts: 88 preferentially in the bottom, 68 in the middle, and 131 in the top. Among genes upregulated in the bottom, ∼30% were classified as growth and/or developmental genes including several in the PI3 kinase pathway, a six-transmembrane protein STAMP1, and two homeobox (HOXA4, HOXD10) genes. qPCR and immunostaining validated that HOXA4 and HOXD10 are selectively expressed in the normal crypt bottom and are overexpressed in colon carcinomas (CRCs). Immunostaining showed that HOXA4 and HOXD10 are co-expressed with the SC markers CD166 and ALDH1 in cells at the normal crypt bottom, and the number of these co-expressing cells is increased in CRCs. Thus, our findings show that these two HOX genes are selectively expressed in colonic SCs and that HOX overexpression in CRCs parallels the SC overpopulation that occurs during CRC development. Our study suggests that developmental genes play key roles in the maintenance of normal SCs and crypt renewal, and contribute to the SC overpopulation that drives colon tumorigenesis.

  8. Normal stress databases in myocardial perfusion scintigraphy – how many subjects do you need?

    DEFF Research Database (Denmark)

    Trägårdh, Elin; Sjöstrand, Karl; Edenbrandt, Lars

    2012-01-01

Commercial normal stress databases in myocardial perfusion scintigraphy (MPS) commonly consist of 30–40 individuals. The aim of the study was to determine how many subjects are needed. Four normal stress databases were developed using patients who underwent 99mTc MPS: non-corrected images (NC) for male, NC for female, attenuation-corrected images (AC) for male and AC for female subjects. 126 male and 205 female subjects were included. The normal database was created by alternatingly computing the mean of all normal subjects and normalizing the subjects with respect to this mean, until convergence. Coefficients of variation (CV) were created for increasing number of included patients in the four different normal stress databases. Normal stress databases with …
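
    A minimal sketch of the alternating mean/normalization iteration described above; the scan arrays and the least-squares scaling rule are assumptions for illustration, not the vendor implementation:

        import numpy as np

        def build_normal_database(scans, tol=1e-6, max_iter=100):
            """Alternate between computing the mean template and rescaling each scan to it."""
            scans = np.array(scans, dtype=float)        # (n_subjects, n_voxels) uptake values
            template = scans.mean(axis=0)
            for _ in range(max_iter):
                # Rescale every subject so its least-squares fit to the template equals 1
                factors = (scans @ template) / np.sum(template**2)
                scans = scans / factors[:, None]
                new_template = scans.mean(axis=0)
                if np.linalg.norm(new_template - template) < tol:
                    break
                template = new_template
            cv = scans.std(axis=0, ddof=1) / template   # voxel-wise coefficient of variation
            return template, cv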

  9. Normalization as a canonical neural computation

    Science.gov (United States)

    Carandini, Matteo; Heeger, David J.

    2012-01-01

    There is increasing evidence that the brain relies on a set of canonical neural computations, repeating them across brain regions and modalities to apply similar operations to different problems. A promising candidate for such a computation is normalization, in which the responses of neurons are divided by a common factor that typically includes the summed activity of a pool of neurons. Normalization was developed to explain responses in the primary visual cortex and is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions. Normalization may underlie operations such as the representation of odours, the modulatory effects of visual attention, the encoding of value and the integration of multisensory information. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that it serves as a canonical neural computation. PMID:22108672
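
    The canonical computation itself is compact; a sketch in the standard divisive-normalization form, with hypothetical parameter values:

        import numpy as np

        def divisive_normalization(drives, sigma=1.0, n=2.0):
            """R_i = d_i^n / (sigma^n + sum_j d_j^n): each response divided by pooled activity."""
            d = np.asarray(drives, dtype=float) ** n
            return d / (sigma**n + d.sum())

        print(divisive_normalization([1.0, 2.0, 4.0]))   # the strongest input dominates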

  10. A comparison of foot kinematics in people with normal- and flat-arched feet using the Oxford Foot Model.

    Science.gov (United States)

    Levinger, Pazit; Murley, George S; Barton, Christian J; Cotchett, Matthew P; McSweeney, Simone R; Menz, Hylton B

    2010-10-01

    Foot posture is thought to influence predisposition to overuse injuries of the lower limb. Although the mechanisms underlying this proposed relationship are unclear, it is thought that altered foot kinematics may play a role. Therefore, this study was designed to investigate differences in foot motion between people with normal- and flat-arched feet using the Oxford Foot Model (OFM). Foot posture in 19 participants was documented as normal-arched (n=10) or flat-arched (n=9) using a foot screening protocol incorporating measurements from weightbearing antero-posterior and lateral foot radiographs. Differences between the groups in triplanar motion of the tibia, rearfoot and forefoot during walking were evaluated using a three-dimensional motion analysis system incorporating a multi-segment foot model (OFM). Participants with flat-arched feet demonstrated greater peak forefoot plantar-flexion (-13.7° ± 5.6° vs -6.5° ± 3.7°; p=0.004), forefoot abduction (-12.9° ± 6.9° vs -1.8° ± 6.3°; p=0.002), and rearfoot internal rotation (10.6° ± 7.5° vs -0.2°± 9.9°; p=0.018) compared to those with normal-arched feet. Additionally, participants with flat-arched feet demonstrated decreased peak forefoot adduction (-7.0° ± 9.2° vs 5.6° ± 7.3°; p=0.004) and a trend towards increased rearfoot eversion (-5.8° ± 4.4° vs -2.5° ± 2.6°; p=0.06). These findings support the notion that flat-arched feet have altered motion associated with greater pronation during gait; factors that may increase the risk of overuse injury. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. Facial-Attractiveness Choices Are Predicted by Divisive Normalization.

    Science.gov (United States)

    Furl, Nicholas

    2016-10-01

Do people appear more attractive or less attractive depending on the company they keep? A divisive-normalization account, in which the representation of stimulus intensity is normalized (divided) by concurrent stimulus intensities, predicts that choice preferences among options increase with the range of option values. In the first experiment reported here, I manipulated the range of attractiveness of the faces presented on each trial by varying the attractiveness of an undesirable distractor face that was presented simultaneously with two attractive targets, and participants were asked to choose the most attractive face. I used normalization models to predict the context dependence of preferences regarding facial attractiveness. The more unattractive the distractor, the more one of the targets was preferred over the other target, which suggests that divisive normalization (a potential canonical computation in the brain) influences social evaluations. I obtained the same result when I manipulated faces' averageness and participants chose the most average face. This finding suggests that divisive normalization is not restricted to value-based decisions (e.g., attractiveness). This new application of normalization, a classic theory, to social evaluation opens possibilities for predicting social decisions in naturalistic contexts such as advertising or dating.

  12. NIMROD: a program for inference via a normal approximation of the posterior in models with random effects based on ordinary differential equations.

    Science.gov (United States)

    Prague, Mélanie; Commenges, Daniel; Guedj, Jérémie; Drylewicz, Julia; Thiébaut, Rodolphe

    2013-08-01

Models based on ordinary differential equations (ODE) are widespread tools for describing dynamical systems. In the biomedical sciences, data from each subject can be sparse, making it difficult to precisely estimate individual parameters by standard non-linear regression, but information can often be gained from between-subject variability. This makes the use of mixed-effects models to estimate population parameters natural. Although the maximum likelihood approach is a valuable option, identifiability issues favour Bayesian approaches, which can incorporate prior knowledge in a flexible way. However, the combination of difficulties coming from the ODE system and from the presence of random effects raises a major numerical challenge. Computations can be simplified by making a normal approximation of the posterior to find the maximum of the posterior distribution (MAP). Here we present the NIMROD program (normal approximation inference in models with random effects based on ordinary differential equations), devoted to MAP estimation in ODE models. We describe the specific implemented features, such as convergence criteria and an approximation of the leave-one-out cross-validation to assess the model's quality of fit. First, we evaluate the properties of this algorithm in pharmacokinetic models and compare it with the FOCE and MCMC algorithms in simulations. Then, we illustrate the use of NIMROD on Amprenavir pharmacokinetics data from the PUZZLE clinical trial in HIV-infected patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. An imprecise Dirichlet model for Bayesian analysis of failure data including right-censored observations

    International Nuclear Information System (INIS)

    Coolen, F.P.A.

    1997-01-01

This paper is intended to make researchers in reliability theory aware of a recently introduced Bayesian model with imprecise prior distributions for statistical inference on failure data, which can also be considered a robust Bayesian model. The model consists of a multinomial distribution with Dirichlet priors, making the approach basically nonparametric. New results for the model are presented, related to right-censored observations, where estimation based on this model is closely related to the product-limit estimator, an important statistical method for dealing with reliability or survival data that include right-censored observations. As with the product-limit estimator, the model considered in this paper aims at using no information other than that provided by the observed data, but our model fits into the robust Bayesian context, which has the advantage that all inferences can be based on probabilities or expectations, or bounds for probabilities or expectations. The model uses a finite partition of the time axis, and as such it is also related to life tables.

  14. Analysis of electronic models for solar cells including energy resolved defect densities

    Energy Technology Data Exchange (ETDEWEB)

    Glitzky, Annegret

    2010-07-01

We introduce an electronic model for solar cells including energy-resolved defect densities. The resulting drift-diffusion model corresponds to a generalized van Roosbroeck system with additional source terms, coupled with ODEs containing space and energy as parameters for all defect densities. The system has to be considered in heterostructures and with mixed boundary conditions from device simulation. We give a weak formulation of the problem. If the boundary data and the sources are compatible with thermodynamic equilibrium, the free energy along solutions decays monotonically. In other cases it may be increasing, but we estimate its growth. We establish boundedness and uniqueness results and prove the existence of a weak solution. This is done by considering a regularized problem, showing its solvability and the boundedness of its solutions independent of the regularization level. (orig.)

  15. Radiation-induced normal tissue damage: implications for radiotherapy

    International Nuclear Information System (INIS)

    Prasanna, Pataje G.

    2014-01-01

Radiotherapy is an important treatment modality for many malignancies, either alone or as part of combined-modality treatment. However, despite technological advances in physical treatment delivery, patients suffer adverse effects from radiation therapy due to normal tissue damage. These side effects may be acute, occurring during or within weeks after therapy, or intermediate to late, occurring months to years after therapy. Minimizing normal tissue damage from radiotherapy will allow enhancement of tumor killing and improve tumor control and patients' quality of life. Understanding the mechanisms through which radiation toxicity develops in normal tissue will facilitate the development of next-generation radiation effect modulators. Translation of these agents to the clinic will also require an understanding of the impact of these protectors and mitigators on tumor radiation response. In addition, normal tissues vary in radiobiologically important ways, including organ sensitivity to radiation, cellular turnover rate, and differences in mechanisms of injury manifestation and damage response. Therefore, successful development of radiation modulators may require multiple approaches to address organ- and site-specific needs. These may include treatments that modify cellular damage and death processes, inflammation, alteration of normal flora, wound healing, tissue regeneration and others, specifically to counter cancer site-specific adverse effects. Further, an understanding of mechanisms of normal tissue damage will allow development of predictive biomarkers; however, harmonization of such assays is critical. This is a necessary step towards patient-specific treatment customization. Examples of important adverse effects of radiotherapy, either alone or in conjunction with chemotherapy, and important limitations in the current approaches of using radioprotectors for improving therapeutic outcome will be highlighted. (author)

  16. Normalization: A Preprocessing Stage

    OpenAIRE

    Patro, S. Gopal Krishna; Sahu, Kishore Kumar

    2015-01-01

Normalization is a pre-processing stage for many types of problem statement. It plays an especially important role in fields such as soft computing and cloud computing, where data must be scaled down or scaled up in range before being used in further stages. There are several normalization techniques, namely Min-Max normalization, Z-score normalization and Decimal scaling normalization. By referring to these normalization techniques we …
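
    The three techniques named above, as minimal numpy expressions (the sample vector is arbitrary):

        import numpy as np

        x = np.array([120.0, 45.0, 380.0, 72.0, 210.0])   # arbitrary sample data

        min_max = (x - x.min()) / (x.max() - x.min())     # Min-Max: rescale to [0, 1]
        z_score = (x - x.mean()) / x.std()                # Z-score: zero mean, unit variance
        j = np.ceil(np.log10(np.abs(x).max()))            # Decimal scaling: shift the decimal
        decimal = x / 10.0**j                             #   point so that |x| <= 1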

  17. Radiological Impacts Assessment during Normal Decommissioning Operation for EU-APR

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Do Hyun; Lee, Keun Sung [KHNP CRI, Daejeon (Korea, Republic of); Lee, ChongHui [KEPCO Engineering and Construction, Gimcheon (Korea, Republic of)

    2016-10-15

In this paper, radiological impacts on human beings during normal execution of the decommissioning operations for the current standard design of EU-APR, which has been modified and improved from its original design of APR1400 to comply with EUR, are evaluated. Decommissioning is the final phase in the life cycle of a nuclear installation, covering all activities from shutdown and removal of fissile material to environmental restoration of the site. According to article 5.4 specified in chapter 2.20 of the European Utility Requirements (EUR), all relevant radiological impacts on human beings should be considered during the environmental assessment of decommissioning, including external exposure from direct radiation of the plant and other radiation sources, and internal exposure due to inhalation and ingestion. In this paper, radiological impacts on human beings during normal circumstances of the decommissioning operation were evaluated for the current standard design of EU-APR, based on the simple transport model and the practical generic methodology for assessing radiological impact provided by the IAEA. The results of the dose assessment fulfilled the dose limits for all scenarios.

  18. Including capabilities of local actors in regional economic development: Empirical results of local seaweed industries in Sulawesi

    Directory of Open Access Journals (Sweden)

    Mark T.J. Vredegoor

    2013-11-01

Stimson et al. (2009) developed one of the most relevant and well-known models for regional economic development. This model covers the most important factors related to the economic development question; however, it excludes the social components of development. The local community should be included in the development of a region. This paper introduces into the Stimson model "Skills" and "Knowledge" at the individual level for local actors, indicating the capabilities at the individual level, and introduces "Human Coordination" for the capabilities at the collective level. In our empirical research we looked at the Indonesian seaweed market with a specific focus on the region of Baubau. This region was chosen because there are hardly any economic developments there. Furthermore, this study focuses on the poorer community, who are trying to improve their situation through the cultivation of seaweed. Eighteen local informants were interviewed, in addition to informants from educational and governmental institutions in the cities of Jakarta, Bandung and Yogyakarta. The informants selected had a direct or indirect relationship with the region of Baubau. With the support of the empirical data from this region we can confirm that it is worthwhile to include the local community in the model for regional economic development. The newly added variables (at the individual level, Skills and Knowledge; at the collective level, Human Coordination) were supported by the empirical material. This is an indication that including the new variables can give regional economic development an extra dimension. In this way we think it becomes more explicit that "endogenous" means that the people, or variables closely related to them, should be more explicitly included in models trying to capture Regional Economic Development, or rephrased as Local Economic Development. Keywords: Regional and endogenous development; Fisheries and seaweed

  19. The triangular density to approximate the normal density: decision rules-of-thumb

    International Nuclear Information System (INIS)

    Scherer, William T.; Pomroy, Thomas A.; Fuller, Douglas N.

    2003-01-01

    In this paper we explore the approximation of the normal density function with the triangular density function, a density function that has extensive use in risk analysis. Such an approximation generates a simple piecewise-linear density function and a piecewise-quadratic distribution function that can be easily manipulated mathematically and that produces surprisingly accurate performance under many instances. This mathematical tractability proves useful when it enables closed-form solutions not otherwise possible, as with problems involving the embedded use of the normal density. For benchmarking purposes we compare the basic triangular approximation with two flared triangular distributions and with two simple uniform approximations; however, throughout the paper our focus is on using the triangular density to approximate the normal for reasons of parsimony. We also investigate the logical extensions of using a non-symmetric triangular density to approximate a lognormal density. Several issues associated with using a triangular density as a substitute for the normal and lognormal densities are discussed, and we explore the resulting numerical approximation errors for the normal case. Finally, we present several examples that highlight simple decision rules-of-thumb that the use of the approximation generates. Such rules-of-thumb, which are useful in risk and reliability analysis and general business analysis, can be difficult or impossible to extract without the use of approximations. These examples include uses of the approximation in generating random deviates, uses in mixture models for risk analysis, and an illustrative decision analysis problem. It is our belief that this exploratory look at the triangular approximation to the normal will provoke other practitioners to explore its possible use in various domains and applications
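
    A hedged numerical check of the basic approximation: matching the variance (a symmetric triangular density on [μ−a, μ+a] has variance a²/6, so a = σ√6) is one convenient choice, not necessarily the rule-of-thumb the paper derives:

        import numpy as np
        from scipy import stats

        mu, sigma = 0.0, 1.0
        a = sigma * np.sqrt(6.0)                              # match variances
        tri  = stats.triang(c=0.5, loc=mu - a, scale=2 * a)   # symmetric triangular density
        norm = stats.norm(mu, sigma)

        xs = np.linspace(-3, 3, 61)
        max_cdf_err = np.max(np.abs(tri.cdf(xs) - norm.cdf(xs)))
        print(f"max CDF discrepancy on [-3, 3]: {max_cdf_err:.4f}")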

  20. Objective classification of latent behavioral states in bio-logging data using multivariate-normal hidden Markov models.

    Science.gov (United States)

    Phillips, Joe Scutt; Patterson, Toby A; Leroy, Bruno; Pilling, Graham M; Nicol, Simon J

    2015-07-01

Analysis of complex time-series data from studies of ecological systems requires quantitative tools for objective description and classification. These tools must take into account largely ignored problems of bias in manual classification, autocorrelation, and noise. Here we describe a method using existing estimation techniques for multivariate-normal hidden Markov models (HMMs) to develop such a classification. We use high-resolution behavioral data from bio-loggers attached to free-roaming pelagic tuna as an example. Observed patterns are assumed to be generated by an unseen Markov process that switches between several multivariate-normal distributions. Our approach is assessed in two parts. The first uses simulation experiments, from which the ability of the HMM to estimate known parameter values is examined using artificial time series of data consistent with hypotheses about pelagic predator foraging ecology. The second is the application to time series of continuous vertical movement data from yellowfin and bigeye tuna taken from tuna tagging experiments. These data were compressed into summary metrics capturing the variation of patterns in diving behavior and formed into a multivariate time series used to estimate a HMM. Each observation was associated with covariate information incorporating the effect of day and night on behavioral switching. Known parameter values were well recovered by the HMMs in our simulation experiments, resulting in mean correct classification rates of 90-97%, although some variance-covariance parameters were estimated less accurately. HMMs with two distinct behavioral states were selected for every time series of real tuna data, predicting a shallow warm state, which was similar across all individuals, and a deep colder state, which was more variable. Marked diurnal behavioral switching was predicted, consistent with many previous empirical studies on tuna. HMMs provide easily interpretable models for the objective classification of latent behavioral states …
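
    A sketch of the estimation step using the third-party hmmlearn package (the authors' implementation and the day/night covariates are not reproduced here); the dive-summary series is synthetic:

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        # Synthetic multivariate "dive summary" series: two alternating behavioral regimes
        rng = np.random.default_rng(3)
        shallow = rng.normal([20.0, 24.0], [5.0, 1.0], size=(200, 2))    # depth (m), temp (C)
        deep    = rng.normal([250.0, 12.0], [40.0, 2.0], size=(200, 2))
        X = np.vstack([shallow, deep, shallow])

        model = GaussianHMM(n_components=2, covariance_type="full", n_iter=100)
        model.fit(X)
        states = model.predict(X)   # most likely hidden-state sequence (Viterbi decoding)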

  1. Sphalerons, deformed sphalerons and normal modes

    International Nuclear Information System (INIS)

    Brihaye, Y.; Kunz, J.; Oldenburg Univ.

    1992-01-01

Topological arguments suggest that the Weinberg-Salam model possesses unstable solutions, sphalerons, representing the top of energy barriers between inequivalent vacua of the gauge theory. In the limit of vanishing Weinberg angle, such unstable solutions are known: the sphaleron of Klinkhamer and Manton and, at large values of the Higgs mass, in addition the deformed sphalerons. Here a systematic study of the discrete normal modes about these sphalerons for the full range of Higgs masses is presented. The emergence of deformed sphalerons at critical values of the Higgs mass is seen to be related to the eigenvalue of a particular normal mode about the sphaleron crossing zero. 6 figs., 1 tab., 19 refs. (author)

  2. Normal modes of vibration in nickel

    Energy Technology Data Exchange (ETDEWEB)

    Birgeneau, R J [Yale Univ., New Haven, Connecticut (United States); Cordes, J [Cambridge Univ., Cambridge (United Kingdom); Dolling, G; Woods, A D B

    1964-07-01

The frequency-wave-vector dispersion relation, ν(q), for the normal vibrations of a nickel single crystal at 296°K has been measured for the [ζ00], [ζζ0], [ζζζ], and [0ζ1] symmetric directions using inelastic neutron scattering. The results can be described in terms of the Born-von Karman theory of lattice dynamics with interactions out to fourth-nearest neighbors. The shapes of the dispersion curves are very similar to those of copper, the normal mode frequencies in nickel being about 1.24 times the corresponding frequencies in copper. The fourth-neighbor model was used to calculate the frequency distribution function g(ν) and related thermodynamic properties. (author)

  3. Atlas-based head modeling and spatial normalization for high-density diffuse optical tomography: in vivo validation against fMRI.

    Science.gov (United States)

    Ferradal, Silvina L; Eggebrecht, Adam T; Hassanpour, Mahlega; Snyder, Abraham Z; Culver, Joseph P

    2014-01-15

Diffuse optical imaging (DOI) is increasingly becoming a valuable neuroimaging tool when fMRI is precluded. Recent developments in high-density diffuse optical tomography (HD-DOT) overcome previous limitations of sparse DOI systems, providing improved image quality and brain specificity. These improvements in instrumentation prompt the need for advancements in both i) realistic forward light modeling for accurate HD-DOT image reconstruction, and ii) spatial normalization for voxel-wise comparisons across subjects. Individualized forward light models derived from subject-specific anatomical images provide the optimal inverse solutions, but such modeling may not be feasible in all situations. In the absence of subject-specific anatomical images, atlas-based head models registered to the subject's head using cranial fiducials provide an alternative solution. In addition, a standard atlas is attractive because it defines a common coordinate space in which to compare results across subjects. The question therefore arises as to whether atlas-based forward light modeling ensures adequate HD-DOT image quality at the individual and group level. Herein, we demonstrate the feasibility of using atlas-based forward light modeling and spatial normalization methods. Both techniques are validated using subject-matched HD-DOT and fMRI data sets for visual evoked responses measured in five healthy adult subjects. HD-DOT reconstructions obtained with the registered atlas anatomy (i.e. atlas DOT) had an average localization error of 2.7 mm relative to reconstructions obtained with the subject-specific anatomical images (i.e. subject-MRI DOT), and 6.6 mm relative to fMRI data. At the group level, the localization error of atlas DOT reconstruction was 4.2 mm relative to subject-MRI DOT reconstruction, and 6.1 mm relative to fMRI. These results show that atlas-based image reconstruction provides a viable approach to individual head modeling for HD-DOT when anatomical imaging is not available.

  4. XYLITOL IMPROVES ANTI-OXIDATIVE DEFENSE SYSTEM IN SERUM, LIVER, HEART, KIDNEY AND PANCREAS OF NORMAL AND TYPE 2 DIABETES MODEL OF RATS.

    Science.gov (United States)

    Chukwuma, Chika Ifeanyi; Islam, Shahidul

    2017-05-01

    The present study investigated the anti-oxidative effects of xylitol both in vitro and in vivo in normal and type 2 diabetes (T2D) rat models. Free radical scavenging and ferric reducing potentials of different concentrations of xylitol were investigated in vitro. For the in vivo study, six-week-old male Sprague-Dawley rats were divided into four groups, namely: Normal Control (NC), Diabetic Control (DBC), Normal Xylitol (NXYL) and Diabetic Xylitol (DXYL). T2D was induced in the DBC and DXYL groups. After the confirmation of diabetes, a 10% xylitol solution was supplied instead of drinking water to NXYL and DXYL, while normal drinking water was supplied to NC and DBC ad libitum. After a five-week intervention period, the animals were sacrificed and thiobarbituric acid reactive substances (TBARS) and reduced glutathione (GSH) concentrations as well as superoxide dismutase, catalase, glutathione reductase and glutathione peroxidase activities were determined in the liver, heart, kidney, pancreatic tissues and serum samples. Xylitol exhibited significant (p < 0.05) … foods and food products.

  5. Mathematical Model of Two Phase Flow in Natural Draft Wet-Cooling Tower Including Flue Gas Injection

    Directory of Open Access Journals (Sweden)

    Hyhlík Tomáš

    2016-01-01

    The previously developed model of natural draft wet-cooling tower flow, heat and mass transfer is extended to be able to take into account the flow of supersaturated moist air. The two phase flow model is based on void fraction of gas phase which is included in the governing equations. Homogeneous equilibrium model, where the two phases are well mixed and have the same velocity, is used. The effect of flue gas injection is included into the developed mathematical model by using source terms in governing equations and by using momentum flux coefficient and kinetic energy flux coefficient. Heat and mass transfer in the fill zone is described by the system of ordinary differential equations, where the mass transfer is represented by measured fill Merkel number and heat transfer is calculated using prescribed Lewis factor.
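
    As a rough illustration of the Merkel-number description of the fill zone, the sketch below evaluates the counterflow Merkel integral by numerical quadrature. It is a minimal sketch under stated assumptions, not the paper's model: the saturated-air enthalpy correlation, the linear air-side energy balance, and all numeric values are hypothetical stand-ins.

    ```python
    from scipy.integrate import quad

    CP_W = 4186.0  # specific heat of water, J/(kg K)

    def h_sat(T):
        """Hypothetical polynomial fit for the enthalpy of saturated moist air
        (J/kg dry air) vs. water temperature T in deg C; a real study would
        use a proper psychrometric correlation."""
        return 1e3 * (9.37 + 1.73 * T + 0.0177 * T**2 + 0.00076 * T**3)

    def merkel_number(T_in, T_out, h_a_in, mw_over_ma):
        """Counterflow Merkel integral Me = int c_pw dT / (h_sw(T) - h_a(T)),
        with the air enthalpy h_a following from an energy balance on the fill."""
        def integrand(T):
            h_a = h_a_in + mw_over_ma * CP_W * (T - T_out)  # air enthalpy at level T
            return CP_W / (h_sat(T) - h_a)
        Me, _ = quad(integrand, T_out, T_in)
        return Me

    # hot water 40 degC cooled to 30 degC; inlet air enthalpy 60 kJ/kg; L/G = 1
    print(merkel_number(40.0, 30.0, 60e3, 1.0))
    ```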

  6. Normalized inverse characterization of sound absorbing rigid porous media.

    Science.gov (United States)

    Zieliński, Tomasz G

    2015-06-01

    This paper presents a methodology for the inverse characterization of sound absorbing rigid porous media, based on standard measurements of the surface acoustic impedance of a porous sample. The model parameters need to be normalized to have a robust identification procedure which fits the model-predicted impedance curves with the measured ones. Such a normalization provides a substitute set of dimensionless (normalized) parameters unambiguously related to the original model parameters. Moreover, two scaling frequencies are introduced, however, they are not additional parameters and for different, yet reasonable, assumptions of their values, the identification procedure should eventually lead to the same solution. The proposed identification technique uses measured and computed impedance curves for a porous sample not only in the standard configuration, that is, set to the rigid termination piston in an impedance tube, but also with air gaps of known thicknesses between the sample and the piston. Therefore, all necessary analytical formulas for sound propagation in double-layered media are provided. The methodology is illustrated by one numerical test and by two examples based on the experimental measurements of the acoustic impedance and absorption of porous ceramic samples of different thicknesses and a sample of polyurethane foam.
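
    To make the normalization idea concrete, here is a minimal sketch of such an identification loop. The two-parameter "impedance model" is a hypothetical toy stand-in (a real study would use a rigid-porous model such as Johnson-Champoux-Allard), and the scaling frequency F0 plays the role of the paper's scaling frequencies: the optimizer only ever sees dimensionless O(1) parameters.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Toy stand-in for the surface-impedance model; this two-parameter
    # function and all numbers below are hypothetical.
    def surface_impedance(f, sigma, phi):
        return (1.0 + sigma / (1j * 2.0 * np.pi * f)) / phi

    # Normalization: a scaling frequency F0 maps the physical parameters to
    # dimensionless O(1) quantities, which makes the fit well conditioned.
    F0 = 1000.0
    def to_physical(x):
        return x[0] * 2.0 * np.pi * F0, x[1]   # (resistivity-like, porosity)

    freqs = np.linspace(200.0, 2000.0, 50)
    z_meas = surface_impedance(freqs, 6000.0, 0.9)  # synthetic "measurement"

    def residuals(x):
        r = surface_impedance(freqs, *to_physical(x)) - z_meas
        return np.concatenate([r.real, r.imag])

    fit = least_squares(residuals, x0=[0.5, 0.5],
                        bounds=([0.01, 0.01], [10.0, 1.0]))
    print(to_physical(fit.x))   # recovers approximately (6000, 0.9)
    ```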

  7. Normal computed tomographic anatomy of the cisterns and cranial nerves

    International Nuclear Information System (INIS)

    Manelfe, C.; Bonafe, A.

    1980-01-01

    This study, based on the normal CT anatomy of the cisterns investigated with Metrizamide, aims at finding out with accuracy which plane of section is the most suitable for the investigation of each group of cisterns (posterior fossa, mesencephalon, suprasellar). Moreover, we felt it necessary to include in our study the normal appearance of the cranial nerves, as their normal CT anatomy - optic nerves excepted - is not well known yet. (orig./AJ) [de]

  8. SwarmDock and the Use of Normal Modes in Protein-Protein Docking

    Directory of Open Access Journals (Sweden)

    Paul A. Bates

    2010-09-01

    Here is presented an investigation of the use of normal modes in protein-protein docking, both in theory and in practice. Upper limits of the ability of normal modes to capture the unbound to bound conformational change are calculated on a large test set, with particular focus on the binding interface, the subset of residues from which the binding energy is calculated. Further, the SwarmDock algorithm is presented, to demonstrate that the modelling of conformational change as a linear combination of normal modes is an effective method of modelling flexibility in protein-protein docking.
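
    The core operation, modelling conformational change as a linear combination of normal modes, is compact enough to sketch. The snippet below is an illustration only: the coordinates and random "modes" are placeholders, whereas real modes would come from an elastic-network or all-atom normal mode analysis.

    ```python
    import numpy as np

    def deform_along_modes(coords, modes, amplitudes):
        """Model conformational change as x' = x + sum_k a_k * m_k.
        coords: (N, 3); modes: (K, N, 3); amplitudes: (K,)."""
        disp = np.tensordot(amplitudes, modes, axes=1)  # (N, 3) displacement
        return coords + disp

    # Toy example: 4 pseudo-atoms, 2 hypothetical modes
    rng = np.random.default_rng(0)
    coords = rng.normal(size=(4, 3))
    modes = rng.normal(size=(2, 4, 3))
    # Normalize each mode vector (a common convention)
    modes /= np.linalg.norm(modes.reshape(2, -1), axis=1)[:, None, None]
    print(deform_along_modes(coords, modes, amplitudes=np.array([0.5, -0.2])))
    ```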

  9. RF Breakdown in Normal Conducting Single-Cell Structures

    International Nuclear Information System (INIS)

    Dolgashev, V.A.; Nantista, C.D.; Tantawi, S.G.; Higashi, Y.; Higo, T.

    2006-01-01

    Operating accelerating gradient in normal conducting accelerating structures is often limited by rf breakdown. The limit depends on multiple parameters, including input rf power, rf circuit, cavity shape and material. Experimental and theoretical study of the effects of these parameters on the breakdown limit in full scale structures is difficult and costly. We use 11.4 GHz single-cell traveling wave and standing wave accelerating structures for experiments and modeling of rf breakdown behavior. These test structures are designed so that the electromagnetic fields in one cell mimic the fields in prototype multicell structures for the X-band linear collider. Fields elsewhere in the test structures are significantly lower than those of the single cell. The setup uses matched mode converters that launch the circular TM{sub 01} mode into short test structures. The test structures are connected to the mode launchers with vacuum rf flanges. This setup allows economic testing of different cell geometries, cell materials and preparation techniques with short turn-around time. The simple 2D geometry of the test structures simplifies modeling of the breakdown currents and their thermal effects.

  10. A coupled oscillator model describes normal and strange zooplankton swimming behaviour

    NARCIS (Netherlands)

    Ringelberg, J.; Lingeman, R.

    2003-01-01

    "Normal" swimming in marine and freshwater zooplankton is often intermittent with active upward and more passive downward displacements. In the freshwater cladoceran Daphnia, the pattern is sometimes regular enough to demonstrate the presence of a rhythm. Abnormal swimming patterns were also

  11. Subtle alterations in memory systems and normal visual attention in the GAERS model of absence epilepsy.

    Science.gov (United States)

    Marques-Carneiro, J E; Faure, J-B; Barbelivien, A; Nehlig, A; Cassel, J-C

    2016-03-01

    Even if considered benign, absence epilepsy may alter memory and attention, sometimes subtly. Very little is known on behavior and cognitive functions in the Genetic Absence Epilepsy Rats from Strasbourg (GAERS) model of absence epilepsy. We focused on different memory systems and sustained visual attention, using Non Epileptic Controls (NECs) and Wistars as controls. A battery of cognitive/behavioral tests was used. The functionality of reference, working, and procedural memory was assessed in the Morris water maze (MWM), 8-arm radial maze, T-maze and/or double-H maze. Sustained visual attention was evaluated in the 5-choice serial reaction time task. In the MWM, GAERS showed delayed learning and less efficient working memory. In the 8-arm radial maze and T-maze tests, working memory performance was normal in GAERS, although most GAERS preferred an egocentric strategy (based on proprioceptive/kinesthetic information) to solve the task, but could efficiently shift to an allocentric strategy (based on spatial cues) after protocol alteration. Procedural memory and visual attention were mostly unimpaired. Absence epilepsy has been associated with some learning problems in children. In GAERS, the differences in water maze performance (slower learning of the reference memory task and weak impairment of working memory) and in radial arm maze strategies suggest that cognitive alterations may be subtle, task-specific, and that normal performance can be a matter of strategy adaptation. Altogether, these results strengthen the "face validity" of the GAERS model: in humans with absence epilepsy, cognitive alterations are not easily detectable, which is compatible with subtle deficits. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Subsequent development of the normal temperature fusion reaction. Joon kakuyugo sonogo no shinten

    Energy Technology Data Exchange (ETDEWEB)

    Matsumoto, T. (Hokkaido University, Sapporo (Japan). Faculty of Engineering)

    1991-04-24

    This paper reports on the NATTOH model made public in May 1989 by T. Matsumoto, who took notice of anomalies in the normal temperature fusion reaction. The NATTOH model is based on a chain reaction of hydrogen whose elementary process is a hydrogen-catalyzed fusion reaction, i.e., the normal temperature fusion reaction. If the high temperature fusion reaction is a small-scale simulation of the fusion reactions occurring on the surface of a shining star like the sun, the normal temperature fusion reaction can be a small-scale simulation of the phenomena in the final years of a star in the far reaches of space. This lends reality to the normal temperature fusion reaction. The reaction mechanism of the normal temperature fusion reaction is largely clarified by the NATTOH model. Problems remain concerning the possible generation of unknown radioactive rays and the identification of radioactive wastes, but a prospect of commercialization can now be discussed. As for utilization as an energy source, sea water may be used as it is. 10 ref., 5 figs.

  13. Syntactic error modeling and scoring normalization in speech recognition: Error modeling and scoring normalization in the speech recognition task for adult literacy training

    Science.gov (United States)

    Olorenshaw, Lex; Trawick, David

    1991-01-01

    The purpose was to develop a speech recognition system able to detect speech which is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Better mechanisms are provided for using speech recognition in a literacy tutor application. Using a combination of scoring normalization techniques and cheater-mode decoding, a reasonable acceptance/rejection threshold was obtained. In continuous speech, the system was able to provide above 80 pct. correct acceptance of words, while correctly rejecting over 80 pct. of incorrectly pronounced words.

  14. Physics of collisionless scrape-off-layer plasma during normal and off-normal Tokamak operating conditions

    International Nuclear Information System (INIS)

    Hassanein, A.; Konkashbaev, I.

    1999-01-01

    The structure of a collisionless scrape-off-layer (SOL) plasma in tokamak reactors is being studied to define the electron distribution function and the corresponding sheath potential between the divertor plate and the edge plasma. The collisionless model is shown to be valid during the thermal phase of a plasma disruption, as well as during the newly desired low-recycling normal phase of operation with low-density, high-temperature, edge plasma conditions. An analytical solution is developed by solving the Fokker-Planck equation for electron distribution and balance in the SOL. The solution is in good agreement with numerical studies using Monte Carlo methods. The analytical solutions provide insight into the role of different physical and geometrical processes in a collisionless SOL during disruptions and during the enhanced phase of normal operation over a wide range of parameters

  15. Fluid-structure interaction including volumetric coupling with homogenised subdomains for modeling respiratory mechanics.

    Science.gov (United States)

    Yoshihara, Lena; Roth, Christian J; Wall, Wolfgang A

    2017-04-01

    In this article, a novel approach is presented for combining standard fluid-structure interaction with additional volumetric constraints to model fluid flow into and from homogenised solid domains. The proposed algorithm is particularly interesting for investigations in the field of respiratory mechanics as it enables the mutual coupling of airflow in the conducting part and local tissue deformation in the respiratory part of the lung by means of a volume constraint. In combination with a classical monolithic fluid-structure interaction approach, a comprehensive model of the human lung can be established that will be useful to gain new insights into respiratory mechanics in health and disease. To illustrate the validity and versatility of the novel approach, three numerical examples including a patient-specific lung model are presented. The proposed algorithm proves its capability of computing clinically relevant airflow distribution and tissue strain data at a level of detail that is not yet achievable, neither with current imaging techniques nor with existing computational models. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Time-invariant component-based normalization for a simultaneous PET-MR scanner.

    Science.gov (United States)

    Belzunce, M A; Reader, A J

    2016-05-07

    Component-based normalization is a method used to compensate for the sensitivity of each of the lines of response acquired in positron emission tomography. This method consists of modelling the sensitivity of each line of response as a product of multiple factors, which can be classified as time-invariant, time-variant and acquisition-dependent components. Typical time-variant factors are the intrinsic crystal efficiencies, which need to be updated by a regular normalization scan. Failure to do so would in principle generate artifacts in the reconstructed images due to the use of out of date time-variant factors. For this reason, an assessment of the variability and the impact of the crystal efficiencies in the reconstructed images is important to determine the frequency needed for the normalization scans, as well as to estimate the error obtained when an inappropriate normalization is used. Furthermore, if the fluctuations of these components are low enough, they could be neglected and nearly artifact-free reconstructions become achievable without performing a regular normalization scan. In this work, we analyse the impact of the time-variant factors in the component-based normalization used in the Biograph mMR scanner, but the work is applicable to other PET scanners. These factors are the intrinsic crystal efficiencies and the axial factors. For the latter, we propose a new method to obtain fixed axial factors that was validated with simulated data. Regarding the crystal efficiencies, we assessed their fluctuations during a period of 230 d and we found that they had good stability and low dispersion. We studied the impact of not including the intrinsic crystal efficiencies in the normalization when reconstructing simulated and real data. Based on this assessment and using the fixed axial factors, we propose the use of a time-invariant normalization that is able to achieve comparable results to the standard, daily updated, normalization factors used in this scanner.
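
    The product-of-factors idea can be illustrated in a few lines. The sketch below is a hedged toy, not the Biograph mMR's actual factor set: the factor layout and all numbers are hypothetical, and a real scanner model distinguishes many more geometric and acquisition-dependent components.

    ```python
    import numpy as np

    def lor_normalization(eps, axial, geom):
        """Component-based normalization: the sensitivity of the line of
        response joining crystals i and j is a product of factors,
        n_ij = eps_i * eps_j * axial * geom_ij, with eps the (time-variant)
        intrinsic crystal efficiencies and the axial/geometric factors
        treated as time-invariant."""
        return eps[:, None] * eps[None, :] * geom * axial

    rng = np.random.default_rng(1)
    eps = 1.0 + 0.05 * rng.normal(size=8)  # low dispersion, as the paper found
    geom = np.ones((8, 8))                 # placeholder geometric factors
    print(lor_normalization(eps, axial=0.97, geom=geom).round(3))
    ```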

  17. Speech rate normalization used to improve speaker verification

    CSIR Research Space (South Africa)

    Van Heerden, CJ

    2006-11-01

    … The EER using the normalized durations is then compared with the EER using unnormalized durations, and also with the EER when duration information is not employed. 2. Proposed phoneme duration modeling. 2.1. Choosing parametric models. Since the duration of a phoneme … the known transcription and the speaker-specific acoustic model described above. Only one pronunciation per word was allowed, thus resulting in 49 triphones. To decide which parametric model to use for the duration density functions of the triphones …

  18. Birkhoff normalization

    NARCIS (Netherlands)

    Broer, H.; Hoveijn, I.; Lunter, G.; Vegter, G.

    2003-01-01

    The Birkhoff normal form procedure is a widely used tool for approximating a Hamiltonian system by a simpler one. This chapter starts out with an introduction to Hamiltonian mechanics, followed by an explanation of the Birkhoff normal form procedure. Finally we discuss several algorithms for computing the Birkhoff normal form.

  19. Relating Memory To Functional Performance In Normal Aging to Dementia Using Hierarchical Bayesian Cognitive Processing Models

    Science.gov (United States)

    Shankle, William R.; Pooley, James P.; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D.

    2012-01-01

    Determining how cognition affects functional abilities is important in Alzheimer’s disease and related disorders (ADRD). 280 patients (normal or ADRD) received a total of 1,514 assessments using the Functional Assessment Staging Test (FAST) procedure and the MCI Screen (MCIS). A hierarchical Bayesian cognitive processing (HBCP) model was created by embedding a signal detection theory (SDT) model of the MCIS delayed recognition memory task into a hierarchical Bayesian framework. The SDT model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the six FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. HBCP models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition to a continuous measure of functional severity for both individuals and FAST groups. Such a translation links two levels of brain information processing, and may enable more accurate correlations with other levels, such as those characterized by biomarkers. PMID:22407225
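
    The signal detection theory core of this approach is compact. The sketch below shows the textbook equal-variance SDT mapping from hit and false-alarm rates to discriminability (d') and response bias (c); the hierarchical Bayesian layer the paper builds on top of this is not reproduced here, and the example rates are hypothetical.

    ```python
    from scipy.stats import norm

    def sdt_parameters(hit_rate, fa_rate):
        """Equal-variance signal detection theory: recover discriminability
        d' and response bias c from hit and false-alarm rates."""
        z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_h - z_f
        c = -(z_h + z_f) / 2.0
        return d_prime, c

    # e.g. 85% hits, 20% false alarms on a delayed recognition task
    print(sdt_parameters(0.85, 0.20))  # d' ~ 1.88, c ~ -0.10
    ```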

  20. Global stability for infectious disease models that include immigration of infected individuals and delay in the incidence

    Directory of Open Access Journals (Sweden)

    Chelsea Uggenti

    2018-03-01

    We begin with a detailed study of a delayed SI model of disease transmission with immigration into both classes. The incidence function allows for a nonlinear dependence on the infected population, including mass action and saturating incidence as special cases. Due to the immigration of infectives, there is no disease-free equilibrium and hence no basic reproduction number. We show there is a unique endemic equilibrium and that this equilibrium is globally asymptotically stable for all parameter values. The results include vector-style delay and latency-style delay. Next, we show that previous global stability results for an SEI model and an SVI model that include immigration of infectives and non-linear incidence but not delay can be extended to systems with vector-style delay and latency-style delay.
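
    A minimal numerical sketch of the undelayed, mass-action special case is shown below; the paper's full model also covers saturating incidence and vector/latency-style delays, which are omitted here for brevity, and all parameter values are hypothetical. Because infectives immigrate (A_I > 0), the infected class can never die out, which is why there is no disease-free equilibrium.

    ```python
    from scipy.integrate import solve_ivp

    # Mass-action special case without delay, for illustration only.
    A_S, A_I = 20.0, 1.0          # immigration into S and I (A_I > 0: no DFE)
    BETA, MU, GAMMA = 0.01, 0.1, 0.2

    def rhs(t, y):
        S, I = y
        new_inf = BETA * S * I    # mass-action incidence
        return [A_S - new_inf - MU * S,
                A_I + new_inf - (MU + GAMMA) * I]

    sol = solve_ivp(rhs, (0.0, 400.0), [150.0, 5.0])
    print(sol.y[:, -1])  # trajectories approach the unique endemic equilibrium
    ```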

  1. Mathematical modelling of liquid meniscus shape in cylindrical micro-channel for normal and micro gravity conditions

    Science.gov (United States)

    Marchuk, Igor; Lyulin, Yuriy

    2017-10-01

    A mathematical model of the liquid meniscus shape in a cylindrical micro-channel of the separator unit of a condensing/separating system is presented. A moving liquid meniscus in the 10 μm cylindrical microchannel is used as a liquid lock to recover the liquid obtained by condensation from the separators. The main goal of the liquid locks is to prevent penetration of the gas phase into the liquid line at small flow rates of the condensate and because of pressure fluctuations in the vapor-gas-liquid loop. Calculation of the meniscus shape has been performed for liquid FC-72 at different values of the gas-liquid pressure difference and under normal and micro gravity conditions.

  2. Modeling the cool down of the primary heat transport system using shut down cooling system in normal operation and after events such as LOCA

    International Nuclear Information System (INIS)

    Icleanu, D.L.; Prisecaru, I.

    2015-01-01

    This paper aims at modeling the cooling of the primary heat transport system using the shutdown cooling system (SDCS) for a CANDU 6 NPP in all operating modes, normal and abnormal (particularly in the case of a LOCA), using the Flowmaster calculation code. The modelling of heavy water flow through the shutdown cooling system and the primary heat transport system was performed to determine the distribution of flows and pressures in various areas of the hydraulic circuit and the pressure losses in the components, as well as to perform the thermal calculation of the system's heat exchangers. The results of the thermo-hydraulic analysis show that in all cases analyzed, normal operation and the LOCA accident regime, the performance requirements are confirmed by the analysis.

  3. Updated US and Canadian normalization factors for TRACI 2.1

    DEFF Research Database (Denmark)

    Ryberg, Morten; Vieira, Marisa D. M.; Zgola, Melissa

    2014-01-01

    When LCA practitioners perform LCAs, the interpretation of the results can be difficult without a reference point to benchmark the results. Hence, normalization factors are important for relating results to a common reference. The main purpose of this paper was to update the normalization factors for the US and US-Canadian regions. The normalization factors were used for highlighting the most contributing substances, thereby enabling practitioners to put more focus on important substances when compiling the inventory, as well as providing them with normalization factors reflecting the actual situation. Normalization factors were calculated using characterization factors from the TRACI 2.1 LCIA model. The inventory was based on US databases on emissions of substances. The Canadian inventory was based on a previous inventory with 2005 as reference; in this inventory the most significant …

  4. Divisive normalization and neuronal oscillations in a single hierarchical framework of selective visual attention

    Directory of Open Access Journals (Sweden)

    Jorrit Steven Montijn

    2012-05-01

    In divisive normalization models of covert attention, spike rate modulations are commonly used as indicators of the effect of top-down attention. In addition, an increasing number of studies have shown that top-down attention increases the synchronization of neuronal oscillations as well, particularly those in gamma-band frequencies (25 to 100 Hz). Although modulations of spike rate and synchronous oscillations are not mutually exclusive as mechanisms of attention, there has thus far been little effort to integrate these concepts into a single framework of attention. Here, we aim to provide such a unified framework by expanding the normalization model of attention with a time dimension, allowing the simulation of a recently reported backward progression of attentional effects along the visual cortical hierarchy. A simple hierarchical cascade of normalization models simulating different cortical areas however leads to signal degradation and a loss of discriminability over time. To negate this degradation and ensure stable neuronal stimulus representations, we incorporate oscillatory phase entrainment into our model, a mechanism previously proposed as the communication-through-coherence (CTC) hypothesis. Our analysis shows that divisive normalization and oscillation models can complement each other in a unified account of the neural mechanisms of selective visual attention. The resulting hierarchical normalization and oscillation (HNO) model reproduces several additional spatial and temporal aspects of attentional modulation.
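
    For readers unfamiliar with the building block, the sketch below shows a heavily simplified divisive normalization step with attentional gain, in the spirit of Reynolds and Heeger's normalization model of attention. It is not the paper's HNO model: the time dimension, the cortical hierarchy and the phase entrainment are all absent, and the numbers are hypothetical.

    ```python
    import numpy as np

    def normalized_response(drive, attn_gain, sigma=1.0):
        """Divisive normalization with attentional gain: the attended
        excitatory drive of each unit is divided by the pooled drive of
        the whole population (plus a semi-saturation constant sigma)."""
        e = attn_gain * drive
        return e / (sigma + e.sum())

    drive = np.array([4.0, 2.0, 1.0])   # stimulus drive of three units
    attn = np.array([1.5, 1.0, 1.0])    # extra gain on the attended unit
    print(normalized_response(drive, attn))
    ```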

  5. Investigation of normal organ development with fetal MRI

    International Nuclear Information System (INIS)

    Prayer, Daniela; Brugger, Peter C.

    2007-01-01

    The understanding of the presentation of normal organ development on fetal MRI forms the basis for recognition of pathological states. During the second and third trimesters, maturational processes include changes in size, shape and signal intensities of organs. Visualization of these developmental processes requires tailored MR protocols. Further prerequisites for recognition of normal maturational states are unequivocal intrauterine orientation with respect to left and right body halves, fetal proportions, and knowledge about the MR presentation of extrafetal/intrauterine organs. Emphasis is laid on the demonstration of normal MR appearance of organs that are frequently involved in malformation syndromes. In addition, examples of time-dependent contrast enhancement of intrauterine structures are given. (orig.)

  6. Investigation of normal organ development with fetal MRI

    Energy Technology Data Exchange (ETDEWEB)

    Prayer, Daniela [Medical University of Vienna, Department of Radiology, Vienna (Austria); Brugger, Peter C. [Medical University of Vienna, Center of Anatomy and Cell Biology, Integrative Morphology Group, Vienna (Austria)

    2007-10-15

    The understanding of the presentation of normal organ development on fetal MRI forms the basis for recognition of pathological states. During the second and third trimesters, maturational processes include changes in size, shape and signal intensities of organs. Visualization of these developmental processes requires tailored MR protocols. Further prerequisites for recognition of normal maturational states are unequivocal intrauterine orientation with respect to left and right body halves, fetal proportions, and knowledge about the MR presentation of extrafetal/intrauterine organs. Emphasis is laid on the demonstration of normal MR appearance of organs that are frequently involved in malformation syndromes. In addition, examples of time-dependent contrast enhancement of intrauterine structures are given. (orig.)

  7. Modeling of the dynamics of wind to power conversion including high wind speed behavior

    DEFF Research Database (Denmark)

    Litong-Palima, Marisciel; Bjerge, Martin Huus; Cutululis, Nicolaos Antonio

    2016-01-01

    This paper proposes and validates an efficient, generic and computationally simple dynamic model for the conversion of the wind speed at hub height into the electrical power of a wind turbine. This proposed wind turbine model was developed as a first step to simulate wind power time series for power system studies. This paper focuses on describing and validating the single wind turbine model, and is therefore neither describing wind speed modeling nor aggregation of contributions from a whole wind farm or a power system area. The state-of-the-art is to use static power curves for the purpose of power system studies, but the idea of the proposed wind turbine model is to include the main dynamic effects in order to have a better representation of the fluctuations in the output power and of the fast power ramping, especially because of high wind speed shutdowns of the wind turbine. The high wind …
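
    A minimal sketch of the idea, a static power curve augmented with simple dynamics and a high-wind shutdown state, is given below. It is an assumption-laden illustration, not the paper's validated model: the turbine parameters, the cubic ramp between cut-in and rated speed, the first-order lag and the hysteresis band are all hypothetical choices.

    ```python
    import numpy as np

    # Hypothetical turbine: 2 MW rated, cut-in 4 m/s, rated 12 m/s,
    # shutdown at 25 m/s with restart hysteresis at 20 m/s.
    P_RATED, V_IN, V_RATED, V_OUT, V_RESTART = 2.0e6, 4.0, 12.0, 25.0, 20.0

    def static_power(v):
        if v < V_IN or v >= V_OUT:
            return 0.0
        if v >= V_RATED:
            return P_RATED
        return P_RATED * ((v - V_IN) / (V_RATED - V_IN)) ** 3

    def simulate(v_hub, dt=1.0, tau=5.0):
        """First-order lag on the power output plus a high-wind shutdown
        state with hysteresis -- a sketch of the dynamic effects added on
        top of a static power curve."""
        p, shutdown, out = 0.0, False, []
        for v in v_hub:
            if v >= V_OUT:
                shutdown = True
            elif shutdown and v <= V_RESTART:
                shutdown = False
            target = 0.0 if shutdown else static_power(v)
            p += dt / tau * (target - p)   # low-pass filtered power response
            out.append(p)
        return np.array(out)

    wind = 15.0 + 12.0 * np.sin(np.linspace(0, 2 * np.pi, 600))
    print(simulate(wind).max() / 1e6, "MW peak")
    ```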

  8. Normal Pressure Hydrocephalus (NPH)

    Science.gov (United States)

    Normal pressure hydrocephalus is a brain disorder … Normal pressure hydrocephalus occurs when excess cerebrospinal fluid …

  9. Are cancer cells really softer than normal cells?

    Science.gov (United States)

    Alibert, Charlotte; Goud, Bruno; Manneville, Jean-Baptiste

    2017-05-01

    Solid tumours are often first diagnosed by palpation, suggesting that the tumour is more rigid than its surrounding environment. Paradoxically, individual cancer cells appear to be softer than their healthy counterparts. In this review, we first list the physiological reasons indicating that cancer cells may be more deformable than normal cells. Next, we describe the biophysical tools that have been developed in recent years to characterise and model cancer cell mechanics. By reviewing the experimental studies that compared the mechanics of individual normal and cancer cells, we argue that cancer cells can indeed be considered as softer than normal cells. We then focus on the intracellular elements that could be responsible for the softening of cancer cells. Finally, we ask whether the mechanical differences between normal and cancer cells can be used as diagnostic or prognostic markers of cancer progression. © 2017 Société Française des Microscopies and Société de Biologie Cellulaire de France. Published by John Wiley & Sons Ltd.

  10. Asymmetry in the normal-metal to high-Tc superconductor tunnel junction

    International Nuclear Information System (INIS)

    Flensberg, K.; Hedegaard, P.; Brix, M.

    1988-01-01

    We show that the observed asymmetry in the I-V characteristics of high-Tc material to normal metal junctions can be explained within the Resonating-Valence-Bond model. For a bias current with electrons moving from the superconductor to the normal metal the current is quadratic in the bias voltage, and in the opposite case, with electrons moving from the normal metal to the superconductor, the current is linear in V. (orig.)

  11. Current transfer between superconductor and normal layer in coated conductors

    International Nuclear Information System (INIS)

    Takacs, S

    2007-01-01

    The current transfer between superconducting stripes coated with normal layer is examined in detail. It is shown that, in present YBCO coated conductors with striations, a considerable amount of the current flowing in the normal layer is not transferred into the superconducting stripes. This effect also influences the eddy currents and the coupling currents between the stripes. The effective resistance for the coupling currents is calculated. The maximum allowable twist length of such a striated structure is given, which ensures lower losses than in the corresponding normal conductor of the same volume as the total YBCO cable (including substrate, buffer layer, superconductor and normal coating). In addition, a new simple method for determining the transfer resistance between superconducting and normal parts is proposed

  12. Modelling and control of a microgrid including photovoltaic and wind generation

    Science.gov (United States)

    Hussain, Mohammed Touseef

    Extensive increase of distributed generation (DG) penetration and the existence of multiple DG units at the distribution level have introduced the notion of the micro-grid. This thesis develops detailed non-linear and small-signal dynamic models of a microgrid that includes PV, wind and conventional small scale generation along with their power electronics interfaces and filters. The models developed evaluate the amount of generation mix from various DGs for satisfactory steady state operation of the microgrid. In order to understand the interaction of the DGs in the microgrid system, two simpler configurations were considered first. The first consists of a microalternator, PV and their electronics, and the second consists of a microalternator and a wind system, each connected to the power system grid. Nonlinear and linear state space models of each microgrid are developed. Small signal analysis showed that large participation of PV/wind can drive the microgrid to the brink of the unstable region without adequate control. Non-linear simulations are carried out to verify the results obtained through small-signal analysis. The role of the extent of generation mix of a composite microgrid consisting of wind, PV and conventional generation was investigated next. The findings from the smaller systems were verified through nonlinear and small signal modeling. A central supervisory capacitor energy storage controller interfaced through a STATCOM was proposed to monitor and enhance the microgrid operation. The potential of various control inputs to provide additional damping to the system has been evaluated through decomposition techniques. The signals identified to have damping content were employed to design the supervisory control system. The controller gains were tuned through an optimal pole placement technique. Simulation studies demonstrate that the STATCOM voltage phase angle and PV inverter phase angle were the best inputs for enhanced stability boundaries.

  13. Subchronic Arsenic Exposure Induces Anxiety-Like Behaviors in Normal Mice and Enhances Depression-Like Behaviors in the Chemically Induced Mouse Model of Depression

    Directory of Open Access Journals (Sweden)

    Chia-Yu Chang

    2015-01-01

    Accumulating evidence implicates that subchronic arsenic exposure causes cerebral neurodegeneration leading to behavioral disturbances relevant to psychiatric disorders. However, there is still little information regarding the influence of subchronic exposure to arsenic-contaminated drinking water on mood disorders and its underlying mechanisms in the cerebral prefrontal cortex. The aim of this study is to assess the effects of subchronic arsenic exposure (10 mg/L As2O3 in drinking water) on the anxiety- and depression-like behaviors in normal mice and in the chemically induced mouse model of depression by reserpine pretreatment. Our findings demonstrated that 4 weeks of arsenic exposure enhance anxiety-like behaviors on the elevated plus maze (EPM) and open field test (OFT) in normal mice, and 8 weeks of arsenic exposure augment depression-like behaviors on the tail suspension test (TST) and forced swimming test (FST) in the reserpine pretreated mice. In summary, in this present study, we demonstrated that subchronic arsenic exposure induces only the anxiety-like behaviors in normal mice and enhances the depression-like behaviors in the reserpine induced mouse model of depression, in which the cerebral prefrontal cortex BDNF-TrkB signaling pathway is involved. We also found that eight weeks of subchronic arsenic exposure are needed to enhance the depression-like behaviors in the mouse model of depression. These findings imply that arsenic could be an enhancer of depressive symptoms for those patients who already had the attribute of depression.

  14. Importance of including small-scale tile drain discharge in the calibration of a coupled groundwater-surface water catchment model

    DEFF Research Database (Denmark)

    Hansen, Anne Lausten; Refsgaard, Jens Christian; Christensen, Britt Stenhøj Baun

    2013-01-01

    … the catchment. In this study, a coupled groundwater-surface water model based on the MIKE SHE code was developed for the 4.7 km2 Lillebæk catchment in Denmark, where tile drain flow is a major contributor to the stream discharge. The catchment model was calibrated in several steps by incrementally including the observation data into the calibration to see the effect on model performance of including diverse data types, especially tile drain discharge. For the Lillebæk catchment, measurements of hydraulic head, daily stream discharge, and daily tile drain discharge from five small (1–4 ha) drainage areas exist. The results showed that including tile drain data in the calibration of the catchment model improved its general performance for hydraulic heads and stream discharges. However, the model failed to correctly describe the local-scale dynamics of the tile drain discharges, and, furthermore, including the drain …

  15. Cell of origin associated classification of B-cell malignancies by gene signatures of the normal B-cell hierarchy.

    Science.gov (United States)

    Johnsen, Hans Erik; Bergkvist, Kim Steve; Schmitz, Alexander; Kjeldsen, Malene Krag; Hansen, Steen Møller; Gaihede, Michael; Nørgaard, Martin Agge; Bæch, John; Grønholdt, Marie-Louise; Jensen, Frank Svendsen; Johansen, Preben; Bødker, Julie Støve; Bøgsted, Martin; Dybkær, Karen

    2014-06-01

    Recent findings have suggested biological classification of B-cell malignancies as exemplified by the "activated B-cell-like" (ABC), the "germinal-center B-cell-like" (GCB) and primary mediastinal B-cell lymphoma (PMBL) subtypes of diffuse large B-cell lymphoma and "recurrent translocation and cyclin D" (TC) classification of multiple myeloma. Biological classification of B-cell derived cancers may be refined by a direct and systematic strategy where identification and characterization of normal B-cell differentiation subsets are used to define the cancer cell of origin phenotype. Here we propose a strategy combining multiparametric flow cytometry, global gene expression profiling and biostatistical modeling to generate B-cell subset specific gene signatures from sorted normal human immature, naive, germinal centrocytes and centroblasts, post-germinal memory B-cells, plasmablasts and plasma cells from available lymphoid tissues including lymph nodes, tonsils, thymus, peripheral blood and bone marrow. This strategy will provide an accurate image of the stage of differentiation, which prospectively can be used to classify any B-cell malignancy and eventually purify tumor cells. This report briefly describes the current models of the normal B-cell subset differentiation in multiple tissues and the pathogenesis of malignancies originating from the normal germinal B-cell hierarchy.

  16. Multi-source waveform inversion of marine streamer data using the normalized wavefield

    KAUST Repository

    Choi, Yun Seok

    2012-01-01

    Even though the encoded multi-source approach dramatically reduces the computational cost of waveform inversion, it is generally not applicable to marine streamer data. This is because the simultaneous-sources modeled data cannot be muted to comply with the configuration of the marine streamer data, which causes differences in the number of stacked-traces, or energy levels, between the modeled and observed data. Since the conventional L2 norm does not account for the difference in energy levels, multi-source inversion based on the conventional L2 norm does not work for marine streamer data. In this study, we propose the L2, approximated L2, and L1 norm using the normalized wavefields for the multi-source waveform inversion of marine streamer data. Since the normalized wavefields mitigate the different energy levels between the observed and modeled wavefields, the multi-source waveform inversion using the normalized wavefields can be applied to marine streamer data. We obtain the gradient of the objective functions using the back-propagation algorithm. To conclude, the gradient of the L2 norm using the normalized wavefields is exactly the same as that of the global correlation norm. In the numerical examples, the new objective functions using the normalized wavefields generate successful results whereas conventional L2 norm does not.
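
    The key objective function is easy to state and check numerically. The sketch below is an illustration with random stand-in traces (not seismic data): it computes the L2 misfit of normalized wavefields and verifies that it differs from the (negative) global correlation norm only by a constant, consistent with the paper's conclusion that the two share the same gradient.

    ```python
    import numpy as np

    def l2_normalized(d_obs, d_mod):
        """L2 misfit of normalized wavefields: 0.5 * || d/||d|| - u/||u|| ||^2."""
        dn = d_obs / np.linalg.norm(d_obs)
        un = d_mod / np.linalg.norm(d_mod)
        return 0.5 * np.sum((dn - un) ** 2)

    def global_correlation(d_obs, d_mod):
        """Negative zero-lag correlation of the normalized wavefields."""
        dn = d_obs / np.linalg.norm(d_obs)
        un = d_mod / np.linalg.norm(d_mod)
        return -np.sum(dn * un)

    rng = np.random.default_rng(2)
    d = rng.normal(size=1000)            # stand-in "observed" trace
    u = d + 0.3 * rng.normal(size=1000)  # stand-in "modeled" trace
    # The two objectives differ by a constant (f = 1 + gc), so they share
    # minimizers and gradients.
    print(l2_normalized(d, u), 1.0 + global_correlation(d, u))
    ```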

  17. Prediction of normalized biodiesel properties by simulation of multiple feedstock blends.

    Science.gov (United States)

    García, Manuel; Gonzalo, Alberto; Sánchez, José Luis; Arauzo, Jesús; Peña, José Angel

    2010-06-01

    A continuous process for biodiesel production has been simulated using Aspen HYSYS V7.0 software. Feedstocks with a mild acid content have been used as fresh feed. The process flowsheet follows a traditional alkaline transesterification scheme constituted by esterification, transesterification and purification stages. Kinetic models taking into account the concentrations of the different species have been employed in order to simulate the behavior of the CSTR reactors and the product distribution within the process. The comparison between experimental data found in the literature and the predicted normalized properties is discussed. Additionally, a comparison between different thermodynamic packages has been performed; the NRTL activity model has been selected as the most reliable of them. The combination of these models allows the prediction of 13 out of 25 parameters included in standard EN-14214:2003, and confers on simulators great value as a predictive as well as an optimization tool. (c) 2010 Elsevier Ltd. All rights reserved.

  18. Pursuing Normality

    DEFF Research Database (Denmark)

    Madsen, Louise Sofia; Handberg, Charlotte

    2018-01-01

    BACKGROUND: The present study explored the reflections on cancer survivorship care of lymphoma survivors in active treatment. Lymphoma survivors have survivorship care needs, yet their participation in cancer survivorship care programs is still reported as low. OBJECTIVE: The aim of this study was to understand the reflections on cancer survivorship care of lymphoma survivors to aid the future planning of cancer survivorship care and overcome barriers to participation. METHODS: Data were generated in a hematological ward during 4 months of ethnographic fieldwork, including participant observation and 46 … Because of "pursuing normality," 8 of 9 participants opted out of cancer survivorship care programming due to prospects of "being cured" and perceptions of cancer survivorship care as "a continuation of the disease" …, implying an influence on whether to participate in cancer survivorship care programs.

  19. LROC WAC 100 Meter Scale Photometrically Normalized Map of the Moon

    Science.gov (United States)

    Boyd, A. K.; Nuno, R. G.; Robinson, M. S.; Denevi, B. W.; Hapke, B. W.

    2013-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) monthly global observations allowed derivation of a robust empirical photometric solution over a broad range of incidence, emission and phase (i, e, g) angles. Combining the WAC stereo-based GLD100 [1] digital terrain model (DTM) and LOLA polar DTMs [2] enabled precise topographic corrections to photometric angles. Over 100,000 WAC observations at 643 nm were calibrated to reflectance (I/F). Photometric angles (i, e, g), latitude, and longitude were calculated and stored for each WAC pixel. The 6-dimensional data set was then reduced to 3 dimensions by photometrically normalizing I/F with a global solution similar to [3]. The global solution was calculated from three 2°x2° tiles centered on (1°N, 147°E), (45°N, 147°E), and (89°N, 147°E), and included over 40 million WAC pixels. A least squares fit to a multivariate polynomial of degree 4, f(i,e,g), was performed, and the result was the starting point for a minimum search solving the non-linear function min[(1 - [I/F / f(i,e,g)])^2]. The input pixels were filtered to incidence angles (calculated from topography) … shadowed pixels, and the output normalized I/F values were gridded into an equal-area map projection at 100 meters/pixel. At each grid location the median, standard deviation, and count of valid pixels were recorded. The normalized reflectance map is the result of the median of all normalized WAC pixels overlapping that specific 100-m grid cell. There are an average of 86 WAC normalized I/F estimates at each cell [3]. The resulting photometrically normalized mosaic provides the means to accurately compare I/F values for different regions on the Moon (see Nuno et al. [4]). The subtle differences in normalized I/F can now be traced across the local topography at regions that are illuminated at any point during the LRO mission (while the WAC was imaging), including at polar latitudes. This continuous map of reflectance at 643 nm …
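
    The fitting step can be sketched compactly: build the degree-4 monomial basis in (i, e, g), solve the linear least-squares problem, and divide each observation by the fitted phase function. The data below are a made-up toy (hypothetical angles and reflectance function), so only the mechanics, not the numbers, mirror the abstract.

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    def design_matrix(i, e, g, degree=4):
        """All monomials of (i, e, g) up to total degree 4, as in a
        multivariate polynomial fit f(i, e, g)."""
        cols, vars_ = [np.ones_like(i)], [i, e, g]
        for d in range(1, degree + 1):
            for combo in combinations_with_replacement(range(3), d):
                m = np.ones_like(i)
                for k in combo:
                    m = m * vars_[k]
                cols.append(m)
        return np.column_stack(cols)

    # Toy photometric angles (radians) and reflectances -- hypothetical data
    rng = np.random.default_rng(3)
    i, e, g = (rng.uniform(0, 1.2, 5000) for _ in range(3))
    iof = (np.cos(i) / (np.cos(i) + np.cos(e)) * np.exp(-g / 2)
           * (1 + 0.02 * rng.normal(size=5000)))

    A = design_matrix(i, e, g)
    coef, *_ = np.linalg.lstsq(A, iof, rcond=None)
    iof_norm = iof / (A @ coef)   # photometrically normalized I/F
    print(iof_norm.mean(), iof_norm.std())
    ```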

  20. Volatility Components, Affine Restrictions and Non-Normal Innovations

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Jacobs, Kris; Dorian, Christian

    Recent work by Engle and Lee (1999) shows that allowing for long-run and short-run components greatly enhances a GARCH model's ability to fit daily equity return dynamics. Using the risk-neutralization in Duan (1995), we assess the option valuation performance of the Engle-Lee model and compare … models to four conditionally non-normal versions. As in Hsieh and Ritchken (2005), we find that non-affine models dominate affine models both in terms of fitting returns and in terms of option valuation. For the affine models we find strong evidence in favor of the component structure for both returns …

  1. Development of realistic concrete models including scaling effects

    International Nuclear Information System (INIS)

    Carpinteri, A.

    1989-09-01

    Progressive cracking in structural elements of concrete is considered. Two simple models are applied which, even though different, lead to similar predictions for the fracture behaviour. Both the Virtual Crack Propagation Model and Cohesive Limit Analysis (Section 2) show a trend towards brittle behaviour and catastrophic events for large structural sizes. A numerical Cohesive Crack Model is proposed (Section 3) to describe strain softening and strain localization in concrete. Such a model is able to predict the size effects of fracture mechanics accurately. Whereas for Mode I only untying of the finite element nodes is applied to simulate crack growth, for Mixed Mode a topological variation is required at each step (Section 4). In the case of the four point shear specimen, the load vs. deflection diagrams reveal snap-back instability for large sizes. By increasing the specimen sizes, such instability tends to reproduce the classical LEFM instability. Remarkable size effects are theoretically predicted and experimentally confirmed also for reinforced concrete (Section 5). The brittleness of the flexural members increases with increasing size and/or decreasing steel content. On the basis of these results, the empirical code rules regarding the minimum amount of reinforcement could be considerably revised.

  2. Dissociative Functions in the Normal Mourning Process.

    Science.gov (United States)

    Kauffman, Jeffrey

    1994-01-01

    Sees dissociative functions in mourning process as occurring in conjunction with integrative trends. Considers initial shock reaction in mourning as model of normal dissociation in mourning process. Dissociation is understood to be related to traumatic significance of death in human consciousness. Discerns four psychological categories of…

  3. Interlayer material transport during layer-normal shortening. Part I. The model

    NARCIS (Netherlands)

    Molen, I. van der

    1985-01-01

    To analyse mass-transfer during deformation, the case is considered of a multilayer experiencing a layer-normal shortening that is volume constant on the scale of many layers. Strain rate is homogeneously distributed on the layer-scale if diffusion is absent; when transport of matter between the …

  4. Log-normality of indoor radon data in the Walloon region of Belgium

    International Nuclear Information System (INIS)

    Cinelli, Giorgia; Tondeur, François

    2015-01-01

    The deviations of the distribution of Belgian indoor radon data from the log-normal trend are examined. Simulated data are generated to provide a theoretical frame for understanding these deviations. It is shown that the 3-component structure of indoor radon (radon from subsoil, outdoor air and building materials) generates deviations in the low- and high-concentration tails, but the low-C deviation can be almost completely compensated by the effect of measurement uncertainties and by possible small errors in background subtraction. The predicted low-C and high-C deviations are well observed in the Belgian data, when considering the global distribution of all data. The agreement with the log-normal model is improved when considering data organised in homogeneous geological groups. As the deviation from log-normality is often due to the low-C tail for which there is no interest, it is proposed to use the log-normal fit limited to the high-C half of the distribution. With this prescription, the vast majority of the geological groups of data are compatible with the log-normal model, the remaining deviations being mostly due to a few outliers, and rarely to a “fat tail”. With very few exceptions, the log-normal modelling of the high-concentration part of indoor radon data is expected to give reasonable results, provided that the data are organised in homogeneous geological groups. - Highlights: • Deviations of the distribution of Belgian indoor Rn data from the log-normal trend. • 3-component structure of indoor Rn: subsoil, outdoor air and building materials. • Simulated data generated to provide a theoretical frame for understanding deviations. • Data organised in homogeneous geological groups; better agreement with the log-normal model.
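
    One simple way to implement a log-normal fit restricted to the high-concentration half is a quantile-based estimate: take mu from the median and sigma from an upper quantile, so the distorted low-C tail never enters the fit. This sketch is an assumption, not necessarily the authors' exact procedure, and the simulated radon numbers are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import norm

    def lognormal_fit_high_half(x, p=0.84):
        """Quantile-based log-normal fit using only the upper half of the
        distribution: mu from the median, sigma from an upper quantile."""
        mu = np.log(np.median(x))
        sigma = (np.log(np.quantile(x, p)) - mu) / norm.ppf(p)
        return mu, sigma

    # Simulated indoor radon: log-normal 'subsoil' component plus a constant
    # 'outdoor air' floor that distorts the low-C tail (hypothetical numbers).
    rng = np.random.default_rng(4)
    radon = np.exp(rng.normal(np.log(60.0), 0.8, size=10000)) + 10.0
    print(lognormal_fit_high_half(radon))
    ```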

  5. Difference in blood microcirculation recovery between normal frostbite and high-altitude frostbite

    Directory of Open Access Journals (Sweden)

    Ming-ke JIAO

    2017-02-01

    Full Text Available Objective To determine the difference in blood microcirculation recovery between normal frostbite and high-altitude frostbite during the wound healing. Methods Twenty four male rats were randomly divided into control group (n=8, normal frostbite group (n=8, and high-altitude group (n=8. The normal frostbite group rats were frozen to produce mid-degree frostbite models by controlling the freezing time with liquid nitrogen penetration equipment. The high-altitude frostbite group rats were acclimated to a hypoxic and low-pressure environment for 1 week, and then the high-altitude frostbite models were constructed by the same way with liquid nitrogen penetration apparatus. On days 3, 7, 11, 15, 19, and 23 after modeling, the recovery situation of blood circulation of each group was observed with contrast ultrasonography by injecting SonoVue micro-bubble into rats' tail. Finally, the micro-bubble concentration (MC was calculated to confirm the blood circulation recovery with software Image Pro. Results At different time points, the wound area of the high-altitude frostbite group was bigger than that of the normal frostbite group, and the MC of control group was always about (27±0.2×109/ml. On day 3, 7, 11, 15, 19, and 23, the MC was significantly lower in the high-altitude frostbite group than in the control group and normal frostbite group (P<0.05. The MC of normal frostbite group was significantly lower than that of the control group on day 3, 7, 11, 15 and 19 (P<0.05. In addition, no obvious difference in MC was found between normal group and control group on the 23th day (P<0.05. Conclusion The blood microcirculation recovery after high-altitude frostbite is significantly slower than the normal frostbite. DOI: 10.11855/j.issn.0577-7402.2017.01.13

  6. Point kinetics modeling

    International Nuclear Information System (INIS)

    Kimpland, R.H.

    1996-01-01

    A normalized form of the point kinetics equations, a prompt jump approximation, and the Nordheim-Fuchs model are used to model nuclear systems. Reactivity feedback mechanisms considered include volumetric expansion, thermal neutron temperature effect, Doppler effect and void formation. A sample problem of an excursion occurring in a plutonium solution accidentally formed in a glovebox is presented.
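
    A minimal sketch of point kinetics with a simple temperature feedback is shown below: one delayed-neutron group, power normalized to n(0) = 1, and adiabatic heating. It is an illustration under stated assumptions, not the report's solution model, and every parameter value is hypothetical.

    ```python
    from scipy.integrate import solve_ivp

    BETA = 0.0065        # delayed neutron fraction
    GEN_TIME = 1e-4      # neutron generation time (s)
    LAM = 0.08           # precursor decay constant (1/s)
    RHO_0 = 0.002        # step reactivity insertion (about 0.3 $)
    ALPHA_T = -1e-4      # temperature feedback coefficient (1/deg)
    K_T = 1.0            # heating rate per unit normalized power (deg/s)

    def rhs(t, y):
        n, c, temp = y
        rho = RHO_0 + ALPHA_T * temp          # net reactivity with feedback
        dn = (rho - BETA) / GEN_TIME * n + LAM * c
        dc = BETA / GEN_TIME * n - LAM * c
        dtemp = K_T * n                       # adiabatic heating
        return [dn, dc, dtemp]

    y0 = [1.0, BETA / (GEN_TIME * LAM), 0.0]  # precursors in equilibrium
    sol = solve_ivp(rhs, (0.0, 200.0), y0, method="LSODA", rtol=1e-8)
    print(sol.y[0, -1], sol.y[2, -1])  # feedback compensates the insertion
    ```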

  7. Diagnostic imaging features of normal anal sacs in dogs and cats.

    Science.gov (United States)

    Jung, Yechan; Jeong, Eunseok; Park, Sangjun; Jeong, Jimo; Choi, Ul Soo; Kim, Min-Su; Kim, Namsoo; Lee, Kichang

    2016-09-30

    This study was conducted to provide normal reference features for canine and feline anal sacs using ultrasound, low-field magnetic resonance imaging (MRI) and radiograph contrast as diagnostic imaging tools. A total of ten clinically normal beagle dogs and eight clinically normal cats were included. General radiography with contrast, ultrasonography and low-field MRI scans were performed. The visualization of anal sacs, which are located at distinct sites in dogs and cats, is possible with a contrast study on radiography. Most surfaces of the anal sac tissue, occasionally appearing as a hyperechoic thin line, were surrounded by the hypoechoic external sphincter muscle on ultrasonography. The normal anal sac contents of dogs and cats had variable echogenicity. Signals of anal sac contents on low-field MRI varied in cats and dogs, and contrast medium using T1-weighted images enhanced the anal sac walls more obviously than that on ultrasonography. In conclusion, this study provides the normal features of anal sacs from dogs and cats on diagnostic imaging. Further studies including anal sac evaluation are expected to investigate disease conditions.

  8. Metabolomic Evidence for a Field Effect in Histologically Normal and Metaplastic Tissues in Patients with Esophageal Adenocarcinoma

    Directory of Open Access Journals (Sweden)

    Michelle A.C. Reed

    2017-03-01

    Patients with Barrett's esophagus (BO) are at increased risk of developing esophageal adenocarcinoma (EAC). Most Barrett's patients, however, do not develop EAC, and there is a need for markers that can identify those most at risk. This study aimed to see if a metabolic signature associated with the development of EAC existed. For this, tissue extracts from patients with EAC, BO, and normal esophagus were analyzed using 1H nuclear magnetic resonance. Where possible, adjacent histologically normal tissues were sampled in those with EAC and BO. The study included 46 patients with EAC, 7 patients with BO, and 68 controls who underwent endoscopy for dyspeptic symptoms with normal appearances. Within the cancer cohort, 9 patients had nonneoplastic Barrett's adjacent to the cancer suitable for biopsy. It was possible to distinguish between histologically normal, BO, and EAC tissue in EAC patients [area under the receiver operator curve (AUROC) 1.00, 0.86, and 0.91] and between histologically benign BO in the presence and absence of EAC (AUROC 0.79). In both these cases, sample numbers limited the power of the models. Comparison of histologically normal tissue proximal to EAC versus that from controls (AUROC 1.00) suggests a strong field effect which may develop prior to overt EAC and hence be useful for identifying patients at high risk of developing EAC. Excellent sensitivity and specificity were found for this model to distinguish histologically normal squamous esophageal mucosa in EAC patients and healthy controls, with 8 metabolites being very significantly altered. This may have potential diagnostic value if a molecular signature can detect tissue from which neoplasms subsequently arise.

  9. NormaCurve: a SuperCurve-based method that simultaneously quantifies and normalizes reverse phase protein array data.

    Directory of Open Access Journals (Sweden)

    Sylvie Troncale

    MOTIVATION: Reverse phase protein array (RPPA) is a powerful dot-blot technology that allows studying protein expression levels as well as post-translational modifications in a large number of samples simultaneously. Yet, correct interpretation of RPPA data has remained a major challenge for its broad-scale application and its translation into clinical research. Satisfying quantification tools are available to assess a relative protein expression level from a serial dilution curve. However, appropriate tools allowing the normalization of the data for external sources of variation are currently missing. RESULTS: Here we propose a new method, called NormaCurve, that allows simultaneous quantification and normalization of RPPA data. For this, we modified the quantification method SuperCurve in order to include normalization for (i) background fluorescence, (ii) variation in the total amount of spotted protein and (iii) spatial bias on the arrays. Using a spike-in design with a purified protein, we test the capacity of different models to properly estimate normalized relative expression levels. The best performing model, NormaCurve, takes into account a negative control array without primary antibody, an array stained with a total protein stain and spatial covariates. We show that this normalization is reproducible and we discuss the number of serial dilutions and the number of replicates that are required to obtain robust data. We thus provide a ready-to-use method for reliable and reproducible normalization of RPPA data, which should facilitate the interpretation and the development of this promising technology. AVAILABILITY: The raw data, the scripts and the normacurve package are available at the following web site: http://microarrays.curie.fr.

  10. A log-sinh transformation for data normalization and variance stabilization

    Science.gov (United States)

    Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.

    2012-05-01

    When quantifying model prediction uncertainty, it is statistically convenient to represent model errors as normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
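
    With two parameters a and b (the symbols here are assumed, not taken from the paper's notation), the transformation of a variable y can be written as

        z = \frac{1}{b}\,\ln\!\bigl(\sinh(a + b\,y)\bigr)

    Its variance-stabilizing behavior follows from the two limits of sinh: for small a + b y, \sinh(x) \approx x, so z behaves like a logarithm and compresses small values strongly; for large a + b y, \sinh(x) \approx e^{x}/2, so z \approx y + (a - \ln 2)/b, i.e., nearly linear, and the implied error spread approaches a constant, matching the error pattern the abstract describes.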

  11. Comparison of the predictions of the LQ and CRE models for normal tissue damage due to biologically targeted radiotherapy with exponentially decaying dose rates

    International Nuclear Information System (INIS)

    O'Donoghue, J.A.; West of Scotland Health Boards, Glasgow

    1989-01-01

    For biologically targeted radiotherapy, organ dose rates may be complex functions of time, related to the biodistribution kinetics of the delivery vehicle and radiolabel. The simplest situation is where dose rates are exponentially decaying functions of time. Two normal tissue isoeffect models enable the effects of exponentially decaying dose rates to be addressed: the extension of the linear-quadratic model and the cumulative radiation effect model. This communication compares the predictions of these models. (author). 14 refs.; 1 fig
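
    For the simplest case mentioned above, a dose rate R_0 e^{-\lambda t} with first-order repair of sublethal damage at rate \mu, the standard extension of the linear-quadratic model gives, after complete decay of the source (notation assumed here; this is the commonly quoted result, not necessarily the paper's exact formulation),

        \mathrm{BED} = D\left[1 + \frac{R_0}{(\mu + \lambda)\,(\alpha/\beta)}\right],
        \qquad D = \frac{R_0}{\lambda}

    The term R_0/(\mu + \lambda) plays the role that dose per fraction plays in fractionated radiotherapy: faster repair (large \mu) or faster source decay (large \lambda) reduces the quadratic contribution per unit dose.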

  12. Differentiating cancerous from normal breast tissue by redox imaging

    Science.gov (United States)

    Xu, He N.; Tchou, Julia; Feng, Min; Zhao, Huaqing; Li, Lin Z.

    2015-02-01

    Abnormal metabolism can be a hallmark of cancer, occurring early, before detectable histological changes, and may serve as an early detection biomarker. The current gold standard to establish breast cancer (BC) diagnosis is histological examination of biopsy. Previously we have found that pre-cancer and cancer tissues in animal models displayed an abnormal mitochondrial redox state. Our technique of quantitatively measuring the mitochondrial redox state has the potential to be implemented as an early detection tool for cancer and may provide prognostic value. In the present study we therefore investigated the feasibility of quantifying the redox state of tumor samples from 16 BC patients. Tissue aliquots were collected from both normal and cancerous tissue from the affected cancer-bearing breasts of 16 female patients (5 TNBC, 9 ER+, 2 ER+/Her2+) shortly after surgical resection. All specimens were snap-frozen with liquid nitrogen on site and scanned later with the Chance redox scanner, i.e., the 3D cryogenic NADH/oxidized flavoprotein (Fp) fluorescence imager. Our preliminary results showed that both NADH and Fp (including FAD, i.e., flavin adenine dinucleotide) signals in the cancerous tissues roughly tripled to quadrupled those in the normal tissues, and the redox ratio was significantly different in the cancerous tissues than in the normal ones. These results demonstrate the feasibility of differentiating cancer and non-cancer breast tissues in human patients, and this novel redox scanning procedure may assist in tissue diagnosis in freshly procured biopsy samples prior to tissue fixation. We are in the process of evaluating the prognostic value of the redox imaging indices for BC.

  13. Analysis of assistance procedures to normal birth in primiparous

    Directory of Open Access Journals (Sweden)

    Joe Luiz Vieira Garcia Novo

    2016-04-01

    Full Text Available Introduction: Current medical technologies in childbirth care have increased maternal and fetal benefits, yet numerous unnecessary procedures persist. The purpose of normal childbirth care is to have healthy women and newborns, using a minimum of safe interventions. Objective: To analyze the assistance to normal delivery in a secondary care maternity. Methodology: A total of 100 primiparous mothers who had vaginal deliveries were included. The care practices used were categorized: 1) according to the WHO classification for assistance to normal childbirth: effective, harmful, used with caution, and used inappropriately; 2) by calculating the Bologna Index from its parameters: presence of a birth partner, partograph, no stimulation of labor, delivery in a non-supine position, and mother-newborn skin-to-skin contact. Results: Birth partners (85%), correctly filled partographs (62%), mother-newborn skin-to-skin contact (36%), use of oxytocin (87%), use of parenteral nutrition during labor (86%) and at delivery (74%), episiotomy (94%), and uterine fundal pressure in the expulsion stage (58%). The overall average value of the Bologna Index of the mothers analyzed was 1.95. Conclusions: Some effective procedures recommended by WHO were used (presence of a birth partner), some effective and mandatory practices were not complied with (completely filled partograph), potentially harmful or ineffective procedures were used (oxytocin in labor/post-partum), and inadequate procedures were performed (uterine fundal pressure during the expulsion stage, use of forceps, and episiotomy). The maternity's care model did not offer excellent natural birth procedures to its primiparous mothers (BI = 1.95).
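
    The Bologna Index referred to above scores each delivery on the five listed parameters, one point per parameter met, so the total runs from 0 (poorest care) to 5 (best). A minimal sketch (the function name and boolean encoding are assumptions, not from the paper):

        def bologna_index(birth_partner, partograph_used, no_augmentation,
                          non_supine_delivery, skin_to_skin):
            """One point per Bologna Index parameter met; 0 (worst) to 5 (best)."""
            params = (birth_partner, partograph_used, no_augmentation,
                      non_supine_delivery, skin_to_skin)
            return sum(1 for p in params if p)

        # Averaging this score over all 100 deliveries gives the overall
        # figure reported in the abstract (1.95 out of 5 here).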

  14. Diverse complexities, complex diversities: Resisting ‘normal science’ in pedagogical and research methodologies. A perspective from Aotearoa (New Zealand)

    Directory of Open Access Journals (Sweden)

    Ritchie Jenny

    2016-06-01

    Full Text Available This paper offers an overview of complexities of the contexts for education in Aotearoa, which include the need to recognise and include Māori (Indigenous) perspectives, but also to extend this inclusion to the context of increasing ethnic diversity. These complexities include the situation of worsening disparities between rich and poor which disproportionately position Māori and those from Pacific Island backgrounds in situations of poverty. It then offers a brief critique of government policies before providing some examples of models that resist ‘normal science’ categorisations. These include: the Māori values underpinning the effective teachers’ profile of the Kotahitanga project and of the Māori assessment model for early childhood education; the dispositions identified in a Samoan model for assessing young children’s learning; and the approach developed for assessing Māori children’s literacy and numeracy within schools where Māori language is the medium of instruction. These models all position learning within culturally relevant frames that are grounded in non-Western onto-epistemologies which include spiritual, cultural, and collective aspirations.

  15. Exploring the experiences of older Chinese adults with comorbidities including diabetes: surmounting these challenges in order to live a normal life

    Directory of Open Access Journals (Sweden)

    Ho HY

    2018-01-01

    Full Text Available Hsiu-Yu Ho,1,2 Mei-Hui Chen,2,3 Meei-Fang Lou1 1School of Nursing, College of Medicine, National Taiwan University, 2Department of Nursing, Yuanpei University of Medical Technology, 3National Taipei University of Nursing and Health Sciences, Taipei, Taiwan, Republic of China Background: Many people with diabetes have comorbidities, even multimorbidities, which have a far-reaching impact on the older adults, their family, and society. However, little is known of the experience of older adults living with comorbidities that include diabetes. Aim: The aim of this study was to explore the experience of older adults living with comorbidities including diabetes. Methods: A qualitative approach was employed. Data were collected from a selected field of 12 patients with diabetes mellitus in a medical center in northern Taiwan. The data were analyzed by Colaizzi’s phenomenological methodology, and four criteria of Lincoln and Guba were used to evaluate the rigor of the study. Results: The following 5 themes and 14 subthemes were derived: 1) expecting to heal or reduce the symptoms of the disease (trying to alleviate the distress of symptoms, and trusting in health practitioners while combining the use of Chinese and Western medicines); 2) comparing complex medical treatments (differences in physician practices and presentation, conditionally adhering to medical treatment, and partnering with medical professionals); 3) inconsistent information (inconsistent health information and inconsistent medical advice); 4) impacting on daily life (activities are limited and hobbies cannot be maintained, and psychological distress); and 5) weighing the pros and cons (taking the initiative to deal with issues, limiting activity, adjusting mental outlook and pace of life, developing strategies for individual health regimens, and seeking support). Surmounting these challenges in order to live a normal life was explored. Conclusion: This study found that the experience of older adults living with comorbidities including diabetes centered on surmounting these challenges in order to live a normal life.

  16. Effect of normal processes on thermal conductivity of germanium ...

    Indian Academy of Sciences (India)

    Normal scattering processes are considered to redistribute the phonon momentum (a) within the same phonon branch (KK-S model) and (b) between different phonon branches (KK-H model). Simplified thermal conductivity relations are used to estimate the thermal conductivity of germanium, silicon and ...

  17. Univariate normalization of bispectrum using Hölder's inequality.

    Science.gov (United States)

    Shahbazi, Forooz; Ewald, Arne; Nolte, Guido

    2014-08-15

    Considering that many biological systems including the brain are complex non-linear systems, suitable methods capable of detecting these non-linearities are required to study the dynamical properties of these systems. One of these tools is the third order cumulant or cross-bispectrum, which is a measure of interfrequency interactions between three signals. For convenient interpretation, interaction measures are most commonly normalized to be independent of constant scales of the signals such that their absolute values are bounded by one, with this limit reflecting perfect coupling. Although many different normalization factors for cross-bispectra have been suggested in the literature, these either do not lead to bounded measures or are themselves dependent on the coupling and not only on the scale of the signals. In this paper we suggest a normalization factor which is univariate, i.e., dependent only on the amplitude of each signal and not on the interactions between signals. Using a generalization of Hölder's inequality it is proven that the absolute value of this univariate bicoherence is bounded between zero and one. We compared three widely used normalizations to the univariate normalization concerning the significance of bicoherence values gained from resampling tests. Bicoherence values are calculated from real EEG data recorded in an eyes-closed experiment from 10 subjects. The results show slightly more significant values for the univariate normalization but in general, the differences are very small or even vanishing in some subjects. Therefore, we conclude that the normalization factor does not play an important role in the bicoherence values with regard to statistical power, although a univariate normalization is the only normalization factor which fulfills all the required conditions of a proper normalization.
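
    The generalized Hölder inequality with exponents (3, 3, 3) gives |E[xyz]| ≤ (E|x|³ E|y|³ E|z|³)^{1/3}, so dividing the cross-bispectrum by that product keeps the result between zero and one. A minimal numpy sketch (function name assumed; inputs are per-segment Fourier coefficients at f1, f2, and f1 + f2):

        import numpy as np

        def univariate_bicoherence(x, y, z):
            """Cross-bispectrum normalized by the univariate Hölder factor.
            x, y, z: 1-D complex arrays of Fourier coefficients at f1, f2
            and f1 + f2, one entry per data segment."""
            bispec = np.mean(x * y * np.conj(z))
            norm = (np.mean(np.abs(x) ** 3) *
                    np.mean(np.abs(y) ** 3) *
                    np.mean(np.abs(z) ** 3)) ** (1.0 / 3.0)
            return np.abs(bispec) / norm

    Because the denominator depends only on the amplitude distribution of each signal separately, it cannot itself be inflated or deflated by the coupling, which is the property the abstract argues the other normalizations lack.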

  18. Identical bands at normal deformation: Necessity of going beyond the mean-field approach

    International Nuclear Information System (INIS)

    Sun, Y.; Wu, C.; Feng, D.H.; Egido, J.L.; Guidry, M.

    1996-01-01

    The validity of BCS theory has been questioned because the appearance of normally deformed identical bands in odd and even nuclei seems to contradict the conventional understanding of the blocking effect. This problem is examined with the projected shell model (PSM), which projects good angular momentum states and includes many-body correlations in both deformation and pairing channels. Satisfactory reproduction of identical band data by the PSM suggests that it may be necessary to go beyond the mean field to obtain a quantitative account of identical bands.

  19. Analysis of the Factors Affecting the Interval between Blood Donations Using Log-Normal Hazard Model with Gamma Correlated Frailties.

    Science.gov (United States)

    Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    The time interval between blood donations plays a major role in a regular donor becoming a continuous one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 first-time donors at the Shahrekord Blood Transfusion Center, in the capital of Chaharmahal and Bakhtiari Province, Iran, were selected by systematic sampling and were followed up for five years. Among these, a subset of 424 donors who had at least two successful blood donations was chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, residence and job were recorded as independent variables. Data analysis was performed with a log-normal hazard model with gamma correlated frailty, in which the frailties are the sum of two independent components assumed to follow a gamma distribution. The analysis was done via a Bayesian approach using a Markov chain Monte Carlo algorithm in OpenBUGS. Convergence was checked via the Gelman-Rubin criteria using the BOA package in R. Age, job and education had a significant effect on the chance of donating blood. The chance of donation was higher for older donors, clerical employees, workers, the self-employed, students and more educated donors, and correspondingly the time intervals between their blood donations were shorter. Given the significant effect of these variables in the log-normal correlated frailty model, it is necessary to plan educational and cultural programs to encourage people with longer inter-donation intervals to donate more frequently.
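
    Schematically, the model described above can be written as a log-normal hazard multiplied by a donor-level frailty that is the sum of two independent gamma components (notation assumed, not the paper's):

        h_{ij}(t \mid u_{ij}) = u_{ij}\, h_0(t \mid \mathbf{x}_{ij}^{\top}\boldsymbol{\beta}),
        \qquad u_{ij} = v_i + w_{ij}, \qquad v_i,\; w_{ij} \sim \mathrm{Gamma}

    Here i indexes donors, j their successive donation intervals, and h_0 is the hazard of a log-normal distribution; the shared component v_i induces the correlation between repeated intervals of the same donor, while w_{ij} contributes interval-specific variation.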

  20. Measurements of normal joint angles by goniometry in calves.

    Science.gov (United States)

    Sengöz Şirin, O; Timuçin Celik, M; Ozmen, A; Avki, S

    2014-01-01

    The aim of this study was to establish normal reference values of the forelimb and hindlimb joint angles in normal Holstein calves. Thirty clinically normal Holstein calves that were free of any detectable musculoskeletal abnormalities were included in the study. A standard transparent plastic goniometer was used to measure maximum flexion, maximum extension, and range of motion of the shoulder, elbow, carpal, hip, stifle, and tarsal joints. The goniometric measurements were done on awake calves positioned in lateral recumbency, and the values were measured and recorded by two independent investigators. The study concluded that goniometric values obtained from awake calves in lateral recumbency were highly consistent and accurate between investigators (p < 0.05). These data provide objective and useful information on normal forelimb and hindlimb joint angles in Holstein calves. Further studies could establish goniometric values in calves with various diseases and compare them with these normal values.