WorldWideScience

Sample records for normal density assumption

  1. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage; i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10), violations of this normality assumption often do not noticeably impact results. By contrast, assumptions about the parametric model, the absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and, worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
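
    The point about coverage can be illustrated with a short simulation. The sketch below is not the authors' code; it is a minimal example with made-up parameters, showing that the 95% confidence interval for an OLS slope still covers the true value close to 95% of the time when the errors are strongly skewed but the sample is large.

```python
# Minimal coverage simulation (illustrative only): skewed, non-normal errors,
# large n, and we count how often the 95% CI for the slope contains the truth.
import numpy as np

rng = np.random.default_rng(0)
n, true_slope, n_sims = 1000, 0.5, 2000
covered = 0
for _ in range(n_sims):
    x = rng.normal(size=n)
    errors = rng.exponential(scale=1.0, size=n) - 1.0   # skewed, mean-zero errors
    y = 2.0 + true_slope * x + errors
    # closed-form OLS estimates
    sxx = np.sum((x - x.mean()) ** 2)
    b = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    a = y.mean() - b * x.mean()
    resid = y - a - b * x
    se_b = np.sqrt(resid @ resid / (n - 2) / sxx)
    lo, hi = b - 1.96 * se_b, b + 1.96 * se_b
    covered += (lo <= true_slope <= hi)

print(f"Empirical coverage of the 95% CI: {covered / n_sims:.3f}")  # close to 0.95
```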

  2. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  3. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  4. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    Science.gov (United States)

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
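
    As one concrete illustration of an assumption-laden normalization scheme, the sketch below implements the familiar median-of-ratios idea (as popularized by DESeq-style methods) on toy counts. It is not code from this article, and the counts are invented; the scheme rests on the assumption that most genes are not differentially expressed between samples.

```python
# Hedged sketch of median-of-ratios between-sample normalization:
# size factors are medians of per-gene ratios to a pseudo-reference sample.
import numpy as np

counts = np.array([[10, 20, 15],        # genes x samples (toy data, no zeros)
                   [100, 210, 140],
                   [5, 9, 8],
                   [1000, 2200, 1500]], dtype=float)

log_geo_mean = np.log(counts).mean(axis=1)          # per-gene pseudo-reference (log scale)
ratios = np.log(counts) - log_geo_mean[:, None]     # log ratio of each sample to the reference
size_factors = np.exp(np.median(ratios, axis=0))    # one scale factor per sample
normalized = counts / size_factors                  # counts made comparable across samples

print(size_factors)
print(normalized)
```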

  5. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-01-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD), which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if log-normality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the log-normality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more sensitive to the distribution assumption than those estimated using the relative deviation approach.

  6. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD), which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if log-normality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the log-normality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more sensitive to the distribution assumption than those estimated using the relative deviation approach.

  7. On the asymptotic improvement of supervised learning by utilizing additional unlabeled samples - Normal mixture density case

    Science.gov (United States)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, the combined supervised-unsupervised learning is always superior to the supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.

  8. The triangular density to approximate the normal density: decision rules-of-thumb

    International Nuclear Information System (INIS)

    Scherer, William T.; Pomroy, Thomas A.; Fuller, Douglas N.

    2003-01-01

    In this paper we explore the approximation of the normal density function with the triangular density function, a density function that has extensive use in risk analysis. Such an approximation generates a simple piecewise-linear density function and a piecewise-quadratic distribution function that can be easily manipulated mathematically and that produces surprisingly accurate performance under many instances. This mathematical tractability proves useful when it enables closed-form solutions not otherwise possible, as with problems involving the embedded use of the normal density. For benchmarking purposes we compare the basic triangular approximation with two flared triangular distributions and with two simple uniform approximations; however, throughout the paper our focus is on using the triangular density to approximate the normal for reasons of parsimony. We also investigate the logical extensions of using a non-symmetric triangular density to approximate a lognormal density. Several issues associated with using a triangular density as a substitute for the normal and lognormal densities are discussed, and we explore the resulting numerical approximation errors for the normal case. Finally, we present several examples that highlight simple decision rules-of-thumb that the use of the approximation generates. Such rules-of-thumb, which are useful in risk and reliability analysis and general business analysis, can be difficult or impossible to extract without the use of approximations. These examples include uses of the approximation in generating random deviates, uses in mixture models for risk analysis, and an illustrative decision analysis problem. It is our belief that this exploratory look at the triangular approximation to the normal will provoke other practitioners to explore its possible use in various domains and applications
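
    A quick numerical check of the approximation is easy to set up. The following sketch (mine, not the paper's) matches a symmetric triangular density to the standard normal in mean and variance, using the fact that a symmetric triangle of half-width w has variance w²/6, and reports the pointwise error.

```python
# Approximate the standard normal pdf with a mean- and variance-matched
# symmetric triangular pdf on [-w, w], where w = sigma * sqrt(6).
import numpy as np

sigma = 1.0
w = sigma * np.sqrt(6.0)              # triangular(-w, 0, +w) has variance w^2 / 6

x = np.linspace(-4, 4, 2001)
normal_pdf = np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
tri_pdf = np.clip(1.0 / w - np.abs(x) / w ** 2, 0.0, None)   # zero outside [-w, w]

print("max |error|      :", np.max(np.abs(normal_pdf - tri_pdf)))
print("error at the mode:", tri_pdf[1000] - normal_pdf[1000])
```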

  9. Testing the assumption of normality in body sway area calculations during unipedal stance tests with an inertial sensor.

    Science.gov (United States)

    Kyoung Jae Kim; Lucarevic, Jennifer; Bennett, Christopher; Gaunaurd, Ignacio; Gailey, Robert; Agrawal, Vibhor

    2016-08-01

    The quantification of postural sway during the unipedal stance test is one of the essentials of posturography. A shift of center of pressure (CoP) is an indirect measure of postural sway and also a measure of a person's ability to maintain balance. A widely used method in laboratory settings to calculate the sway of body center of mass (CoM) is through an ellipse that encloses 95% of CoP trajectory. The 95% ellipse can be computed under the assumption that the spatial distribution of the CoP points recorded from force platforms is normal. However, to date, this assumption of normality has not been demonstrated for sway measurements recorded from a sacral inertial measurement unit (IMU). This work provides evidence for non-normality of sway trajectories calculated at a sacral IMU with injured subjects as well as healthy subjects.
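
    For context, the sketch below shows the standard 95% sway-ellipse computation whose validity hinges on the bivariate-normality assumption examined in this record. The CoP trajectory is simulated rather than measured, and the scaling constant 5.991 is the 95% quantile of a chi-square distribution with 2 degrees of freedom.

```python
# 95% confidence-ellipse area from 2D CoP samples, assuming bivariate normality:
# axes come from the covariance eigenvalues scaled by the chi-square(2) quantile.
import numpy as np

rng = np.random.default_rng(1)
cop = rng.multivariate_normal([0, 0], [[4.0, 1.0], [1.0, 2.0]], size=3000)  # simulated CoP, mm

cov = np.cov(cop, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)
chi2_95_2df = 5.991                       # 95% quantile of chi-square with 2 df
area_95 = np.pi * chi2_95_2df * np.sqrt(eigvals[0] * eigvals[1])

print(f"95% ellipse area: {area_95:.1f} mm^2")
# If the CoP distribution is not bivariate normal (as this record reports for
# sacral IMU data), this area can misrepresent the actual sway region.
```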

  10. Density- and wavefunction-normalized Cartesian spherical harmonics for l ≤ 20.

    Science.gov (United States)

    Michael, J Robert; Volkov, Anatoliy

    2015-03-01

    The widely used pseudoatom formalism [Stewart (1976). Acta Cryst. A32, 565-574; Hansen & Coppens (1978). Acta Cryst. A34, 909-921] in experimental X-ray charge-density studies makes use of real spherical harmonics when describing the angular component of aspherical deformations of the atomic electron density in molecules and crystals. The analytical form of the density-normalized Cartesian spherical harmonic functions for up to l ≤ 7 and the corresponding normalization coefficients were reported previously by Paturle & Coppens [Acta Cryst. (1988), A44, 6-7]. It was shown that the analytical form for normalization coefficients is available primarily for l ≤ 4 [Hansen & Coppens, 1978; Paturle & Coppens, 1988; Coppens (1992). International Tables for Crystallography, Vol. B, Reciprocal space, 1st ed., edited by U. Shmueli, ch. 1.2. Dordrecht: Kluwer Academic Publishers; Coppens (1997). X-ray Charge Densities and Chemical Bonding. New York: Oxford University Press]. Only in very special cases is it possible to derive an analytical representation of the normalization coefficients for l > 4; for 4 < l ≤ 7 the density normalization coefficients were calculated numerically to within seven significant figures. In this study we review the literature on the density-normalized spherical harmonics, clarify the existing notations, use the Paturle-Coppens (Paturle & Coppens, 1988) method in the Wolfram Mathematica software to derive the Cartesian spherical harmonics for l ≤ 20 and determine the density normalization coefficients to 35 significant figures, and computer-generate a Fortran90 code. The article primarily targets researchers who work in the field of experimental X-ray electron density, but may be of some use to all who are interested in Cartesian spherical harmonics.

  11. Sensitivity of probabilistic MCO water content estimates to key assumptions

    International Nuclear Information System (INIS)

    DUNCAN, D.R.

    1999-01-01

    Sensitivity of probabilistic multi-canister overpack (MCO) water content estimates to key assumptions is evaluated, with emphasis on the largest non-cladding film contributors: water borne by particulates adhering to damage sites, and water borne by canister particulate. Calculations considered different choices of damage-state degree of independence; different choices of percentile for reference high inputs; three types of input probability density functions (triangular, log-normal, and Weibull); and the number of scrap baskets in an MCO.
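
    A toy Monte Carlo, with invented parameters rather than the actual MCO inputs, illustrates why the choice among triangular, log-normal, and Weibull input pdfs matters mainly in the upper percentiles of the resulting estimate.

```python
# Toy sensitivity check (all parameters are illustrative assumptions, not MCO data):
# sample a water-content-like quantity under three input pdf choices and compare
# the mean with an upper percentile, where the choice of tail shape matters most.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
inputs = {
    "triangular": rng.triangular(0.0, 0.2, 1.0, size=n),
    "log-normal": rng.lognormal(mean=np.log(0.25), sigma=0.6, size=n),
    "Weibull":    0.3 * rng.weibull(1.5, size=n),
}
for name, sample in inputs.items():
    print(f"{name:10s}  mean = {sample.mean():.3f}   95th percentile = {np.percentile(sample, 95):.3f}")
```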

  12. The Effect of Multicollinearity and the Violation of the Assumption of Normality on the Testing of Hypotheses in Regression Analysis.

    Science.gov (United States)

    Vasu, Ellen S.; Elmore, Patricia B.

    The effects of the violation of the assumption of normality, coupled with the condition of multicollinearity, upon the outcome of testing the hypothesis Beta equals zero in the two-predictor regression equation are investigated. A Monte Carlo approach was utilized in which three different distributions were sampled for two sample sizes over…

  13. Cholesterol transfer from normal and atherogenic low density lipoproteins to Mycoplasma membranes

    International Nuclear Information System (INIS)

    Mitschelen, J.J.; St Clair, R.W.; Hester, S.H.

    1981-01-01

    The purpose of this study was to determine whether the free cholesterol of hypercholesterolemic low density lipoprotein from cholesterol-fed nonhuman primates has a greater potential for surface transfer to cell membranes than does the free cholesterol of normal low density lipoprotein. The low density lipoproteins were isolated from normal and hypercholesterolemic rhesus and cynomolgus monkeys, incubated with membranes from Acholeplasma laidlawii, a mycoplasma species devoid of cholesterol in its membranes, and the mass transfer of free cholesterol determined by measuring membrane cholesterol content. Since these membranes neither synthesize nor esterify cholesterol, nor degrade the protein or cholesterol ester moieties of low density lipoprotein, they are an ideal model with which to study differences in the cholesterol transfer potential of low density lipoprotein independent of the uptake of the intact low density lipoprotein particle. These studies indicate that, even though there are marked differences in the cholesterol composition of normal and hypercholesterolemic low density lipoproteins, this does not result in a greater chemical potential for surface transfer of free cholesterol. Consequently, if a difference in the surface transfer of free cholesterol is responsible for the enhanced ability of hypercholesterolemic low density lipoprotein to promote cellular cholesterol accumulation and, perhaps, also atherosclerosis, it must be the result of differences in the interaction of hypercholesterolemic low density lipoprotein with the more complicated mammalian cell membranes, rather than differences in the chemical potential for cholesterol transfer.

  14. Normal bone density in male pseudohermaphroditism due to 5α-reductase 2 deficiency

    Directory of Open Access Journals (Sweden)

    Costa Elaine Maria Frade

    2001-01-01

    Bone is an androgen-dependent tissue, but it is not clear whether the androgen action in bone depends on testosterone or on dihydrotestosterone. Patients with 5alpha-reductase 2 deficiency present normal levels of testosterone and low levels of dihydrotestosterone, providing an in vivo human model for the analysis of the effect of testosterone on bone. OBJECTIVE: To analyze bone mineral density in 4 adult patients with male pseudohermaphroditism due to 5alpha-reductase 2 deficiency. RESULTS: Three patients presented normal bone mineral density of the lumbar column (L1-L4) and femur neck, and the other patient presented a slight osteopenia in the lumbar column. CONCLUSION: Patients with dihydrotestosterone deficiency present normal bone mineral density, suggesting that dihydrotestosterone is not the main androgen acting in bone.

  15. The incompressibility assumption in computational simulations of nasal airflow.

    Science.gov (United States)

    Cal, Ismael R; Cercos-Pita, Jose Luis; Duque, Daniel

    2017-06-01

    Most of the computational works on nasal airflow up to date have assumed incompressibility, given the low Mach number of these flows. However, for high temperature gradients, the incompressibility assumption could lead to a loss of accuracy, due to the temperature dependence of air density and viscosity. In this article we aim to shed some light on the influence of this assumption in a model of calm breathing in an Asian nasal cavity, by solving the fluid flow equations in compressible and incompressible formulation for different ambient air temperatures using the OpenFOAM package. At low flow rates and warm climatological conditions, similar results were obtained from both approaches, showing that density variations need not be taken into account to obtain a good prediction of all flow features, at least for usual breathing conditions. This agrees with most of the simulations previously reported, at least as far as the incompressibility assumption is concerned. However, parameters like nasal resistance and wall shear stress distribution differ for air temperatures below [Formula: see text]C approximately. Therefore, density variations should be considered for simulations at such low temperatures.

  16. Body fat assessed from body density and estimated from skinfold thickness in normal children and children with cystic fibrosis.

    Science.gov (United States)

    Johnston, J L; Leong, M S; Checkland, E G; Zuberbuhler, P C; Conger, P R; Quinney, H A

    1988-12-01

    Body density and skinfold thickness at four sites were measured in 140 normal boys, 168 normal girls, and 6 boys and 7 girls with cystic fibrosis, all aged 8-14 y. Prediction equations for the normal boys and girls for the estimation of body-fat content from skinfold measurements were derived from linear regression of body density vs the log of the sum of the skinfold thickness. The relationship between body density and the log of the sum of the skinfold measurements differed from normal for the boys and girls with cystic fibrosis because of their high body density even though their large residual volume was corrected for. However the sum of skinfold measurements in the children with cystic fibrosis did not differ from normal. Thus body fat percent of these children with cystic fibrosis was underestimated when calculated from body density and invalid when calculated from skinfold thickness.

  17. Mineral density volume gradients in normal and diseased human tissues.

    Directory of Open Access Journals (Sweden)

    Sabra I Djomehri

    Clinical computed tomography provides a single mineral density (MD) value for heterogeneous calcified tissues containing early and late stage pathologic formations. The novel aspect of this study is that it extends current quantitative methods of mapping mineral density gradients to three dimensions, discretizes early and late mineralized stages, identifies elemental distribution in discretized volumes, and correlates measured MD with respective calcium (Ca) to phosphorus (P) and Ca to zinc (Zn) elemental ratios. To accomplish this, MD variations identified using polychromatic radiation from a high resolution micro-computed tomography (micro-CT) benchtop unit were correlated with elemental mapping obtained from a microprobe X-ray fluorescence (XRF) using synchrotron monochromatic radiation. Digital segmentation of tomograms from normal and diseased tissues (N=5 per group; 40-60 year old males) contained significant mineral density variations (enamel: 2820-3095 mg/cc, bone: 570-1415 mg/cc, cementum: 1240-1340 mg/cc, dentin: 1480-1590 mg/cc, cementum affected by periodontitis: 1100-1220 mg/cc, hypomineralized carious dentin: 345-1450 mg/cc, hypermineralized carious dentin: 1815-2740 mg/cc, and dental calculus: 1290-1770 mg/cc). A plausible linear correlation between segmented MD volumes and elemental ratios within these volumes was established, and Ca/P ratios for dentin (1.49), hypomineralized dentin (0.32-0.46), cementum (1.51), and bone (1.68) were observed. Furthermore, varying Ca/Zn ratios were distinguished in adapted compared to normal tissues, such as in bone (855-2765) and in cementum (595-990), highlighting Zn as an influential element in prompting observed adaptive properties. Hence, results provide insights on mineral density gradients with elemental concentrations and elemental footprints that in turn could aid in elucidating mechanistic processes for pathologic formations.

  18. Mineral Density Volume Gradients in Normal and Diseased Human Tissues

    Science.gov (United States)

    Djomehri, Sabra I.; Candell, Susan; Case, Thomas; Browning, Alyssa; Marshall, Grayson W.; Yun, Wenbing; Lau, S. H.; Webb, Samuel; Ho, Sunita P.

    2015-01-01

    Clinical computed tomography provides a single mineral density (MD) value for heterogeneous calcified tissues containing early and late stage pathologic formations. The novel aspect of this study is that, it extends current quantitative methods of mapping mineral density gradients to three dimensions, discretizes early and late mineralized stages, identifies elemental distribution in discretized volumes, and correlates measured MD with respective calcium (Ca) to phosphorus (P) and Ca to zinc (Zn) elemental ratios. To accomplish this, MD variations identified using polychromatic radiation from a high resolution micro-computed tomography (micro-CT) benchtop unit were correlated with elemental mapping obtained from a microprobe X-ray fluorescence (XRF) using synchrotron monochromatic radiation. Digital segmentation of tomograms from normal and diseased tissues (N=5 per group; 40-60 year old males) contained significant mineral density variations (enamel: 2820-3095mg/cc, bone: 570-1415mg/cc, cementum: 1240-1340mg/cc, dentin: 1480-1590mg/cc, cementum affected by periodontitis: 1100-1220mg/cc, hypomineralized carious dentin: 345-1450mg/cc, hypermineralized carious dentin: 1815-2740mg/cc, and dental calculus: 1290-1770mg/cc). A plausible linear correlation between segmented MD volumes and elemental ratios within these volumes was established, and Ca/P ratios for dentin (1.49), hypomineralized dentin (0.32-0.46), cementum (1.51), and bone (1.68) were observed. Furthermore, varying Ca/Zn ratios were distinguished in adapted compared to normal tissues, such as in bone (855-2765) and in cementum (595-990), highlighting Zn as an influential element in prompting observed adaptive properties. Hence, results provide insights on mineral density gradients with elemental concentrations and elemental footprints that in turn could aid in elucidating mechanistic processes for pathologic formations. PMID:25856386

  19. Normalizing Heterosexuality: Mothers' Assumptions, Talk, and Strategies with Young Children

    Science.gov (United States)

    Martin, Karin A.

    2009-01-01

    In recent years, social scientists have identified not just heterosexism and homophobia as social problems, but also heteronormativity--the mundane, everyday ways that heterosexuality is privileged and taken for granted as normal and natural. There is little empirical research, however, on how heterosexuality is reproduced and then normalized for…

  20. Signs of Gas Trapping in Normal Lung Density Regions in Smokers.

    Science.gov (United States)

    Bodduluri, Sandeep; Reinhardt, Joseph M; Hoffman, Eric A; Newell, John D; Nath, Hrudaya; Dransfield, Mark T; Bhatt, Surya P

    2017-12-01

    A substantial proportion of subjects without overt airflow obstruction have significant respiratory morbidity and structural abnormalities as visualized by computed tomography. Whether regions of the lung that appear normal using traditional computed tomography criteria have mild disease is not known. To identify subthreshold structural disease in normal-appearing lung regions in smokers, we analyzed 8,034 subjects with complete inspiratory and expiratory computed tomographic data participating in the COPDGene Study, including 103 lifetime nonsmokers. The ratio of the mean lung density at end expiration (E) to end inspiration (I) was calculated in lung regions with normal density (ND) by traditional thresholds for mild emphysema (-910 Hounsfield units) and gas trapping (-856 Hounsfield units) to derive the ND-E/I ratio. Multivariable regression analysis was used to measure the associations between ND-E/I, lung function, and respiratory morbidity. The ND-E/I ratio was greater in smokers than in nonsmokers, and it progressively increased from mild to severe chronic obstructive pulmonary disease severity. A proportion of 26.3% of smokers without airflow obstruction had ND-E/I greater than the 90th percentile of normal. ND-E/I was independently associated with FEV1 (adjusted β = -0.020; 95% confidence interval [CI], -0.032 to -0.007; P = 0.001) and with St. George's Respiratory Questionnaire scores (adjusted β = 0.952; 95% CI, 0.529 to 1.374). Gas trapping in normal-density lung regions is thus common in smokers without airflow obstruction, and it is associated with respiratory morbidity. Clinical trial registered with www.clinicaltrials.gov (NCT00608764).

  1. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  2. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra; Mallick, Bani K.; Staudenmayer, John; Pati, Debdeep; Carroll, Raymond J.

    2014-01-01

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  3. Forecasting Value-at-Risk under Different Distributional Assumptions

    Directory of Open Access Journals (Sweden)

    Manuela Braione

    2016-01-01

    Financial asset returns are known to be conditionally heteroskedastic and generally non-normally distributed, fat-tailed and often skewed. These features must be taken into account to produce accurate forecasts of Value-at-Risk (VaR). We provide a comprehensive look at the problem by considering the impact that different distributional assumptions have on the accuracy of both univariate and multivariate GARCH models in out-of-sample VaR prediction. The set of analyzed distributions comprises the normal, Student, Multivariate Exponential Power and their corresponding skewed counterparts. The accuracy of the VaR forecasts is assessed by implementing standard statistical backtesting procedures used to rank the different specifications. The results show the importance of allowing for heavy-tails and skewness in the distributional assumption with the skew-Student outperforming the others across all tests and confidence levels.
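
    The practical effect of the distributional assumption on VaR is easy to see in a stripped-down example. The sketch below is not the paper's GARCH backtest; it simply compares the 1% VaR implied by a normal versus a fat-tailed Student-t assumption at the same (assumed, illustrative) daily volatility.

```python
# Compare 1% VaR under normal vs. Student-t assumptions at equal volatility.
# sigma and nu are illustrative assumptions, not estimates from the paper.
import numpy as np
from scipy import stats

sigma = 0.02                 # assumed daily volatility
alpha = 0.01                 # 1% VaR level
nu = 5                       # assumed Student-t degrees of freedom

var_normal = -sigma * stats.norm.ppf(alpha)
# rescale the t quantile so the distribution has standard deviation sigma
var_student = -sigma * stats.t.ppf(alpha, df=nu) / np.sqrt(nu / (nu - 2))

print(f"1% VaR, normal assumption    : {var_normal:.4f}")
print(f"1% VaR, Student-t (nu = 5)   : {var_student:.4f}")   # larger, i.e. more conservative
```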

  4. A framework for the organizational assumptions underlying safety culture

    International Nuclear Information System (INIS)

    Packer, Charles

    2002-01-01

    The safety culture of the nuclear organization can be addressed at the three levels of culture proposed by Edgar Schein. The industry literature provides a great deal of insight at the artefact and espoused value levels, although as yet it remains somewhat disorganized. There is, however, an overall lack of understanding of the assumption level of safety culture. This paper describes a possible framework for conceptualizing the assumption level, suggesting that safety culture is grounded in unconscious beliefs about the nature of the safety problem, its solution and how to organize to achieve the solution. Using this framework, the organization can begin to uncover the assumptions at play in its normal operation, decisions and events and, if necessary, engage in a process to shift them towards assumptions more supportive of a strong safety culture. (author)

  5. Bone density determination using I-125 densitometry in idiopathic scoliosis

    International Nuclear Information System (INIS)

    Weinberger, N.

    1984-01-01

    Based on the assumption that radiographs from patients with idiopathic scoliosis show osteoporotic changes in the curved area, investigations with I-125 densitometry were made, specifically with measurement points at the ulna and the calcaneus. A difference in the bone density between patients with scoliosis and normal controls could not be proven. The mineral-salt content of the scoliosis patients lay on average 6.5 to 9.3% lower than that of the normal controls. No relation could be found between the degree of curvature of the scoliosis and the peripheral bone density, from which it can be concluded that no generalized mineral-salt deficiency exists. Radiographs show only local changes (photo densitometry, computed tomography).

  6. A note on asymptotic normality in the thermodynamic limit at low densities

    DEFF Research Database (Denmark)

    Jensen, J.L.

    1991-01-01

    We consider a continuous statistical mechanical system with a pair interaction in a region λ tending to infinity. For low densities asymptotic normality of the canonical statistic is proved, both in the grand canonical ensemble and in the canonical ensemble. The results are illustrated through...

  7. Features of the normal choriocapillaris with OCT-angiography: Density estimation and textural properties.

    Science.gov (United States)

    Montesano, Giovanni; Allegrini, Davide; Colombo, Leonardo; Rossetti, Luca M; Pece, Alfredo

    2017-01-01

    The main objective of our work is to perform an in-depth analysis of the structural features of normal choriocapillaris imaged with OCT Angiography. Specifically, we provide an optimal radius for a circular Region of Interest (ROI) to obtain a stable estimate of the subfoveal choriocapillaris density and characterize its textural properties using Markov Random Fields. On each binarized image of the choriocapillaris OCT Angiography we performed simulated measurements of the subfoveal choriocapillaris densities with circular Regions of Interest (ROIs) of different radii and with small random displacements from the center of the Foveal Avascular Zone (FAZ). We then calculated the variability of the density measure with different ROI radii. We then characterized the textural features of choriocapillaris binary images by estimating the parameters of an Ising model. For each image we calculated the Optimal Radius (OR) as the minimum ROI radius required to obtain a standard deviation in the simulation below 0.01. The density measured with the individual OR was 0.52 ± 0.07 (mean ± STD). Similar density values (0.51 ± 0.07) were obtained using a fixed ROI radius of 450 μm. The Ising model yielded two parameter estimates (β = 0.34 ± 0.03; γ = 0.003 ± 0.012; mean ± STD), characterizing pixel clustering and white pixel density respectively. Using the estimated parameters to synthesize new random textures via simulation, we obtained a good reproduction of the original choriocapillaris structural features and density. In conclusion, we developed an extensive characterization of the normal subfoveal choriocapillaris that might be used for flow analysis and applied to the investigation of pathological alterations.

  8. Brain parenchymal density measurements by CT in demented subjects and normal controls

    International Nuclear Information System (INIS)

    Gado, M.; Danziger, W.L.; Chi, D.; Hughes, C.P.; Coben, L.A.

    1983-01-01

    Parenchymal density measurements of 14 regions of gray and white matter from each cerebral hemisphere were made from CT scans of 25 subjects who had varying degrees of dementia as measured by a global Clinical Dementia Rating, and also from CT scans of 33 normal control subjects. There were few significant differences between the two groups in the mean density value for each of the regions examined, although several individual psychometric tests did correlate with density changes. Moreover, for six regions in the cerebral cortex, and for one region in the thalamus of each hemisphere, we found no significant correlation between the gray-white matter density difference and dementia. There was, however, a loss of the discriminability between the gray and white matter with an increase in the size of the ventricles. These findings may be attributed to the loss of white matter volume.

  9. Corneal endothelial cell density and morphology in normal Iranian eyes

    Directory of Open Access Journals (Sweden)

    Fallah Mohammad

    2006-03-01

    Background We describe corneal endothelial cell density and morphology in normal Iranian eyes and compare endothelial cell characteristics in the Iranian population with data available in the literature for American and Indian populations. Methods Specular microscopy was performed in 525 eyes of normal Iranian people aged 20 to 85 years old. The studied parameters, including mean endothelial cell density (MCD), mean cell area (MCA) and coefficient of variation (CV) in cell area, were analyzed in all of the 525 eyes. Results MCD was 1961 ± 457 cell/mm2 and MCA was 537.0 ± 137.4 μm2. There was no statistically significant difference in MCD, MCA and CV between genders (Student t-test, P = 0.85, P = 0.97 and P = 0.15, respectively). There was a statistically significant decrease in MCD with age (r = -0.64). The rate of cell loss was 0.6% per year. There was also a statistically significant increase in MCA (r = 0.56) and CV (r = 0.30) from 20 to 85 years of age. Conclusion The first normative data for the endothelium of Iranian eyes seem to confirm that there are no differences in MCD, MCA and CV between genders. Nevertheless, the values obtained in Iranian eyes seem to differ from those reported in the literature for Indian and American populations.

  10. Study of electron densities of normal and neoplastic human breast tissues by Compton scattering using synchrotron radiation

    Energy Technology Data Exchange (ETDEWEB)

    Antoniassi, M.; Conceicao, A.L.C. [Departamento de Fisica-Faculdade de Filosofia Ciencias e Letras de Ribeirao Preto-Universidade de Sao Paulo, Ribeirao Preto, Sao Paulo (Brazil); Poletti, M.E., E-mail: poletti@ffclrp.usp.br [Departamento de Fisica-Faculdade de Filosofia Ciencias e Letras de Ribeirao Preto-Universidade de Sao Paulo, Ribeirao Preto, Sao Paulo (Brazil)

    2012-07-15

    Electron densities of 33 samples of normal (adipose and fibroglandular) and neoplastic (benign and malignant) human breast tissues were determined through Compton scattering data using a monochromatic synchrotron radiation source and an energy dispersive detector. The area of Compton peaks was used to determine the electron densities of the samples. Adipose tissue exhibits the lowest values of electron density whereas malignant tissue the highest. The relationship with their histology was discussed. Comparison with previous results showed differences smaller than 4%. - Highlights: ► Electron density of normal and neoplastic breast tissues was measured using Compton scattering. ► Monochromatic synchrotron radiation was used to obtain the Compton scattering data. ► The area of Compton peaks was used to determine the electron densities of samples. ► Adipose tissue shows the lowest electron density values whereas the malignant tissue the highest. ► Comparison with previous results showed differences smaller than 4%.

  11. Study of electron densities of normal and neoplastic human breast tissues by Compton scattering using synchrotron radiation

    International Nuclear Information System (INIS)

    Antoniassi, M.; Conceição, A.L.C.; Poletti, M.E.

    2012-01-01

    Electron densities of 33 samples of normal (adipose and fibroglandular) and neoplastic (benign and malignant) human breast tissues were determined through Compton scattering data using a monochromatic synchrotron radiation source and an energy dispersive detector. The area of Compton peaks was used to determine the electron densities of the samples. Adipose tissue exhibits the lowest values of electron density whereas malignant tissue the highest. The relationship with their histology was discussed. Comparison with previous results showed differences smaller than 4%. - Highlights: ► Electron density of normal and neoplastic breast tissues was measured using Compton scattering. ► Monochromatic synchrotron radiation was used to obtain the Compton scattering data. ► The area of Compton peaks was used to determine the electron densities of samples. ► Adipose tissue shows the lowest electron density values whereas the malignant tissue the highest. ► Comparison with previous results showed differences smaller than 4%.

  12. Smooth quantile normalization.

    Science.gov (United States)

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
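
    For reference, the sketch below implements plain (full) quantile normalization, the method that qsmooth generalizes; it is not the qsmooth implementation itself. Every sample is mapped onto a common reference distribution, which is exactly the assumption qsmooth relaxes within biological groups.

```python
# Plain quantile normalization: force every sample (column) onto the same
# reference distribution, taken as the row-wise mean of the sorted values.
import numpy as np

def quantile_normalize(x):
    """x: features x samples. Returns an array with identical per-sample distributions."""
    order = np.argsort(x, axis=0)                 # per-sample sort order
    ranks = np.argsort(order, axis=0)             # rank of each value within its sample
    reference = np.sort(x, axis=0).mean(axis=1)   # mean of sorted values across samples
    return reference[ranks]

toy = np.array([[5.0, 4.0, 3.0],
                [2.0, 1.0, 4.0],
                [3.0, 4.0, 6.0],
                [4.0, 2.0, 8.0]])
print(quantile_normalize(toy))
```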

  13. Regression assumptions in clinical psychology research practice-a systematic review of common misconceptions.

    Science.gov (United States)

    Ernst, Anja F; Albers, Casper J

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongfully held for a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA-recommendations. This paper appeals for a heightened awareness for and increased transparency in the reporting of statistical assumption checking.
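
    The central misconception can be demonstrated in a few lines. In the hedged sketch below (not from the review), the predictor is heavily skewed yet the regression errors are normal, so a normality test applied to the predictor rejects, while the same test applied to the residuals, the quantity the assumption actually concerns, typically does not.

```python
# The normality assumption concerns the errors, not the variables:
# test residuals of the fitted model, not the raw predictor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.exponential(size=300)              # heavily skewed predictor: that alone is fine
y = 1.0 + 2.0 * x + rng.normal(size=300)   # errors are normal

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

print("Shapiro-Wilk p, predictor :", stats.shapiro(x).pvalue)          # essentially zero
print("Shapiro-Wilk p, residuals :", stats.shapiro(residuals).pvalue)  # typically > 0.05
```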

  14. Non-Normality and Testing that a Correlation Equals Zero

    Science.gov (United States)

    Levy, Kenneth J.

    1977-01-01

    The importance of the assumption of normality for testing that a bivariate normal correlation equals zero is examined. Both empirical and theoretical evidence suggest that such tests are robust with respect to violation of the normality assumption. (Author/JKS)
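
    For completeness, the test in question is the usual t test of a sample correlation: under bivariate normality, t = r·sqrt(n-2)/sqrt(1-r²) follows a t distribution with n-2 degrees of freedom. The sketch below (not from the paper) applies it to deliberately non-normal data.

```python
# t test that a correlation equals zero, applied to non-normal marginals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.exponential(size=200)          # deliberately non-normal
y = rng.exponential(size=200)          # independent of x, so true correlation is 0

r = np.corrcoef(x, y)[0, 1]
t = r * np.sqrt(len(x) - 2) / np.sqrt(1 - r ** 2)
p = 2 * stats.t.sf(abs(t), df=len(x) - 2)    # two-sided p-value
print(f"r = {r:.3f}, t = {t:.2f}, p = {p:.3f}")   # p is typically large here
```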

  15. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    Science.gov (United States)

    Ernst, Anja F.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongfully held for a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA-recommendations. This paper appeals for a heightened awareness for and increased transparency in the reporting of statistical assumption checking. PMID:28533971

  16. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    Directory of Open Access Journals (Sweden)

    Anja F. Ernst

    2017-05-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongfully held for a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA-recommendations. This paper appeals for a heightened awareness for and increased transparency in the reporting of statistical assumption checking.

  17. Differences in bone mineral density between normal-weight children and children with overweight and obesity: a systematic review and meta-analysis.

    Science.gov (United States)

    van Leeuwen, J; Koes, B W; Paulis, W D; van Middelkoop, M

    2017-05-01

    This study examines the differences in bone mineral density between normal-weight children and children with overweight or obesity. A systematic review and meta-analysis of observational studies (published up to 22 June 2016) on the differences in bone mineral density between normal-weight children and overweight and obese children was performed. Results were pooled when possible and mean differences (MDs) were calculated between normal-weight and overweight and normal-weight and obese children for bone content and density measures at different body sites. Twenty-seven studies, with a total of 5,958 children, were included. There was moderate and high quality of evidence that overweight (MD 213 g; 95% confidence interval [CI] 166, 261) and obese children (MD 329 g; 95%CI [229, 430]) have a significantly higher whole body bone mineral content than normal-weight children. Similar results were found for whole body bone mineral density. Sensitivity analysis showed that the association was stronger in girls. Overweight and obese children have a significantly higher bone mineral density compared with normal-weight children. Because there was only one study included with a longitudinal design, the long-term impact of childhood overweight and obesity on bone health at adulthood is not clear. © 2017 World Obesity Federation.

  18. Regions of low density in the contrast-enhanced pituitary gland: normal and pathologic processes

    International Nuclear Information System (INIS)

    Chambers, E.F.; Turski, P.A.; LaMasters, D.; Newton, T.H.

    1982-01-01

    The incidence of low-density regions in the contrast-enhanced pituitary gland and the possible causes of these regions were investigated by a retrospective review of computed tomographic (CT) scans of the head in 50 patients and autopsy specimens of the pituitary in 100 other patients. It was found that focal areas of low density within the contrast enhanced pituitary gland can be caused by various normal and pathologic conditions such as pituitary microadenomas, pars intermedia cysts, foci of metastasis, infarcts, epidermoid cysts, and abscesses. Although most focal low-density regions probably represent pituitary microadenomas, careful clinical correlation is needed to establish a diagnosis

  19. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization.

    Science.gov (United States)

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-28

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  20. Communication: Density functional theory model for multi-reference systems based on the exact-exchange hole normalization

    Science.gov (United States)

    Laqua, Henryk; Kussmann, Jörg; Ochsenfeld, Christian

    2018-03-01

    The correct description of multi-reference electronic ground states within Kohn-Sham density functional theory (DFT) requires an ensemble-state representation, employing fractionally occupied orbitals. However, the use of fractional orbital occupation leads to non-normalized exact-exchange holes, resulting in large fractional-spin errors for conventional approximative density functionals. In this communication, we present a simple approach to directly include the exact-exchange-hole normalization into DFT. Compared to conventional functionals, our model strongly improves the description for multi-reference systems, while preserving the accuracy in the single-reference case. We analyze the performance of our proposed method at the example of spin-averaged atoms and spin-restricted bond dissociation energy surfaces.

  1. Symmetry energy, its density slope, and neutron-proton effective mass splitting at normal density extracted from global nucleon optical potentials

    International Nuclear Information System (INIS)

    Xu Chang; Li Baoan; Chen Liewen

    2010-01-01

    Based on the Hugenholtz-Van Hove theorem, it is shown that both the symmetry energy E_sym(ρ) and its density slope L(ρ) at normal density ρ_0 are completely determined by the nucleon global optical potentials. The latter can be extracted directly from nucleon-nucleus scatterings, (p,n) charge-exchange reactions, and single-particle energy levels of bound states. Averaging all phenomenological isovector nucleon potentials constrained by world data available in the literature since 1969, the best estimates of E_sym(ρ_0) = 31.3 MeV and L(ρ_0) = 52.7 MeV are simultaneously obtained. Moreover, the corresponding neutron-proton effective mass splitting in neutron-rich matter of isospin asymmetry δ is estimated to be (m*_n - m*_p)/m = 0.32δ.

  2. Low-density lipoprotein concentration in the normal left coronary artery tree

    Directory of Open Access Journals (Sweden)

    Louridas George E

    2008-10-01

    Background The blood flow and transportation of molecules in the cardiovascular system plays a crucial role in the genesis and progression of atherosclerosis. This computational study elucidates the Low Density Lipoprotein (LDL) site concentration in the entire normal human 3D tree of the LCA. Methods A 3D geometry model of the normal human LCA tree is constructed. Angiographic data used for geometry construction correspond to end-diastole. The resulted model includes the LMCA, LAD, LCxA and their main branches. The numerical simulation couples the flow equations with the transport equation applying realistic boundary conditions at the wall. Results High concentration of LDL values appears at bifurcations opposite the flow dividers in the proximal regions of the Left Coronary Artery (LCA) tree, where atherosclerosis frequently occurs. The area-averaged normalized luminal surface LDL concentrations over the entire LCA tree are 1.0348, 1.054 and 1.23 for the low, median and high water infiltration velocities, respectively. For the high, median and low molecular diffusivities, the peak values of the normalized LDL luminal surface concentration at the LMCA bifurcation reach 1.065, 1.080 and 1.205, respectively. LCA tree walls are exposed to a cholesterolemic environment although the applied mass and flow conditions refer to normal human geometry and normal mass-flow conditions. Conclusion The relationship between WSS and luminal surface concentration of LDL indicates that LDL is elevated at locations where WSS is low. Concave sides of the LCA tree exhibit higher concentration of LDL than the convex sides. Decreased molecular diffusivity increases the LDL concentration. Increased water infiltration velocity increases the LDL concentration. The regional area of high luminal surface concentration is increased with increasing water infiltration velocity. Regions of high LDL luminal surface concentration do not necessarily co-locate to the

  3. Shattering world assumptions: A prospective view of the impact of adverse events on world assumptions.

    Science.gov (United States)

    Schuler, Eric R; Boals, Adriel

    2016-05-01

    Shattered Assumptions theory (Janoff-Bulman, 1992) posits that experiencing a traumatic event has the potential to diminish the degree of optimism in the assumptions of the world (assumptive world), which could lead to the development of posttraumatic stress disorder. Prior research assessed the assumptive world with a measure that was recently reported to have poor psychometric properties (Kaler et al., 2008). The current study had 3 aims: (a) to assess the psychometric properties of a recently developed measure of the assumptive world, (b) to retrospectively examine how prior adverse events affected the optimism of the assumptive world, and (c) to measure the impact of an intervening adverse event. An 8-week prospective design with a college sample (N = 882 at Time 1 and N = 511 at Time 2) was used to assess the study objectives. We split adverse events into those that were objectively or subjectively traumatic in nature. The new measure exhibited adequate psychometric properties. The report of a prior objective or subjective trauma at Time 1 was related to a less optimistic assumptive world. Furthermore, participants who experienced an intervening objectively traumatic event evidenced a decrease in optimistic views of the world compared with those who did not experience an intervening adverse event. We found support for Shattered Assumptions theory retrospectively and prospectively using a reliable measure of the assumptive world. We discuss future assessments of the measure of the assumptive world and clinical implications to help rebuild the assumptive world with current therapies. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Adult Learning Assumptions

    Science.gov (United States)

    Baskas, Richard S.

    2011-01-01

    The purpose of this study is to examine Knowles' theory of andragogy and his six assumptions of how adults learn while providing evidence to support two of his assumptions based on the theory of andragogy. As no single theory explains how adults learn, it can best be assumed that adults learn through the accumulation of formal and informal…

  5. Optimization of superconductor--normal-metal--superconductor Josephson junctions for high critical-current density

    International Nuclear Information System (INIS)

    Golub, A.; Horovitz, B.

    1994-01-01

    The application of superconducting Bi_2Sr_2CaCu_2O_8 and YBa_2Cu_3O_7 wires or tapes to electronic devices requires the optimization of the transport properties in Ohmic contacts between the superconductor and the normal metal in the circuit. This paper presents results of tunneling theory in superconductor--normal-metal--superconductor (SNS) junctions, in both pure and dirty limits. We derive expressions for the critical-current density as a function of the normal-metal resistivity in the dirty limit or of the ratio of Fermi velocities and effective masses in the clean limit. In the latter case the critical current increases when the ratio γ of the Fermi velocity in the superconductor to that of the weak link becomes much less than 1, and it also has a local maximum if γ is close to 1. This local maximum is more pronounced if the ratio of effective masses is large. For temperatures well below the critical temperature of the superconductors the model with abrupt pair potential on the SN interfaces is considered and its applicability near the critical temperature is examined.

  6. Do unreal assumptions pervert behaviour?

    DEFF Research Database (Denmark)

    Petersen, Verner C.

    of the basic assumptions underlying the theories found in economics. Assumptions relating to the primacy of self-interest, to resourceful, evaluative, maximising models of man, to incentive systems and to agency theory. The major part of the paper then discusses how these assumptions and theories may pervert......-interested way nothing will. The purpose of this paper is to take a critical look at some of the assumptions and theories found in economics and discuss their implications for the models and the practices found in the management of business. The expectation is that the unrealistic assumptions of economics have...... become taken for granted and tacitly included into theories and models of management. Guiding business and management to behave in a fashion that apparently makes these assumptions become "true". Thus in fact making theories and models become self-fulfilling prophecies. The paper elucidates some...

  7. Transmission dynamics of Bacillus thuringiensis infecting Plodia interpunctella: a test of the mass action assumption with an insect pathogen.

    Science.gov (United States)

    Knell, R J; Begon, M; Thompson, D J

    1996-01-22

    Central to theoretical studies of host-pathogen population dynamics is a term describing transmission of the pathogen. This usually assumes that transmission is proportional to the density of infectious hosts or particles and of susceptible individuals. We tested this assumption with the bacterial pathogen Bacillus thuringiensis infecting larvae of Plodia interpunctella, the Indian meal moth. Transmission was found to increase in a more than linear way with host density in fourth and fifth instar P. interpunctella, and to decrease with the density of infectious cadavers in the case of fifth instar larvae. Food availability was shown to play an important part in this process. Therefore, on a number of counts, the usual assumption was found not to apply in our experimental system.
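
    As a brief illustration of the assumption under test, the mass-action transmission term and a commonly used nonlinear generalization can be written as follows; the exponents p and q are illustrative and are not values reported by this study.

        \frac{dS}{dt} = -\beta\, S\, I                           % mass action: transmission linear in both densities
        \frac{dS}{dt} = -\beta\, S^{p} I^{q}, \quad p, q \neq 1  % nonlinear transmission of the kind the experiments suggest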

  8. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    Science.gov (United States)

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
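
    A minimal sketch of the two modelling steps discussed above, assuming the commonly cited PBR form PBR = 0.5 · R_max · F_r · N_min (Wade 1998) and a small stage-structured Leslie-type matrix; every demographic rate below is hypothetical and chosen only for illustration.

        import numpy as np

        def pbr(n_min, r_max, f_r):
            # Potential Biological Removal in its commonly cited form (an assumption here,
            # not necessarily the exact parameterization used in the study)
            return 0.5 * r_max * f_r * n_min

        # Illustrative three-stage matrix with hypothetical seabird-like rates
        fecundity = [0.0, 0.0, 0.3]          # offspring per female per year, by stage
        survival = [0.80, 0.85, 0.92]        # annual survival: juvenile, immature, adult
        L = np.array([fecundity,
                      [survival[0], 0.0, 0.0],
                      [0.0, survival[1], survival[2]]])

        n = np.array([500.0, 400.0, 2000.0])               # initial abundance by stage
        removals = pbr(n_min=n.sum(), r_max=0.06, f_r=0.5)

        for year in range(25):
            n = L @ n                                      # project one year forward
            n *= max(0.0, 1.0 - removals / n.sum())        # spread the extra mortality over stages

        print(f"PBR = {removals:.1f} birds/yr, population after 25 yr = {n.sum():.0f}")

    Rerunning such a projection with different recovery factors or forms of density dependence is the kind of explicit assumption-testing the authors recommend in place of PBR alone.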

  9. A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

    Directory of Open Access Journals (Sweden)

    Michael Pfaffermayr

    2014-10-01

    Full Text Available The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test on whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.

  10. An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis

    Science.gov (United States)

    Diwakar, Rekha

    2017-01-01

    Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data that do not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…
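
    A minimal sketch of the kind of comparison the article discusses, fitting both distributions to skewed, non-negative toy data and ranking the fits by AIC; the data and the use of AIC are illustrative choices, not the article's procedure.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = rng.lognormal(mean=1.0, sigma=0.8, size=500)    # skewed, non-negative toy data

        mu, sd = stats.norm.fit(data)                          # maximum-likelihood normal fit
        shape, loc, scale = stats.lognorm.fit(data, floc=0)    # lognormal fit with location fixed at 0

        def aic(loglik, k):
            return 2 * k - 2 * loglik

        aic_norm = aic(stats.norm.logpdf(data, mu, sd).sum(), k=2)
        aic_lognorm = aic(stats.lognorm.logpdf(data, shape, loc, scale).sum(), k=2)
        print(f"AIC normal = {aic_norm:.1f}, AIC lognormal = {aic_lognorm:.1f}")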

  11. Identification of Raman peaks of high-Tc cuprates in normal state through density of states

    International Nuclear Information System (INIS)

    Bishoyi, K.C.; Rout, G.C.; Behera, S.N.

    2007-01-01

    We present a microscopic theory to explain and identify the Raman spectral peaks of high-T_c cuprates R_{2-x}M_xCuO_4 in the normal state. We use the electronic Hamiltonian prescribed by Fulde in the presence of anti-ferromagnetism. Phonon interaction with the hybridization between the conduction electrons of the system and the f-electrons has been incorporated in the calculation. The phonon spectral density is calculated by the Green's function technique of Zubarev at zero wave vector and in the finite (room) temperature limit. The four Raman active peaks (P_1-P_4) representing the electronic states of the atomic sub-systems of the cuprate system are identified by the calculated quasi-particle energy bands and electron density of states (DOS). The effect of interactions on these peaks is also explained

  12. Pre-equilibrium assumptions and statistical model parameters effects on reaction cross-section calculations

    International Nuclear Information System (INIS)

    Avrigeanu, M.; Avrigeanu, V.

    1992-02-01

    A systematic study of the effects of statistical model parameters and semi-classical pre-equilibrium emission models has been carried out for the (n,p) reactions on the ^56Fe and ^60Co target nuclei. The results obtained by using various assumptions within a given pre-equilibrium emission model differ among themselves more than the results of different models used under similar conditions. The necessity of using realistic level density formulas is emphasized especially in connection with pre-equilibrium emission models (i.e. with the exciton state density expression), while a basic support could be found only by replacing the Williams exciton state density formula with a realistic one. (author). 46 refs, 12 figs, 3 tabs

  13. Low Bone Density

    Science.gov (United States)

    Low bone density is when your bone density ... people with normal bone density. Detecting Low Bone Density: A bone density test will determine whether you ...

  14. Studies on Impingement Effects of Low Density Jets on Surfaces — Determination of Shear Stress and Normal Pressure

    Science.gov (United States)

    Sathian, Sarith. P.; Kurian, Job

    2005-05-01

    This paper presents the results of the Laser Reflection Method (LRM) for the determination of the shear stress due to impingement of low-density free jets on a flat plate. For a thin oil film moving under the action of an aerodynamic boundary layer, the shear stress at the air-oil interface is equal to the shear stress between the surface and the air. The oil film slope is measured directly and dynamically using a position sensing detector (PSD). The thinning rate of the oil film is measured directly, which is the major advantage of the LRM over the LISF method. From the oil film slope history, the shear stress is calculated directly using a three-point formula. Over the full range of experimental conditions, the Knudsen numbers varied up to the continuum limit of the transition regime. The shear stress values for low-density flows in the transition regime are thus obtained using LRM, and the measured values of shear show fair agreement with those obtained by other methods. Results of normal pressure measurements on a flat plate in low-density jets using thermistors as pressure sensors are also presented in the paper. The normal pressure profiles obtained show the characteristic features of Newtonian impact theory for hypersonic flows.

  15. Score Normalization using Logistic Regression with Expected Parameters

    NARCIS (Netherlands)

    Aly, Robin

    State-of-the-art score normalization methods use generative models that rely on sometimes unrealistic assumptions. We propose a novel parameter estimation method for score normalization based on logistic regression. Experiments on the Gov2 and CluewebA collection indicate that our method is
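
    As a simplified sketch only: a plain logistic regression is fitted on labeled scores and its predicted probabilities are used as normalized scores. The paper's contribution of estimating the model with expected parameters is not reproduced here, and the scores and labels are hypothetical.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical retrieval scores and relevance labels from training queries
        scores = np.array([[2.1], [1.4], [0.3], [3.2], [0.9], [2.8]])
        relevant = np.array([1, 0, 0, 1, 0, 1])

        model = LogisticRegression().fit(scores, relevant)

        # Normalized scores: calibrated probabilities of relevance for a new query's scores
        new_scores = np.array([[1.0], [2.5]])
        print(model.predict_proba(new_scores)[:, 1])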

  16. Epidermal growth factor receptor-induced activator protein 1 activity controls density-dependent growth inhibition in normal rat kidney fibroblasts.

    NARCIS (Netherlands)

    Hornberg, J.J.; Dekker, H.; Peters, P.H.J.; Langerak, P.; Westerhoff, H.V.; Lankelma, J.; Zoelen, E.J.J.

    2006-01-01

    Density-dependent growth inhibition secures tissue homeostasis. Dysfunction of the mechanisms that regulate this type of growth control is a major cause of neoplasia. In confluent normal rat kidney (NRK) fibroblasts, epidermal growth factor (EGF) receptor levels decline, ultimately rendering these

  17. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption...

  18. Density functionals from deep learning

    OpenAIRE

    McMahon, Jeffrey M.

    2016-01-01

    Density-functional theory is a formally exact description of a many-body quantum system in terms of its density; in practice, however, approximations to the universal density functional are required. In this work, a model based on deep learning is developed to approximate this functional. Deep learning allows computational models that are capable of naturally discovering intricate structure in large and/or high-dimensional data sets, with multiple levels of abstraction. As no assumptions are ...

  19. Notes on power of normality tests of error terms in regression models

    International Nuclear Information System (INIS)

    Střelec, Luboš

    2015-01-01

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to make inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models
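
    A minimal sketch of the practice discussed above: fit a regression on toy data and check the residuals with two classical normality tests. The RT class of robust tests introduced in the contribution is not implemented here.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        x = rng.uniform(0, 10, 200)
        y = 2.0 + 0.5 * x + rng.standard_t(df=3, size=200)   # heavy-tailed (non-normal) errors

        slope, intercept = np.polyfit(x, y, deg=1)           # ordinary least squares fit
        residuals = y - (intercept + slope * x)

        print("Shapiro-Wilk:", stats.shapiro(residuals))
        print("Jarque-Bera:", stats.jarque_bera(residuals))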

  20. Notes on power of normality tests of error terms in regression models

    Energy Technology Data Exchange (ETDEWEB)

    Střelec, Luboš [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, Brno, 61300 (Czech Republic)

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to make inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.

  1. Deriving the coronal hole electron temperature: electron density dependent ionization / recombination considerations

    International Nuclear Information System (INIS)

    Doyle, John Gerard; Perez-Suarez, David; Singh, Avninda; Chapman, Steven; Bryans, Paul; Summers, Hugh; Savin, Daniel Wolf

    2010-01-01

    Comparison of appropriate theoretically derived line ratios with observational data can yield estimates of a plasma's physical parameters, such as electron density or temperature. The usual practice in the calculation of the line ratio is the assumption of excitation by electrons/protons followed by radiative decay. Furthermore, it is normal to use the so-called coronal approximation, i.e. one only considers ionization and recombination to and from the ground-state. A more accurate treatment is to include ionization/recombination to and from metastable levels. Here, we apply this to two lines from adjacent ionization stages, Mg IX 368 Å and Mg X 625 Å, which has been shown to be a very useful temperature diagnostic. At densities typical of coronal hole conditions, the difference between the electron temperature derived assuming the zero density limit compared with the electron density dependent ionization/recombination is small. This, however, is not the case for flares where the electron density is orders of magnitude larger. The derived temperature for the coronal hole at solar maximum is around 1.04 MK compared to just below 0.82 MK at solar minimum.

  2. Normal Parathyroid Function with Decreased Bone Mineral Density in Treated Celiac Disease

    Directory of Open Access Journals (Sweden)

    Bernard Lemieux

    2001-01-01

    Full Text Available Decreased bone mineral density (BMD) has been reported in patients with celiac disease in association with secondary hyperparathyroidism. The present study investigated whether basal parathyroid hormone (PTH) remained elevated and whether abnormalities of parathyroid function were still present in celiac disease patients treated with a gluten-free diet. Basal serum measurements of calcium and phosphate homeostasis and BMD were obtained in 17 biopsy-proven patients under treatment for a mean period of 5.7±3.7 years (range 1.1 to 15.9). In addition, parathyroid function was studied with calcium chloride and sodium citrate infusions in seven patients. Basal measurements of patients were compared with those of 26 normal individuals, while parathyroid function results were compared with those of seven sex- and age-matched controls. Basal results were similar in patients and controls except for intact PTH (I-PTH) (3.77±0.88 pmol/L versus 2.28±0.63 pmol/L, P<0.001), which was higher in the former group but still within normal limits. Mean 25-hydroxy vitamin D and 1,25-dihydroxy vitamin D values were normal in patients. Parathyroid function results were also found to be similar in both groups. Compared with a reference population of the same age (Z score), patients had significantly lower BMDs of the hip (-0.60±0.96 SDs, P<0.05) and lumbar spine (-0.76±1.15 SDs, P<0.05). T scores were also decreased for the hip (-1.3±0.9 SDs, P<0.0001) and lumbar spine (-1.4±1.35 SDs, P<0.0001), with two to three patients being osteoporotic (T score less than -2.5 SDs) and seven to eight osteopenic (T score less than -1 SDs but greater than or equal to -2.5 SDs) in at least one site. Height and weight were the only important determinants of BMD values by multivariate or logistical regression analysis in these patients. The results show higher basal I-PTH values with normal parathyroid function in treated celiac disease. Height and weight values are, but I-PTH values are not

  3. Multiverse Assumptions and Philosophy

    Directory of Open Access Journals (Sweden)

    James R. Johnson

    2018-02-01

    Full Text Available Multiverses are predictions based on theories. Focusing on each theory’s assumptions is key to evaluating a proposed multiverse. Although accepted theories of particle physics and cosmology contain non-intuitive features, multiverse theories entertain a host of “strange” assumptions classified as metaphysical (outside objective experience, concerned with fundamental nature of reality, ideas that cannot be proven right or wrong) topics such as: infinity, duplicate yous, hypothetical fields, more than three space dimensions, Hilbert space, advanced civilizations, and reality established by mathematical relationships. It is easy to confuse multiverse proposals because many divergent models exist. This overview defines the characteristics of eleven popular multiverse proposals. The characteristics compared are: initial conditions, values of constants, laws of nature, number of space dimensions, number of universes, and fine tuning explanations. Future scientific experiments may validate selected assumptions; but until they do, proposals by philosophers may be as valid as theoretical scientific theories.

  4. On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks

    Directory of Open Access Journals (Sweden)

    Giordano James

    2010-01-01

    Full Text Available Abstract A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e.- order), and what counts as abnormality (i.e.- disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual, and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice.

  5. On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks

    Science.gov (United States)

    2010-01-01

    A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e.- order), and what counts as abnormality (i.e.- disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual, and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice. PMID:20109176

  6. A new approach for estimating the density of liquids.

    Science.gov (United States)

    Sakagami, T; Fuchizaki, K; Ohara, K

    2016-10-05

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed.
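
    A heavily simplified sketch of the underlying idea, assuming the structural similarity amounts to a uniform rescaling of lengths read off from the first-peak position of the structure factor; the authors' similarity transformation of the radial distribution function is more elaborate, and the curves and reference density below are synthetic.

        import numpy as np

        def first_peak_position(q, s):
            # Wavenumber of the principal peak of the structure factor S(q)
            return q[np.argmax(s)]

        def estimate_density(q_ref, s_ref, rho_ref, q_tgt, s_tgt):
            # If the target structure is the reference rescaled in length by lam = q1_ref / q1_tgt,
            # the same particles occupy a volume scaled by lam**3, so the density scales as 1/lam**3.
            lam = first_peak_position(q_ref, s_ref) / first_peak_position(q_tgt, s_tgt)
            return rho_ref / lam**3

        # Synthetic example: the target peak sits at slightly lower q (an expanded liquid)
        q = np.linspace(0.5, 10.0, 500)
        s_ref = 1.0 + 1.5 * np.exp(-(q - 2.0) ** 2 / 0.1)
        s_tgt = 1.0 + 1.4 * np.exp(-(q - 1.9) ** 2 / 0.1)
        print(estimate_density(q, s_ref, 0.0213, q, s_tgt))   # reference density in nm^-3, hypothetical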

  7. Adaptive nonlinear control using input normalized neural networks

    International Nuclear Information System (INIS)

    Leeghim, Henzeh; Seo, In Ho; Bang, Hyo Choong

    2008-01-01

    An adaptive feedback linearization technique combined with a neural network is addressed to control uncertain nonlinear systems. The neural network-based adaptive control theory has been widely studied. However, the stability analysis of the closed-loop system with the neural network is rather complicated and difficult to understand, and sometimes unnecessary assumptions are involved. As a result, unnecessary assumptions for stability analysis are avoided by using a neural network with an input normalization technique. The ultimate boundedness of the tracking error is simply proved by the Lyapunov stability theory. A new simple update law as an adaptive nonlinear control is derived by simplification of the input normalized neural network, assuming that the variation of the uncertain term is sufficiently small.

  8. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated

    Directory of Open Access Journals (Sweden)

    Elżbieta Sandurska

    2016-12-01

    Full Text Available Introduction: Application of statistical software typically does not require extensive statistical knowledge, allowing even complex analyses to be performed easily. Consequently, test selection criteria and important assumptions may be easily overlooked or given insufficient consideration. In such cases, the results may well lead to wrong conclusions. Aim: To discuss issues related to assumption violations in the case of Student's t-test and one-way ANOVA, two parametric tests frequently used in the field of sports science, and to recommend solutions. Description of the state of knowledge: Student's t-test and ANOVA are parametric tests, and therefore some of the assumptions that need to be satisfied include normal distribution of the data and homogeneity of variances in groups. If the assumptions are violated, the original design of the test is impaired, and the test may then be compromised, giving spurious results. A simple method to normalize the data and to stabilize the variance is to use transformations. If such an approach fails, a good alternative to consider is a nonparametric test, such as the Mann-Whitney, Kruskal-Wallis or Wilcoxon signed-rank test. Summary: Thorough verification of the assumptions of parametric tests allows for correct selection of statistical tools, which is the basis of well-grounded statistical analysis. With a few simple rules, testing patterns in the data characteristic of sports science studies comes down to a straightforward procedure.
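
    A minimal sketch of the decision pattern described above for two independent groups, with illustrative thresholds and toy data; the article's full recommendations, including data transformations, are not reproduced.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        group_a = rng.normal(50, 8, 30)     # e.g. sprint times of squad A (toy data)
        group_b = rng.normal(54, 12, 30)    # e.g. sprint times of squad B (toy data)

        normal_a = stats.shapiro(group_a).pvalue > 0.05
        normal_b = stats.shapiro(group_b).pvalue > 0.05
        equal_var = stats.levene(group_a, group_b).pvalue > 0.05

        if normal_a and normal_b and equal_var:
            result = stats.ttest_ind(group_a, group_b)                    # Student's t-test
        elif normal_a and normal_b:
            result = stats.ttest_ind(group_a, group_b, equal_var=False)   # Welch's correction
        else:
            result = stats.mannwhitneyu(group_a, group_b)                 # nonparametric alternative
        print(result)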

  9. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
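
    A small sketch of the quantities involved, using the bounding-factor and E-value expressions commonly attributed to this line of work; the formulas are stated here as assumptions rather than quoted from the abstract.

        def bounding_factor(rr_eu, rr_ud):
            # Maximum factor by which an unmeasured confounder with exposure-confounder risk
            # ratio rr_eu and confounder-outcome risk ratio rr_ud can alter the estimate
            return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

        def e_value(rr_observed):
            # Minimum strength both confounding associations must have to explain away rr_observed
            return rr_observed + (rr_observed * (rr_observed - 1.0)) ** 0.5

        observed_rr = 2.0
        print(bounding_factor(3.0, 3.0))   # 1.8 < 2.0: such a confounder could not explain the estimate away
        print(e_value(observed_rr))        # about 3.41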

  10. Comment on 'Kinetic energy as a density functional'

    International Nuclear Information System (INIS)

    Holas, A.; March, N.H.

    2002-01-01

    In a recent paper, Nesbet [Phys. Rev. A 65, 010502(R) (2001)] has proposed dropping "the widespread but unjustified assumption that the existence of a ground-state density functional for the kinetic energy, T_s[ρ], of an N-electron system implies the existence of a density-functional derivative, δT_s[ρ]/δρ(r), equivalent to a local potential function," because, according to his arguments, this derivative "has the mathematical character of a linear operator that acts on orbital wave functions". Our Comment demonstrates that the statement called by Nesbet an "unjustified assumption" happens, in fact, to be a rigorously proven theorem. Therefore, his previous conclusions stemming from his different view of this derivative, which undermined the foundations of density-functional theory, can be discounted

  11. Density Development During Erosion Experiments of Cohesive Sediments

    DEFF Research Database (Denmark)

    Johansen, Claus; Larsen, Torben

    1998-01-01

    The density development during erosion experiments was investigated. The calculation of the erosion rate requires knowledge of the density profile with respect to the consolidation time (Parchure, 1984). At present, the basic assumption in the calculations is that the density profile is achieved... in order to obtain time-invariant sediment properties during the experiments.

  12. Monitoring Assumptions in Assume-Guarantee Contracts

    Directory of Open Access Journals (Sweden)

    Oleg Sokolsky

    2016-05-01

    Full Text Available Pre-deployment verification of software components with respect to behavioral specifications in the assume-guarantee form does not, in general, guarantee absence of errors at run time. This is because assumptions about the environment cannot be discharged until the environment is fixed. An intuitive approach is to complement pre-deployment verification of guarantees, up to the assumptions, with post-deployment monitoring of environment behavior to check that the assumptions are satisfied at run time. Such a monitor is typically implemented by instrumenting the application code of the component. An additional challenge for the monitoring step is that environment behaviors are typically obtained through an I/O library, which may alter the component's view of the input format. This transformation requires us to introduce a second pre-deployment verification step to ensure that alarms raised by the monitor would indeed correspond to violations of the environment assumptions. In this paper, we describe an approach for constructing monitors and verifying them against the component assumption. We also discuss limitations of instrumentation-based monitoring and potential ways to overcome them.
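
    As an illustration only (the paper targets instrumented component code and I/O libraries, not a toy wrapper), a run-time monitor can screen environment inputs against the assumption before the component's guarantee is relied upon; all names below are hypothetical.

        from typing import Callable, Iterable, Iterator

        def monitor_assumption(inputs: Iterable[float],
                               assumption: Callable[[float], bool],
                               component: Callable[[float], float]) -> Iterator[float]:
            # Run the component only on inputs that satisfy the environment assumption
            for value in inputs:
                if not assumption(value):
                    print(f"assumption violated at run time: {value!r}")
                    continue
                yield component(value)

        # Hypothetical contract: the environment promises non-negative sensor readings,
        # under which the component guarantees a well-defined square root.
        readings = [4.0, 9.0, -1.0, 16.0]
        print(list(monitor_assumption(readings, lambda v: v >= 0, lambda v: v ** 0.5)))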

  13. Tests of data quality, scaling assumptions, and reliability of the Danish SF-36

    DEFF Research Database (Denmark)

    Bjorner, J B; Damsgaard, M T; Watt, T

    1998-01-01

    We used general population data (n = 4084) to examine data completeness, response consistency, tests of scaling assumptions, and reliability of the Danish SF-36 Health Survey. We compared traditional multitrait scaling analyses to analyses using polychoric correlations and Spearman correlations...... with chronic diseases excepted). Concerning correlation methods, we found interesting differences indicating advantages of using methods that do not assume a normal distribution of answers as an addition to traditional methods....

  14. How Symmetrical Assumptions Advance Strategic Management Research

    DEFF Research Database (Denmark)

    Foss, Nicolai Juul; Hallberg, Hallberg

    2014-01-01

    We develop the case for symmetrical assumptions in strategic management theory. Assumptional symmetry obtains when assumptions made about certain actors and their interactions in one of the application domains of a theory are also made about this set of actors and their interactions in other...... application domains of the theory. We argue that assumptional symmetry leads to theoretical advancement by promoting the development of theory with greater falsifiability and stronger ontological grounding. Thus, strategic management theory may be advanced by systematically searching for asymmetrical...

  15. Multiply gapped density of states in a normal metal in contact with a superconductor

    Energy Technology Data Exchange (ETDEWEB)

    Reutlinger, Johannes; Belzig, Wolfgang [Department of Physics, University of Konstanz, 78457 Konstanz (Germany); Nazarov, Yuli V. [Kavli Institute of Nanoscience Delft, Delft University of Technology, 2628 CJ Delft (Netherlands); Glazman, Leonid I. [Department of Physics, Yale University, New Haven CT 06511-8499 (United States)

    2012-07-01

    The spectral properties of a normal metal adjacent to a superconductor are strongly dependent on the characteristic mesoscopic energy scale - the Thouless energy E_Th - and the strength of the connection. In this work, we predict that the local density of states (LDOS), besides the well-known minigap ∝ E_Th, can exhibit a multiple gap structure, which strongly depends on the type of the contact. For ballistic contacts we calculate these secondary gaps analytically in the framework of quantum circuit theory of mesoscopic transport. The secondary gaps are absent in the case of tunnel contacts. In the general case the equations are solved numerically for more realistic contacts, like for example diffusive connectors or dirty interfaces, which are characterized by continuous distributions of transmission eigenvalues between 0 and 1. We find that the gap vanishes in these cases, but the density of states is still suppressed around the superconducting gap edge. Distribution functions with a stronger weight at higher transmissions can be modeled through asymmetric ballistic double junctions, which even exhibit multiple gaps. Such spectral signatures are fundamental to disordered nanoscopic conductors and experimentally accessible.

  16. Impurity transport model for the normal confinement and high density H-mode discharges in Wendelstein 7-AS

    International Nuclear Information System (INIS)

    Ida, K; Burhenn, R; McCormick, K; Pasch, E; Yamada, H; Yoshinuma, M; Inagaki, S; Murakami, S; Osakabe, M; Liang, Y; Brakel, R; Ehmler, H; Giannone, L; Grigull, P; Knauer, J P; Maassberg, H; Weller, A

    2003-01-01

    An impurity transport model based on diffusivity and the radial convective velocity is proposed as a first approach to explain the differences in the time evolution of Al XII (0.776 nm), Al XI (55 nm) and Al X (33.3 nm) lines following Al-injection by laser blow-off between normal confinement discharges and high density H-mode (HDH) discharges. Both discharge types are in the collisional regime for impurities (central electron temperature is 0.4 keV and central density exceeds 10^20 m^-3). In this model, the radial convective velocity is assumed to be determined by the radial electric field, as derived from the pressure gradient. The diffusivity coefficient is chosen to be constant in the plasma core but is significantly larger in the edge region, where it counteracts the high local values of the inward convective velocity. Under these conditions, the faster decay of aluminium in HDH discharges can be explained by the smaller negative electric field in the bulk plasma, and correspondingly smaller inward convective velocity, due to flattening of the density profiles

  17. 75 FR 35098 - Federal Employees' Retirement System; Normal Cost Percentages

    Science.gov (United States)

    2010-06-21

    ... normal cost percentages and requests for actuarial assumptions and data to the Board of Actuaries, care of Gregory Kissel, Actuary, Office of Planning and Policy Analysis, Office of Personnel Management... Regulations, regulates how normal costs are determined. Recently, the Board of Actuaries of the Civil Service...

  18. Normal foot and ankle

    International Nuclear Information System (INIS)

    Weissman, S.D.

    1989-01-01

    The foot may be thought of as a bag of bones tied tightly together and functioning as a unit. The bones are expected to maintain their alignment without causing symptomatology to the patient. The author discusses a normal radiograph. The bones must have normal shape and normal alignment. The density of the soft tissues should be normal and there should be no fractures, tumors, or foreign bodies

  19. Wrong assumptions in the financial crisis

    NARCIS (Netherlands)

    Aalbers, M.B.

    2009-01-01

    Purpose - The purpose of this paper is to show how some of the assumptions about the current financial crisis are wrong because they misunderstand what takes place in the mortgage market. Design/methodology/approach - The paper discusses four wrong assumptions: one related to regulation, one to

  20. Three-dimensional structure of low-density nuclear matter

    International Nuclear Information System (INIS)

    Okamoto, Minoru; Maruyama, Toshiki; Yabana, Kazuhiro; Tatsumi, Toshitaka

    2012-01-01

    We numerically explore the pasta structures and properties of low-density nuclear matter without any assumption on the geometry. We observe conventional pasta structures, while a mixture of the pasta structures appears as a metastable state at some transient densities. We also discuss the lattice structure of droplets.

  1. Three-dimensional structure of low-density nuclear matter

    Energy Technology Data Exchange (ETDEWEB)

    Okamoto, Minoru, E-mail: okamoto@nucl.ph.tsukuba.ac.jp [Graduate School of Pure and Applied Science, University of Tsukuba, Tennoudai 1-1-1, Tsukuba, Ibaraki 305-8571 (Japan); Advanced Science Research Center, Japan Atomic Energy Agency, Shirakata Shirane 2-4, Tokai, Ibaraki 319-1195 (Japan); Maruyama, Toshiki, E-mail: maruyama.toshiki@jaea.go.jp [Advanced Science Research Center, Japan Atomic Energy Agency, Shirakata Shirane 2-4, Tokai, Ibaraki 319-1195 (Japan); Graduate School of Pure and Applied Science, University of Tsukuba, Tennoudai 1-1-1, Tsukuba, Ibaraki 305-8571 (Japan); Yabana, Kazuhiro, E-mail: yabana@nucl.ph.tsukuba.ac.jp [Graduate School of Pure and Applied Science, University of Tsukuba, Tennoudai 1-1-1, Tsukuba, Ibaraki 305-8571 (Japan); Center of Computational Sciences, University of Tsukuba, Tennoudai 1-1-1, Tsukuba, Ibaraki 305-8571 (Japan); Tatsumi, Toshitaka, E-mail: tatsumi@ruby.scphys.kyoto-u.ac.jp [Department of Physics, Kyoto University, Kyoto 606-8502 (Japan)

    2012-07-09

    We numerically explore the pasta structures and properties of low-density nuclear matter without any assumption on the geometry. We observe conventional pasta structures, while a mixture of the pasta structures appears as a metastable state at some transient densities. We also discuss the lattice structure of droplets.

  2. Identification of Raman peaks of high-T_c cuprates in normal state through density of states

    Energy Technology Data Exchange (ETDEWEB)

    Bishoyi, K.C. [P.G. Department of Physics, F.M. College (Auto.), Balasore 756 001 (India)]. E-mail: bishoyi@iopb.res.in; Rout, G.C. [Condensed Matter Physics Group, Govt. Science College, Chatrapur 761 020, Orissa (India); Behera, S.N. [Physics Enclave, H.I.G.-23/1, Housing Board Phase-I, Chandrasekharpur, Bhubaneswar 7510016 (India)

    2007-05-31

    We present a microscopic theory to explain and identify the Raman spectral peaks of high-T_c cuprates R_{2-x}M_xCuO_4 in the normal state. We use the electronic Hamiltonian prescribed by Fulde in the presence of anti-ferromagnetism. Phonon interaction with the hybridization between the conduction electrons of the system and the f-electrons has been incorporated in the calculation. The phonon spectral density is calculated by the Green's function technique of Zubarev at zero wave vector and in the finite (room) temperature limit. The four Raman active peaks (P_1-P_4) representing the electronic states of the atomic sub-systems of the cuprate system are identified by the calculated quasi-particle energy bands and electron density of states (DOS). The effect of interactions on these peaks is also explained.

  3. Single-particle energies and density of states in density functional theory

    Science.gov (United States)

    van Aggelen, H.; Chan, G. K.-L.

    2015-07-01

    Time-dependent density functional theory (TD-DFT) is commonly used as the foundation to obtain neutral excited states and transition weights in DFT, but does not allow direct access to density of states and single-particle energies, i.e. ionisation energies and electron affinities. Here we show that by extending TD-DFT to a superfluid formulation, which involves operators that break particle-number symmetry, we can obtain the density of states and single-particle energies from the poles of an appropriate superfluid response function. The standard Kohn-Sham eigenvalues emerge as the adiabatic limit of the superfluid response under the assumption that the exchange-correlation functional has no dependence on the superfluid density. The Kohn-Sham eigenvalues can thus be interpreted as approximations to the ionisation energies and electron affinities. Beyond this approximation, the formalism provides an incentive for creating a new class of density functionals specifically targeted at accurate single-particle eigenvalues and bandgaps.

  4. PFP issues/assumptions development and management planning guide

    International Nuclear Information System (INIS)

    SINCLAIR, J.C.

    1999-01-01

    The PFP Issues/Assumptions Development and Management Planning Guide presents the strategy and process used for the identification, allocation, and maintenance of an Issues/Assumptions Management List for the Plutonium Finishing Plant (PFP) integrated project baseline. Revisions to this document will include, as attachments, the most recent version of the Issues/Assumptions Management List, both open and current issues/assumptions (Appendix A), and closed or historical issues/assumptions (Appendix B). This document is intended to be a Project-owned management tool. As such, this document will periodically require revisions resulting from improvements of the information, processes, and techniques as now described. Revisions that suggest improved processes will only require PFP management approval

  5. Group normalization for genomic data.

    Science.gov (United States)

    Ghandi, Mahmoud; Beer, Michael A

    2012-01-01

    Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as Quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets.
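
    A much-simplified sketch of the reference-set idea described above: for each probe, take the k probes with the most similar control signal as references and report the treatment signal relative to their mean. The parameters and data are illustrative; this is not the authors' implementation.

        import numpy as np

        def group_normalize(treatment, control, k=50):
            # Rank probes by control signal and, for each probe, use a window of k
            # similarly responding probes as its reference set (crude stand-in for GN).
            order = np.argsort(control)
            normalized = np.empty_like(treatment, dtype=float)
            for rank, idx in enumerate(order):
                lo = max(0, min(rank - k // 2, len(order) - k))
                reference = order[lo:lo + k]
                normalized[idx] = treatment[idx] / treatment[reference].mean()
            return normalized

        rng = np.random.default_rng(3)
        probe_effect = rng.gamma(2.0, 1.0, 10_000)               # non-uniform probe sensitivity
        control = probe_effect * rng.lognormal(0.0, 0.1, 10_000)
        treatment = probe_effect * rng.lognormal(0.0, 0.1, 10_000)
        treatment[:100] *= 3.0                                   # a few truly enriched probes
        print(group_normalize(treatment, control)[:5])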

  6. Semi-analytical quasi-normal mode theory for the local density of states in coupled photonic crystal cavity-waveguide structures

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper

    2015-01-01

    We present and validate a semi-analytical quasi-normal mode (QNM) theory for the local density of states (LDOS) in coupled photonic crystal (PhC) cavity-waveguide structures. By means of an expansion of the Green's function on one or a few QNMs, a closed-form expression for the LDOS is obtained, and for two types of two-dimensional PhCs, with one and two cavities side-coupled to an extended waveguide, the theory is validated against numerically exact computations. For the single cavity, a slightly asymmetric spectrum is found, which the QNM theory reproduces, and for two cavities a non-trivial spectrum with a peak and a dip is found, which is reproduced only when including both the two relevant QNMs in the theory. In both cases, we find relative errors below 1% in the bandwidth of interest.

  7. The zero-sum assumption in neutral biodiversity theory

    NARCIS (Netherlands)

    Etienne, R.S.; Alonso, D.; McKane, A.J.

    2007-01-01

    The neutral theory of biodiversity as put forward by Hubbell in his 2001 monograph has received much criticism for its unrealistic simplifying assumptions. These are the assumptions of functional equivalence among different species (neutrality), the assumption of point mutation speciation, and the

  8. Normal lumbar spine bone mineral densities with single-energy CT

    International Nuclear Information System (INIS)

    Hendrick, R.E.; Ritenour, E.R.; Geis, J.R.; Thickman, D.; Freeman, K.

    1988-01-01

    The authors report trabecular spine densities determined by single-energy CT in 267 healthy women, aged 22 to 75 years. Volunteers were scanned at eight sites with use of identical fourth-generation CT scanners, postpatient calibration phantoms, and analysis software that accounts for beam hardening as a function of patient size. Results indicate that a cubic polynomial best represents the decrease in bone density (in milligrams per milliliter of K_2HPO_4) with age (in years): Bone Density = 140.9 + 4.44(Age) - 0.133(Age)^2 + 0.0008(Age)^3, with statistical significance over the best linear and quadratic polynomial fits (P < .001). The mean bone densities of healthy women above age 30 years are found to be lower by an average of 8 mg/mL than reported by Cann et al, whose data indicate that the greatest loss in trabecular bone density in healthy women occurs in the 50-59-year group, while our data indicate the greatest loss in the 60-75-year age group
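
    A small worked example evaluating the cubic fit reported above, with the coefficients taken directly from the abstract (bone density in mg/mL K_2HPO_4, age in years):

        def trabecular_bmd(age):
            # Cubic polynomial reported in the abstract
            return 140.9 + 4.44 * age - 0.133 * age ** 2 + 0.0008 * age ** 3

        for age in (30, 50, 70):
            print(age, round(trabecular_bmd(age), 1))   # roughly 176, 130 and 74 mg/mL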

  9. Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-05-23

    Protein-protein interactions are critically dependent on just a few residues (“hot spots”) at the interfaces. Hot spots make a dominant contribution to the binding free energy and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental efforts, there exists a need for accurate and reliable computational hot spot prediction methods. Compared to the supervised hot spot prediction algorithms, the semi-supervised prediction methods can take into consideration both the labeled and unlabeled residues in the dataset during the prediction procedure. The transductive support vector machine has been utilized for this task and demonstrated a better prediction performance. To the best of our knowledge, however, none of the transductive semi-supervised algorithms takes all three semi-supervised assumptions, i.e., the smoothness, cluster and manifold assumptions, together into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue prediction, by considering all three semi-supervised assumptions using nonlinear models. Our algorithm, IterPropMCS, works in an iterative manner. In each iteration, the algorithm first propagates the labels of the labeled residues to the unlabeled ones, along the shortest path between them on a graph, assuming that they lie on a nonlinear manifold. Then it selects the most confident residues as the labeled ones for the next iteration, according to the cluster and smoothness criteria, which is implemented by a nonlinear density estimator. Experiments on a benchmark dataset, using protein structure-based features, demonstrate that our approach is effective in predicting hot spots and compares favorably to other available methods. The results also show that our method outperforms the state-of-the-art transductive learning methods.
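
    A toy sketch of the propagate-then-select loop described above, using shortest-path (geodesic) distances on a k-nearest-neighbour graph of residue features and a simple density-based confidence score; it illustrates the general scheme only and is not the published IterPropMCS implementation.

        import numpy as np
        from scipy.sparse.csgraph import shortest_path
        from scipy.spatial.distance import cdist

        def iterative_propagation(features, labels, n_iters=3, k=5, top_frac=0.2):
            # labels: 1 = hot spot, 0 = non-hot spot, -1 = unlabeled
            labels = labels.copy()
            dists = cdist(features, features)
            graph = np.full_like(dists, np.inf)          # k-nearest-neighbour graph (inf = no edge)
            for i, row in enumerate(dists):
                nn = np.argsort(row)[1:k + 1]            # skip the residue itself at position 0
                graph[i, nn] = row[nn]
            geo = shortest_path(graph, directed=False)   # geodesic distances on the manifold graph

            for _ in range(n_iters):
                unlabeled = np.where(labels == -1)[0]
                if unlabeled.size == 0:
                    break
                labeled = np.where(labels != -1)[0]
                # Propagate the label of the geodesically nearest labeled residue
                nearest = labeled[np.argmin(geo[np.ix_(unlabeled, labeled)], axis=1)]
                proposed = labels[nearest]
                # Confidence from local density: small mean distance to neighbours = dense region
                density = -np.sort(dists[unlabeled], axis=1)[:, 1:k + 1].mean(axis=1)
                m = max(1, int(top_frac * unlabeled.size))
                most_confident = np.argsort(density)[-m:]          # positions within `unlabeled`
                labels[unlabeled[most_confident]] = proposed[most_confident]
            return labels

        rng = np.random.default_rng(4)
        X = rng.normal(size=(60, 8))       # hypothetical residue feature vectors
        y = np.full(60, -1)
        y[:5] = 1                          # a few labeled hot spots
        y[5:10] = 0                        # a few labeled non-hot spots
        print(iterative_propagation(X, y))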

  10. Philosophy of Technology Assumptions in Educational Technology Leadership

    Science.gov (United States)

    Webster, Mark David

    2017-01-01

    A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…

  11. Density variations in a reactor during liquid full dimerization

    NARCIS (Netherlands)

    Golombok, M.; Bruijn, J.

    2000-01-01

    In a liquid full plug flow reactor during lower olefin dimerization, the assumption of constant density is not valid—the volume of a plug changes as it proceeds along the reactor. The observed kinetics depend on the density variation in the reactor as the conversion proceeds towards a distribution

  12. Bone mineral density in postmenopausal Mexican-Mestizo women with normal body mass index, overweight, or obesity.

    Science.gov (United States)

    Méndez, Juan Pablo; Rojano-Mejía, David; Pedraza, Javier; Coral-Vázquez, Ramón Mauricio; Soriano, Ruth; García-García, Eduardo; Aguirre-García, María Del Carmen; Coronel, Agustín; Canto, Patricia

    2013-05-01

    Obesity and osteoporosis are two important public health problems that greatly impact mortality and morbidity. Several similarities between these complex diseases have been identified. The aim of this study was to analyze if different body mass indexes (BMIs) are associated with variations in bone mineral density (BMD) among postmenopausal Mexican-Mestizo women with normal weight, overweight, or different degrees of obesity. We studied 813 postmenopausal Mexican-Mestizo women. A structured questionnaire for risk factors was applied. Height and weight were used to calculate BMI, whereas BMD in the lumbar spine (LS) and total hip (TH) was measured by dual-energy x-ray absorptiometry. We used ANCOVA to examine the relationship between BMI and BMDs of the LS, TH, and femoral neck (FN), adjusting for confounding factors. Based on World Health Organization criteria, 15.13% of women had normal BMI, 39.11% were overweight, 25.96% had grade 1 obesity, 11.81% had grade 2 obesity, and 7.99% had grade 3 obesity. The higher the BMI, the higher was the BMD at the LS, TH, and FN. The greatest differences in size variations in BMD at these three sites were observed when comparing women with normal BMI versus women with grade 3 obesity. A higher BMI is associated significantly and positively with a higher BMD at the LS, TH, and FN.

  13. "Normality of Residuals Is a Continuous Variable, and Does Seem to Influence the Trustworthiness of Confidence Intervals: A Response to, and Appreciation of, Williams, Grajales, and Kurkiewicz (2013"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2013-09-01

    Full Text Available Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed response or predictor variables. They go on to discuss estimate bias and provide a helpful summary of the assumptions of multiple regression when using ordinary least squares. While we were not as precise as we could have been when discussing assumptions of normality, the critical issue of the 2002 paper remains - researchers often do not check on or report on the assumptions of their statistical methods. This response expands on the points made by Williams, advocates a thorough examination of data prior to reporting results, and provides an example of how incremental improvements in meeting the assumption of normality of residuals incrementally improve the accuracy of confidence intervals.

  14. Normality Tests for Statistical Analysis: A Guide for Non-Statisticians

    Science.gov (United States)

    Ghasemi, Asghar; Zahediasl, Saleh

    2012-01-01

    Statistical errors are common in scientific literature and about 50% of the published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to overview checking for normality in statistical analysis using SPSS. PMID:23843808

  15. Group normalization for genomic data.

    Directory of Open Access Journals (Sweden)

    Mahmoud Ghandi

    Full Text Available Data normalization is a crucial preliminary step in analyzing genomic datasets. The goal of normalization is to remove global variation to make readings across different experiments comparable. In addition, most genomic loci have non-uniform sensitivity to any given assay because of variation in local sequence properties. In microarray experiments, this non-uniform sensitivity is due to different DNA hybridization and cross-hybridization efficiencies, known as the probe effect. In this paper we introduce a new scheme, called Group Normalization (GN), to remove both global and local biases in one integrated step, whereby we determine the normalized probe signal by finding a set of reference probes with similar responses. Compared to conventional normalization methods such as quantile normalization and physically motivated probe effect models, our proposed method is general in the sense that it does not require the assumption that the underlying signal distribution be identical for the treatment and control, and is flexible enough to correct for nonlinear and higher order probe effects. The Group Normalization algorithm is computationally efficient and easy to implement. We also describe a variant of the Group Normalization algorithm, called Cross Normalization, which efficiently amplifies biologically relevant differences between any two genomic datasets.
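
    The sketch below illustrates, in very reduced form, the reference-set idea described above: each probe is normalized against the probes whose responses across control experiments are most similar. The neighbourhood size, the Euclidean distance measure, and the log-scale correction are assumptions made for this example, not the paper's actual algorithm.

    ```python
    import numpy as np

    def group_normalize(treatment, controls, k=50):
        """Sketch of reference-set normalization for one treatment array.

        treatment : (n_probes,) log2 signal of the experiment to normalize
        controls  : (n_controls, n_probes) log2 signals of control experiments
        k         : number of reference probes with the most similar
                    control-response profiles (an illustrative choice)
        """
        n_probes = treatment.shape[0]
        normalized = np.empty(n_probes)
        for i in range(n_probes):
            # Distance between probe i's control profile and all other probes'.
            d = np.linalg.norm(controls[:, [i]] - controls, axis=0)
            d[i] = np.inf                  # exclude the probe itself
            reference = np.argsort(d)[:k]  # probes with similar responses
            # Subtract the reference probes' mean treatment signal (log scale),
            # removing global and probe-specific (local) biases in one step.
            normalized[i] = treatment[i] - treatment[reference].mean()
        return normalized

    # Illustrative use with random data standing in for array signals.
    rng = np.random.default_rng(1)
    controls = rng.normal(size=(8, 1000))
    treatment = rng.normal(size=1000)
    print(group_normalize(treatment, controls, k=20)[:5])
    ```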

  16. The relevance of "theory rich" bridge assumptions

    NARCIS (Netherlands)

    Lindenberg, S

    1996-01-01

    Actor models are increasingly being used as a form of theory building in sociology because they can better represent the causal mechanisms that connect macro variables. However, actor models need additional assumptions, especially so-called bridge assumptions, for filling in the relatively empty

  17. Density estimates of monarch butterflies overwintering in central Mexico

    Directory of Open Access Journals (Sweden)

    Wayne E. Thogmartin

    2017-04-01

    Full Text Available Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  18. Density estimates of monarch butterflies overwintering in central Mexico

    Science.gov (United States)

    Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.
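
    To make the proxy explicit, the population estimate is simply the occupied area multiplied by the per-hectare density, so the uncertainty in density carries over directly. The short calculation below reuses the summary statistics reported above; the 6 ha figure is the average occupancy mentioned in the abstract.

    ```python
    # Converting occupied area to abundance with the reported density summary.
    occupied_ha = 6.0                # average overwintering occupancy (ha)
    median_density = 21.1e6          # butterflies per ha (reported median)
    ci_low, ci_high = 2.4e6, 80.7e6  # reported 95% CI for density (per ha)

    point_estimate = occupied_ha * median_density
    interval = (occupied_ha * ci_low, occupied_ha * ci_high)

    print(f"point estimate: {point_estimate:.2e} monarchs")
    print(f"95% interval:   {interval[0]:.2e} to {interval[1]:.2e} monarchs")
    ```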

  19. Limit cycle analysis of nuclear coupled density wave oscillations

    International Nuclear Information System (INIS)

    Ward, M.E.

    1985-01-01

    An investigation of limit cycle behavior for the nuclear-coupled density wave oscillation (NCDWO) in a boiling water reactor (BWR) was performed. A simplified nonlinear model of BWR core behavior was developed using a two-region flow channel representation, coupled with a form of the point-kinetics equation. This model has been used to investigate the behavior of large amplitude NCDWOs through conventional time-integration solutions and through application of a direct relaxation-oscillation limit cycle solution in phase space. The numerical solutions demonstrate the potential for severe global power and flow oscillations in a BWR core at off-normal conditions, such as might occur during Anticipated Transients without Scram. Because of the many simplifying assumptions used, it is felt that the results should not be interpreted as an absolute prediction of core behavior, but as an indication of the potential for large oscillations and a demonstration of the corresponding limit cycle mechanisms. The oscillations in channel density drive the core power variations, and are reinforced by heat flux variations due to the changing fuel temperature. A global temperature increase occurs as energy is accumulated in the fuel, and limits the magnitude of the oscillations because, as the average channel density decreases, the amplitude and duration of positive void reactivity at a given oscillation amplitude is lessened

  20. Advection-diffusion model for normal grain growth and the stagnation of normal grain growth in thin films

    International Nuclear Information System (INIS)

    Lou, C.

    2002-01-01

    An advection-diffusion model has been set up to describe normal grain growth. In this model grains are divided into different groups according to their topological classes (number of sides of a grain). Topological transformations are modelled by advective and diffusive flows governed by advective and diffusive coefficients respectively, which are assumed to be proportional to topological classes. The ordinary differential equations governing self-similar time-independent grain size distribution can be derived analytically from continuity equations. It is proved that the time-independent distributions obtained by solving the ordinary differential equations have the same form as the time-dependent distributions obtained by solving the continuity equations. The advection-diffusion model is extended to describe the stagnation of normal grain growth in thin films. Grain boundary grooving prevents grain boundaries from moving, and the correlation between neighbouring grains accelerates the stagnation of normal grain growth. After introducing grain boundary grooving and the correlation between neighbouring grains into the model, the grain size distribution is close to a lognormal distribution, which is usually found in experiments. A vertex computer simulation of normal grain growth has also been carried out to make a cross comparison with the advection-diffusion model. The result from the simulation did not verify the assumption that the advective and diffusive coefficients are proportional to topological classes. Instead, we have observed that topological transformations usually occur on certain topological classes. This suggests that the advection-diffusion model can be improved by making a more realistic assumption on topological transformations. (author)

  1. Assumptions and Policy Decisions for Vital Area Identification Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myungsu; Bae, Yeon-Kyoung; Lee, Youngseung [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    U.S. Nuclear Regulatory Commission and IAEA guidance indicate that certain assumptions and policy questions should be addressed in a Vital Area Identification (VAI) process. Korea Hydro and Nuclear Power conducted a VAI based on the current Design Basis Threat and engineering judgement to identify APR1400 vital areas. Some of the assumptions were inherited from Probabilistic Safety Assessment (PSA), as the sabotage logic model was based on the PSA logic tree and equipment location data. This paper illustrates some important assumptions and policy decisions for the APR1400 VAI analysis. Assumptions and policy decisions could be overlooked at the beginning stage of a VAI; however, they should be carefully reviewed and discussed among engineers, plant operators, and regulators. Through the APR1400 VAI process, some of the policy concerns and assumptions for analysis were applied based on document research and expert panel discussions. It was also found that there are more assumptions to define in further studies for other types of nuclear power plants. One of these assumptions is mission time, which was inherited from PSA.

  2. Smoothing of the bivariate LOD score for non-normal quantitative traits.

    Science.gov (United States)

    Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John

    2005-12-30

    Variance component analysis provides an efficient method for performing linkage analysis for quantitative traits. However, type I error of variance components-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem with univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for the Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtotic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.

  3. Underlying assumptions and core beliefs in anorexia nervosa and dieting.

    Science.gov (United States)

    Cooper, M; Turner, H

    2000-06-01

    To investigate assumptions and beliefs in anorexia nervosa and dieting. The Eating Disorder Belief Questionnaire (EDBQ) was administered to patients with anorexia nervosa, dieters, and female controls. The patients scored more highly than the other two groups on assumptions about weight and shape, assumptions about eating, and negative self-beliefs. The dieters scored more highly than the female controls on assumptions about weight and shape. The cognitive content of anorexia nervosa (both assumptions and negative self-beliefs) differs from that found in dieting. Assumptions about weight and shape may also distinguish dieters from female controls.

  4. Are your covariates under control? How normalization can re-introduce covariate effects.

    Science.gov (United States)

    Pain, Oliver; Dudbridge, Frank; Ronald, Angelica

    2018-04-30

    Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
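
    A minimal sketch of the two processing orders compared in this study, assuming a single covariate and building the rank-based INT from scipy's normal quantile function; the 0.5 rank offset and the toy data-generating process (chosen so that the outcome's conditional distribution depends on the covariate) are illustrative choices.

    ```python
    import numpy as np
    from scipy import stats

    def rank_based_int(x):
        # Rank-based inverse normal transformation of a 1-D array.
        ranks = stats.rankdata(x)
        return stats.norm.ppf((ranks - 0.5) / len(x))

    def residualize(y, covariate):
        # OLS residuals of y after adjusting for a single covariate.
        slope, intercept = np.polyfit(covariate, y, 1)
        return y - (intercept + slope * covariate)

    rng = np.random.default_rng(2)
    n = 10_000
    covariate = rng.normal(size=n)
    # Non-normal toy outcome whose conditional scale and skew depend on the covariate.
    outcome = covariate * rng.exponential(scale=1.0, size=n)

    # Ordering examined by the study: adjust for the covariate, then INT.
    int_after_adjustment = rank_based_int(residualize(outcome, covariate))

    # Ordering recommended by the study: INT first, then adjust.
    adjusted_after_int = residualize(rank_based_int(outcome), covariate)

    print("corr(covariate, INT of residuals):   ",
          np.corrcoef(covariate, int_after_adjustment)[0, 1])
    print("corr(covariate, residuals after INT):",
          np.corrcoef(covariate, adjusted_after_int)[0, 1])
    ```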

  5. Helicon normal modes in Proto-MPEX

    Science.gov (United States)

    Piotrowicz, P. A.; Caneses, J. F.; Green, D. L.; Goulding, R. H.; Lau, C.; Caughman, J. B. O.; Rapp, J.; Ruzic, D. N.

    2018-05-01

    The Proto-MPEX helicon source has been operating in a high electron density ‘helicon-mode’. Establishing plasma densities and magnetic field strengths under the antenna that allow for the formation of normal modes of the fast-wave is believed to be responsible for the ‘helicon-mode’. A 2D finite-element full-wave model of the helicon antenna on Proto-MPEX is used to identify the fast-wave normal modes responsible for the steady-state electron density profile produced by the source. We also show through the simulation that, in the regions of operation in which core power deposition is maximum, the slow-wave does not deposit significant power except directly under the antenna. In the case of a simulation where a normal mode is not excited, significant edge power is deposited in the mirror region.

  6. A Box-Cox normal model for response times

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, J.P.; Linden, W.J. van der

    2009-01-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box–Cox transformations for response

  7. Normality of raw data in general linear models: The most widespread myth in statistics

    Science.gov (United States)

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
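
    The point is easy to demonstrate with a small simulation: a strongly skewed predictor produces a skewed response even when the errors are normal, yet the residuals of the fitted linear model remain well behaved. The construction below is an illustrative sketch, not one of the article's own examples.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n = 2000

    # Strongly skewed predictor; response follows a linear model with normal errors.
    x = rng.exponential(scale=2.0, size=n)
    y = 1.0 + 3.0 * x + rng.normal(scale=1.0, size=n)

    # The raw response inherits the predictor's skew ...
    print("response skewness:", round(stats.skew(y), 2))

    # ... but the residuals of the fitted model are approximately normal,
    # which is what t and F tests actually require.
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (intercept + slope * x)
    print("residual skewness:", round(stats.skew(residuals), 2))
    print("Shapiro-Wilk on residuals:", stats.shapiro(residuals))
    ```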

  8. Variation-preserving normalization unveils blind spots in gene expression profiling

    Science.gov (United States)

    Roca, Carlos P.; Gomes, Susana I. L.; Amorim, Mónica J. B.; Scott-Fordsmand, Janeck J.

    2017-01-01

    RNA-Seq and gene expression microarrays provide comprehensive profiles of gene activity, but lack of reproducibility has hindered their application. A key challenge in the data analysis is the normalization of gene expression levels, which is currently performed following the implicit assumption that most genes are not differentially expressed. Here, we present a mathematical approach to normalization that makes no assumption of this sort. We have found that variation in gene expression is much larger than currently believed, and that it can be measured with available assays. Our results also explain, at least partially, the reproducibility problems encountered in transcriptomics studies. We expect that this improvement in detection will help efforts to realize the full potential of gene expression profiling, especially in analyses of cellular processes involving complex modulations of gene expression. PMID:28276435

  9. Distributed automata in an assumption-commitment framework

    Indian Academy of Sciences (India)

    We propose a class of finite state systems of synchronizing distributed processes, where processes make assumptions at local states about the state of other processes in the system. This constrains the global states of the system to those where assumptions made by a process about another are compatible with the ...

  10. HYPROLOG: A New Logic Programming Language with Assumptions and Abduction

    DEFF Research Database (Denmark)

    Christiansen, Henning; Dahl, Veronica

    2005-01-01

    We present HYPROLOG, a novel integration of Prolog with assumptions and abduction which is implemented in and partly borrows syntax from Constraint Handling Rules (CHR) for integrity constraints. Assumptions are a mechanism inspired by linear logic and taken over from Assumption Grammars. The language shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraints solvers that may be available. Assumptions and abduction are especially useful for language processing, and we can show how HYPROLOG works seamlessly together...

  11. Quantiles for Finite Mixtures of Normal Distributions

    Science.gov (United States)

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
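
    The distinction emphasized here can be made concrete: a quantile of a finite mixture of normal densities is found by inverting the mixture CDF numerically, and it generally differs from the corresponding quantile of a linear combination of independent normal random variables, which is itself normal. The weights and component parameters below are illustrative.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.optimize import brentq

    # Two-component normal mixture: weights, means, standard deviations.
    w = np.array([0.3, 0.7])
    mu = np.array([-1.0, 2.0])
    sd = np.array([0.5, 1.0])

    def mixture_cdf(x):
        return float(np.sum(w * stats.norm.cdf(x, loc=mu, scale=sd)))

    def mixture_quantile(p):
        # Numerically invert the mixture CDF on a wide bracket.
        return brentq(lambda x: mixture_cdf(x) - p, -20.0, 20.0)

    # Median of the mixture of densities ...
    print("mixture median:", round(mixture_quantile(0.5), 3))

    # ... versus the median of the linear combination w1*X1 + w2*X2 of
    # independent normals, which is itself a normal random variable.
    lin_mean = float(np.sum(w * mu))
    lin_sd = float(np.sqrt(np.sum((w * sd) ** 2)))
    print("linear-combination median:", round(stats.norm.ppf(0.5, lin_mean, lin_sd), 3))
    ```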

  12. Achieving maximum baryon densities

    International Nuclear Information System (INIS)

    Gyulassy, M.

    1984-01-01

    In continuing work on nuclear stopping power in the energy range E_lab ≈ 10 GeV/nucleon, calculations were made of the energy and baryon densities that could be achieved in uranium-uranium collisions. Results are shown. The energy density reached could exceed 2 GeV/fm³ and baryon densities could reach as high as ten times normal nuclear density

  13. Planar-channeling spatial density under statistical equilibrium

    International Nuclear Information System (INIS)

    Ellison, J.A.; Picraux, S.T.

    1978-01-01

    The phase-space density for planar channeled particles has been derived for the continuum model under statistical equilibrium. This is used to obtain the particle spatial probability density as a function of incident angle. The spatial density is shown to depend on only two parameters, a normalized incident angle and a normalized planar spacing. This normalization is used to obtain, by numerical calculation, a set of universal curves for the spatial density and also for the channeled-particle wavelength as a function of amplitude. Using these universal curves, the statistical-equilibrium spatial density and the channeled-particle wavelength can be easily obtained for any case for which the continuum model can be applied. Also, a new one-parameter analytic approximation to the spatial density is developed. This parabolic approximation is shown to give excellent agreement with the exact calculations

  14. Testing Mean Differences among Groups: Multivariate and Repeated Measures Analysis with Minimal Assumptions.

    Science.gov (United States)

    Bathke, Arne C; Friedrich, Sarah; Pauly, Markus; Konietschke, Frank; Staffen, Wolfgang; Strobl, Nicolas; Höller, Yvonne

    2018-03-22

    To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs, when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations, and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer's disease (AD) examination modalities that may be used for precise and early diagnosis, namely, single-photon emission computed tomography (SPECT) and electroencephalogram (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regards to some of the factors involved.

  15. Photoionization and High Density Gas

    Science.gov (United States)

    Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We present results of calculations using the XSTAR version 2 computer code. This code is loosely based on the XSTAR v.1 code which has been available for public use for some time. However it represents an improvement and update in several major respects, including atomic data, code structure, user interface, and improved physical description of ionization/excitation. In particular, it now is applicable to high density situations in which significant excited atomic level populations are likely to occur. We describe the computational techniques and assumptions, and present sample runs with particular emphasis on high density situations.

  16. Normal Bone Mineral Density Associates with Duodenal Mucosa Healing in Adult Patients with Celiac Disease on a Gluten-Free Diet.

    Science.gov (United States)

    Larussa, Tiziana; Suraci, Evelina; Imeneo, Maria; Marasco, Raffaella; Luzza, Francesco

    2017-01-31

    Impairment of bone mineral density (BMD) is frequent in celiac disease (CD) patients on a gluten-free diet (GFD). The normalization of intestinal mucosa is still difficult to predict. We aim to investigate the relationship between BMD and duodenal mucosa healing (DMH) in CD patients on a GFD. Sixty-four consecutive CD patients on a GFD were recruited. After a median period of a 6-year GFD (range 2-33 years), patients underwent repeat duodenal biopsy and dual-energy X-ray absorptiometry (DXA) scan. Twenty-four patients (38%) displayed normal and 40 (62%) low BMD, 47 (73%) DMH, and 17 (27%) duodenal mucosa lesions. All patients but one with normal BMD (23 of 24, 96%) showed DMH, while, among those with low BMD, 24 (60%) did and 16 (40%) did not. At multivariate analysis, being older (odds ratio (OR) 1.1, 95% confidence interval (CI) 1.03-1.18) and having diagnosis at an older age (OR 1.09, 95% CI 1.03-1.16) were associated with low BMD; in turn, having normal BMD was the only variable independently associated with DMH (OR 17.5, 95% CI 1.6-192). In older CD patients and with late onset disease, BMD recovery is not guaranteed, despite a GFD. A normal DXA scan identified CD patients with DMH; thus, it is a potential tool in planning endoscopic resampling.

  17. Occupancy estimation and the closure assumption

    Science.gov (United States)

    Rota, Christopher T.; Fletcher, Robert J.; Dorazio, Robert M.; Betts, Matthew G.

    2009-01-01

    1. Recent advances in occupancy estimation that adjust for imperfect detection have provided substantial improvements over traditional approaches and are receiving considerable use in applied ecology. To estimate and adjust for detectability, occupancy modelling requires multiple surveys at a site and requires the assumption of 'closure' between surveys, i.e. no changes in occupancy between surveys. Violations of this assumption could bias parameter estimates; however, little work has assessed model sensitivity to violations of this assumption or how commonly such violations occur in nature. 2. We apply a modelling procedure that can test for closure to two avian point-count data sets in Montana and New Hampshire, USA, that exemplify time-scales at which closure is often assumed. These data sets illustrate different sampling designs that allow testing for closure but are currently rarely employed in field investigations. Using a simulation study, we then evaluate the sensitivity of parameter estimates to changes in site occupancy and evaluate a power analysis developed for sampling designs that is aimed at limiting the likelihood of closure. 3. Application of our approach to point-count data indicates that habitats may frequently be open to changes in site occupancy at time-scales typical of many occupancy investigations, with 71% and 100% of species investigated in Montana and New Hampshire respectively, showing violation of closure across time periods of 3 weeks and 8 days respectively. 4. Simulations suggest that models assuming closure are sensitive to changes in occupancy. Power analyses further suggest that the modelling procedure we apply can effectively test for closure. 5. Synthesis and applications. Our demonstration that sites may be open to changes in site occupancy over time-scales typical of many occupancy investigations, combined with the sensitivity of models to violations of the closure assumption, highlights the importance of properly addressing

  18. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Full Text Available Process capability indices are very important process quality assessment tools in automotive industries. The common process capability indices (PCIs) Cp, Cpk, and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed based on the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods have been reviewed and capability evaluation is carried out for data pertaining to the resistivity of silicon wafer. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
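
    A sketch of the contrast between the conventional, normality-based index and a percentile-based surrogate in the spirit of the methods compared above; the specification limits, the simulated skewed data, and the use of empirical percentiles are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Skewed "process output" and illustrative specification limits.
    data = rng.lognormal(mean=0.0, sigma=0.4, size=2000)
    lsl, usl = 0.3, 3.0

    # Conventional Cp assumes normality: (USL - LSL) / (6 * sigma).
    cp_normal = (usl - lsl) / (6.0 * data.std(ddof=1))

    # Percentile-based surrogate: replace the 6-sigma spread by the distance
    # between the 0.135th and 99.865th percentiles of the observed data.
    p_low, p_high = np.percentile(data, [0.135, 99.865])
    cp_percentile = (usl - lsl) / (p_high - p_low)

    print(f"normal-theory Cp:    {cp_normal:.2f}")
    print(f"percentile-based Cp: {cp_percentile:.2f}")
    ```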

  19. Lung density

    DEFF Research Database (Denmark)

    Garnett, E S; Webber, C E; Coates, G

    1977-01-01

    The density of a defined volume of the human lung can be measured in vivo by a new noninvasive technique. A beam of gamma-rays is directed at the lung and, by measuring the scattered gamma-rays, lung density is calculated. The density in the lower lobe of the right lung in normal man during quiet breathing in the sitting position ranged from 0.25 to 0.37 g⋅cm−3. Subnormal values were found in patients with emphysema. In patients with pulmonary congestion and edema, lung density values ranged from 0.33 to 0.93 g⋅cm−3. The lung density measurement correlated well with the findings in chest radiographs, but the lung density values were more sensitive indices. This was particularly evident in serial observations of individual patients.

  20. The Quantitative Measurements of Vascular Density and Flow Areas of Macula Using Optical Coherence Tomography Angiography in Normal Volunteers.

    Science.gov (United States)

    Ghassemi, Fariba; Fadakar, Kaveh; Bazvand, Fatemeh; Mirshahi, Reza; Mohebbi, Masoumeh; Sabour, Siamak

    2017-06-01

    To quantify the density of macular vascular networks and blood flow areas in the foveal and parafoveal area in healthy subjects using optical coherence tomography angiography (OCTA), a cross-sectional, prospective study was conducted in an institutional setting at the Retina Services of Farabi Eye Hospital. One hundred twelve normal volunteers with no known ocular or systemic disease were included. En face angiogram OCTA was performed on a 3 mm × 3 mm region centered on the macula, and an automated thresholding and measuring algorithm was used for foveal and parafoveal blood flow and vascular density (VD). The density of macular vascular networks and blood flow area in the foveal and parafoveal area were measured. A total of 224 healthy eyes from 112 subjects with a mean age of 36.4 years ± 11.3 years were included. In the foveal region, the VD of the superficial capillary network (sCN) was significantly higher than that of the deep capillary network (dCN) (31.1% ± 5.5% vs. 28.3% ± 7.2%; P < .001), whereas in the parafoveal area, VD was higher in the dCN (62.24% ± 2.8% vs. 56.5% ± 2.5%; P < .001). Flow area in the 1-mm radius circle in the sCN was less than in the dCN. Superficial foveal avascular zone (sFAZ) size was negatively correlated with the VD of the foveal sCN, whereas deep FAZ (dFAZ) size was not correlated with VD or blood flow area of the fovea. There was no difference between measured VD and blood flow surface area in both eyes of the subjects. OCTA could be used as a noninvasive, repeatable, layer-free method for quantitative evaluation of VD and blood flow in the macular area. Normal values of vascular plexus density and flow will help in better understanding the pathophysiological basis of vascular disease of the retina. [Ophthalmic Surg Lasers Imaging Retina. 2017;48:478-486.]. Copyright 2017, SLACK

  1. Low Density Lipoprotein and Non-Newtonian Oscillating Flow Biomechanical Parameters for Normal Human Aorta.

    Science.gov (United States)

    Soulis, Johannes V; Fytanidis, Dimitrios K; Lampri, Olga P; Giannoglou, George D

    2016-04-01

    The temporal variation of the hemodynamic mechanical parameters during cardiac pulse wave is considered as an important atherogenic factor. Applying non-Newtonian blood molecular viscosity simulation is crucial for hemodynamic analysis. Understanding low density lipoprotein (LDL) distribution in relation to flow parameters may help identify the aorta regions prone to atherosclerosis. The biomechanical parameters tested were averaged wall shear stress (AWSS), oscillatory shear index (OSI) and relative residence time (RRT) in relation to the LDL concentration. Four non-Newtonian molecular viscosity models and the Newtonian one were tested for the normal human aorta under oscillating flow. The analysis was performed via computational fluid dynamics. Tested viscosity blood flow models for the biomechanical parameters yield a consistent aorta pattern. High OSI and low AWSS develop at the concave aorta regions. This is most noticeable in downstream flow region of the left subclavian artery and at concave ascending aorta. Concave aorta regions exhibit high RRT and elevated LDL. For the concave aorta site, the peak LDL value is 35.0% higher than its entrance value. For the convex site, it is 18.0%. High LDL endothelium regions located at the aorta concave site are well predicted with high RRT. We are in favor of using the non-Newtonian power law model for analysis. It satisfactorily approximates the molecular viscosity, WSS, OSI, RRT and LDL distribution. Concave regions are mostly prone to atherosclerosis. The flow biomechanical factor RRT is a relatively useful tool for identifying the localization of the atheromatic plaques of the normal human aorta.

  2. Contextuality under weak assumptions

    International Nuclear Information System (INIS)

    Simmons, Andrew W; Rudolph, Terry; Wallman, Joel J; Pashayan, Hakop; Bartlett, Stephen D

    2017-01-01

    The presence of contextuality in quantum theory was first highlighted by Bell, Kochen and Specker, who discovered that for quantum systems of three or more dimensions, measurements could not be viewed as deterministically revealing pre-existing properties of the system. More precisely, no model can assign deterministic outcomes to the projectors of a quantum measurement in a way that depends only on the projector and not the context (the full set of projectors) in which it appeared, despite the fact that the Born rule probabilities associated with projectors are independent of the context. A more general, operational definition of contextuality introduced by Spekkens, which we will term ‘probabilistic contextuality’, drops the assumption of determinism and allows for operations other than measurements to be considered contextual. Even two-dimensional quantum mechanics can be shown to be contextual under this generalised notion. Probabilistic noncontextuality represents the postulate that elements of an operational theory that cannot be distinguished from each other based on the statistics of arbitrarily many repeated experiments (they give rise to the same operational probabilities) are ontologically identical. In this paper, we introduce a framework that enables us to distinguish between different noncontextuality assumptions in terms of the relationships between the ontological representations of objects in the theory given a certain relation between their operational representations. This framework can be used to motivate and define a ‘possibilistic’ analogue, encapsulating the idea that elements of an operational theory that cannot be unambiguously distinguished operationally can also not be unambiguously distinguished ontologically. We then prove that possibilistic noncontextuality is equivalent to an alternative notion of noncontextuality proposed by Hardy. Finally, we demonstrate that these weaker noncontextuality assumptions are sufficient to prove

  3. Density equalizing map projections (cartograms) in public health applications

    Energy Technology Data Exchange (ETDEWEB)

    Merrill, D.W.

    1998-05-01

    In studying geographic disease distributions, one normally compares rates among arbitrarily defined geographic subareas (e.g. census tracts), thereby sacrificing some of the geographic detail of the original data. The sparser the data, the larger the subareas must be in order to calculate stable rates. This dilemma is avoided with the technique of Density Equalizing Map Projections (DEMP)©. Boundaries of geographic subregions are adjusted to equalize population density over the entire study area. Case locations plotted on the transformed map should have a uniform distribution if the underlying disease risk is constant. On the transformed map, the statistical analysis of the observed distribution is greatly simplified. Even for sparse distributions, the statistical significance of a supposed disease cluster can be calculated with validity. The DEMP algorithm was applied to a data set previously analyzed with conventional techniques; namely, 401 childhood cancer cases in four counties of California. The distribution of cases on the transformed map was analyzed visually and statistically. To check the validity of the method, the identical analysis was performed on 401 artificial cases randomly generated under the assumption of uniform risk. No statistically significant evidence for geographic non-uniformity of rates was found, in agreement with the original analysis performed by the California Department of Health Services.

  4. Inequality for the infinite-cluster density in Bernoulli percolation

    International Nuclear Information System (INIS)

    Chayes, J.T.; Chayes, L.

    1986-01-01

    Under a certain assumption (which is satisfied whenever there is a dense infinite cluster in the half-space), we prove a differential inequality for the infinite-cluster density, P∞(p), in Bernoulli percolation. The principal implication of this result is that if P∞(p) vanishes with critical exponent β, then β obeys the mean-field bound β ≤ 1. As a corollary, we also derive an inequality relating the backbone density, the truncated susceptibility, and the infinite-cluster density

  5. Size-density metrics, leaf area, and productivity in eastern white pine

    Science.gov (United States)

    J. C. Innes; M. J. Ducey; J. H. Gove; W. B. Leak; J. P. Barrett

    2005-01-01

    Size-density metrics are used extensively for silvicultural planning; however, they operate on biological assumptions that remain relatively untested. Using data from 12 even-aged stands of eastern white pine (Pinus strobus L.) growing in southern New Hampshire, we compared size-density metrics with stand productivity and its biological components,...

  6. 40 CFR 265.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ..., STORAGE, AND DISPOSAL FACILITIES Financial Requirements § 265.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the...

  7. 40 CFR 144.66 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... PROGRAMS (CONTINUED) UNDERGROUND INJECTION CONTROL PROGRAM Financial Responsibility: Class I Hazardous Waste Injection Wells § 144.66 State assumption of responsibility. (a) If a State either assumes legal...

  8. 40 CFR 264.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... FACILITIES Financial Requirements § 264.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure, post-closure care, or...

  9. 40 CFR 261.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... Excluded Hazardous Secondary Materials § 261.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure or liability...

  10. 40 CFR 267.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... STANDARDIZED PERMIT Financial Requirements § 267.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure care or liability...

  11. Normal Bone Mineral Density Associates with Duodenal Mucosa Healing in Adult Patients with Celiac Disease on a Gluten-Free Diet

    Directory of Open Access Journals (Sweden)

    Tiziana Larussa

    2017-01-01

    Full Text Available Impairment of bone mineral density (BMD) is frequent in celiac disease (CD) patients on a gluten-free diet (GFD). The normalization of intestinal mucosa is still difficult to predict. We aim to investigate the relationship between BMD and duodenal mucosa healing (DMH) in CD patients on a GFD. Sixty-four consecutive CD patients on a GFD were recruited. After a median period of a 6-year GFD (range 2–33 years), patients underwent repeat duodenal biopsy and dual-energy X-ray absorptiometry (DXA) scan. Twenty-four patients (38%) displayed normal and 40 (62%) low BMD, 47 (73%) DMH, and 17 (27%) duodenal mucosa lesions. All patients but one with normal BMD (23 of 24, 96%) showed DMH, while, among those with low BMD, 24 (60%) did and 16 (40%) did not. At multivariate analysis, being older (odds ratio (OR) 1.1, 95% confidence interval (CI) 1.03–1.18) and having diagnosis at an older age (OR 1.09, 95% CI 1.03–1.16) were associated with low BMD; in turn, having normal BMD was the only variable independently associated with DMH (OR 17.5, 95% CI 1.6–192). In older CD patients and with late onset disease, BMD recovery is not guaranteed, despite a GFD. A normal DXA scan identified CD patients with DMH; thus, it is a potential tool in planning endoscopic resampling.

  12. A novel image toggle tool for comparison of serial mammograms: automatic density normalization and alignment-development of the tool and initial experience.

    Science.gov (United States)

    Honda, Satoshi; Tsunoda, Hiroko; Fukuda, Wataru; Saida, Yukihisa

    2014-12-01

    The purpose is to develop a new image toggle tool with automatic density normalization (ADN) and automatic alignment (AA) for comparing serial digital mammograms (DMGs). We developed an ADN and AA process to compare the images of serial DMGs. In image density normalization, a linear interpolation was applied by taking two points of high- and low-brightness areas. The alignment was calculated by determining the point of the greatest correlation while shifting the alignment between the current and prior images. These processes were performed on a PC with a 3.20-GHz Xeon processor and 8 GB of main memory. We selected 12 suspected breast cancer patients who had undergone screening DMGs in the past. Automatic processing was retrospectively performed on these images. Two radiologists subjectively evaluated them. The process of the developed algorithm took approximately 1 s per image. In our preliminary experience, two images could not be aligned appropriately. When they were aligned, image toggling allowed detection of differences between examinations easily. We developed a new tool to facilitate comparative reading of DMGs on a mammography viewing system. Using this tool for toggling comparisons might improve the interpretation efficiency of serial DMGs.
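
    A minimal sketch of the two steps described: a two-point linear density normalization followed by a brute-force shift search that maximizes the correlation between the current and prior images. The array sizes, reference-point choice, and search range are illustrative assumptions, not details of the authors' implementation.

    ```python
    import numpy as np

    def normalize_density(image, low_ref, high_ref, target_low=0.0, target_high=1.0):
        """Two-point linear density normalization: map the image so that the
        sampled low- and high-brightness reference values hit fixed targets."""
        scale = (target_high - target_low) / (high_ref - low_ref)
        return (image - low_ref) * scale + target_low

    def best_shift(current, prior, max_shift=8):
        """Brute-force search for the (dy, dx) shift of `prior` that maximizes
        the correlation with `current`."""
        best, best_corr = (0, 0), -np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(prior, dy, axis=0), dx, axis=1)
                corr = np.corrcoef(current.ravel(), shifted.ravel())[0, 1]
                if corr > best_corr:
                    best, best_corr = (dy, dx), corr
        return best, best_corr

    # Illustrative use with synthetic images standing in for mammograms.
    rng = np.random.default_rng(6)
    prior = rng.random((128, 128))
    current = np.roll(prior, (3, -5), axis=(0, 1)) * 1.2 + 0.1  # shifted, re-scaled

    current_n = normalize_density(current, current.min(), current.max())
    prior_n = normalize_density(prior, prior.min(), prior.max())
    print(best_shift(current_n, prior_n))
    ```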

  13. Experimental assessment of unvalidated assumptions in classical plasticity theory.

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, Rebecca Moss (University of Utah, Salt Lake City, UT); Burghardt, Jeffrey A. (University of Utah, Salt Lake City, UT); Bauer, Stephen J.; Bronowski, David R.

    2009-01-01

    This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.

  14. A projection and density estimation method for knowledge discovery.

    Directory of Open Access Journals (Sweden)

    Adam Stanski

    Full Text Available A key ingredient to modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows to tailor a model to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality as all estimations are performed in 1d-space. The wide range of applications is demonstrated at two very different real world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example an image segmentation method is realized. It achieves state of the art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.

  15. Subchondral bone density distribution of the talus in clinically normal Labrador Retrievers.

    Science.gov (United States)

    Dingemanse, W; Müller-Gerbl, M; Jonkers, I; Vander Sloten, J; van Bree, H; Gielen, I

    2016-03-15

    Bones continually adapt their morphology to their load bearing function. At the level of the subchondral bone, the density distribution is highly correlated with the loading distribution of the joint. Therefore, subchondral bone density distribution can be used to study joint biomechanics non-invasively. In addition physiological and pathological joint loading is an important aspect of orthopaedic disease, and research focusing on joint biomechanics will benefit veterinary orthopaedics. This study was conducted to evaluate density distribution in the subchondral bone of the canine talus, as a parameter reflecting the long-term joint loading in the tarsocrural joint. Two main density maxima were found, one proximally on the medial trochlear ridge and one distally on the lateral trochlear ridge. All joints showed very similar density distribution patterns and no significant differences were found in the localisation of the density maxima between left and right limbs and between dogs. Based on the density distribution the lateral trochlear ridge is most likely subjected to highest loads within the tarsocrural joint. The joint loading distribution is very similar between dogs of the same breed. In addition, the joint loading distribution supports previous suggestions of the important role of biomechanics in the development of OC lesions in the tarsus. Important benefits of computed tomographic osteoabsorptiometry (CTOAM), i.e. the possibility of in vivo imaging and temporal evaluation, make this technique a valuable addition to the field of veterinary orthopaedic research.

  16. On the Validity of the “Thin” and “Thick” Double-Layer Assumptions When Calculating Streaming Currents in Porous Media

    Directory of Open Access Journals (Sweden)

    Matthew D. Jackson

    2012-01-01

    Full Text Available We find that the thin double layer assumption, in which the thickness of the electrical diffuse layer is assumed small compared to the radius of curvature of a pore or throat, is valid in a capillary tubes model so long as the capillary radius is >200 times the double layer thickness, while the thick double layer assumption, in which the diffuse layer is assumed to extend across the entire pore or throat, is valid so long as the capillary radius is >6 times smaller than the double layer thickness. At low surface charge density (<10 mC⋅m−2) or high electrolyte concentration (>0.5 M) the validity criteria are less stringent. Our results suggest that the thin double layer assumption is valid in sandstones at low specific surface charge (<10 mC⋅m−2), but may not be valid in sandstones of moderate- to small pore-throat size at higher surface charge if the brine concentration is low (<0.001 M). The thick double layer assumption is likely to be valid in mudstones at low brine concentration (<0.1 M) and surface charge (<10 mC⋅m−2), but at higher surface charge, it is likely to be valid only at low brine concentration (<0.003 M). Consequently, neither assumption may be valid in mudstones saturated with natural brines.
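
    To show how such criteria are applied in practice, the sketch below computes the diffuse-layer (Debye) thickness for a 1:1 electrolyte and checks the two radius criteria quoted above; the brine concentration, temperature, relative permittivity, and capillary radius are illustrative values.

    ```python
    import numpy as np

    # Physical constants (SI units).
    k_B = 1.380649e-23        # Boltzmann constant, J/K
    e = 1.602176634e-19       # elementary charge, C
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    N_A = 6.02214076e23       # Avogadro constant, 1/mol

    def debye_length(c_molar, eps_r=78.5, T=298.15):
        """Debye length of a symmetric 1:1 electrolyte at molar concentration c."""
        n = c_molar * 1000.0 * N_A  # ions of each species per m^3
        return np.sqrt(eps_r * eps0 * k_B * T / (2.0 * n * e ** 2))

    c = 0.001   # mol/L, illustrative low-salinity brine
    r = 10e-6   # m, illustrative capillary (pore-throat) radius
    ld = debye_length(c)

    print(f"Debye length at {c} M: {ld * 1e9:.1f} nm")
    print("thin double layer valid  (r > 200 * ld):", r > 200 * ld)
    print("thick double layer valid (r < ld / 6):  ", r < ld / 6)
    ```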

  17. Formalization and Analysis of Reasoning by Assumption

    OpenAIRE

    Bosse, T.; Jonker, C.M.; Treur, J.

    2006-01-01

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically analyzed against dynamic properties they fulfill. To this end, for the pattern of reasoning by assumption a variety of dynamic properties have been speci...

  18. Normal modes of weak colloidal gels

    Science.gov (United States)

    Varga, Zsigmond; Swan, James W.

    2018-01-01

    The normal modes and relaxation rates of weak colloidal gels are investigated in calculations using different models of the hydrodynamic interactions between suspended particles. The relaxation spectrum is computed for freely draining, Rotne-Prager-Yamakawa, and accelerated Stokesian dynamics approximations of the hydrodynamic mobility in a normal mode analysis of a harmonic network representing several colloidal gels. We find that the density of states and spatial structure of the normal modes are fundamentally altered by long-ranged hydrodynamic coupling among the particles. Short-ranged coupling due to hydrodynamic lubrication affects only the relaxation rates of short-wavelength modes. Hydrodynamic models accounting for long-ranged coupling exhibit a microscopic relaxation rate for each normal mode, λ, that scales as l−2, where l is the spatial correlation length of the normal mode. For the freely draining approximation, which neglects long-ranged coupling, the microscopic relaxation rate scales as l−γ, where γ varies between three and two with increasing particle volume fraction. A simple phenomenological model of the internal elastic response to normal mode fluctuations is developed, which shows that long-ranged hydrodynamic interactions play a central role in the viscoelasticity of the gel network. Dynamic simulations of hard spheres that gel in response to short-ranged depletion attractions are used to test the applicability of the density of states predictions. For particle concentrations up to 30% by volume, the power law decay of the relaxation modulus in simulations accounting for long-ranged hydrodynamic interactions agrees with predictions generated by the density of states of the corresponding harmonic networks as well as experimental measurements. For higher volume fractions, excluded volume interactions dominate the stress response, and the prediction from the harmonic network density of states fails. Analogous to the Zimm model in polymer

  19. DDH-Like Assumptions Based on Extension Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike

    2012-01-01

    We introduce and study a new type of DDH-like assumptions based on groups of prime order q. Whereas standard DDH is based on encoding elements of $\mathbb{F}_{q}$ “in the exponent” of elements in the group, we ask what happens if instead we put in the exponent elements of the extension ring $R_f=$ … Naor-Reingold style pseudorandom functions, and auxiliary input secure encryption. This can be seen as an alternative to the known family of k-LIN assumptions.

  20. The Immoral Assumption Effect: Moralization Drives Negative Trait Attributions.

    Science.gov (United States)

    Meindl, Peter; Johnson, Kate M; Graham, Jesse

    2016-04-01

    Jumping to negative conclusions about other people's traits is judged as morally bad by many people. Despite this, across six experiments (total N = 2,151), we find that multiple types of moral evaluations, even evaluations related to open-mindedness, tolerance, and compassion, play a causal role in these potentially pernicious trait assumptions. Our results also indicate that moralization affects negative, but not positive, trait assumptions, and that the effect of morality on negative assumptions cannot be explained merely by people's general (nonmoral) preferences or other factors that distinguish moral and nonmoral traits, such as controllability or desirability. Together, these results suggest that one of the more destructive human tendencies, making negative assumptions about others, can be caused by the better angels of our nature. © 2016 by the Society for Personality and Social Psychology, Inc.

  1. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    Science.gov (United States)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
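
    For reference, a log-normal surface density in semi-major axis is commonly written as below; the notation (normalization A, peak location a0, width σ in ln a) is an assumption made for illustration, since the abstract does not spell out its parameterization.

    ```latex
    \frac{\mathrm{d}N}{\mathrm{d}\ln a}
      = \frac{A}{\sqrt{2\pi}\,\sigma}
        \exp\!\left[-\frac{(\ln a - \ln a_0)^2}{2\sigma^2}\right],
      \qquad 0.07\ \mathrm{AU} \le a \le 400\ \mathrm{AU}
    ```

    Here a is the orbital semi-major axis; under this notation, the fit described above constrains A, a0, and σ for planets of 1-10 Jupiter masses.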

  2. Formalization and analysis of reasoning by assumption.

    Science.gov (United States)

    Bosse, Tibor; Jonker, Catholijn M; Treur, Jan

    2006-01-02

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically analyzed against dynamic properties they fulfill. To this end, for the pattern of reasoning by assumption a variety of dynamic properties have been specified, some of which are considered characteristic for the reasoning pattern, whereas some other properties can be used to discriminate among different approaches to the reasoning. These properties have been automatically checked for the traces acquired in experiments undertaken. The approach turned out to be beneficial from two perspectives. First, checking characteristic properties contributes to the empirical validation of a theory on reasoning by assumption. Second, checking discriminating properties allows the analyst to identify different classes of human reasoners. 2006 Lawrence Erlbaum Associates, Inc.

  3. Meson phase space density from interferometry

    International Nuclear Information System (INIS)

    Bertsch, G.F.

    1993-01-01

    The interferometric analysis of meson correlations provides a measure of the average phase space density of the mesons in the final state. The quantity is a useful indicator of the statistical properties of the systems, and it can be extracted with a minimum of model assumptions. Values obtained from recent measurements are consistent with the thermal value, but do not rule out superradiance effects

  4. Concrete density estimation by rebound hammer method

    International Nuclear Information System (INIS)

    Ismail, Mohamad Pauzi bin; Masenwat, Noor Azreen bin; Sani, Suhairy bin; Mohd, Shukri; Jefri, Muhamad Hafizie Bin; Abdullah, Mahadzir Bin; Isa, Nasharuddin bin; Mahmud, Mohamad Haniza bin

    2016-01-01

    Concrete is the most common and cheap material for radiation shielding. Compressive strength is the main parameter checked for determining concrete quality. However, for shielding purposes density is the parameter that needs to be considered. X- and gamma radiations are effectively absorbed by a material with high atomic number and high density such as concrete. High strength normally implies higher density in concrete, but this is not always true. This paper explains and discusses the correlation between rebound hammer testing and density for concrete containing hematite aggregates. A comparison is also made with normal concrete, i.e. concrete containing crushed granite

  5. MONITORED GEOLOGIC REPOSITORY LIFE CYCLE COST ESTIMATE ASSUMPTIONS DOCUMENT

    International Nuclear Information System (INIS)

    R.E. Sweeney

    2001-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost (LCC) estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  6. The stable model semantics under the any-world assumption

    OpenAIRE

    Straccia, Umberto; Loyer, Yann

    2004-01-01

    The stable model semantics has become a dominating approach to complete the knowledge provided by a logic program by means of the Closed World Assumption (CWA). The CWA asserts that any atom whose truth-value cannot be inferred from the facts and rules is supposed to be false. This assumption is orthogonal to the so-called Open World Assumption (OWA), which asserts that every such atom's truth is supposed to be unknown. The topic of this paper is to be more fine-grained. Indeed, the objec...

  7. Monitored Geologic Repository Life Cycle Cost Estimate Assumptions Document

    International Nuclear Information System (INIS)

    Sweeney, R.

    2000-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  8. Energy vs. density on paths toward more exact density functionals.

    Science.gov (United States)

    Kepp, Kasper P

    2018-03-14

    Recently, the progression toward more exact density functional theory has been questioned, implying a need for more formal ways to systematically measure progress, i.e. a "path". Here I use the Hohenberg-Kohn theorems and the definition of normality by Burke et al. to define a path toward exactness and "straying" from the "path" by separating errors in ρ and E[ρ]. A consistent path toward exactness involves minimizing both errors. Second, a suitably diverse test set of trial densities ρ' can be used to estimate the significance of errors in ρ without knowing the exact densities which are often inaccessible. To illustrate this, the systems previously studied by Medvedev et al., the first ionization energies of atoms with Z = 1 to 10, the ionization energy of water, and the bond dissociation energies of five diatomic molecules were investigated using CCSD(T)/aug-cc-pV5Z as benchmark at chemical accuracy. Four functionals of distinct designs were used: B3LYP, PBE, M06, and S-VWN. For atomic cations regardless of charge and compactness up to Z = 10, the energy effects of the different ρ are energy-wise insignificant. An interesting oscillating behavior in the density sensitivity is observed vs. Z, explained by orbital occupation effects. Finally, it is shown that even large "normal" problems such as the Co-C bond energy of cobalamins can use simpler (e.g. PBE) trial densities to drastically speed up computation by loss of a few kJ mol^-1 in accuracy. The proposed method of using a test set of trial densities to estimate the sensitivity and significance of density errors of functionals may be useful for testing and designing new balanced functionals with more systematic improvement of densities and energies.
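
    One standard way to make the separation of errors in ρ and E[ρ] explicit is the functional-driven/density-driven split used in density-corrected DFT; the notation below is generic orientation, not a quotation of the paper's own path criterion, and ρ' denotes a member of the trial-density test set.

        \Delta E \;=\; E_{\mathrm{DFA}}[\rho_{\mathrm{DFA}}] - E_{\mathrm{exact}}[\rho_{\mathrm{exact}}]
                 \;=\; \underbrace{E_{\mathrm{DFA}}[\rho_{\mathrm{exact}}] - E_{\mathrm{exact}}[\rho_{\mathrm{exact}}]}_{\Delta E_F\ (\text{functional-driven})}
                 \;+\; \underbrace{E_{\mathrm{DFA}}[\rho_{\mathrm{DFA}}] - E_{\mathrm{DFA}}[\rho_{\mathrm{exact}}]}_{\Delta E_D\ (\text{density-driven})},
        \qquad
        S \;=\; \max_{\rho'} \bigl| E_{\mathrm{DFA}}[\rho'] - E_{\mathrm{DFA}}[\rho_{\mathrm{DFA}}] \bigr|,

    where S is a density-sensitivity proxy that can be computed without access to the exact density.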

  9. Sampling Assumptions in Inductive Generalization

    Science.gov (United States)

    Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.

    2012-01-01

    Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…

  10. Life Support Baseline Values and Assumptions Document

    Science.gov (United States)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.

    2018-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.

  11. Legal assumptions for private company claim for additional (supplementary) payment

    Directory of Open Access Journals (Sweden)

    Šogorov Stevan

    2011-01-01

    Full Text Available The subject matter of analysis in this article is the legal assumptions which must be met in order to enable a private company to claim an additional (supplementary) payment. After introductory remarks, the discussion focuses on the existence of provisions regarding additional payment in the formation contract, or in a general resolution of the shareholders' meeting, as the starting point for the company's claim. The second assumption is a concrete resolution of the shareholders' meeting which creates individual obligations for additional payments. The third assumption is defined as distinctness regarding the sum of the payment and its due date. The sending of the claim by the relevant company body is set as the fourth legal assumption for the realization of the company's right to claim additional payments from a member of the private company.

  12. Density functional study of a typical thiol tethered on a gold surface: ruptures under normal or parallel stretch

    International Nuclear Information System (INIS)

    Wang, Guan M; Sandberg, William C; Kenny, Steven D

    2006-01-01

    The mechanical and dynamical properties of a model Au(111)/thiol surface system were investigated by using localized atomic-type orbital density functional theory in the local density approximation. Relaxing the system gives a configuration where the sulfur atom forms covalent bonds to two adjacent gold atoms as the lowest energy structure. Investigations based on ab initio molecular dynamics simulations at 300, 350 and 370 K show that this tethering system is stable. The rupture behaviour between the thiol and the surface was studied by displacing the free end of the thiol. Calculated energy profiles show a process of multiple successive ruptures that account for experimental observations. The process features successive ruptures of the two Au-S bonds followed by the extraction of one S-bonded Au atom from the surface. The force required to rupture the thiol from the surface was found to be dependent on the direction in which the thiol was displaced, with values comparable with AFM measurements. These results aid the understanding of failure dynamics of Au(111)-thiol-tethered biosurfaces in microfluidic devices where fluidic shear and normal forces are of concern

  13. Parity dependence of the nuclear level density at high excitation

    International Nuclear Information System (INIS)

    Rao, B.V.; Agrawal, H.M.

    1995-01-01

    The basic underlying assumption ρ(l+1, J)=ρ(l, J) in the level density function ρ(U, J, π) has been checked on the basis of high quality data available on individual resonance parameters (E0, Γn, Jπ) for s- and p-wave neutrons in contrast to the earlier analysis where information about p-wave resonance parameters was meagre. The missing level estimator based on the partial integration over a Porter-Thomas distribution of neutron reduced widths and the Dyson-Mehta Δ3 statistic for the level spacing have been used to ascertain that the s- and p-wave resonance level spacings D(0) and D(1) are not in error because of spurious and missing levels. The present work does not validate the tacit assumption ρ(l+1, J)=ρ(l, J) and confirms that the level density depends upon parity at high excitation. The possible implications of the parity dependence of the level density on the results of statistical model calculations of nuclear reaction cross sections as well as on pre-compound emission have been emphasized. (orig.)
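
    For orientation, the missing-level estimator mentioned above rests on the Porter-Thomas form of the reduced-width distribution; with x = Γn0/⟨Γn0⟩ and a hypothetical detection threshold x_min, the standard expressions are:

        P(x)\,dx \;=\; \frac{1}{\sqrt{2\pi x}}\, e^{-x/2}\, dx,
        \qquad
        f_{\mathrm{miss}} \;=\; \int_0^{x_{\min}} P(x)\,dx \;=\; \operatorname{erf}\!\left(\sqrt{x_{\min}/2}\right),

    while for GOE statistics the Dyson-Mehta spectral stiffness grows only logarithmically, Δ3(L) ≈ π^-2 ln L + const, which is what makes it a sensitive probe of spurious or missing levels.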

  14. Improving the scaling normalization for high-density oligonucleotide GeneChip expression microarrays

    Directory of Open Access Journals (Sweden)

    Lu Chao

    2004-07-01

    Full Text Available Abstract Background Normalization is an important step for microarray data analysis to minimize biological and technical variations. Choosing a suitable approach can be critical. The default method in GeneChip expression microarray uses a constant factor, the scaling factor (SF, for every gene on an array. The SF is obtained from a trimmed average signal of the array after excluding the 2% of the probe sets with the highest and the lowest values. Results Among the 76 U34A GeneChip experiments, the total signals on each array showed 25.8% variations in terms of the coefficient of variation, although all microarrays were hybridized with the same amount of biotin-labeled cRNA. The 2% of the probe sets with the highest signals that were normally excluded from SF calculation accounted for 34% to 54% of the total signals (40.7% ± 4.4%, mean ± sd. In comparison with normalization factors obtained from the median signal or from the mean of the log transformed signal, SF showed the greatest variation. The normalization factors obtained from log transformed signals showed least variation. Conclusions Eliminating 40% of the signal data during SF calculation failed to show any benefit. Normalization factors obtained with log transformed signals performed the best. Thus, it is suggested to use the mean of the logarithm transformed data for normalization, rather than the arithmetic mean of signals in GeneChip gene expression microarrays.
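
    A minimal sketch of the three per-array normalization factors compared above (trimmed-mean scaling factor, median-based factor, and a factor from the mean of log2 signals); the 2% trimming fraction follows the description in the abstract, while the target intensity and the toy signal data are hypothetical.

        # Compare per-array normalization factors: trimmed-mean scaling factor (SF),
        # median-based factor, and a factor from the mean of log2 signals.
        import numpy as np

        rng = np.random.default_rng(1)
        signals = rng.lognormal(mean=6.0, sigma=1.5, size=12000)  # toy probe-set signals
        TARGET = 500.0                                            # hypothetical target intensity

        def trimmed_mean(x, frac=0.02):
            lo, hi = np.quantile(x, [frac, 1.0 - frac])
            return x[(x > lo) & (x < hi)].mean()

        sf_trimmed = TARGET / trimmed_mean(signals)               # GeneChip-style scaling factor
        sf_median  = TARGET / np.median(signals)                  # median-based factor
        sf_logmean = TARGET / 2.0 ** np.mean(np.log2(signals))    # factor from mean log2 signal

        print(f"trimmed-mean SF: {sf_trimmed:.3f}")
        print(f"median factor  : {sf_median:.3f}")
        print(f"log-mean factor: {sf_logmean:.3f}")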

  15. Idaho National Engineering Laboratory installation roadmap assumptions document

    International Nuclear Information System (INIS)

    1993-05-01

    This document is a composite of roadmap assumptions developed for the Idaho National Engineering Laboratory (INEL) by the US Department of Energy Idaho Field Office and subcontractor personnel as a key element in the implementation of the Roadmap Methodology for the INEL Site. The development and identification of these assumptions is an important factor in planning basis development and establishes the planning baseline for all subsequent roadmap analysis at the INEL

  16. Age-predicted values for lumbar spine, proximal femur, and whole-body bone mineral density: results from a population of normal children aged 3 to 18 years

    Energy Technology Data Exchange (ETDEWEB)

    Webber, C.E. [Hamilton Health Sciences, Dept. of Nuclear Medicine, Hamilton, Ontario (Canada); McMaster Univ., Dept. of Radiology, Hamilton, Ontario (Canada)]. E-mail: webber@hhsc.ca; Beaumont, L.F. [Hamilton Health Sciences, Dept. of Nuclear Medicine, Hamilton, Ontario (Canada); Morrison, J. [McMaster Children' s Hospital, Hamilton, Ontario (Canada); Sala, A. [McMaster Children' s Hospital, Hamilton, Ontario (Canada); McMaster Univ., Dept. of Pediatrics, Hamilton, Ontario (Canada); Univ. of Milan-Bicocca, Monza (Italy); Barr, R.D. [McMaster Children' s Hospital, Hamilton, Ontario (Canada); McMaster Univ., Dept. of Pediatrics, Hamilton, Ontario (Canada)

    2007-02-15

    We measured areal bone mineral density (BMD) with dual-energy X-ray absorptiometry (DXA) at the lumbar spine and the proximal femur and for the total body in 179 subjects (91 girls and 88 boys) with no known disorders that might affect calcium metabolism. Results are also reported for lumbar spine bone mineral content (BMC) and for the derived variable, bone mineral apparent density (BMAD). Expected-for-age values for each variable were derived for boys and girls by using an expression that represented the sum of a steady increase due to growth plus a rapid increase associated with puberty. Normal ranges were derived by assuming that at least 95% of children would be included within 1.96 population standard deviations (SD) of the expected-for-age value. The normal range for lumbar spine BMD derived from our population of children was compared with previously published normal ranges based on results obtained from different bone densitometers in diverse geographic locations. The extent of agreement between the various normal ranges indicates that the derived expressions can be used for reporting routine spine, femur, and whole-body BMD measurements in children and adolescents. The greatest difference in expected-for-age values among the various studies was that arising from intermanufacturer variability. The application of published conversion factors derived from DXA measurements in adults did not account fully for these differences, especially in younger children. (author)

  17. Age-predicted values for lumbar spine, proximal femur, and whole-body bone mineral density: results from a population of normal children aged 3 to 18 years

    International Nuclear Information System (INIS)

    Webber, C.E.; Beaumont, L.F.; Morrison, J.; Sala, A.; Barr, R.D.

    2007-01-01

    We measured areal bone mineral density (BMD) with dual-energy X-ray absorptiometry (DXA) at the lumbar spine and the proximal femur and for the total body in 179 subjects (91 girls and 88 boys) with no known disorders that might affect calcium metabolism. Results are also reported for lumbar spine bone mineral content (BMC) and for the derived variable, bone mineral apparent density (BMAD). Expected-for-age values for each variable were derived for boys and girls by using an expression that represented the sum of a steady increase due to growth plus a rapid increase associated with puberty. Normal ranges were derived by assuming that at least 95% of children would be included within 1.96 population standard deviations (SD) of the expected-for-age value. The normal range for lumbar spine BMD derived from our population of children was compared with previously published normal ranges based on results obtained from different bone densitometers in diverse geographic locations. The extent of agreement between the various normal ranges indicates that the derived expressions can be used for reporting routine spine, femur, and whole-body BMD measurements in children and adolescents. The greatest difference in expected-for-age values among the various studies was that arising from intermanufacturer variability. The application of published conversion factors derived from DXA measurements in adults did not account fully for these differences, especially in younger children. (author)
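
    The reporting scheme described above (an expected-for-age value plus a ±1.96 SD normal range) reduces to a z-score calculation. The growth-plus-puberty expression in the sketch below is a generic linear-plus-logistic stand-in with made-up coefficients, not the authors' fitted equation.

        # Hypothetical expected-for-age lumbar spine BMD curve: linear growth term
        # plus a logistic pubertal rise; all coefficients are illustrative only.
        import math

        def expected_bmd(age_years, a=0.30, b=0.012, c=0.35, t_puberty=12.5, width=1.2):
            return a + b * age_years + c / (1.0 + math.exp(-(age_years - t_puberty) / width))

        def bmd_z_score(measured, age_years, population_sd=0.08):
            return (measured - expected_bmd(age_years)) / population_sd

        age, measured = 10.0, 0.55                     # example child (toy values)
        z = bmd_z_score(measured, age)
        lo = expected_bmd(age) - 1.96 * 0.08           # lower bound of normal range
        hi = expected_bmd(age) + 1.96 * 0.08           # upper bound of normal range
        print(f"z-score = {z:.2f}, normal range = [{lo:.2f}, {hi:.2f}] g/cm^2")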

  18. Deep Borehole Field Test Requirements and Controlled Assumptions.

    Energy Technology Data Exchange (ETDEWEB)

    Hardin, Ernest [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion. Acknowledgements: This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.

  19. School Principals' Assumptions about Human Nature: Implications for Leadership in Turkey

    Science.gov (United States)

    Sabanci, Ali

    2008-01-01

    This article considers principals' assumptions about human nature in Turkey and the relationship between the assumptions held and the leadership style adopted in schools. The findings show that school principals hold Y-type assumptions and prefer a relationship-oriented style in their relations with assistant principals. However, both principals…

  20. Major Assumptions of Mastery Learning.

    Science.gov (United States)

    Anderson, Lorin W.

    Mastery learning can be described as a set of group-based, individualized, teaching and learning strategies based on the premise that virtually all students can and will, in time, learn what the school has to teach. Inherent in this description are assumptions concerning the nature of schools, classroom instruction, and learners. According to the…

  1. Normal-mode Magnetoseismology as a Virtual Instrument for the Plasma Mass Density in the Inner Magnetosphere: MMS Observations during Magnetic Storms

    Science.gov (United States)

    Chi, P. J.; Takahashi, K.; Denton, R. E.

    2017-12-01

    Previous studies have demonstrated that the electric and magnetic field measurements on closed field lines can detect harmonic frequencies of field line resonance (FLR) and infer the plasma mass density distribution in the inner magnetosphere. This normal-mode magnetoseismology technique can act as a virtual instrument for spacecraft with a magnetometer and/or an electric field instrument, and it can convert the electromagnetic measurements to knowledge about the plasma mass, of which the dominant low-energy core is difficult to detect directly due to the spacecraft potential. The additional measurement of the upper hybrid frequency by the plasma wave instrument can well constrain the oxygen content in the plasma. In this study, we use FLR frequencies observed by the Magnetospheric Multiscale (MMS) satellites to estimate the plasma mass density during magnetic storms. At FLR frequencies, the phase difference between the azimuthal magnetic perturbation and the radial electric perturbation is approximately ±90°, which is consistent with the characteristic of standing waves. During the magnetic storm in October 2015, the FLR observations indicate a clear enhancement in the plasma mass density on the first day of the recovery phase, but the added plasma was quickly removed on the following day. We will compare with the FLR observations by other operating satellites such as the Van Allen Probes and GOES to examine the spatial variations of the plasma mass density in the magnetosphere. Also discussed is how the spacing of the harmonic frequencies can be used to infer the distribution of plasma mass density along the field line, as well as its implications.

  2. Empirical Power Comparison Of Goodness of Fit Tests for Normality In The Presence of Outliers

    International Nuclear Information System (INIS)

    Saculinggan, Mayette; Balase, Emily Amor

    2013-01-01

    Most statistical tests such as t-tests, linear regression analysis and Analysis of Variance (ANOVA) require the normality assumption. When the normality assumption is violated, interpretation and inferences may not be reliable. Therefore it is important to assess this assumption before using any appropriate statistical test. One of the commonly used procedures for determining whether a random sample of size n comes from a normal population is the goodness-of-fit test for normality. Several studies have already been conducted on the comparison of the different goodness-of-fit tests (see, for example, [2]) but they are generally limited to the sample size or to the number of GOF tests being compared (see, for example, [2], [5], [6], [7], [8]). This paper compares the power of six formal tests of normality: Kolmogorov-Smirnov test (see [3]), Anderson-Darling test, Shapiro-Wilk test, Lilliefors test, Chi-Square test (see [1]) and D'Agostino-Pearson test. Small, moderate and large sample sizes and various contamination levels were used to obtain the power of each test via Monte Carlo simulation. Ten thousand samples of each sample size and contamination level at a fixed type I error rate α were generated from the given alternative distribution. The power of each test was then obtained by comparing the normality test statistics with the respective critical values. Results show that the power of all six tests is low for small sample size (see, for example, [2]). But for n = 20, the Shapiro-Wilk test and Anderson-Darling test have achieved high power. For n = 60, the Shapiro-Wilk test and Lilliefors test are most powerful. For large sample size, the Shapiro-Wilk test is most powerful (see, for example, [5]). However, the test that achieves the highest power under all conditions for large sample size is the D'Agostino-Pearson test (see, for example, [9]).
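
    The power estimation described above can be sketched compactly, assuming a recent SciPy and a contaminated-normal alternative; the contamination fraction, outlier scale, and sample size are illustrative, and the Lilliefors and chi-square tests are replaced here by a simple KS test on standardized data.

        # Monte Carlo power of normality tests against a contaminated normal,
        # (1 - eps) * N(0, 1) + eps * N(0, 5^2); alpha = 0.05 throughout.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        ALPHA, N, EPS, REPS = 0.05, 60, 0.10, 2000

        def contaminated_sample(n, eps):
            outlier = rng.uniform(size=n) < eps
            return np.where(outlier, rng.normal(0.0, 5.0, n), rng.normal(0.0, 1.0, n))

        rejections = {"shapiro": 0, "dagostino": 0, "ks": 0, "anderson": 0}
        for _ in range(REPS):
            x = contaminated_sample(N, EPS)
            if stats.shapiro(x).pvalue < ALPHA:
                rejections["shapiro"] += 1
            if stats.normaltest(x).pvalue < ALPHA:            # D'Agostino-Pearson
                rejections["dagostino"] += 1
            if stats.kstest((x - x.mean()) / x.std(ddof=1), "norm").pvalue < ALPHA:
                rejections["ks"] += 1                         # Lilliefors-style, approximate
            ad = stats.anderson(x, dist="norm")
            if ad.statistic > ad.critical_values[2]:          # 5% critical value
                rejections["anderson"] += 1

        for name, count in rejections.items():
            print(f"{name:10s} power ≈ {count / REPS:.2f}")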

  3. Intercorrelations among plasma high density lipoprotein, obesity and triglycerides in a normal population.

    Science.gov (United States)

    Albrink, M J; Krauss, R M; Lindgrem, F T; von der Groeben, J; Pan, S; Wood, P D

    1980-09-01

    The interrelationships among fatness measures, plasma triglycerides and high density lipoproteins (HDL) were examined in 131 normal adult subjects: 38 men aged 27-46, 40 men aged 47-66, 29 women aged 27-46 and 24 women aged 47-66. None of the women were taking estrogens or oral contraceptive medication. The HDL concentration was subdivided into HDL2b, HDL2a and HDL3 by a computerized fitting of the total schlieren pattern to reference schlieren patterns. Anthropometric measures employed included skinfolds at 3 sites, 2 weight/height indices, and 2 girth measurements. A high correlation was found among the various fatness measures. These measures were negatively correlated with total HDL, reflecting the negative correlation between fatness measures and HDL2 (as the sum of HDL2a and 2b). Fatness measures showed no relationship to HDL3. There was also an inverse correlation between triglyceride concentration and HDL2. No particular fatness measure was better than any other for demonstrating the inverse correlation with HDL but multiple correlations using all of the measures of obesity improved the correlations. Partial correlations controlling for fatness did not reduce any of the significant correlations between triglycerides and HDL2 to insignificance. The weak correlation between fatness and triglycerides was reduced to insignificance when controlled for HDL2.

  4. 7 CFR 772.10 - Transfer and assumption-AMP loans.

    Science.gov (United States)

    2010-01-01

    Title 7, Agriculture (2010-01-01). Section 772.10, DEPARTMENT OF AGRICULTURE, SPECIAL PROGRAMS, SERVICING MINOR PROGRAM LOANS. § 772.10 Transfer and assumption—AMP loans. (a) Eligibility. The Agency may approve transfers and assumptions of AMP loans when: (1) The...

  5. Estimating the Heading Direction Using Normal Flow

    Science.gov (United States)

    1994-01-01

    understood (Faugeras and Maybank 1990) ... under the assumption that optic flow or correspondence is known with some uncertainty ... accelerometers can achieve very high accuracy, but the same is not true for inexpensive ones ... It can easily be shown (Koenderink and van Doorn 1975; Maybank 1985) ... just don't compute normal flow there (see Section 6) ... Maybank, "Motion from point matches: multiplicity of solutions", Int'l J. Computer Vision 4

  6. WE-AB-207B-05: Correlation of Normal Lung Density Changes with Dose After Stereotactic Body Radiotherapy (SBRT) for Early Stage Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Q; Devpura, S; Feghali, K; Liu, C; Ajlouni, M; Movsas, B; Chetty, I [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

    Purpose: To investigate correlation of normal lung CT density changes with dose accuracy and outcome after SBRT for patients with early stage lung cancer. Methods: Dose distributions for patients originally planned and treated using a 1-D pencil beam-based (PB-1D) dose algorithm were retrospectively recomputed using the 3-D pencil beam (PB-3D) algorithm and the model-based methods AAA, Acuros XB (AXB), and Monte Carlo (MC). Prescription dose was 12 Gy × 4 fractions. Planning CT images were rigidly registered to the follow-up CT datasets at 6–9 months after treatment. Corresponding dose distributions were mapped from the planning to follow-up CT images. Following the method of Palma et al. (1–2), Hounsfield Unit (HU) changes in lung density in individual, 5 Gy, dose bins from 5–45 Gy were assessed in the peri-tumor region, defined as a uniform, 3 cm expansion around the ITV (1). Results: There is a 10–15% displacement of the high dose region (40–45 Gy) with the model-based algorithms, relative to the PB method, due to the electron scattering of dose away from the tumor into normal lung tissue (Fig. 1). Consequently, the high-dose lung region falls within the 40–45 Gy dose range, causing an increase in HU change in this region, as predicted by model-based algorithms (Fig. 2). The patient with the highest HU change (∼110) had mild radiation pneumonitis, and the patient with HU change of ∼80–90 had shortness of breath. No evidence of pneumonitis was observed for the 3 patients with smaller CT density changes (<50 HU). Changes in CT densities, and dose-response correlation, as computed with model-based algorithms, are in excellent agreement with the findings of Palma et al. (1–2). Conclusion: Dose computed with PB (1D or 3D) algorithms was poorly correlated with clinically relevant CT density changes, as opposed to model-based algorithms. A larger cohort of patients is needed to confirm these results. This work was supported in part by a grant from Varian
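
    The per-dose-bin density analysis can be sketched as a simple binning of voxel-wise HU differences by mapped dose; the 5 Gy bins from 5–45 Gy mirror the abstract, but the dose, HU-change, and peri-tumor mask arrays below are synthetic placeholders for the registered image data.

        # Bin voxel-wise HU change (follow-up minus planning CT) by mapped dose,
        # using 5 Gy bins from 5 to 45 Gy inside a peri-tumor mask.
        import numpy as np

        rng = np.random.default_rng(3)
        n_voxels = 50000
        dose = rng.uniform(0.0, 50.0, n_voxels)                   # mapped dose [Gy], synthetic
        hu_change = 1.5 * dose + rng.normal(0.0, 25.0, n_voxels)  # synthetic HU response
        peri_tumor = rng.uniform(size=n_voxels) < 0.4             # synthetic 3 cm ITV-expansion mask

        edges = np.arange(5.0, 50.0, 5.0)                         # 5-10, 10-15, ..., 40-45 Gy
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = peri_tumor & (dose >= lo) & (dose < hi)
            if sel.any():
                print(f"{lo:4.0f}-{hi:4.0f} Gy: mean ΔHU = {hu_change[sel].mean():6.1f} "
                      f"(n = {sel.sum()})")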

  7. Path-integral computation of superfluid densities

    International Nuclear Information System (INIS)

    Pollock, E.L.; Ceperley, D.M.

    1987-01-01

    The normal and superfluid densities are defined by the response of a liquid to sample boundary motion. The free-energy change due to uniform boundary motion can be calculated by path-integral methods from the distribution of the winding number of the paths around a periodic cell. This provides a conceptually and computationally simple way of calculating the superfluid density for any Bose system. The linear-response formulation relates the superfluid density to the momentum-density correlation function, which has a short-ranged part related to the normal density and, in the case of a superfluid, a long-ranged part whose strength is proportional to the superfluid density. These facts are discussed in the context of path-integral computations and demonstrated for liquid 4He along the saturated vapor-pressure curve. Below the experimental superfluid transition temperature the computed superfluid fractions agree with the experimental values to within the statistical uncertainties of a few percent in the computations. The computed transition is broadened by finite-sample-size effects
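
    For reference, the winding-number estimator of the superfluid fraction associated with this approach is usually written as follows (standard periodic-boundary form for N bosons of mass m in a cubic cell of side L at inverse temperature β, quoted from common usage rather than verbatim from the paper):

        \frac{\rho_s}{\rho} \;=\; \frac{m\,L^{2}\,\langle \mathbf{W}^{2}\rangle}{3\,\hbar^{2}\,\beta\,N},
        \qquad
        \mathbf{W} \;=\; \frac{1}{L}\sum_{i=1}^{N}\int_{0}^{\hbar\beta}\frac{d\mathbf{r}_i(t)}{dt}\,dt,

    where W is the (integer-valued) vector of winding numbers of the imaginary-time paths around the periodic cell.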

  8. Adaptive Bayesian inference on the mean of an infinite-dimensional normal distribution

    NARCIS (Netherlands)

    Belitser, E.; Ghosal, S.

    2003-01-01

    We consider the problem of estimating the mean of an infinite-dimensional normal distribution from the Bayesian perspective. Under the assumption that the unknown true mean satisfies a "smoothness condition," we first derive the convergence rate of the posterior distribution for a prior that

  9. Modeling pore corrosion in normally open gold-plated copper connectors.

    Energy Technology Data Exchange (ETDEWEB)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien; Enos, David George; Serna, Lysle M.; Sorensen, Neil Robert

    2008-09-01

    The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  10. Why and how to normalize the factorial moments of intermittency

    International Nuclear Information System (INIS)

    Peschanski, R.

    1990-01-01

    The normalization of factorial moments of intermittency, which is often the subject of controversy, is justified and (re-)derived from the general assumption of multi-Poissonian statistical noise in the production of particles at high energy. Correction factors for the horizontal vs. vertical analyses are derived in general cases, including the factorial multi-bin correlation moments
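
    For concreteness, the vertically and horizontally normalized factorial moments referred to above are conventionally written as follows (standard definitions with n_m the multiplicity in bin m of M bins, not quoted from the paper):

        F_q^{\mathrm{vert}} \;=\; \frac{1}{M}\sum_{m=1}^{M}
        \frac{\bigl\langle n_m(n_m-1)\cdots(n_m-q+1)\bigr\rangle}{\langle n_m\rangle^{q}},
        \qquad
        F_q^{\mathrm{hor}} \;=\; \frac{\tfrac{1}{M}\sum_{m}\bigl\langle n_m(n_m-1)\cdots(n_m-q+1)\bigr\rangle}
        {\bigl(\tfrac{1}{M}\sum_{m}\langle n_m\rangle\bigr)^{q}},

    and the Poissonian-noise assumption is what lets the factorial (rather than ordinary) moments cancel the statistical noise, since ⟨n(n-1)⋯(n-q+1)⟩ equals n̄^q for a Poisson distribution.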

  11. The normalization heuristic: an untested hypothesis that may misguide medical decisions.

    Science.gov (United States)

    Aberegg, Scott K; O'Brien, James M

    2009-06-01

    Medical practice is increasingly informed by the evidence from randomized controlled trials. When such evidence is not available, clinical hypotheses based on pathophysiological reasoning and common sense guide clinical decision making. One commonly utilized general clinical hypothesis is the assumption that normalizing abnormal laboratory values and physiological parameters will lead to improved patient outcomes. We refer to the general use of this clinical hypothesis to guide medical therapeutics as the "normalization heuristic". In this paper, we operationally define this heuristic and discuss its limitations as a rule of thumb for clinical decision making. We review historical and contemporaneous examples of normalization practices as empirical evidence for the normalization heuristic and to highlight its frailty as a guide for clinical decision making.

  12. Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam; Shi, Yuexiang; Gao, Xin

    2014-01-01

    of the transductive semi-supervised algorithms takes all three semi-supervised assumptions, i.e., the smoothness, cluster, and manifold assumptions, into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue

  13. CT Densitometry of the Lung in Healthy Nonsmokers with Normal Pulmonary Function

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Tack Sun; Chae, Eun Jin; Seo, Joon Beom; Jung, Young Ju; Oh, Yeon Mok; Lee, Sang Do [University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of)

    2012-09-15

    To investigate the upper normal limit of low attenuation area in healthy nonsmokers. A total of 36 nonsmokers with normal pulmonary function tests underwent a CT scan. Six thresholds (-980 to -930 HU) on inspiration CT and two thresholds (-950 and -910 HU) on expiration CT were used for obtaining low attenuation area. The mean lung density was obtained on both inspiration CT and expiration CT. Descriptive statistics of low attenuation area and the mean lung density, evaluation of differences in low attenuation area and the mean lung density between sex and age groups, and analysis of the relationship between demographic information and CT parameters were performed. The upper normal limit for low attenuation area was 12.96% on inspiration CT (-950 HU) and 9.48% on expiration CT (-910 HU). The upper normal limit for the mean lung density was -837.58 HU on inspiration CT and -686.82 HU on expiration CT. Low attenuation area and the mean lung density showed no significant differences between sex and age groups. Body mass index (BMI) was negatively correlated with low attenuation area on inspiration CT (-950 HU, r = -0.398, p = 0.016) and positively correlated with the mean lung density on inspiration CT (r = 0.539, p = 0.001) and expiration CT (r = 0.432, p = 0.009). Age and body surface area were not correlated with low attenuation area or the mean lung density. Low attenuation area on CT densitometry of the lung could be found in healthy nonsmokers with normal pulmonary function, and showed a negative association with BMI. Reference values, such as the range and upper normal limit for low attenuation area in healthy subjects, could be helpful in quantitative analysis and follow-up of early emphysema using CT densitometry of the lung.

  14. Time evolution of regional CT density changes in normal lung after IMRT for NSCLC

    International Nuclear Information System (INIS)

    Bernchou, Uffe; Schytte, Tine; Bertelsen, Anders; Bentzen, Søren M.; Hansen, Olfred; Brink, Carsten

    2013-01-01

    Purpose: This study investigates the clinical radiobiology of radiation induced lung disease in terms of regional computed tomography (CT) density changes following intensity modulated radiotherapy (IMRT) for non-small-cell lung cancer (NSCLC). Methods: A total of 387 follow-up CT scans in 131 NSCLC patients receiving IMRT to a prescribed dose of 60 or 66 Gy in 2 Gy fractions were analyzed. The dose-dependent temporal evolution of the density change was analyzed using a two-component model, a superposition of an early, transient component and a late, persistent component. Results: The CT density of healthy lung tissue was observed to increase significantly (p 12 months. Conclusions: The radiobiology of lung injury may be analyzed in terms of CT density change. The initial transient change in density is consistent with radiation pneumonitis, while the subsequent stabilization of the density is consistent with pulmonary fibrosis

  15. The anisotropy of the cosmic background radiation from local dynamic density perturbations

    International Nuclear Information System (INIS)

    Dyer, C.C.; Ip, P.S.S.

    1988-01-01

    Contrary to the usual assumption, it is shown here that the anisotropy of the cosmic background radiation need not be dominated by perturbations at the last scattering surface. The results of computer simulations are shown in which local dynamic density perturbations, in the form of Swiss cheese holes with finite, uniform density central lumps, are the main source of anisotropy of the cosmic background radiation. (author)

  16. Energy vs. density on paths toward exact density functionals

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta

    2018-01-01

    Recently, the progression toward more exact density functional theory has been questioned, implying a need for more formal ways to systematically measure progress, i.e. a “path”. Here I use the Hohenberg-Kohn theorems and the definition of normality by Burke et al. to define a path toward exactness...

  17. Effect of high density lipoproteins on permeability of rabbit aorta to low density lipoproteins

    International Nuclear Information System (INIS)

    Klimov, A.N.; Popov, V.A.; Nagornev, V.A.; Pleskov, V.M.

    1985-01-01

    A study was made on the effect of high density lipoproteins (HDL) on the permeability of rabbit aorta to low density lipoproteins (LDL) after intravenous administration of human HDL and human (125I)LDL to normal and hypercholesterolemic rabbits. Evaluation of radioactivity in plasma and aorta has shown that the administration of a large dose of HDL decreased the aorta permeability rate for (125I)LDL on an average by 19% in normal rabbits, and by 45% in rabbits with moderate hypercholesterolemia. A historadiographic study showed that HDL also decreased the vessel wall permeability to (125I)LDL in normal and particularly in hypercholesterolemic animals. The suggestion was made that HDL at very high molar concentration can hamper LDL transportation through the intact endothelial layer into the intima due to the ability of HDL to compete with LDL in sites of low affinity on the surface of endothelial cells. (author)

  18. Incorporating assumption deviation risk in quantitative risk assessments: A semi-quantitative approach

    International Nuclear Information System (INIS)

    Khorsandi, Jahon; Aven, Terje

    2017-01-01

    Quantitative risk assessments (QRAs) of complex engineering systems are based on numerous assumptions and expert judgments, as there is limited information available for supporting the analysis. In addition to sensitivity analyses, the concept of assumption deviation risk has been suggested as a means for explicitly considering the risk related to inaccuracies and deviations in the assumptions, which can significantly impact the results of the QRAs. However, challenges remain for its practical implementation, given the number of assumptions and the magnitude of deviations to be considered. This paper presents an approach for integrating an assumption deviation risk analysis as part of QRAs. The approach begins by identifying the safety objectives that the QRA aims to support, and then identifies critical assumptions with respect to ensuring the objectives are met. Key issues addressed include the deviations required to violate the safety objectives, the uncertainties related to the occurrence of such events, and the strength of knowledge supporting the assessments. Three levels of assumptions are considered, which include assumptions related to the system's structural and operational characteristics, the effectiveness of the established barriers, as well as the consequence analysis process. The approach is illustrated for the case of an offshore installation. - Highlights: • An approach for assessing the risk of deviations in QRA assumptions is presented. • Critical deviations and uncertainties related to their occurrence are addressed. • The analysis promotes critical thinking about the foundation and results of QRAs. • The approach is illustrated for the case of an offshore installation.

  19. Operating Characteristics of Statistical Methods for Detecting Gene-by-Measured Environment Interaction in the Presence of Gene-Environment Correlation under Violations of Distributional Assumptions.

    Science.gov (United States)

    Van Hulle, Carol A; Rathouz, Paul J

    2015-02-01

    Accurately identifying interactions between genetic vulnerabilities and environmental factors is of critical importance for genetic research on health and behavior. In the previous work of Van Hulle et al. (Behavior Genetics, Vol. 43, 2013, pp. 71-84), we explored the operating characteristics for a set of biometric (e.g., twin) models of Rathouz et al. (Behavior Genetics, Vol. 38, 2008, pp. 301-315), for testing gene-by-measured environment interaction (GxM) in the presence of gene-by-measured environment correlation (rGM) where data followed the assumed distributional structure. Here we explore the effects that violating distributional assumptions have on the operating characteristics of these same models even when structural model assumptions are correct. We simulated N = 2,000 replicates of n = 1,000 twin pairs under a number of conditions. Non-normality was imposed on either the putative moderator or on the ultimate outcome by ordinalizing or censoring the data. We examined the empirical Type I error rates and compared Bayesian information criterion (BIC) values. In general, non-normality in the putative moderator had little impact on the Type I error rates or BIC comparisons. In contrast, non-normality in the outcome was often mistaken for or masked GxM, especially when the outcome data were censored.

  20. Asymptotic normality of kernel estimator of $\\psi$-regression function for functional ergodic data

    OpenAIRE

    Laksaci ALI; Benziadi Fatima; Gheriballak Abdelkader

    2016-01-01

    In this paper we consider the problem of the estimation of the $\\psi$-regression function when the covariates take values in an infinite dimensional space. Our main aim is to establish, under a stationary ergodic process assumption, the asymptotic normality of this estimate.

  1. Reference Priors For Non-Normal Two-Sample Problems

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1997-01-01

    The reference prior algorithm (Berger and Bernardo, 1992) is applied to location-scale models with any regular sampling density. A number of two-sample problems are analyzed in this general context, extending the difference, ratio and product of Normal means problems outside Normality, while explicitly

  2. Emerging Assumptions About Organization Design, Knowledge And Action

    Directory of Open Access Journals (Sweden)

    Alan Meyer

    2013-12-01

    Full Text Available Participants in the Organizational Design Community’s 2013 Annual Conference faced the challenge of “making organization design knowledge actionable.”  This essay summarizes the opinions and insights participants shared during the conference.  I reflect on these ideas, connect them to recent scholarly thinking about organization design, and conclude that seeking to make design knowledge actionable is nudging the community away from an assumption set based upon linearity and equilibrium, and toward a new set of assumptions based on emergence, self-organization, and non-linearity.

  3. Joint constraints on galaxy bias and σ8 through the N-pdf of the galaxy number density

    International Nuclear Information System (INIS)

    Arnalte-Mur, Pablo; Martínez, Vicent J.; Vielva, Patricio; Sanz, José L.; Saar, Enn; Paredes, Silvestre

    2016-01-01

    We present a full description of the N-probability density function of the galaxy number density fluctuations. This N-pdf is given in terms, on the one hand, of the cold dark matter correlations and, on the other hand, of the galaxy bias parameter. The method relies on the assumption commonly adopted that the dark matter density fluctuations follow a local non-linear transformation of the initial energy density perturbations. The N-pdf of the galaxy number density fluctuations allows for an optimal estimation of the bias parameter (e.g., via maximum-likelihood estimation, or Bayesian inference if there exists any a priori information on the bias parameter), and of those parameters defining the dark matter correlations, in particular its amplitude (σ8). It also provides the proper framework to perform model selection between two competitive hypotheses. The parameter estimation capabilities of the N-pdf are proved by SDSS-like simulations (both ideal log-normal simulations and mocks obtained from Las Damas simulations), showing that our estimator is unbiased. We apply our formalism to the 7th release of the SDSS main sample (for a volume-limited subset with absolute magnitudes Mr ≤ −20). We obtain b̂ = 1.193 ± 0.074 and σ̄8 = 0.862 ± 0.080, for galaxy number density fluctuations in cells of the size of 30 h^-1 Mpc. Different model selection criteria show that galaxy biasing is clearly favoured
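
    The local non-linear transformation and linear bias underlying this construction are commonly written in the log-normal form below (generic notation, not a quotation of the paper's exact parametrization), with g a Gaussian field whose covariance is set by the dark matter correlation function and hence by σ8:

        1+\delta_m(\mathbf{x}) \;=\; \exp\!\Bigl[g(\mathbf{x})-\tfrac{1}{2}\sigma_g^{2}\Bigr],
        \qquad g \sim \mathcal{N}\bigl(0,\Sigma_g(\sigma_8)\bigr),
        \qquad \delta_g \;=\; b\,\delta_m,

    so that maximizing the resulting N-pdf of cell counts over (b, σ8) yields estimates of the kind quoted above.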

  4. Changes in Physical Fitness, Bone Mineral Density and Body Composition During Inpatient Treatment of Underweight and Normal Weight Females with Longstanding Eating Disorders

    Directory of Open Access Journals (Sweden)

    Solfrid Bratland-Sanda

    2012-01-01

    Full Text Available The purpose of this study was to examine changes in aerobic fitness, muscular strength, bone mineral density (BMD) and body composition during inpatient treatment of underweight and normal weight patients with longstanding eating disorders (ED). Twenty-nine underweight (BMI < 18.5, n = 7) and normal weight (BMI ≥ 18.5, n = 22) inpatients (mean (SD) age: 31.0 (9.0) years, ED duration: 14.9 (8.8) years, duration of treatment: 16.6 (5.5) weeks) completed this prospective naturalistic study. The treatment consisted of nutritional counseling, and 2 × 60 min weekly moderate-intensity physical activity in addition to psychotherapy and milieu therapy. Underweight patients aimed to increase body weight by 0.5 kg/week until the weight gain goal was reached. Aerobic fitness, muscular strength, BMD and body composition were measured at admission and discharge. Results showed an increase in mean muscular strength, total body mass, fat mass, and body fat percentage, but not aerobic capacity, among both underweight and normal weight patients. Lumbar spine BMD increased among the underweight patients; no changes were observed in BMD among the normal weight patients. Three out of seven underweight patients were still underweight at discharge, and only three out of nine patients with excessive body fat (i.e., >33%) managed to reduce body fat to normal values during treatment. These results call for a more individualized treatment approach to achieve a more optimal body composition among both underweight and normal to overweight patients with longstanding ED.

  5. Developing TOPSIS method using statistical normalization for selecting knowledge management strategies

    Directory of Open Access Journals (Sweden)

    Amin Zadeh Sarraf

    2013-09-01

    Full Text Available Purpose: Numerous companies are expecting their knowledge management (KM) to be performed effectively in order to leverage and transform the knowledge into competitive advantages. However, this raises a critical issue of how companies can better evaluate and select a favorable KM strategy prior to a successful KM implementation. Design/methodology/approach: An extension of TOPSIS, a multi-attribute decision making (MADM) technique, to a group decision environment is investigated. TOPSIS is a practical and useful technique for ranking and selection of a number of externally determined alternatives through distance measures. The entropy method is often used for assessing weights in the TOPSIS method. Entropy in information theory is a criterion used for measuring the amount of disorder represented by a discrete probability distribution. To reduce the degree of employee resistance to implementing a new strategy, it seems necessary to take all managers' opinions into account. The normal distribution, considered the most prominent probability distribution in statistics, is used to normalize the gathered data. Findings: The results of this study show that by considering 6 criteria for evaluating the alternatives, the most appropriate KM strategy to implement in our company was ''Personalization''. Research limitations/implications: In this research, there are some assumptions that might affect the accuracy of the approach, such as the assumed normality of the sample and the population. These assumptions can be changed in future work. Originality/value: This paper proposes an effective solution based on a combined entropy and TOPSIS approach to help companies that need to evaluate and select KM strategies. In the proposed solution, the opinions of all managers are gathered and normalized by using the standard normal distribution and the central limit theorem. Keywords: Knowledge management; strategy; TOPSIS; Normal distribution; entropy
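
    A minimal sketch of entropy-weighted TOPSIS in Python is given below. The decision matrix, the alternative names, and the use of classical vector normalization (rather than the standard-normal "statistical normalization" proposed in the paper) are assumptions for illustration; all six criteria are treated as benefit criteria.

        # Entropy-weighted TOPSIS sketch for ranking KM strategies; the decision
        # matrix (alternatives x criteria) is hypothetical.
        import numpy as np

        X = np.array([[7.0, 8.0, 6.0, 5.0, 7.0, 6.0],     # e.g. "Codification"
                      [8.0, 6.0, 7.0, 7.0, 8.0, 7.0],     # e.g. "Personalization"
                      [6.0, 7.0, 5.0, 6.0, 6.0, 5.0]])    # e.g. "Hybrid"

        # 1) column proportions, 2) entropy weights, 3) weighted normalized matrix,
        # 4) distances to ideal/anti-ideal, 5) closeness coefficient.
        P = X / X.sum(axis=0)
        entropy = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
        weights = (1.0 - entropy) / (1.0 - entropy).sum()

        R = X / np.sqrt((X ** 2).sum(axis=0))          # classical vector normalization
        V = R * weights
        ideal, anti = V.max(axis=0), V.min(axis=0)     # benefit criteria only
        d_plus  = np.sqrt(((V - ideal) ** 2).sum(axis=1))
        d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
        closeness = d_minus / (d_plus + d_minus)

        for name, c in zip(["Codification", "Personalization", "Hybrid"], closeness):
            print(f"{name:15s} C* = {c:.3f}")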

  6. Direct measurements of neutral density depletion by two-photon absorption laser-induced fluorescence spectroscopy

    International Nuclear Information System (INIS)

    Aanesland, A.; Liard, L.; Leray, G.; Jolly, J.; Chabert, P.

    2007-01-01

    The ground state density of xenon atoms has been measured by spatially resolved laser-induced fluorescence spectroscopy with two-photon excitation in the diffusion chamber of a magnetized Helicon plasma. This technique allows the authors to directly measure the relative variations of the xenon atom density without any assumptions. A significant neutral gas density depletion was measured in the core of the magnetized plasma, in agreement with previous theoretical and experimental works. It was also found that the neutral gas density was depleted near the radial walls

  7. On the distinction between density and luminosity evolution

    International Nuclear Information System (INIS)

    Bahcall, J.N.

    1977-01-01

    It is shown that the assumptions of pure density evolution and pure luminosity evolution lead to observable differences in the distribution of sources for all convergent luminosity functions. The proof given is valid for sources with an arbitrary number of intrinsic luminosities (e.g., optical, infrared, and radio) and also holds in the special cases of mixed evolution that are considered. (author)

  8. Evaluating Approaches to Rendering Braille Text on a High-Density Pin Display.

    Science.gov (United States)

    Morash, Valerie S; Russomanno, Alexander; Gillespie, R Brent; OModhrain, Sile

    2017-10-13

    Refreshable displays for tactile graphics are typically composed of pins that have smaller diameters and spacing than standard braille dots. We investigated configurations of high-density pins to form braille text on such displays using non-refreshable stimuli produced with a 3D printer. Normal dot braille (diameter 1.5 mm) was compared to high-density dot braille (diameter 0.75 mm) wherein each normal dot was rendered by high-density simulated pins alone or in a cluster of pins configured in a diamond, X, or square; and to "blobs" that could result from covering normal braille and high-density multi-pin configurations with a thin membrane. Twelve blind participants read MNREAD sentences displayed in these conditions. For high-density simulated pins, single pins were as quickly and easily read as normal braille, but diamond, X, and square multi-pin configurations were slower and/or harder to read than normal braille. We therefore conclude that as long as center-to-center dot spacing and dot placement is maintained, the dot diameter may be open to variability for rendering braille on a high density tactile display.

  9. The Best and the Rest: Revisiting the Norm of Normality of Individual Performance

    Science.gov (United States)

    O'Boyle, Ernest, Jr.; Aguinis, Herman

    2012-01-01

    We revisit a long-held assumption in human resource management, organizational behavior, and industrial and organizational psychology that individual performance follows a Gaussian (normal) distribution. We conducted 5 studies involving 198 samples including 633,263 researchers, entertainers, politicians, and amateur and professional athletes.…

  10. Low density in liver of idiopathic portal hypertension

    International Nuclear Information System (INIS)

    Ishito, Hiroyuki

    1988-01-01

    In order to evaluate the diagnostic value of low density in liver on computed tomography (CT), CT scans of 11 patients with idiopathic portal hypertension (IPH) were compared with those from 22 cirrhotic patients, two patients with scarred liver and 16 normal subjects. Low densities on plain CT scans in patients with IPH were distinctly different from those observed in normal liver. Some of the low densities had irregular shape with unclear margin and were scattered near the liver surface, and others had vessel-like structures with unclear margin and extended as far as near the liver surface. Ten of the 11 patients with IPH had low densities mentioned above, while none of the 22 cirrhotic patients had such low densities. The present results suggest that the presence of low densities in liver on plain CT scan is clinically beneficial in diagnosis of IPH. (author)

  11. Capturing Assumptions while Designing a Verification Model for Embedded Systems

    NARCIS (Netherlands)

    Marincic, J.; Mader, Angelika H.; Wieringa, Roelf J.

    A formal proof of a system correctness typically holds under a number of assumptions. Leaving them implicit raises the chance of using the system in a context that violates some assumptions, which in return may invalidate the correctness proof. The goal of this paper is to show how combining

  12. Sensitivity of C-Band Polarimetric Radar-Based Drop Size Distribution Measurements to Maximum Diameter Assumptions

    Science.gov (United States)

    Carey, Lawrence D.; Petersen, Walter A.

    2011-01-01

    The goal of this Global Precipitation Measurement (GPM/PMM Science Team)-funded study is to document the sensitivity of DSD measurements, including estimates of D0, from C-band Z_dr and reflectivity to this range of D_max assumptions. For this study, GPM Ground Validation 2DVDs were operated under the scanning domain of the UAHuntsville ARMOR C-band dual-polarimetric radar. Approximately 7500 minutes of DSD data were collected and processed to create gamma size distribution parameters using a truncated method-of-moments approach. After creating the gamma parameter datasets, the DSDs were then used as input to a T-matrix model for computation of polarimetric radar moments at C-band. All necessary model parameterizations, such as temperature, drop shape, and drop fall mode, were fixed at typically accepted values while the D_max assumption was allowed to vary in sensitivity tests. By hypothesizing a DSD model with D_max (fit), from which the empirical fit D0 = F[Z_dr] was derived via non-linear least squares regression, and a separate reference DSD model with D_max (truth), bias and standard error in D0 retrievals were estimated in the presence of Z_dr measurement error and hypothesized mismatch in D_max assumptions. Although the normalized standard error for D0 = F[Z_dr] can increase slightly (as much as from 11% to 16% for all 7500 DSDs) when D_max (fit) does not match D_max (truth), the primary impact of uncertainty in D_max is a potential increase in normalized bias error in D0 (from 0% to as much as 10% over all 7500 DSDs, depending on the extent of the mismatch between D_max (fit) and D_max (truth)). For DSDs characterized by large Z_dr (Z_dr > 1.5 to 2.0 dB), the normalized bias error for D0 estimation at C-band is sometimes unacceptably large (> 10%), again depending on the extent of the hypothesized D_max mismatch. Modeled errors in D0 retrievals from Z_dr at C-band are demonstrated in detail and compared.
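
    As a rough illustration of the kind of empirical fit described above, the hedged sketch below performs a non-linear least squares regression of D0 on Z_dr. The power-law functional form, the coefficients, and the synthetic "DSD" data are assumptions made for illustration only; they are not taken from the study.

```python
# Hypothetical sketch: fitting an empirical D0 = F[Zdr] relation by non-linear
# least squares, in the spirit of the C-band sensitivity study. The power-law
# form and all numbers below are illustrative assumptions, not study values.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def d0_model(zdr_db, a, b):
    """Assumed power-law retrieval: D0 (mm) as a function of Zdr (dB)."""
    return a * zdr_db ** b

# Synthetic stand-in for the gamma-DSD / T-matrix output: Zdr in dB, D0 in mm.
zdr_true = rng.uniform(0.2, 3.0, 500)
d0_true = 1.6 * zdr_true ** 0.45
zdr_meas = np.clip(zdr_true + rng.normal(0.0, 0.2, zdr_true.size), 0.05, None)  # Zdr error

params, _ = curve_fit(d0_model, zdr_meas, d0_true, p0=(1.0, 0.5))
d0_retrieved = d0_model(zdr_meas, *params)

bias = np.mean(d0_retrieved - d0_true) / np.mean(d0_true)
nse = np.std(d0_retrieved - d0_true) / np.mean(d0_true)
print(f"fit a={params[0]:.2f}, b={params[1]:.2f}, "
      f"normalized bias={100*bias:.1f}%, normalized std error={100*nse:.1f}%")
```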

  13. Exhaustible natural resources, normal prices and intertemporal equilibrium

    OpenAIRE

    Parrinello, Sergio

    2003-01-01

    This paper proposes an extension of the classical theory of normal prices to an n-commodity economy with exhaustible natural resources. The central idea is developed by two analytical steps. Firstly, it is assumed that a given flow of an exhaustible resource in short supply is combined with the coexistence of two methods of production using that resource. Sraffa’s equations are reinterpreted by adopting the concept of effectual supply of natural resources and avoiding the assumption of perfec...

  14. Questioning Engelhardt's assumptions in Bioethics and Secular Humanism.

    Science.gov (United States)

    Ahmadi Nasab Emran, Shahram

    2016-06-01

    In Bioethics and Secular Humanism: The Search for a Common Morality, Tristram Engelhardt examines various possibilities of finding common ground for moral discourse among people from different traditions and concludes that they are futile. In this paper I argue that many of the assumptions on which Engelhardt bases his conclusion about the impossibility of a content-full secular bioethics are problematic. By starting with the notion of moral strangers, there is no possibility, by definition, for a content-full moral discourse among moral strangers. There is thus circularity in starting the inquiry with a definition of moral strangers, which implies that they do not share enough moral background or commitment to an authority to allow for reaching a moral agreement, and then concluding that content-full morality is impossible among moral strangers. I argue that treating traditions as solid and immutable structures that insulate people across their boundaries is problematic. Another questionable assumption in Engelhardt's work is the idea that religious and philosophical traditions provide content-full moralities. As the cardinal assumption in Engelhardt's review of the various alternatives for a content-full moral discourse among moral strangers, I analyze his foundationalist account of moral reasoning and knowledge and indicate the possibility of other ways of moral knowledge besides the foundationalist one. I then examine Engelhardt's view concerning the futility of attempts at justifying a content-full secular bioethics, and indicate how these assumptions have shaped Engelhardt's critique of the alternatives for the possibility of a content-full secular bioethics.

  15. Critically Challenging Some Assumptions in HRD

    Science.gov (United States)

    O'Donnell, David; McGuire, David; Cross, Christine

    2006-01-01

    This paper sets out to critically challenge five interrelated assumptions prominent in the (human resource development) HRD literature. These relate to: the exploitation of labour in enhancing shareholder value; the view that employees are co-contributors to and co-recipients of HRD benefits; the distinction between HRD and human resource…

  16. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Classical Respondent-Driven Sampling (RDS) estimators are based on a Markov process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.
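
    To make the sampling-with-replacement question concrete, the toy sketch below compares the RDS-II (Volz-Heckathorn) prevalence estimator under degree-proportional sampling with and without replacement at several sampling fractions. The population, degree distribution, and degree-trait correlation are invented, and the referral-chain structure of real RDS is ignored, so this is only a caricature of the analysis in the paper.

```python
# Toy sketch (not the paper's method): RDS-II prevalence estimates when units
# are drawn proportionally to degree WITH vs WITHOUT replacement.
import numpy as np

rng = np.random.default_rng(1)
N = 5000
degree = rng.integers(1, 30, N)
# Trait is made more likely among high-degree individuals (a deliberately hard case).
trait = rng.random(N) < (0.1 + 0.4 * degree / degree.max())
p_true = trait.mean()
p_select = degree / degree.sum()

def rds2_estimate(idx):
    w = 1.0 / degree[idx]                 # inverse-degree weights
    return np.sum(w * trait[idx]) / np.sum(w)

for frac in (0.05, 0.20, 0.40):
    n = int(frac * N)
    est_with, est_without = [], []
    for _ in range(200):
        s_w = rng.choice(N, size=n, replace=True, p=p_select)
        s_wo = rng.choice(N, size=n, replace=False, p=p_select)
        est_with.append(rds2_estimate(s_w))
        est_without.append(rds2_estimate(s_wo))
    print(f"fraction {frac:.0%}: true={p_true:.3f}, "
          f"with repl.={np.mean(est_with):.3f}, without repl.={np.mean(est_without):.3f}")
```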

  17. The Arundel Assumption And Revision Of Some Large-Scale Maps ...

    African Journals Online (AJOL)

    The rather common practice of stating or using the Arundel Assumption without reference to appropriate mapping standards (except mention of its use for graphical plotting) is a major cause of inaccuracies in map revision. This paper describes an investigation to ascertain the applicability of the Assumption to the revision of ...

  18. The usefulness of information on HDL-cholesterol: potential pitfalls of conventional assumptions

    Directory of Open Access Journals (Sweden)

    Furberg Curt D

    2001-05-01

    Treatment decisions related to disease prevention are often based on two conventional and related assumptions. First, an intervention-induced change in a surrogate marker (such as high-density lipoprotein [HDL]-cholesterol) in the desired direction translates into health benefits (such as a reduction in coronary events). Second, it is unimportant which interventions are used to alter surrogate markers, since an intervention benefit is independent of the means by which it is achieved. The scientific foundation for these assumptions has been questioned. In this commentary, the appropriateness of relying on low levels of HDL-cholesterol for treatment decisions is reviewed. The Veterans Affairs HDL-Cholesterol Intervention Trial (VA-HIT) investigators recently reported that only 23% of the gemfibrozil-induced relative reduction in risk of coronary events observed in the trial could be explained by changes in HDL-cholesterol between baseline and the 1-year visit. Thus, 77% of the health benefit to the participants was unexplained. Other possible explanations are that gemfibrozil has multiple mechanisms of action, disease manifestations are multifactorial, and laboratory measurements of HDL-cholesterol are imprecise. The wisdom of relying on levels of, and changes in, surrogate markers such as HDL-cholesterol to make decisions about treatment choices should be questioned. It seems better to rely on direct evidence of health benefits and to prescribe specific interventions that have been shown to reduce mortality and morbidity. Since extrapolations based on surrogate markers may not be in patients' best interest, the practice of medicine ought to be evidence-based.

  19. Derivation of the density functional theory from the cluster expansion.

    Science.gov (United States)

    Hsu, J Y

    2003-09-26

    The density functional theory is derived from a cluster expansion by truncating the higher-order correlations in one and only one term in the kinetic energy. The formulation allows self-consistent calculation of the exchange correlation effect without imposing additional assumptions to generalize the local density approximation. The pair correlation is described as a two-body collision of bound-state electrons, and modifies the electron-electron interaction energy as well as the kinetic energy. The theory admits excited states, and has no self-interaction energy.

  20. Causal Mediation Analysis: Warning! Assumptions Ahead

    Science.gov (United States)

    Keele, Luke

    2015-01-01

    In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…

  1. Joint constraints on galaxy bias and σ{sub 8} through the N-pdf of the galaxy number density

    Energy Technology Data Exchange (ETDEWEB)

    Arnalte-Mur, Pablo; Martínez, Vicent J. [Observatori Astronòmic de la Universitat de València, C/ Catedràtic José Beltrán, 2, 46980 Paterna, València (Spain); Vielva, Patricio; Sanz, José L. [Instituto de Física de Cantabria (CSIC-UC), Avda. de Los Castros s/n, E-39005—Santander (Spain); Saar, Enn [Cosmology Department, Tartu Observatory, Observatooriumi 1, Tõravere (Estonia); Paredes, Silvestre, E-mail: pablo.arnalte@uv.es, E-mail: vielva@ifca.unican.es, E-mail: martinez@uv.es, E-mail: sanz@ifca.unican.es, E-mail: saar@to.ee, E-mail: silvestre.paredes@upct.es [Departamento de Matemática Aplicada y Estadística, Universidad Politécnica de Cartagena, C/Dr. Fleming s/n, 30203 Cartagena (Spain)

    2016-03-01

    We present a full description of the N-probability density function of the galaxy number density fluctuations. This N-pdf is given in terms, on the one hand, of the cold dark matter correlations and, on the other hand, of the galaxy bias parameter. The method relies on the commonly adopted assumption that the dark matter density fluctuations follow a local non-linear transformation of the initial energy density perturbations. The N-pdf of the galaxy number density fluctuations allows for an optimal estimation of the bias parameter (e.g., via maximum-likelihood estimation, or Bayesian inference if there exists any a priori information on the bias parameter), and of those parameters defining the dark matter correlations, in particular its amplitude (σ{sub 8}). It also provides the proper framework to perform model selection between two competing hypotheses. The parameter estimation capabilities of the N-pdf are demonstrated with SDSS-like simulations (both ideal log-normal simulations and mocks obtained from Las Damas simulations), showing that our estimator is unbiased. We apply our formalism to the 7th release of the SDSS main sample (for a volume-limited subset with absolute magnitudes M{sub r} ≤ −20). We obtain b̂ = 1.193 ± 0.074 and σ̄{sub 8} = 0.862 ± 0.080, for galaxy number density fluctuations in cells of the size of 30h{sup −1}Mpc. Different model selection criteria show that galaxy biasing is clearly favoured.

  2. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    Science.gov (United States)

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research

  3. Vibrational spectra, molecular structure, natural bond orbital, first order hyperpolarizability, thermodynamic analysis and normal coordinate analysis of Salicylaldehyde p-methylphenylthiosemicarbazone by density functional method

    Science.gov (United States)

    Porchelvi, E. Elamurugu; Muthu, S.

    2015-01-01

    The thiosemicarbazone compound, Salicylaldehyde p-methylphenylthiosemicarbazone (abbreviated as SMPTSC) was synthesized and characterized by FTIR, FT-Raman and UV. Density functional (DFT) calculations have been carried out for the title compound by performing DFT level of theory using B3LYP/6-31++G(d,p) basis set. The molecular geometry and vibrational frequencies were calculated and compared with the experimental data. The detailed interpretation of the vibrational spectra has been carried out with aid of normal coordinate analysis (NCA) following the scaled quantum mechanical force field methodology. The electronic dipole moment (μD) and the first hyperpolarizability (βtot) values of the investigated molecule were computed using density functional theory (DFT/B3LYP) with 6-311++G(d,p) basis set. The stability and charge delocalization of the molecule was studied by natural bond orbital (NBO) analysis. The aromaticities of the phenyl rings were studied using the standard harmonic oscillator model of aromaticity (HOMA) index. Mulliken population analysis on atomic charges is also calculated. The molecular orbital contributions are studied by the density of energy states (DOS).

  4. Significant effect of topographic normalization of airborne LiDAR data on the retrieval of plant area index profile in mountainous forests

    Science.gov (United States)

    Liu, Jing; Skidmore, Andrew K.; Heurich, Marco; Wang, Tiejun

    2017-10-01

    As an important metric for describing vertical forest structure, the plant area index (PAI) profile is used for many applications including biomass estimation and wildlife habitat assessment. PAI profiles can be estimated with the vertically resolved gap fraction from airborne LiDAR data. Most research utilizes a height normalization algorithm to retrieve local or relative height by assuming the terrain to be flat. However, for many forests this assumption is not valid. In this research, the effect of topographic normalization of airborne LiDAR data on the retrieval of PAI profile was studied in a mountainous forest area in Germany. Results show that, although individual tree height may be retained after topographic normalization, the spatial arrangement of trees is changed. Specifically, topographic normalization vertically condenses and distorts the PAI profile, which consequently alters the distribution pattern of plant area density in space. This effect becomes more evident as the slope increases. Furthermore, topographic normalization may also undermine the complexity (i.e., canopy layer number and entropy) of the PAI profile. The decrease in PAI profile complexity is not solely determined by local topography, but is determined by the interaction between local topography and the spatial distribution of each tree. This research demonstrates that when calculating the PAI profile from airborne LiDAR data, local topography needs to be taken into account. We therefore suggest that for ecological applications, such as vertical forest structure analysis and modeling of biodiversity, topographic normalization should not be applied in non-flat areas when using LiDAR data.

  5. Percentile estimation using the normal and lognormal probability distribution

    International Nuclear Information System (INIS)

    Bement, T.R.

    1980-01-01

    Implicitly or explicitly, percentile estimation is an important aspect of the analysis of aerial radiometric survey data. Standard deviation maps are produced for quadrangles which are surveyed as part of the National Uranium Resource Evaluation. These maps show where variables differ from their mean values by more than one, two or three standard deviations. Data may or may not be log-transformed prior to analysis. These maps have specific percentile interpretations only when proper distributional assumptions are met. Monte Carlo results are presented in this paper that show the consequences of estimating percentiles by: (1) assuming normality when the data are really from a lognormal distribution; and (2) assuming lognormality when the data are really from a normal distribution.
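
    A minimal Monte Carlo sketch of case (1) above is given below, assuming lognormal data and comparing a percentile estimate computed under a (wrong) normality assumption with one computed on the log scale. The sample size, distribution parameters, and chosen percentile are illustrative assumptions, not values from the paper.

```python
# Illustrative Monte Carlo sketch: consequences of estimating an upper
# percentile assuming normality when the data are actually lognormal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
mu, sigma, n, reps = 1.0, 0.8, 50, 2000
true_p95 = np.exp(mu + sigma * stats.norm.ppf(0.95))   # true lognormal 95th percentile

normal_est, lognormal_est = [], []
for _ in range(reps):
    x = rng.lognormal(mean=mu, sigma=sigma, size=n)
    # (1) wrongly assume normality: mean + z * sd on the raw scale
    normal_est.append(x.mean() + stats.norm.ppf(0.95) * x.std(ddof=1))
    # (2) correctly assume lognormality: work on the log scale, then back-transform
    lx = np.log(x)
    lognormal_est.append(np.exp(lx.mean() + stats.norm.ppf(0.95) * lx.std(ddof=1)))

print(f"true 95th percentile: {true_p95:.2f}")
print(f"normal assumption:    mean estimate {np.mean(normal_est):.2f}")
print(f"lognormal assumption: mean estimate {np.mean(lognormal_est):.2f}")
```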

  6. Research of mechanism of density lock

    International Nuclear Information System (INIS)

    Wang Shengfei; Yan Changqi; Gu Haifeng

    2010-01-01

    The mechanism of the density lock was analyzed according to its working conditions. The results showed that stratification with no disturbance satisfies the working conditions of the density lock; fluids on either side of the stratification are not mixed while connected to each other; and the density lock can be opened automatically by controlling the pressure balance at the stratification. When disturbance exists, the stratification may be broken and mass may be transferred by convection. The stability of the stratification can be enhanced by placing a special structure in the density lock to ensure its normal operation. Finally, the minimum heat loss in the density lock is also analyzed. (authors)

  7. A locally adaptive normal distribution

    DEFF Research Database (Denmark)

    Arvanitidis, Georgios; Hansen, Lars Kai; Hauberg, Søren

    2016-01-01

    The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric. We suggest replacing this metric with a locally adaptive, smoothly changing (Riemannian) metric that favors regions of high local density. The resulting locally adaptive normal distribution (LAND) is the maximum entropy distribution under the given metric. The underlying metric is, however, non-parametric. We develop a maximum likelihood algorithm to infer the distribution parameters that relies on a combination of gradient descent and Monte Carlo integration. We further extend the LAND to mixture models...

  8. Unrealistic Assumptions in Economics: an Analysis under the Logic of Socioeconomic Processes

    Directory of Open Access Journals (Sweden)

    Leonardo Ivarola

    2014-11-01

    The realism of assumptions is an ongoing debate within the philosophy of economics. One of the most referenced papers in this matter belongs to Milton Friedman. He defends the use of unrealistic assumptions, not only as a pragmatic issue, but also because of the intrinsic difficulties of determining the extent of realism. On the other hand, realists have criticized (and still do today) the use of unrealistic assumptions - such as the assumption of rational choice, perfect information, homogeneous goods, etc. However, they have not accompanied their statements with a proper epistemological argument that supports their positions. In this work it is expected to show that the realism of (a particular sort of) assumptions is clearly relevant when examining economic models, since the system under study (the real economies) is not compatible with the logic of invariance and of mechanisms, but with the logic of possibility trees. Because of this, models will not function as tools for predicting outcomes, but as representations of alternative scenarios, whose similarity to the real world will be examined in terms of the verisimilitude of a class of model assumptions.

  9. Azimuthal asymmetries of the charged particle densities in EAS in the range of KASCADE-Grande

    International Nuclear Information System (INIS)

    Sima, O.; Morariu, C.; Manailescu, C.; Rebel, H.; Haungs, A.

    2009-03-01

    The reconstruction of Extended Air Showers (EAS) observed by ground-level particle detectors is based on the characteristics of observables such as the particle lateral density (PLD) and arrival time signals. Lateral densities inferred from detector data are usually parameterized by applying various lateral distribution functions (LDFs). The LDFs are used in turn for evaluating quantities such as the total number of particles and the density at particular radial distances. Typical expressions for LDFs assume azimuthal symmetry of the density around the shower axis. Deviations of the particle lateral density from this assumption are smoothed out in the case of compact arrays like KASCADE, but not in the case of arrays like Grande, which only sample a smaller part of the azimuthal variation. In this report we discuss the origins of the asymmetry: geometric, attenuation and geomagnetic effects. Geometric effects occur in the case of inclined showers, because the observations are made in a plane different from the intrinsic shower plane; hence the projection procedure from the observational plane to the relevant normal shower plane plays a significant role. Attenuation effects arise from the differences between the distances travelled by particles that reach the ground at the same radial coordinate but at various azimuthal positions in the case of inclined showers. The geomagnetic field additionally distorts the charged particle distributions in a way specific to the geomagnetic location. Based on dedicated CORSIKA simulations we have evaluated the magnitude of these effects. Focusing on geometric and attenuation effects, procedures were developed for minimizing the azimuthal asymmetry of the lateral density in the intrinsic shower plane. The consequences for the reconstruction of the charged particle sizes determined with the Grande array are also discussed, together with a procedure for the practical application of restoring the azimuthal symmetry

  10. Challenging Assumptions of International Public Relations: When Government Is the Most Important Public.

    Science.gov (United States)

    Taylor, Maureen; Kent, Michael L.

    1999-01-01

    Explores assumptions underlying Malaysia's and the United States' public-relations practice. Finds many assumptions guiding Western theories and practices are not applicable to other countries. Examines the assumption that the practice of public relations targets a variety of key organizational publics. Advances international public-relations…

  11. Evolution of Requirements and Assumptions for Future Exploration Missions

    Science.gov (United States)

    Anderson, Molly; Sargusingh, Miriam; Perry, Jay

    2017-01-01

    NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including team-internal assumptions, planning system integration for early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits of acceptable values, but a future vehicle may be constrained in other ways, and select a limited range of conditions. Other requirements are effectively set by interfaces or operations, and may be different for the same technology depending on whether the hardware is a demonstration system on the International Space Station, or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements and explains the driving scenarios, constraints, or other issues that drive them.

  12. Changing Assumptions and Progressive Change in Theories of Strategic Organization

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Hallberg, Niklas L.

    2017-01-01

    A commonly held view is that strategic organization theories progress as a result of a Popperian process of bold conjectures and systematic refutations. However, our field also witnesses vibrant debates or disputes about the specific assumptions that our theories rely on, and although these debates are often decoupled from the results of empirical testing, changes in assumptions seem closely intertwined with theoretical progress. Using the case of the resource-based view, we suggest that progressive change in theories of strategic organization may come about as a result of scholarly debate and dispute over what constitutes proper assumptions, even in the absence of corroborating or falsifying empirical evidence. We also discuss how changing assumptions may drive future progress in the resource-based view.

  13. Magnetization of High Density Hadronic Fluid

    DEFF Research Database (Denmark)

    Bohr, Henrik; Providencia, Constanca; da Providencia, João

    2012-01-01

    In the present paper the magnetization of a high density relativistic fluid of elementary particles is studied. At very high densities, such as may be found in the interior of a neutron star, when the external magnetic field is gradually increased, the energy of the normal phase of the fluid ... in the particle fluid. For nuclear densities above 2 to 3 ρ0, where ρ0 is the equilibrium nuclear density, the resulting magnetic field turns out to be rather huge, of the order of 10^17 Gauss.

  14. On the choice of lens density profile in time delay cosmography

    Science.gov (United States)

    Sonnenfeld, Alessandro

    2018-03-01

    Time delay lensing is a mature and competitive cosmological probe. However, it is limited in accuracy by the well-known problem of the mass-sheet degeneracy: too rigid assumptions on the density profile of the lens can potentially bias the inference on cosmological parameters. I investigate the degeneracy between the choice of the lens density profile and the inference on the Hubble constant, focusing on double image systems. By expanding lensing observables in terms of the local derivatives of the lens potential around the Einstein radius, and assuming circular symmetry, I show that 3 degrees of freedom in the radial direction are necessary to achieve a few per cent accuracy in the time-delay distance. Additionally, while the time delay is strongly dependent on the second derivative of the potential, observables typically used to constrain lens models in time-delay studies, such as image position and radial magnification information, are mostly sensitive to the first and third derivatives, making it very challenging to accurately determine time-delay distances with lensing data alone. Tests on mock observations show that the assumption of a power-law density profile results in a 5 per cent average bias on H0, with a 6 per cent scatter. Using a more flexible model and adding unbiased velocity dispersion constraints allows me to obtain an inference with 1 per cent accuracy. A power-law model can still provide 3 per cent accuracy if velocity dispersion measurements are used to constrain its slope. Although this study is based on the assumption of axisymmetry, its main findings can be generalized to cases with moderate ellipticity.

  15. Resolvability of regional density structure

    Science.gov (United States)

    Plonka, A.; Fichtner, A.

    2016-12-01

    Lateral density variations are the source of mass transport in the Earth at all scales, acting as drivers of convective motion. However, the density structure of the Earth remains largely unknown since classic seismic observables and gravity provide only weak constraints with strong trade-offs. Current density models are therefore often based on velocity scaling, making strong assumptions on the origin of structural heterogeneities, which may not necessarily be correct. Our goal is to assess if 3D density structure may be resolvable with emerging full-waveform inversion techniques. We have previously quantified the impact of regional-scale crustal density structure on seismic waveforms with the conclusion that reasonably sized density variations within the crust can leave a strong imprint on both travel times and amplitudes, and, while this can produce significant biases in velocity and Q estimates, the seismic waveform inversion for density may become feasible. In this study we perform principal component analyses of sensitivity kernels for P velocity, S velocity, and density. This is intended to establish the extent to which these kernels are linearly independent, i.e. the extent to which the different parameters may be constrained independently. Since the density imprint we observe is not exclusively linked to travel times and amplitudes of specific phases, we consider waveform differences between complete seismograms. We test the method using a known smooth model of the crust and seismograms with clear Love and Rayleigh waves, showing that - as expected - the first principal kernel maximizes sensitivity to SH and SV velocity structure, respectively, and that the leakage between S velocity, P velocity and density parameter spaces is minimal in the chosen setup. Next, we apply the method to data from 81 events around the Iberian Peninsula, registered in total by 492 stations. The objective is to find a principal kernel which would maximize the sensitivity to density
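
    The principal component idea can be sketched as follows, assuming the (flattened) sensitivity kernels are stacked as columns of a matrix and analysed with an SVD; the random stand-in kernels and the imposed linear dependence are assumptions for illustration only, not real adjoint kernels.

```python
# Hedged sketch: principal component analysis of sensitivity kernels to gauge
# how independently P velocity, S velocity, and density are constrained.
import numpy as np

rng = np.random.default_rng(0)
n_grid = 10_000                      # grid points of a (flattened) 3-D kernel

# Stand-in kernels: columns are K_vp, K_vs, K_rho; K_rho is made partly a
# linear combination of the others to mimic parameter trade-offs.
K_vp = rng.normal(size=n_grid)
K_vs = rng.normal(size=n_grid)
K_rho = 0.6 * K_vp + 0.3 * K_vs + 0.2 * rng.normal(size=n_grid)
K = np.column_stack([K_vp, K_vs, K_rho])

# PCA via SVD of the column-normalized kernel matrix.
Kn = K / np.linalg.norm(K, axis=0)
U, s, Vt = np.linalg.svd(Kn, full_matrices=False)

print("singular values:", np.round(s, 3))   # a small trailing value => near-dependence
print("principal kernel loadings (rows = components):")
print(np.round(Vt, 3))                       # which parameters each component mixes
```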

  16. Investigating the Assumptions of Uses and Gratifications Research

    Science.gov (United States)

    Lometti, Guy E.; And Others

    1977-01-01

    Discusses a study designed to determine empirically the gratifications sought from communication channels and to test the assumption that individuals differentiate channels based on gratifications. (MH)

  17. Assessing framing assumptions in quantitative health impact assessments: a housing intervention example.

    Science.gov (United States)

    Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M

    2013-09-01

    Health impact assessment (HIA) is often used to determine ex ante the health impact of an environmental policy or an environmental intervention. Underpinning any HIA is the framing assumption, which defines the causal pathways mapping environmental exposures to health outcomes. The sensitivity of the HIA to the framing assumptions is often ignored. A novel method based on fuzzy cognitive map (FCM) is developed to quantify the framing assumptions in the assessment stage of a HIA, and is then applied to a housing intervention (tightening insulation) as a case-study. Framing assumptions of the case-study were identified through a literature search of Ovid Medline (1948-2011). The FCM approach was used to identify the key variables that have the most influence in a HIA. Changes in air-tightness, ventilation, indoor air quality and mould/humidity have been identified as having the most influence on health. The FCM approach is widely applicable and can be used to inform the formulation of the framing assumptions in any quantitative HIA of environmental interventions. We argue that it is necessary to explore and quantify framing assumptions prior to conducting a detailed quantitative HIA during the assessment stage. Copyright © 2013 Elsevier Ltd. All rights reserved.
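
    A minimal sketch of an FCM update is shown below, assuming a hypothetical set of concepts and signed edge weights loosely inspired by the insulation example; the weights, the logistic squashing function, and the additive update rule are illustrative assumptions rather than the model used in the paper.

```python
# Minimal fuzzy cognitive map (FCM) sketch for a housing-style example.
# All concepts, signs, and magnitudes below are assumptions for illustration.
import numpy as np

concepts = ["insulation", "air_tightness", "ventilation",
            "indoor_air_quality", "mould_humidity", "health"]

# W[i, j] = influence of concept i on concept j.
W = np.zeros((6, 6))
W[0, 1] = +0.8    # insulation increases air-tightness
W[1, 2] = -0.7    # air-tightness reduces ventilation
W[2, 3] = +0.6    # ventilation improves indoor air quality
W[2, 4] = -0.5    # ventilation reduces mould/humidity
W[3, 5] = +0.7    # better indoor air quality improves health
W[4, 5] = -0.6    # mould/humidity harms health

def squash(x):
    return 1.0 / (1.0 + np.exp(-x))          # logistic squashing to [0, 1]

state = np.full(6, 0.5)
state[0] = 1.0                                 # "switch on" the intervention
for _ in range(30):                            # iterate the simple additive FCM rule
    state = squash(state @ W + state)

for name, value in zip(concepts, state):
    print(f"{name:18s} {value:.3f}")
```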

  18. High-density lipoprotein apolipoproteins in urine: I. Characterization in normal subjects and in patients with proteinuria.

    Science.gov (United States)

    Gomo, Z A; Henderson, L O; Myrick, J E

    1988-09-01

    A high-resolution two-dimensional electrophoretic method for protein, with silver staining, has been used to characterize and identify urinary high-density-lipoprotein apolipoproteins (HDL-Apos) and their isoforms in healthy subjects and in patients with kidney disease. Analytical techniques based on both molecular mass and ultracentrifugal flotation properties were used to isolate urinary lipoprotein particles with characteristics identical to those of HDL in plasma. HDL-Apos identified in urine of normal subjects and patients with glomerular proteinuria were Apos A-I, A-II, and C. Five isoforms of Apo A-I were present. Immunostaining of electroblotted proteins further confirmed the presence of HDL-Apos in urine. Creatinine clearance rate was decreased in the patients with proteinuria, and ranged from 32.5 to 40 mL/min. Concentrations of cholesterol and triglycerides in serum were greater in the patients' group, whereas mean HDL-cholesterol (0.68, SD 0.10 mmol/L) and Apo A-I (0.953, SD 0.095 g/L) were significantly (each P less than 0.01) lower. Results of this study suggest that measurement of urinary Apo A-I will reflect excretion of HDL in urine.

  19. Modelling carbonaceous aerosol from residential solid fuel burning with different assumptions for emissions

    Directory of Open Access Journals (Sweden)

    R. Ots

    2018-04-01

    Evidence is accumulating that emissions of primary particulate matter (PM) from residential wood and coal combustion in the UK may be underestimated and/or spatially misclassified. In this study, different assumptions for the spatial distribution and total emission of PM from solid fuel (wood and coal) burning in the UK were tested using an atmospheric chemical transport model. Modelled concentrations of the PM components were compared with measurements from aerosol mass spectrometers at four sites in central and Greater London (ClearfLo campaign, 2012), as well as with measurements from the UK black carbon network. The two main alternative emission scenarios modelled were Base4x and combRedist. For Base4x, officially reported PM2.5 from the residential and other non-industrial combustion source sector were increased by a factor of four. For the combRedist experiment, half of the baseline emissions from this same source were redistributed by residential population density to simulate the effect of allocating some emissions to the smoke control areas (that are assumed in the national inventory to have no emissions from this source). The Base4x scenario yielded better daily and hourly correlations with measurements than the combRedist scenario for year-long comparisons of the solid fuel organic aerosol (SFOA) component at the two London sites. However, the latter scenario better captured mean measured concentrations across all four sites. A third experiment, Redist – all emissions redistributed linearly to population density – is also presented as an indicator of the maximum concentrations an assumption like this could yield. The modelled elemental carbon (EC) concentrations derived from the combRedist experiments also compared well with seasonal average concentrations of black carbon observed across the network of UK sites. Together, the two model scenario simulations of SFOA and EC suggest both that residential solid fuel emissions may be higher than

  20. Modelling carbonaceous aerosol from residential solid fuel burning with different assumptions for emissions

    Science.gov (United States)

    Ots, Riinu; Heal, Mathew R.; Young, Dominique E.; Williams, Leah R.; Allan, James D.; Nemitz, Eiko; Di Marco, Chiara; Detournay, Anais; Xu, Lu; Ng, Nga L.; Coe, Hugh; Herndon, Scott C.; Mackenzie, Ian A.; Green, David C.; Kuenen, Jeroen J. P.; Reis, Stefan; Vieno, Massimo

    2018-04-01

    Evidence is accumulating that emissions of primary particulate matter (PM) from residential wood and coal combustion in the UK may be underestimated and/or spatially misclassified. In this study, different assumptions for the spatial distribution and total emission of PM from solid fuel (wood and coal) burning in the UK were tested using an atmospheric chemical transport model. Modelled concentrations of the PM components were compared with measurements from aerosol mass spectrometers at four sites in central and Greater London (ClearfLo campaign, 2012), as well as with measurements from the UK black carbon network. The two main alternative emission scenarios modelled were Base4x and combRedist. For Base4x, officially reported PM2.5 from the residential and other non-industrial combustion source sector were increased by a factor of four. For the combRedist experiment, half of the baseline emissions from this same source were redistributed by residential population density to simulate the effect of allocating some emissions to the smoke control areas (that are assumed in the national inventory to have no emissions from this source). The Base4x scenario yielded better daily and hourly correlations with measurements than the combRedist scenario for year-long comparisons of the solid fuel organic aerosol (SFOA) component at the two London sites. However, the latter scenario better captured mean measured concentrations across all four sites. A third experiment, Redist - all emissions redistributed linearly to population density - is also presented as an indicator of the maximum concentrations an assumption like this could yield. The modelled elemental carbon (EC) concentrations derived from the combRedist experiments also compared well with seasonal average concentrations of black carbon observed across the network of UK sites. Together, the two model scenario simulations of SFOA and EC suggest both that residential solid fuel emissions may be higher than inventory

  1. Vibrational spectroscopic studies, normal co-ordinate analysis, first order hyperpolarizability, HOMO-LUMO of midodrine by using density functional methods.

    Science.gov (United States)

    Shahidha, R; Al-Saadi, Abdulaziz A; Muthu, S

    2015-01-05

    The FTIR (4000-400 cm⁻¹), FT-Raman (4000-100 cm⁻¹) and UV-Visible (400-200 nm) spectra of midodrine were recorded in the condensed state. The complete vibrational frequencies, optimized geometry, intensity of vibrational bands and atomic charges were obtained by using Density Functional Theory (DFT) with the 6-311++G(d,p) basis set. The first order hyperpolarizability (β) and related properties (μ, α and Δα) of this molecular system were calculated using the DFT/6-311++G(d,p) method based on the finite-field approach. The assignments of the vibrational spectra have been carried out with the help of Normal Co-ordinate Analysis (NCA) following the scaled quantum mechanical force field methodology. The stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using NBO analysis. From the recorded UV-Visible spectrum, the electronic properties such as excitation energies, oscillator strength and wavelength are calculated by DFT in water and gas phases using the 6-311++G(d,p) basis set. The calculated HOMO and LUMO energies confirm that charge transfer occurs within the molecule. Besides MEP, NLO and thermodynamic properties were also calculated and interpreted. Electron density-based local reactivity descriptors such as Fukui functions were calculated to explain the chemical selectivity or reactivity sites in midodrine. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. The Emperor's sham - wrong assumption that sham needling is sham.

    Science.gov (United States)

    Lundeberg, Thomas; Lund, Iréne; Näslund, Jan; Thomas, Moolamanil

    2008-12-01

    During the last five years a large number of randomised controlled clinical trials (RCTs) have been published on the efficacy of acupuncture in different conditions. In most of these studies verum is compared with sham acupuncture. In general both verum and sham have been found to be effective, and often with little reported difference in outcome. This has repeatedly led to the conclusion that acupuncture is no more effective than placebo treatment. However, this conclusion is based on the assumption that sham acupuncture is inert. Since sham acupuncture evidently is merely another form of acupuncture from the physiological perspective, the assumption that sham is sham is incorrect and conclusions based on this assumption are therefore invalid. Clinical guidelines based on such conclusions may therefore exclude suffering patients from valuable treatments.

  3. The homogeneous marginal utility of income assumption

    NARCIS (Netherlands)

    Demuynck, T.

    2015-01-01

    We develop a test to verify if every agent from a population of heterogeneous consumers has the same marginal utility of income function. This homogeneous marginal utility of income assumption is often (implicitly) used in applied demand studies because it has nice aggregation properties and

  4. Recognising the Effects of Costing Assumptions in Educational Business Simulation Games

    Science.gov (United States)

    Eckardt, Gordon; Selen, Willem; Wynder, Monte

    2015-01-01

    Business simulations are a powerful way to provide experiential learning that is focussed, controlled, and concentrated. Inherent in any simulation, however, are numerous assumptions that determine feedback, and hence the lessons learnt. In this conceptual paper we describe some common cost assumptions that are implicit in simulation design and…

  5. Characteristics of bone turnover in the long bone metaphysis fractured patients with normal or low Bone Mineral Density (BMD).

    Directory of Open Access Journals (Sweden)

    Christoph Wölfl

    The incidence of osteoporotic fractures increases as our population ages. Until now, the exact biochemical processes that occur during the healing of metaphyseal fractures remain unclear. Diagnostic instruments that allow a dynamic insight into the fracture healing process are as yet unavailable. In the present matched pair analysis, we study the time course of the osteoanabolic markers bone specific alkaline phosphatase (BAP) and transforming growth factor β1 (TGFβ1), as well as the osteocatabolic markers crosslinked C-telopeptide of type-I collagen (β-CTX) and serum band 5 tartrate-resistant acid phosphatase (TRAP5b), during the healing of fractures that have a low level of bone mineral density (BMD) compared with fractures that have a normal BMD. Between March 2007 and February 2009, 30 patients aged older than 50 years who suffered a metaphyseal fracture were included in our study. BMDs were verified by dual-energy X-ray absorptiometry (DXEA) scans. The levels of BTMs were examined over an 8-week period. Osteoanabolic BAP levels in those with low levels of BMD were significantly different from the BAP levels in those with normal BMD. BAP levels in the former group increased constantly, whereas the latter group showed an initial strong decrease in BAP followed by slowly rising values. Osteocatabolic β-CTX increased constantly in the bone of the normal BMD group, whereas these levels decreased significantly in the bone of the group with low BMD from the first week. TRAP5b was significantly reduced in the low BMD group. This work provides first insights into the molecular biology of the fracture healing process in patients with low BMD and helps explain the mechanism of their fracture healing. The results may be one explanation for the reduced healing quality of bones with low BMD.

  6. Extended screened exchange functional derived from transcorrelated density functional theory.

    Science.gov (United States)

    Umezawa, Naoto

    2017-09-14

    We propose a new formulation of the correlation energy functional derived from the transcorrelated method in use in density functional theory (TC-DFT). An effective Hamiltonian, H_TC, is introduced by a similarity transformation of a many-body Hamiltonian, H, with respect to a complex function F: H_TC = (1/F) H F. It is proved that an expectation value of H_TC for a normalized single Slater determinant, D_n, corresponds to the total energy: E[n] = ⟨Ψ_n|H|Ψ_n⟩/⟨Ψ_n|Ψ_n⟩ = ⟨D_n|H_TC|D_n⟩ under the two assumptions: (1) the electron density n(r) associated with a trial wave function Ψ_n = D_n F is v-representable and (2) Ψ_n and D_n give rise to the same electron density n(r). This formulation, therefore, provides an alternative expression of the total energy that is useful for the development of novel correlation energy functionals. By substituting a specific function for F, we successfully derived a model correlation energy functional, which resembles the functional form of the screened exchange method. The proposed functional, named the extended screened exchange (ESX) functional, is described within two-body integrals and is parametrized for a numerically exact correlation energy of the homogeneous electron gas. The ESX functional does not contain any ingredients of (semi-)local functionals and thus is totally free from self-interactions. The computational cost for solving the self-consistent-field equation is comparable to that of the Hartree-Fock method. We apply the ESX functional to electronic structure calculations for solid silicon, the H⁻ ion, and small atoms. The results demonstrate that the TC-DFT formulation is promising for the systematic improvement of the correlation energy functional.

  7. Comparative Interpretation of Classical and Keynesian Fiscal Policies (Assumptions, Principles and Primary Opinions

    Directory of Open Access Journals (Sweden)

    Engin Oner

    2015-06-01

    Adam Smith being its founder, the Classical School gives prominence to supply and adopts an approach of unbiased finance, holding that the economy is always in a state of full employment equilibrium. In this system of thought, the main philosophy of which is budget balance, it is asserted that there is flexibility between prices and wages, public debt is regarded as an extraordinary instrument, and the interference of the state with economic and social life is frowned upon. In line with the views of classical thought, classical fiscal policy is based on three basic assumptions. These are the "Consumer State Assumption", the assumption that "Public Expenditures are Always Ineffectual", and the assumption concerning the "Impartiality of the Taxes and Expenditure Policies Implemented by the State". On the other hand, the Keynesian School founded by John Maynard Keynes gives prominence to demand, adopts the approach of functional finance, and asserts that cases of underemployment equilibrium and over-employment equilibrium exist in the economy as well as full employment equilibrium, that problems cannot be solved through the invisible hand, that prices and wages are rigid, that the interference of the state is essential, and that at this point fiscal policies have to be utilized effectively. Keynesian fiscal policy depends on three primary assumptions. These are the assumption of the "Filter State", the assumption that "public expenditures are sometimes effective and sometimes ineffective or neutral", and the assumption that "the tax, debt and expenditure policies of the state can never be impartial".

  8. Determining Bounds on Assumption Errors in Operational Analysis

    Directory of Open Access Journals (Sweden)

    Neal M. Bengtson

    2014-01-01

    The technique of operational analysis (OA) is used in the study of systems performance, mainly for estimating mean values of various measures of interest, such as the number of jobs at a device and response times. The basic principles of operational analysis allow errors in assumptions to be quantified over a time period. The assumptions which are used to derive the operational analysis relationships are studied. Using Karush-Kuhn-Tucker (KKT) conditions, bounds on error measures of these OA relationships are found. Examples of these bounds are used for representative performance measures to show limits on the difference between true performance values and those estimated by operational analysis relationships. A technique for finding tolerance limits on the bounds is demonstrated with a simulation example.
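
    For readers unfamiliar with the operational laws whose assumption errors are being bounded, the sketch below computes the basic OA relationships (utilization law, Little's law) from directly observable quantities over a measurement window; all numbers are invented, and the bounding technique itself is not reproduced.

```python
# Small illustrative sketch of basic operational-analysis relationships
# computed from observable quantities over a measurement window.
T = 600.0          # observation period (s)
A = 1180           # arrivals during T
C = 1175           # completions during T
B = 420.0          # total busy time of the device (s)
W = 5200.0         # accumulated job-seconds in the system (area under n(t))

arrival_rate = A / T                 # lambda
throughput = C / T                   # X
service_demand = B / C               # S (mean service time per completion)
utilization = B / T                  # U; utilization law: U = X * S
mean_jobs = W / T                    # N
mean_response = W / C                # R; Little's law: N = X * R

print(f"X = {throughput:.3f} jobs/s, S = {service_demand:.4f} s, "
      f"U = {utilization:.3f} (X*S = {throughput*service_demand:.3f})")
print(f"N = {mean_jobs:.2f}, R = {mean_response:.2f} s "
      f"(X*R = {throughput*mean_response:.2f})")
# Flow balance (A ~ C) is one of the OA assumptions whose violation can be
# quantified; here the imbalance is (A - C)/C, about 0.4%.
```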

  9. Is Middle-Upper Arm Circumference "normally" distributed? Secondary data analysis of 852 nutrition surveys.

    Science.gov (United States)

    Frison, Severine; Checchi, Francesco; Kerac, Marko; Nicholas, Jennifer

    2016-01-01

    Wasting is a major public health issue throughout the developing world. Out of the 6.9 million estimated deaths among children under five annually, over 800,000 deaths (11.6 %) are attributed to wasting. Wasting is quantified as low Weight-For-Height (WFH) and/or low Mid-Upper Arm Circumference (MUAC) (since 2005). Many statistical procedures are based on the assumption that the data used are normally distributed. Analyses have been conducted on the distribution of WFH but there are no equivalent studies on the distribution of MUAC. This secondary data analysis assesses the normality of the MUAC distributions of 852 nutrition cross-sectional survey datasets of children from 6 to 59 months old and examines different approaches to normalise "non-normal" distributions. The distribution of MUAC showed no departure from a normal distribution in 319 (37.7 %) distributions using the Shapiro-Wilk test. Out of the 533 surveys showing departure from a normal distribution, 183 (34.3 %) were skewed (D'Agostino test) and 196 (36.8 %) had a kurtosis different to the one observed in the normal distribution (Anscombe-Glynn test). Testing for normality can be sensitive to data quality, design effect and sample size. Out of the 533 surveys showing departure from a normal distribution, 294 (55.2 %) showed high digit preference, 164 (30.8 %) had a large design effect, and 204 (38.3 %) a large sample size. Spline and LOESS smoothing techniques were explored and both techniques work well. After Spline smoothing, 56.7 % of the MUAC distributions showing departure from normality were "normalised" and 59.7 % after LOESS. Box-Cox power transformation had similar results on distributions showing departure from normality with 57 % of distributions approximating "normal" after transformation. Applying Box-Cox transformation after Spline or Loess smoothing techniques increased that proportion to 82.4 and 82.7 % respectively. This suggests that statistical approaches relying on the
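
    A hedged sketch of the kind of checks described, using SciPy's implementations of the Shapiro-Wilk, skewness, and kurtosis tests followed by a Box-Cox transformation, is given below; the simulated "MUAC" sample is invented and does not represent survey data.

```python
# Illustrative normality checks and Box-Cox transformation on a MUAC-like sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
muac = rng.gamma(shape=60, scale=2.4, size=900)     # mildly skewed stand-in sample (mm)

w_stat, p_shapiro = stats.shapiro(muac)             # Shapiro-Wilk normality test
skew_stat, p_skew = stats.skewtest(muac)            # D'Agostino skewness test
kurt_stat, p_kurt = stats.kurtosistest(muac)        # Anscombe-Glynn kurtosis test
print(f"Shapiro-Wilk p={p_shapiro:.3f}, skewness p={p_skew:.3f}, kurtosis p={p_kurt:.3f}")

# Box-Cox power transformation (requires strictly positive data).
muac_bc, lam = stats.boxcox(muac)
print(f"Box-Cox lambda = {lam:.2f}, "
      f"Shapiro-Wilk p after transform = {stats.shapiro(muac_bc)[1]:.3f}")
```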

  10. The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos

    2015-01-01

    A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown's prophecy and the correction for attenuation formulas as well as…

  11. Migration using a transversely isotropic medium with symmetry normal to the reflector dip

    KAUST Repository

    Alkhalifah, Tariq Ali; Sava, P.

    2011-01-01

    A transversely isotropic (TI) model in which the tilt is constrained to be normal to the dip (DTI model) allows for simplifications in the imaging and velocity model building efforts as compared to a general TI (TTI) model. Although this model cannot be represented physically in all situations, for example, in the case of conflicting dips, it handles arbitrary reflector orientations under the assumption of symmetry axis normal to the dip. Using this assumption, we obtain efficient downward continuation algorithms compared to the general TTI ones, by utilizing the reflection features of such a model. Phase-shift migration can be easily extended to approximately handle lateral inhomogeneity using, for example, the split-step approach. This is possible because, unlike the general TTI case, the DTI model reduces to VTI for zero dip. These features enable a process in which we can extract velocity information by including tools that expose inaccuracies in the velocity model in the downward continuation process. We test this model on synthetic data corresponding to a general TTI medium and show its resilience. 2011 Tariq Alkhalifah and Paul Sava.

  12. Formalization and Analysis of Reasoning by Assumption

    NARCIS (Netherlands)

    Bosse, T.; Jonker, C.M.; Treur, J.

    2006-01-01

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning

  13. Psychopathology, fundamental assumptions and CD-4 T lymphocyte ...

    African Journals Online (AJOL)

    In addition, we explored whether psychopathology and negative fundamental assumptions in ... Method: Self-rating questionnaires to assess depressive symptoms, ... associated with all participants scoring in the positive range of the FA scale.

  14. The clinical significance of normal mammograms and normal sonograms in patients with palpable abnormalities of the breast

    International Nuclear Information System (INIS)

    Lee, Jin Hwa; Yoon, Seong Kuk; Choi, Sun Seob; Nam, Kyung Jin; Cho, Se Heon; Kim, Dae Cheol; Kim, Jung Il; Kim, Eun Kyung

    2006-01-01

    We wanted to evaluate the clinical significance of normal mammograms and normal sonograms in patients with palpable abnormalities of the breast. From Apr 2003 to Feb 2005, 107 patients with 113 palpable abnormalities who had combined normal sonographic and normal mammographic findings were retrospectively studied. The evaluated parameters included age of the patients, the clinical referrals, the distribution of the locations of the palpable abnormalities, whether there was a past surgical history, the mammographic densities and the sonographic echo patterns (purely hyperechoic fibrous tissue, mixed fibroglandular breast tissue, predominantly isoechoic glandular tissue and isoechoic subcutaneous fat tissue) at the sites of clinical concern, whether there was a change in imaging and/or the physical examination results at follow-up, and whether there were biopsy results. This study period was chosen to allow a follow-up period of at least 12 months. The patients' ages ranged from 22 to 66 years (mean age: 48.8 years), and 62 (58%) of the 107 patients were between 41 and 50 years old. The most common location of the palpable abnormalities was the upper outer portion of the breast (45%), and most of the mammographic densities were dense patterns (BI-RADS Type 3 or 4: 91%). Our cases showed a similar distribution across all types of sonographic echo patterns. Twenty-three patients underwent biopsy; all the biopsy specimens were benign. For the 84 patients with 90 palpable abnormalities who were followed, there was no interval development of breast cancer in the areas of clinical concern. Our results suggest that we can follow up and prevent unnecessary biopsies in women with palpable abnormalities when both the mammography and ultrasonography show normal tissue, but this study was limited by its small sample size. Therefore, a larger study will be needed to better define the negative predictive value of combined normal sonographic and mammographic findings.

  15. Neurite density imaging versus imaging of microscopic anisotropy in diffusion MRI: A model comparison using spherical tensor encoding.

    Science.gov (United States)

    Lampinen, Björn; Szczepankiewicz, Filip; Mårtensson, Johan; van Westen, Danielle; Sundgren, Pia C; Nilsson, Markus

    2017-02-15

    In diffusion MRI (dMRI), microscopic diffusion anisotropy can be obscured by orientation dispersion. Separation of these properties is of high importance, since it could allow dMRI to non-invasively probe elongated structures such as neurites (axons and dendrites). However, conventional dMRI, based on single diffusion encoding (SDE), entangles microscopic anisotropy and orientation dispersion with intra-voxel variance in isotropic diffusivity. SDE-based methods for estimating microscopic anisotropy, such as the neurite orientation dispersion and density imaging (NODDI) method, must thus rely on model assumptions to disentangle these features. An alternative approach is to directly quantify microscopic anisotropy by the use of variable shape of the b-tensor. Along those lines, we here present the 'constrained diffusional variance decomposition' (CODIVIDE) method, which jointly analyzes data acquired with diffusion encoding applied in a single direction at a time (linear tensor encoding, LTE) and in all directions (spherical tensor encoding, STE). We then contrast the two approaches by comparing neurite density estimated using NODDI with microscopic anisotropy estimated using CODIVIDE. Data were acquired in healthy volunteers and in glioma patients. NODDI and CODIVIDE differed the most in gray matter and in gliomas, where NODDI detected a neurite fraction higher than expected from the level of microscopic diffusion anisotropy found with CODIVIDE. The discrepancies could be explained by the NODDI tortuosity assumption, which enforces a connection between the neurite density and the mean diffusivity of tissue. Our results suggest that this assumption is invalid, which leads to a NODDI neurite density that is inconsistent between LTE and STE data. Using simulations, we demonstrate that the NODDI assumptions result in parameter bias that precludes the use of NODDI to map neurite density. With CODIVIDE, we found high levels of microscopic anisotropy in white matter

  16. Calculation of power density with MCNP in TRIGA reactor

    International Nuclear Information System (INIS)

    Snoj, L.; Ravnik, M.

    2006-01-01

    Modern Monte Carlo codes (e.g. MCNP) allow calculation of the power density distribution in detailed 3-D geometry without unit-cell homogenization. To normalize an MCNP calculation to the steady-state thermal power of a reactor, one must use appropriate scaling factors. The scaling factors are not adequately described in the MCNP manual, and their use requires detailed knowledge of the code model. As the application of MCNP for power density calculation in TRIGA reactors has not been reported in the open literature, the procedure of calculating power density with MCNP and its normalization to the power level of a reactor is described in the paper. (author)
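
    As a hedged illustration of the kind of scaling the abstract refers to (not the procedure from the paper itself), a tally reported per source particle can be converted to an absolute rate once the number of source neutrons per second implied by the reactor power is known; the energy per fission, nu-bar and k_eff values below are placeholders.

```python
MEV_TO_J = 1.602176634e-13  # joules per MeV

def source_neutrons_per_second(power_w, q_fission_mev=200.0, nu_bar=2.44, k_eff=1.0):
    """Approximate neutron source rate for a reactor at thermal power `power_w` (W)."""
    fissions_per_second = power_w / (q_fission_mev * MEV_TO_J)
    return fissions_per_second * nu_bar / k_eff

def normalize_tally(tally_per_source_particle, power_w, **kwargs):
    """Scale an MCNP tally quoted per source particle to an absolute rate."""
    return tally_per_source_particle * source_neutrons_per_second(power_w, **kwargs)

# Example: a 250 kW TRIGA-like power level (illustrative numbers only)
scale = source_neutrons_per_second(250e3)
```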

  17. A Box-Cox normal model for response times.

    Science.gov (United States)

    Klein Entink, R H; van der Linden, W J; Fox, J-P

    2009-11-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows for comparing model fit between models with different transformation parameters. The model offers an enhanced description of the shape of the response time distributions, and its application in an educational measurement context is discussed at length.
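
    For reference, a minimal sketch of the Box-Cox family applied to response times; λ = 0 recovers the log-transform of the lognormal model, and scipy's boxcox can estimate λ by maximum likelihood. This is illustrative only, not the authors' Bayesian implementation, and the synthetic data below are placeholders.

```python
import numpy as np
from scipy import stats

def box_cox(times, lam):
    """Box-Cox transform of strictly positive response times."""
    t = np.asarray(times, dtype=float)
    if np.any(t <= 0):
        raise ValueError("response times must be strictly positive")
    return np.log(t) if lam == 0 else (t ** lam - 1.0) / lam

# Maximum-likelihood estimate of the transformation parameter on synthetic data
rng = np.random.default_rng(1)
rt = rng.lognormal(mean=0.5, sigma=0.4, size=500)   # synthetic response times (s)
transformed, lam_hat = stats.boxcox(rt)
```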

  18. The effect of signal variability on the histograms of anthropomorphic channel outputs: factors resulting in non-normally distributed data

    Science.gov (United States)

    Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

    Model Observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system and the Hotelling Observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
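
    A small sketch of the quantity being examined: channel outputs formed as dot products of channel templates with vectorized images, followed by a simple per-channel normality check. The Shapiro-Wilk test is used here only as one convenient illustration and is an assumption of this sketch, not the analysis performed in the paper.

```python
import numpy as np
from scipy import stats

def channel_outputs(images, templates):
    """Channel outputs as dot products of images with channel templates.

    images    : (n_images, n_pixels) array of vectorized images
    templates : (n_channels, n_pixels) array of channel templates
    returns   : (n_images, n_channels) array of channel outputs
    """
    return images @ templates.T

def shapiro_pvalues(outputs):
    """One Shapiro-Wilk p-value per channel as a crude normality check."""
    return np.array([stats.shapiro(outputs[:, c]).pvalue
                     for c in range(outputs.shape[1])])
```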

  19. Proposed non-interferometric FIR electron density measuring scheme for Tokamaks

    Energy Technology Data Exchange (ETDEWEB)

    Dodel, G; Kunz, W [Stuttgart Univ. (TH) (Germany, F.R.). Inst. fuer Plasmaforschung

    1979-08-01

    Extension of FIR polarimetry to electron density measurements in Tokamaks is suggested as a possible alternative for devices in which FIR interferometry is not applicable or difficult to handle due to reduced accessibility or strong mechanical vibrations. The method is numerically simulated. The relative experimental simplicity compared with interferometry has to be paid for with symmetry assumptions which enter into the evaluation process.

  20. Shattering Man’s Fundamental Assumptions in Don DeLillo’s Falling Man

    Directory of Open Access Journals (Sweden)

    Hazim Adnan Hashim

    2016-09-01

    The present study addresses effects of traumatic events such as the September 11 attacks on victims’ fundamental assumptions. These beliefs or assumptions provide individuals with expectations about the world and their sense of self-worth. Thus, they ground people’s sense of security, stability, and orientation. The September 11 terrorist attacks in the U.S.A. were very tragic for Americans because they fundamentally changed their understanding of many aspects of life. The attacks led many individuals to build new kinds of beliefs and assumptions about themselves and the world. Many writers have written about the human ordeals that followed this incident. Don DeLillo’s Falling Man reflects the traumatic repercussions of this disaster on Americans’ fundamental assumptions. The objective of this study is to examine the novel from the traumatic perspective that has afflicted the victims’ fundamental understandings of the world and the self. Individuals’ fundamental understandings could be changed or modified due to exposure to certain types of events like war, terrorism, political violence or even the sense of alienation. The Assumptive World theory of Ronnie Janoff-Bulman will be used as a framework to study the traumatic experience of the characters in Falling Man. The significance of the study lies in providing a new perception to the field of trauma that can help trauma victims to adopt alternative assumptions or reshape their previous ones to heal from traumatic effects.

  1. Extracurricular Business Planning Competitions: Challenging the Assumptions

    Science.gov (United States)

    Watson, Kayleigh; McGowan, Pauric; Smith, Paul

    2014-01-01

    Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…

  2. The Role of Policy Assumptions in Validating High-stakes Testing Programs.

    Science.gov (United States)

    Kane, Michael

    L. Cronbach has made the point that for validity arguments to be convincing to diverse audiences, they need to be based on assumptions that are credible to these audiences. The interpretations and uses of high stakes test scores rely on a number of policy assumptions about what should be taught in schools, and more specifically, about the content…

  3. Induced supersolidity in a mixture of normal and hard-core bosons

    International Nuclear Information System (INIS)

    Mishra, Tapan; Das, B. P.; Pai, Ramesh V.

    2010-01-01

    We present a scenario where a supersolid is induced in one of the components of a mixture of two species bosonic atoms where there are no long-range interactions. We study a system of normal and hard-core boson mixture with only the former possessing long-range interactions. We consider three cases: the first where the total density is commensurate and the other two where it is incommensurate to the lattice. By suitable choices of the densities of normal and hard-core bosons and the interaction strengths between them, we predict that the charge density wave and the supersolid orders can be induced in the hard-core species as a result of the competing interatomic interactions.

  4. Impact of Hypertriglyceridemia on Carotid Stenosis Progression under Normal Low-Density Lipoprotein Cholesterol Levels.

    Science.gov (United States)

    Kitagami, Masayuki; Yasuda, Ryuta; Toma, Naoki; Shiba, Masato; Nampei, Mai; Yamamoto, Yoko; Nakatsuka, Yoshinari; Sakaida, Hiroshi; Suzuki, Hidenori

    2017-08-01

    Dyslipidemia is a well-known risk factor for carotid stenosis progression, but triglycerides have attracted little attention. The aim of this study was to assess if serum triglycerides affect progression of carotid stenosis in patients with well-controlled low-density lipoprotein cholesterol (LDL-C) levels. This is a retrospective study in a single hospital consisting of 71 Japanese patients with internal carotid artery stenosis greater than or equal to 50% and normal serum LDL-C levels who underwent angiographic examination with or without the resultant carotid artery stenting or endarterectomy from 2007 to 2011, and were subsequently followed up for 4 years. Clinical factors including fasting serum triglyceride values were compared between the progression (≥10% increase in degree of carotid stenosis on ultrasonography) and the nonprogression groups. During 4 years, 15 patients (21.1%) had carotid stenosis progression on either side. Cox regression analysis demonstrated that symptomatic cases (hazard ratio [HR], 4.327; P = .019), coexisting intracranial arteriosclerotic stenosis (HR, 5.341; P = .005), and hypertriglyceridemia (HR, 6.228; P = .011) were associated with subsequent progression of carotid stenosis. Kaplan-Meier plots demonstrated that the progression-free survival rate was significantly higher in patients without hypertriglyceridemia and intracranial arteriosclerotic stenosis at baseline. Among patients with moderate to severe carotid stenosis and well-controlled LDL-C, hypertriglyceridemia was an important risk factor for progression of carotid stenosis irrespective of surgical treatments. It would be worthwhile to test if triglyceride-lowering medications suppress carotid stenosis progression. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  5. Model-free methods of analyzing domain motions in proteins from simulation : A comparison of normal mode analysis and molecular dynamics simulation of lysozyme

    NARCIS (Netherlands)

    Hayward, S.; Kitao, A.; Berendsen, H.J.C.

    Model-free methods are introduced to determine quantities pertaining to protein domain motions from normal mode analyses and molecular dynamics simulations. For the normal mode analysis, the methods are based on the assumption that in low frequency modes, domain motions can be well approximated by

  6. Dlk1 in normal and abnormal hematopoiesis

    DEFF Research Database (Denmark)

    Sakajiri, S; O'kelly, J; Yin, D

    2005-01-01

    … normals. Also, Dlk1 mRNA was elevated in mononuclear, low density bone marrow cells from 11/38 MDS patients, 5/11 AML M6 and 2/4 AML M7 samples. Furthermore, 5/6 erythroleukemia and 2/2 megakaryocytic leukemia cell lines highly expressed Dlk1 mRNA. Levels of Dlk1 mRNA markedly increased during … (particularly M6, M7), and it appears to be associated with normal development of megakaryocytes and B cells …

  7. Inter-annual Variations in Snow/Firn Density over the Greenland Ice Sheet by Combining GRACE gravimetry and Envisat Altimetry

    Science.gov (United States)

    Su, X.; Shum, C. K.; Guo, J.; Howat, I.; Jezek, K. C.; Luo, Z.; Zhou, Z.

    2017-12-01

    Satellite altimetry has been used to monitor elevation and volume change of polar ice sheets since the 1990s. In order to derive mass change from the measured volume change, different density assumptions are commonly used in the research community, which may cause discrepancies in estimates of ice sheet mass balance. In this study, we investigate the inter-annual anomalies of mass change from GRACE gravimetry and elevation change from Envisat altimetry during years 2003-2009, with the objective of determining inter-annual variations of snow/firn density over the Greenland ice sheet (GrIS). High positive correlations (0.6 or higher) between these two inter-annual anomalies are found over 93% of the GrIS, which suggests that both techniques detect the same geophysical process at the inter-annual timescale. Interpreting the two anomalies in terms of near-surface density variations, over 80% of the GrIS the inter-annual variation in average density lies between the densities of snow and pure ice. In particular, at the Summit of Central Greenland, we validate the satellite-estimated density with the in situ data available from 75 snow pits and 9 ice cores. This study provides constraints on the currently applied density assumptions for the GrIS.
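
    The density interpretation described above amounts to dividing the mass anomaly by the volume anomaly implied by the elevation change over a given area. A minimal sketch, with illustrative reference densities that are not values from the study:

```python
def effective_density(mass_anomaly_kg, elevation_anomaly_m, area_m2):
    """Near-surface density (kg/m^3) implied by paired mass and volume anomalies."""
    volume_anomaly_m3 = elevation_anomaly_m * area_m2
    return mass_anomaly_kg / volume_anomaly_m3

RHO_SNOW, RHO_ICE = 350.0, 917.0   # illustrative bounds for fresh snow and pure ice, kg/m^3
```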

  8. Statistical properties of kinetic and total energy densities in reverberant spaces

    DEFF Research Database (Denmark)

    Jacobsen, Finn; Molares, Alfonso Rodriguez

    2010-01-01

    Many acoustical measurements, e.g., measurement of sound power and transmission loss, rely on determining the total sound energy in a reverberation room. The total energy is usually approximated by measuring the mean-square pressure (i.e., the potential energy density) at a number of discrete positions. The idea of measuring the total energy density instead of the potential energy density, on the assumption that the former quantity varies less with position than the latter, goes back to the 1930s. However, the phenomenon was not analyzed until the late 1970s and then only for the region of high … With the advent of a three-dimensional particle velocity transducer, it has become somewhat easier to measure total rather than only potential energy density in a sound field. This paper examines the ensemble statistics of kinetic and total sound energy densities in reverberant enclosures theoretically …

  9. How to Handle Assumptions in Synthesis

    Directory of Open Access Journals (Sweden)

    Roderick Bloem

    2014-07-01

    The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.

  10. Estimation of Bouguer Density Precision: Development of Method for Analysis of La Soufriere Volcano Gravity Data

    OpenAIRE

    Gunawan, Hendra; Micheldiament, Micheldiament; Mikhailov, Valentin

    2008-01-01

    http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation of the Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation of the Bouguer gravity anomaly and the Bouguer correction. The precision of Bouguer density estimates was investigated by both methods on simple 2D synthetic models and under an assumption of a free-air anomaly consisting ...

  11. Influence of tracks densities in solid state nuclear track detectors

    International Nuclear Information System (INIS)

    Guedes O, S.; Hadler N.; Lunes, P.; Saenz T, C.

    1996-01-01

    When Solid State Nuclear Track Detectors (SSNTD) are employed to measure nuclear tracks produced mainly by fission fragments and alpha particles, it is assumed that the track observation work is performed with an efficiency, ε₀, which is independent of the track density (number of tracks per unit area). There are no published results or experimental data supporting such an assumption. In this work the dependence of ε₀ on track density is studied based on experimental data. To perform this, pieces of CR-39 cut from a single 'mother sheet' were coupled to thin uranium films for different exposure times, and the resulting ratios between track density and exposure time were compared. Our results indicate that ε₀ is constant for track densities between 10³ and 10⁵ cm⁻². At our etching conditions, track overlapping makes counting impossible for densities around 1.7 × 10⁵ cm⁻². For track densities less than 10³ cm⁻², ε₀ was not observed to be constant. (authors). 4 refs., 2 figs

  12. Constraining Saturn's interior density profile from precision gravity field measurement obtained during Grand Finale

    Science.gov (United States)

    Movshovitz, N.; Fortney, J. J.; Helled, R.; Hubbard, W. B.; Mankovich, C.; Thorngren, D.; Wahl, S. M.; Militzer, B.; Durante, D.

    2017-12-01

    The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Recently, very high precision measurements of the gravity coefficients for Saturn have been made by the radio science instrument on the Cassini spacecraft during its Grand Finale orbits. The resulting coefficients come with an associated uncertainty. The task of matching a given density profile to a given set of gravity coefficients is relatively straightforward, but the question of how to best account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on assumptions regarding the imperfectly known H/He equation of state and the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet constrained by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly
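
    The sampling strategy described above can be read as a rejection-sampling loop: propose candidate density profiles, keep those whose model gravity coefficients fall within the measured uncertainties, then histogram any derived quantity over the retained profiles. The sketch below is schematic, relies on user-supplied placeholder callables, and should not be taken as the authors' actual sampler.

```python
import numpy as np

def sample_consistent_profiles(propose_profile, gravity_of, j_obs, j_sigma,
                               n_keep=1000, tol_sigma=2.0, rng=None):
    """Collect density profiles whose gravity coefficients match the data.

    propose_profile : callable(rng) -> candidate rho(a) curve (placeholder)
    gravity_of      : callable(rho) -> model gravity coefficients (placeholder)
    j_obs, j_sigma  : measured coefficients and their 1-sigma uncertainties
    tol_sigma       : acceptance window in units of sigma (illustrative choice)
    """
    rng = rng or np.random.default_rng()
    j_obs, j_sigma = np.asarray(j_obs), np.asarray(j_sigma)
    kept = []
    while len(kept) < n_keep:
        rho = propose_profile(rng)
        if np.all(np.abs(gravity_of(rho) - j_obs) <= tol_sigma * j_sigma):
            kept.append(rho)
    return kept
```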

  13. Outcomes of bone density measurements in coeliac disease.

    Science.gov (United States)

    Bolland, Mark J; Grey, Andrew; Rowbotham, David S

    2016-01-29

    Some guidelines recommend that patients with newly diagnosed coeliac disease undergo bone density scanning. We assessed the bone density results in a cohort of patients with coeliac disease. We searched bone density reports over two 5-year periods in all patients from Auckland District Health Board (2008-12) and in patients under 65 years from Counties Manukau District Health Board (2009-13) for the term 'coeliac.' Reports for 137 adults listed coeliac disease as an indication for bone densitometry. The average age was 47 years, body mass index (BMI) 25 kg/m², and 77% were female. The median time between coeliac disease diagnosis and bone densitometry was 261 days. The average bone density Z-score was slightly lower than expected (Z-score -0.3 to 0.4) at the lumbar spine, total hip and femoral neck, but 88-93% of Z-scores at each site lay within the normal range. Low bone density was strongly related to BMI: the proportions with Z-score 30 kg/m² were 28%, 15%, 6% and 0% respectively. Average bone density was normal, suggesting that bone density measurement is not indicated routinely in coeliac disease, but could be considered on a case-by-case basis for individuals with strong risk factors for fracture.

  14. CRISS power spectral density

    International Nuclear Information System (INIS)

    Vaeth, W.

    1979-04-01

    The correlation of signal components at different frequencies, such as higher harmonics, cannot be detected by a normal power spectral density measurement, since this technique correlates only components at the same frequency. This paper describes a special method for measuring the correlation of two signal components at different frequencies: the CRISS power spectral density. From this new function in frequency analysis, the correlation of two components can be determined quantitatively, whether they stem from one signal or from two different signals. The principle of the method, suitable for the higher harmonics of a signal as well as for any other frequency combination, is shown for the digital frequency analysis technique. Two examples of CRISS power spectral densities demonstrate the operation of the new method. (orig.)
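
    The record does not define the CRISS estimator itself, so the sketch below only illustrates the underlying idea of correlating spectral components at two different frequencies: short-time Fourier coefficients at f1 and f2 are correlated across segments. All names are illustrative, and this is an assumption-laden stand-in rather than the CRISS definition.

```python
import numpy as np

def cross_frequency_correlation(x, y, fs, f1, f2, nperseg=1024):
    """Complex correlation between the f1 component of x and the f2 component of y.

    Pass y = x to relate, e.g., a fundamental to one of its higher harmonics.
    """
    n_seg = len(x) // nperseg
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    i1, i2 = np.argmin(np.abs(freqs - f1)), np.argmin(np.abs(freqs - f2))
    c1 = np.array([np.fft.rfft(x[k * nperseg:(k + 1) * nperseg])[i1] for k in range(n_seg)])
    c2 = np.array([np.fft.rfft(y[k * nperseg:(k + 1) * nperseg])[i2] for k in range(n_seg)])
    num = np.mean(c1 * np.conj(c2))
    den = np.sqrt(np.mean(np.abs(c1) ** 2) * np.mean(np.abs(c2) ** 2))
    return num / den   # magnitude lies in [0, 1]
```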

  15. Density structures inside the plasmasphere: Cluster observations

    DEFF Research Database (Denmark)

    Darrouzet, F.; Decreau, P.M.E.; De Keyser, J.

    2004-01-01

    The electron density profiles derived from the EFW and WHISPER instruments on board the four Cluster spacecraft reveal density structures inside the plasmasphere and at its outer boundary, the plasmapause. We have conducted a statistical study to characterize these density structures. We focus on the plasmasphere crossing on 11 April 2002, during which Cluster observed several density irregularities inside the plasmasphere, as well as a plasmaspheric plume. We derive the density gradient vectors from simultaneous density measurements by the four spacecraft. We also determine the normal velocity of the boundaries of the plume and of the irregularities from the time delays between those boundaries in the four individual density profiles, assuming they are planar. These new observations yield novel insights about the occurrence of density irregularities, their geometry and their dynamics. These in...

  16. Elastic reflection waveform inversion with variable density

    KAUST Repository

    Li, Yuanyuan

    2017-08-17

    Elastic full waveform inversion (FWI) provides a better description of the subsurface than that given by the acoustic assumption. However, it suffers from a more serious cycle-skipping problem than the latter. Reflection waveform inversion (RWI) provides a method to build a good background model, which can serve as an initial model for elastic FWI. Therefore, we introduce the concept of RWI for elastic media, and propose elastic RWI with variable density. We apply Born modeling to generate the synthetic reflection data by using optimized perturbations of P- and S-wave velocities and density. The inversion for the perturbations in P- and S-wave velocities and density is similar to elastic least-squares reverse time migration (LSRTM). An incorrect initial model will lead to misfits at the far offsets of reflections; these misfits can thus be utilized to update the background velocity. We optimize the perturbation and background models in a nested approach. Numerical tests on the Marmousi model demonstrate that our method is able to build reasonably good background models for elastic FWI in the absence of low frequencies, and that it can deal with the variable density that is needed in real cases.

  17. Molecular conformational analysis, vibrational spectra and normal coordinate analysis of trans-1,2-bis(3,5-dimethoxy phenyl)-ethene based on density functional theory calculations.

    Science.gov (United States)

    Joseph, Lynnette; Sajan, D; Chaitanya, K; Isac, Jayakumary

    2014-03-25

    The conformational behavior and structural stability of trans-1,2-bis(3,5-dimethoxy phenyl)-ethene (TDBE) were investigated by using density functional theory (DFT) method with the B3LYP/6-311++G(d,p) basis set combination. The vibrational wavenumbers of TDBE were computed at DFT level and complete vibrational assignments were made on the basis of normal coordinate analysis calculations (NCA). The DFT force field transformed to natural internal coordinates was corrected by a well-established set of scale factors that were found to be transferable to the title compound. The infrared and Raman spectra were also predicted from the calculated intensities. The observed Fourier transform infrared (FTIR) and Fourier transform (FT) Raman vibrational wavenumbers were analyzed and compared with the theoretically predicted vibrational spectra. Comparison of the simulated spectra with the experimental spectra provides important information about the ability of the computational method to describe the vibrational modes. Information about the size, shape, charge density distribution and site of chemical reactivity of the molecules has been obtained by mapping electron density isosurface with electrostatic potential surfaces (ESP). Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Recovering Risk-Neutral Densities from Brazilian Interest Rate Options

    Directory of Open Access Journals (Sweden)

    José Renato Haas Ornelas

    2011-03-01

    Building a Risk-Neutral Density (RND) from options data is one useful way of extracting market expectations about a financial variable. For a sample of IDI (Brazilian Interbank Deposit Rate Index) options from 1998 to 2009, this paper estimates the option-implied Risk-Neutral Densities for the Brazilian short rate using three methods: Shimko, Mixture of Two Log-Normals and Generalized Beta of the Second Kind. Our in-sample goodness-of-fit evaluation shows that the Mixture of Log-Normals method provides a better fit to the options data than the other two methods. The shape of the log-normal distributions seems to fit well the mean-reverting dynamics of Brazilian interest rates. We have also calculated the RND-implied skewness, showing how it could have provided early-warning signals of the monetary policy outcomes in 2002 and 2003. Overall, Risk-Neutral Densities implied by IDI options proved to be a useful tool for extracting market expectations about future outcomes of monetary policy.
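
    A small sketch of the mixture-of-two-lognormals density used as one of the three RND candidates; parameter names are illustrative, and in practice the five parameters would be fitted by minimizing pricing errors against observed option prices rather than set by hand.

```python
import numpy as np
from scipy.stats import lognorm

def mixture_lognormal_pdf(x, w, mu1, sigma1, mu2, sigma2):
    """Risk-neutral density modelled as a two-component lognormal mixture.

    w is the weight of the first component; (mu_i, sigma_i) parameterize log(x).
    """
    pdf1 = lognorm.pdf(x, s=sigma1, scale=np.exp(mu1))
    pdf2 = lognorm.pdf(x, s=sigma2, scale=np.exp(mu2))
    return w * pdf1 + (1.0 - w) * pdf2
```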

  19. Normal CT characteristics of the thymus in adults

    Energy Technology Data Exchange (ETDEWEB)

    Simanovsky, Natalia, E-mail: natalias@hadassah.org.il [Department of Medical Imaging, Hadassah - Hebrew University Medical Center, Jerusalem (Israel); Hiller, Nurith; Loubashevsky, Natali; Rozovsky, Katya [Department of Medical Imaging, Hadassah - Hebrew University Medical Center, Jerusalem (Israel)

    2012-11-15

    Background: The thymus changes with age. Its shape and the proportion of solid tissue and fat vary between individuals, yet there is no comprehensive work describing the size and morphology of the normal thymus on CT. As a result, many adults with some preserved soft tissue in the thymus may undergo extensive work-up to exclude mediastinal tumor. Our aim was to quantify CT characteristics of the normal thymus in an adult population. Methods: CT chest scans of 194 trauma patients aged 14-78 years (mean 52.6 years), were retrospectively reviewed. The density, volume, shape and predominant side of the thymus were recorded for 56 patients in whom some solid tissue was preserved. Statistical analysis of these characteristics according to the patient age and gender was performed. Results: Thymic density and volume decreased progressively with age. No solid tissue component was seen in the thymus in patients older than 54 years. In the majority of patients, the thymus had an arrowhead shape, with middle position. However, great variability in thymic shape and border were noted. There was a highly significant relationship between density and patient age (p < 0.0001). Conclusion: We hope that our work will help in the definition of normal thymic CT parameters in adults, help to prevent unnecessary and expensive imaging procedures, and reduce patient exposure to ionizing radiation.

  20. Normal CT characteristics of the thymus in adults

    International Nuclear Information System (INIS)

    Simanovsky, Natalia; Hiller, Nurith; Loubashevsky, Natali; Rozovsky, Katya

    2012-01-01

    Background: The thymus changes with age. Its shape and the proportion of solid tissue and fat vary between individuals, yet there is no comprehensive work describing the size and morphology of the normal thymus on CT. As a result, many adults with some preserved soft tissue in the thymus may undergo extensive work-up to exclude mediastinal tumor. Our aim was to quantify CT characteristics of the normal thymus in an adult population. Methods: CT chest scans of 194 trauma patients aged 14–78 years (mean 52.6 years), were retrospectively reviewed. The density, volume, shape and predominant side of the thymus were recorded for 56 patients in whom some solid tissue was preserved. Statistical analysis of these characteristics according to the patient age and gender was performed. Results: Thymic density and volume decreased progressively with age. No solid tissue component was seen in the thymus in patients older than 54 years. In the majority of patients, the thymus had an arrowhead shape, with middle position. However, great variability in thymic shape and border were noted. There was a highly significant relationship between density and patient age (p < 0.0001). Conclusion: We hope that our work will help in the definition of normal thymic CT parameters in adults, help to prevent unnecessary and expensive imaging procedures, and reduce patient exposure to ionizing radiation.

  1. Still rethinking the value of high wood density.

    Science.gov (United States)

    Larjavaara, Markku; Muller-Landau, Helene C

    2012-01-01

    In a previous paper, we questioned the traditional interpretation of the advantages and disadvantages of high wood density (Functional Ecology 24: 701-705). Niklas and Spatz (American Journal of Botany 97: 1587-1594) challenged the biomechanical relevance of studying properties of dry wood, including dry wood density, and stated that we erred in our claims regarding scaling. We first present the full derivation of our previous claims regarding scaling. We then examine how the fresh modulus of rupture and the elastic modulus scale with dry wood density and compare these scaling relationships with those for dry mechanical properties, using almost exactly the same data set analyzed by Niklas and Spatz. The derivation shows that given our assumptions that the modulus of rupture and elastic modulus are both proportional to wood density, the resistance to bending is inversely proportional to wood density and strength is inversely proportional with the square root of wood density, exactly as we previously claimed. The analyses show that the elastic modulus of fresh wood scales proportionally with wood density (exponent 1.05, 95% CI 0.90-1.11) but that the modulus of rupture of fresh wood does not, scaling instead with the 1.25 power of wood density (CI 1.18-1.31). The deviation from proportional scaling for modulus of rupture is so small that our central conclusion remains correct: for a given construction cost, trees with lower wood density have higher strength and higher resistance to bending.
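
    A compact reconstruction of the scaling argument, written under the abstract's stated assumption that both the modulus of rupture and the elastic modulus are proportional to wood density, and comparing beams of equal construction cost (equal dry mass per unit length). This is a sketch of the logic, not a quotation of the authors' derivation.

```latex
% Equal mass per unit length: cross-sectional area A \propto 1/\rho.
% Geometrically similar sections: I \propto A^{2}, section modulus S \propto A^{3/2}.
\begin{align*}
  \text{resistance to bending: } EI &\propto \rho \cdot \rho^{-2} = \rho^{-1},\\
  \text{strength: } \mathrm{MOR}\cdot S &\propto \rho \cdot \rho^{-3/2} = \rho^{-1/2},
\end{align*}
% i.e., at fixed cost, lower wood density gives greater flexural rigidity (\propto 1/\rho)
% and greater strength (\propto 1/\sqrt{\rho}), matching the claims in the abstract.
```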

  2. Normalization of natural gas composition data measured by gas chromatography

    International Nuclear Information System (INIS)

    Milton, Martin J T; Harris, Peter M; Brown, Andrew S; Cowper, Chris J

    2009-01-01

    The composition of natural gas determined by gas chromatography is routinely used as the basis for calculating physico-chemical properties of the gas. Since the data measured by gas chromatography have particular statistical properties, the methods used to determine the composition can make use of a priori assumptions about the statistical model for the data. We discuss a generalized approach to determining the composition, and show that there are particular statistical models for the data for which the generalized approach reduces to the widely used method of post-normalization. We also show that the post-normalization approach provides reasonable estimates of the composition for cases where it cannot be shown to arise rigorously from the statistical structure of the data
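
    The post-normalization special case amounts to rescaling the raw amount fractions so that they sum to exactly one; a minimal sketch with illustrative numbers (the generalized estimator discussed in the paper is not reproduced here):

```python
import numpy as np

def post_normalize(raw_fractions):
    """Rescale measured amount fractions so they sum to one."""
    x = np.asarray(raw_fractions, dtype=float)
    return x / x.sum()

# Illustrative raw mole fractions summing to 0.998
print(post_normalize([0.949, 0.030, 0.010, 0.009]))
```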

  3. Impacts of cloud overlap assumptions on radiative budgets and heating fields in convective regions

    Science.gov (United States)

    Wang, XiaoCong; Liu, YiMin; Bao, Qing

    2016-01-01

    Impacts of cloud overlap assumptions on radiative budgets and heating fields are explored with the aid of a cloud-resolving model (CRM), which provided cloud geometry as well as cloud micro and macro properties. Large-scale forcing data to drive the CRM are from the TRMM Kwajalein Experiment and the Global Atmospheric Research Program's Atlantic Tropical Experiment field campaigns, during which abundant convective systems were observed. The investigated overlap assumptions include those that were traditional and widely used in the past and the one that was recently addressed by Hogan and Illingworth (2000), in which the vertically projected cloud fraction is expressed by a linear combination of maximum and random overlap, with the weighting coefficient depending on the so-called decorrelation length Lcf. Results show that both shortwave and longwave cloud radiative forcings (SWCF/LWCF) are significantly underestimated under maximum (MO) and maximum-random (MRO) overlap assumptions, whereas they are remarkably overestimated under the random overlap (RO) assumption in comparison with that using CRM inherent cloud geometry. These biases can reach as high as 100 W m⁻² for SWCF and 60 W m⁻² for LWCF. By its very nature, the general overlap (GenO) assumption exhibits an encouraging performance on both SWCF and LWCF simulations, with the biases reduced almost threefold compared with traditional overlap assumptions. The superiority of the GenO assumption is also manifested in the simulation of shortwave and longwave radiative heating fields, which are either significantly overestimated or underestimated under traditional overlap assumptions. The study also points out the deficiency of assuming a constant Lcf in the GenO assumption. Further examinations indicate that the CRM-diagnosed Lcf varies among different cloud types and tends to be stratified in the vertical. The new parameterization that takes into account the variation of Lcf in the vertical well reproduces such a relationship and
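
    For two layers, the general overlap (GenO) assumption of Hogan and Illingworth (2000) blends maximum and random overlap with a weight set by the decorrelation length Lcf. A minimal sketch with illustrative variable names, assuming the exponential weighting form:

```python
import numpy as np

def combined_cloud_fraction(c1, c2, dz, l_cf):
    """Projected cloud cover of two layers under the general overlap assumption.

    c1, c2 : cloud fractions of the two layers
    dz     : vertical separation between the layers
    l_cf   : decorrelation length Lcf
    """
    alpha = np.exp(-dz / l_cf)      # alpha -> 1: maximum overlap; alpha -> 0: random overlap
    c_max = max(c1, c2)             # maximum overlap
    c_rand = c1 + c2 - c1 * c2      # random overlap
    return alpha * c_max + (1.0 - alpha) * c_rand
```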

  4. On the normalization of the minimum free energy of RNAs by sequence length.

    Science.gov (United States)

    Trotta, Edoardo

    2014-01-01

    The minimum free energy (MFE) of ribonucleic acids (RNAs) increases at an apparent linear rate with sequence length. Simple indices, obtained by dividing the MFE by the number of nucleotides, have been used for a direct comparison of the folding stability of RNAs of various sizes. Although this normalization procedure has been used in several studies, the relationship between normalized MFE and length has not yet been investigated in detail. Here, we demonstrate that the variation of MFE with sequence length is not linear and is significantly biased by the mathematical formula used for the normalization procedure. For this reason, the normalized MFEs strongly decrease as hyperbolic functions of length and produce unreliable results when applied for the comparison of sequences with different sizes. We also propose a simple modification of the normalization formula that corrects the bias enabling the use of the normalized MFE for RNAs longer than 40 nt. Using the new corrected normalized index, we analyzed the folding free energies of different human RNA families showing that most of them present an average MFE density more negative than expected for a typical genomic sequence. Furthermore, we found that a well-defined and restricted range of MFE density characterizes each RNA family, suggesting the use of our corrected normalized index to improve RNA prediction algorithms. Finally, in coding and functional human RNAs the MFE density appears scarcely correlated with sequence length, consistent with a negligible role of thermodynamic stability demands in determining RNA size.

  5. False assumptions.

    Science.gov (United States)

    Swaminathan, M

    1997-01-01

    Indian women do not have to be told the benefits of breast feeding or "rescued from the clutches of wicked multinational companies" by international agencies. There is no proof that breast feeding has declined in India; in fact, a 1987 survey revealed that 98% of Indian women breast feed. Efforts to promote breast feeding among the middle classes rely on such initiatives as the "baby friendly" hospital where breast feeding is promoted immediately after birth. This ignores the 76% of Indian women who give birth at home. Blaming this unproved decline in breast feeding on multinational companies distracts attention from more far-reaching and intractable effects of social change. While the Infant Milk Substitutes Act is helpful, it also deflects attention from more pressing issues. Another false assumption is that Indian women are abandoning breast feeding to comply with the demands of employment, but research indicates that most women give up employment for breast feeding, despite the economic cost to their families. Women also seek work in the informal sector to secure the flexibility to meet their child care responsibilities. Instead of being concerned about "teaching" women what they already know about the benefits of breast feeding, efforts should be made to remove the constraints women face as a result of their multiple roles and to empower them with the support of families, governmental policies and legislation, employers, health professionals, and the media.

  6. Volume-controlled histographic analysis of pulmonary parenchyma in normal and diffuse parenchymal lung disease: a pilot study

    International Nuclear Information System (INIS)

    Park, Hyo Yong; Lee, Jongmin; Kim, Jong Seob; Won, Chyl Ho; Kang, Duk Sik; Kim, Myoung Nam

    2000-01-01

    To evaluate the clinical usefulness of a home-made histographic analysis system using a lung volume controller. Our study involved ten healthy volunteers, ten emphysema patients, and two idiopathic pulmonary fibrosis (IPF) patients. Using a home-made lung volume controller, images were obtained in the upper, middle, and lower lung zones at 70%, 50%, and 20% of vital capacity. Electron beam tomography was used and scanning parameters were single slice mode, 10-mm slice thickness, 0.4-second scan time, and 35-cm field of view. Using a home-made semi-automated program, pulmonary parenchyma was isolated and a histogram then obtained. Seven histographic parameters, namely mean density (MD), density at maximal frequency (DMF), maximal ascending gradient (MAG), maximal ascending gradient density (MAGD), maximal descending gradient (MDG), maximal descending gradient density (MDGD), and full width at half maximum (FWHM) were derived from the histogram. We compared normal controls with abnormal groups including emphysema and IPF patients at the same respiration levels. A normal histographic zone with ± 1 standard deviation was obtained. Histographic curves of normal controls shifted toward the high density level, and the width of the normal zone increased as the level of inspiration decreased. In ten normal controls, MD, DMF, MAG, MAGD, MDG, MDGD, and FWHM readings at a 70% inspiration level were lower than those at 20% (p less than 0.05). At the same level of inspiration, histograms of emphysema patients were located at a lower density area than those of normal controls. As inspiration status decreased, histograms of emphysema patients showed diminished shift compared with those of normal controls. At 50% and 20% inspiration levels, the MD, DMF, and MAGD readings of emphysema patients were significantly lower than those of normal controls (p less than 0.05). Compared with those of normal controls, histograms of the two IPF patients obtained at three inspiration levels were

  7. Volume-controlled histographic analysis of pulmonary parenchyma in normal and diffuse parenchymal lung disease: a pilot study

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyo Yong; Lee, Jongmin; Kim, Jong Seob; Won, Chyl Ho; Kang, Duk Sik [School of Medicine, Kyungpook National University, Taegu (Korea, Republic of); Kim, Myoung Nam [The University of Iowa (United States)

    2000-06-01

    To evaluate the clinical usefulness of a home-made histographic analysis system using a lung volume controller. Our study involved ten healthy volunteers, ten emphysema patients, and two idiopathic pulmonary fibrosis (IPF) patients. Using a home-made lung volume controller, images were obtained in the upper, middle, and lower lung zones at 70%, 50%, and 20% of vital capacity. Electron beam tomography was used and scanning parameters were single slice mode, 10-mm slice thickness, 0.4-second scan time, and 35-cm field of view. Using a home-made semi-automated program, pulmonary parenchyma was isolated and a histogram then obtained. Seven histographic parameters, namely mean density (MD), density at maximal frequency (DMF), maximal ascending gradient (MAG), maximal ascending gradient density (MAGD), maximal descending gradient (MDG), maximal descending gradient density (MDGD), and full width at half maximum (FWHM) were derived from the histogram. We compared normal controls with abnormal groups including emphysema and IPF patients at the same respiration levels. A normal histographic zone with ± 1 standard deviation was obtained. Histographic curves of normal controls shifted toward the high density level, and the width of the normal zone increased as the level of inspiration decreased. In ten normal controls, MD, DMF, MAG, MAGD, MDG, MDGD, and FWHM readings at a 70% inspiration level were lower than those at 20% (p less than 0.05). At the same level of inspiration, histograms of emphysema patients were located at a lower density area than those of normal controls. As inspiration status decreased, histograms of emphysema patients showed diminished shift compared with those of normal controls. At 50% and 20% inspiration levels, the MD, DMF, and MAGD readings of emphysema patients were significantly lower than those of normal controls (p less than 0.05). Compared with those of normal controls, histograms of the two IPF patients obtained at three inspiration levels were

  8. 7 CFR 1980.476 - Transfer and assumptions.

    Science.gov (United States)

    2010-01-01

    ...-354 449-30 to recover its pro rata share of the actual loss at that time. In completing Form FmHA or... the lender on liquidations and property management. A. The State Director may approve all transfer and... Director will notify the Finance Office of all approved transfer and assumption cases on Form FmHA or its...

  9. Temperature- and density-dependent x-ray scattering in a low-Z plasma

    International Nuclear Information System (INIS)

    Brown, R.T.

    1976-06-01

    A computer program is described which calculates temperature- and density-dependent differential and total coherent and incoherent x-ray scattering cross sections for a low-Z scattering medium. Temperature and density are arbitrary within the limitations of the validity of local thermodynamic equilibrium, since ionic populations are calculated under this assumption. Scattering cross sections are calculated in the form factor approximation. The scattering medium may consist of any mixture of elements with Z less than or equal to 8, with this limitation imposed by the availability of atomic data

  10. α-Defensins Induce a Post-translational Modification of Low Density Lipoprotein (LDL) That Promotes Atherosclerosis at Normal Levels of Plasma Cholesterol.

    Science.gov (United States)

    Abu-Fanne, Rami; Maraga, Emad; Abd-Elrahman, Ihab; Hankin, Aviel; Blum, Galia; Abdeen, Suhair; Hijazi, Nuha; Cines, Douglas B; Higazi, Abd Al-Roof

    2016-02-05

    Approximately one-half of the patients who develop clinical atherosclerosis have normal or only modest elevations in plasma lipids, indicating that additional mechanisms contribute to pathogenesis. In view of increasing evidence that inflammation contributes to atherogenesis, we studied the effect of human neutrophil α-defensins on low density lipoprotein (LDL) trafficking, metabolism, vascular deposition, and atherogenesis using transgenic mice expressing human α-defensins in their polymorphonuclear leukocytes (Def(+/+)). Accelerated Def(+/+) mice developed α-defensin·LDL complexes that accelerate the clearance of LDL from the circulation accompanied by enhanced vascular deposition and retention of LDL, induction of endothelial cathepsins, increased endothelial permeability to LDL, and the development of lipid streaks in the aortic roots when fed a regular diet and at normal plasma levels of LDL. Transplantation of bone marrow from Def(+/+) to WT mice increased LDL clearance, increased vascular permeability, and increased vascular deposition of LDL, whereas transplantation of WT bone marrow to Def(+/+) mice prevented these outcomes. The same outcome was obtained by treating Def(+/+) mice with colchicine to inhibit the release of α-defensins. These studies identify a potential new link between inflammation and the development of atherosclerosis. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  11. The impact of sample non-normality on ANOVA and alternative methods.

    Science.gov (United States)

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
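
    A minimal illustration of the comparison being made, using scipy on synthetic lognormal (distinctly non-normal) samples; the simulation design of the paper is not reproduced here, and the numbers below are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three samples from a distinctly non-normal (lognormal) population, one of them shifted.
groups = [rng.lognormal(mean=m, sigma=1.0, size=30) for m in (0.0, 0.0, 0.5)]

f_stat, p_anova = stats.f_oneway(*groups)   # classical one-way ANOVA
h_stat, p_kw = stats.kruskal(*groups)       # rank-based Kruskal-Wallis test
print(f"ANOVA p = {p_anova:.3f}, Kruskal-Wallis p = {p_kw:.3f}")
```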

  12. High hydrostatic pressure specifically affects molecular dynamics and shape of low-density lipoprotein particles

    Science.gov (United States)

    Golub, M.; Lehofer, B.; Martinez, N.; Ollivier, J.; Kohlbrecher, J.; Prassl, R.; Peters, J.

    2017-04-01

    Lipid composition of human low-density lipoprotein (LDL) and its physicochemical characteristics are relevant for proper functioning of lipid transport in the blood circulation. To explore dynamical and structural features of LDL particles with either a normal or a triglyceride-rich lipid composition we combined coherent and incoherent neutron scattering methods. The investigations were carried out under high hydrostatic pressure (HHP), which is a versatile tool to study the physicochemical behavior of biomolecules in solution at a molecular level. Within both neutron techniques we applied HHP to probe the shape and degree of freedom of the possible motions (within the time windows of 15 and 100 ps) and consequently the flexibility of LDL particles. We found that HHP does not change the types of motion in LDL, but influences the portion of motions participating. Contrary to our assumption that lipoprotein particles, like membranes, are highly sensitive to pressure we determined that LDL copes surprisingly well with high pressure conditions, although the lipid composition, particularly the triglyceride content of the particles, impacts the molecular dynamics and shape arrangement of LDL under pressure.

  13. Interface Input/Output Automata: Splitting Assumptions from Guarantees

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Nyman, Ulrik; Wasowski, Andrzej

    2006-01-01

    's \\IOAs [11], relying on a context dependent notion of refinement based on relativized language inclusion. There are two main contributions of the work. First, we explicitly separate assumptions from guarantees, increasing the modeling power of the specification language and demonstrating an interesting...

  14. Constraints on the cosmological relativistic energy density

    International Nuclear Information System (INIS)

    Zentner, Andrew R.; Walker, Terry P.

    2002-01-01

    We discuss bounds on the cosmological relativistic energy density as a function of redshift, reviewing the big bang nucleosynthesis and cosmic microwave background bounds, updating bounds from large scale structure, and introducing a new bound from the magnitude-redshift relation for type Ia supernovae. We conclude that the standard and well-motivated assumption that relativistic energy is negligible during recent epochs is not necessitated by extant data. We then demonstrate the utility of these bounds by constraining the mass and lifetime of a hypothetical massive big bang relic particle

  15. Truncation scheme of time-dependent density-matrix approach II

    Energy Technology Data Exchange (ETDEWEB)

    Tohyama, Mitsuru [Kyorin University School of Medicine, Mitaka, Tokyo (Japan); Schuck, Peter [Institut de Physique Nucleaire, IN2P3-CNRS, Universite Paris-Sud, Orsay (France); Laboratoire de Physique et de Modelisation des Milieux Condenses, CNRS et Universite Joseph Fourier, Grenoble (France)

    2017-09-15

    A truncation scheme of the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy for reduced density matrices, where a three-body density matrix is approximated by two-body density matrices, is improved to take into account a normalization effect. The truncation scheme is tested for the Lipkin model. It is shown that the obtained results are in good agreement with the exact solutions. (orig.)

  16. Formation and disintegration of high-density nuclear matter in heavy-ion collisions

    International Nuclear Information System (INIS)

    Kitazoe, Yasuhiro; Matsuoka, Kazuo; Sano, Mitsuo

    1976-01-01

    The formation of high-density nuclear matter, which may be expected to be attained in high-energy heavy-ion collisions, and the subsequent disintegration of the dense matter are investigated by means of hydrodynamics. Head-on collisions of identical nuclei are considered in the nonrelativistic approximation. The compressed density cannot exceed 4 times the normal one so long as only nucleon degrees of freedom are considered, and can become higher than 4 times when other degrees of freedom, such as the production of mesons and nucleon isobars, are additionally taken into account. The angular distributions of ejected particles peak both forwards and backwards at low collision energies, corresponding to the formation of nuclear densities less than 2 times the normal density, and become isotropic at 2 times the normal density. As the collision energy increases further, lateral ejection is gradually intensified. (auth.)

  17. Geomagnetic polarity epochs: age and duration of the olduvai normal polarity event

    Science.gov (United States)

    Gromme, C.S.; Hay, R.L.

    1971-01-01

    New data show that the Olduvai normal geomagnetic polarity event is represented in Olduvai Gorge, Tanzania, by rocks covering a time span of roughly 0.1 to 0.2 my and is no older than 2.0 my. Hence the long normal polarity event of this age that is seen in deep-sea sediment cores and in magnetic profiles over oceanic ridges should be called the Olduvai event. The lava from which the Gilsá event was defined may have been erupted during the Olduvai event and, if so, the term Gilsá should now be abandoned. Many dated lavas that were originally assigned to the Olduvai event represent one or two much shorter normal polarity events that preceded the Olduvai event; these are herein named the Réunion normal polarity events. This revision brings the geomagnetic reversal time scale into conformity with the one implied by assumptions of uniform sedimentation rates on the ocean floor and uniform rates of sea-floor spreading. © 1971.

  18. Impact of one-layer assumption on diffuse reflectance spectroscopy of skin

    Science.gov (United States)

    Hennessy, Ricky; Markey, Mia K.; Tunnell, James W.

    2015-02-01

    Diffuse reflectance spectroscopy (DRS) can be used to noninvasively measure skin properties. To extract skin properties from DRS spectra, a model is needed that relates the reflectance to the tissue properties. Most models are based on the assumption that skin is homogeneous. In reality, skin is composed of multiple layers, and the homogeneity assumption can lead to errors. In this study, we analyze the errors caused by the homogeneity assumption. This is accomplished by creating realistic skin spectra using a computational model, then extracting properties from those spectra using a one-layer model. The extracted parameters are then compared to the parameters used to create the modeled spectra. We used a wavelength range of 400 to 750 nm and a source detector separation of 250 μm. Our results show that use of a one-layer skin model causes underestimation of hemoglobin concentration [Hb] and melanin concentration [mel]. Additionally, the magnitude of the error is dependent on epidermal thickness. The one-layer assumption also causes [Hb] and [mel] to be correlated. Oxygen saturation is overestimated when it is below 50% and underestimated when it is above 50%. We also found that the vessel radius factor used to account for pigment packaging is correlated with epidermal thickness.

  19. Modelling sexual transmission of HIV: testing the assumptions, validating the predictions

    Science.gov (United States)

    Baggaley, Rebecca F.; Fraser, Christophe

    2010-01-01

    Purpose of review To discuss the role of mathematical models of sexual transmission of HIV: the methods used and their impact. Recent findings We use mathematical modelling of “universal test and treat” as a case study to illustrate wider issues relevant to all modelling of sexual HIV transmission. Summary Mathematical models are used extensively in HIV epidemiology to deduce the logical conclusions arising from one or more sets of assumptions. Simple models lead to broad qualitative understanding, while complex models can encode more realistic assumptions and thus be used for predictive or operational purposes. An overreliance on model analysis where assumptions are untested and input parameters cannot be estimated should be avoided. Simple models providing bold assertions have provided compelling arguments in recent public health policy, but may not adequately reflect the uncertainty inherent in the analysis. PMID:20543600

  20. The crux of the method: assumptions in ordinary least squares and logistic regression.

    Science.gov (United States)

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
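
    To make the contrast concrete, here is a minimal sketch (not from the paper) that fits an ordinary least squares linear probability model and a logistic regression to the same simulated binary outcome; the simulation settings and the use of statsmodels are illustrative assumptions. The OLS fit can return predicted probabilities outside [0, 1], which the logistic fit cannot.

      # Illustrative comparison of OLS (linear probability model) and logistic
      # regression on a simulated binary outcome; all settings are assumptions.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 1000
      x = rng.normal(size=n)
      p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x)))   # true logistic relationship
      y = rng.binomial(1, p)

      X = sm.add_constant(x)
      ols = sm.OLS(y, X).fit()              # linear probability model
      logit = sm.Logit(y, X).fit(disp=0)    # logistic regression

      out_of_range = ((ols.fittedvalues < 0) | (ols.fittedvalues > 1)).sum()
      print("OLS fitted probabilities outside [0, 1]:", int(out_of_range))
      print("Logistic coefficients (const, x):", logit.params.round(2))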

  1. Efficient pseudorandom generators based on the DDH assumption

    NARCIS (Netherlands)

    Rezaeian Farashahi, R.; Schoenmakers, B.; Sidorenko, A.; Okamoto, T.; Wang, X.

    2007-01-01

    A family of pseudorandom generators based on the decisional Diffie-Hellman assumption is proposed. The new construction is a modified and generalized version of the Dual Elliptic Curve generator proposed by Barker and Kelsey. Although the original Dual Elliptic Curve generator is shown to be

  2. Bouguer density analysis using Nettleton method at Banten NPP site

    International Nuclear Information System (INIS)

    Yuliastuti; Hadi Suntoko; Yarianto SBS

    2017-01-01

    Sub-surface information becomes crucial in determining a feasible NPP site that is safe from external hazards. A gravity survey, which yields density information, is essential for understanding the sub-surface structure. Nevertheless, an overcorrected or undercorrected anomaly will lead to false interpretation. Therefore, a density correction in terms of the near-surface average density, or Bouguer density, needs to be calculated. The objective of this paper is to estimate and analyze the Bouguer density using the Nettleton method at the Banten NPP site. The methodology used in this paper is the Nettleton method, applied to three different slices (A-B, A-C and A-D) with assumed densities ranging between 1700 and 3300 kg/m3. The Nettleton method is based on the minimum correlation between the gravity anomaly and topography to determine the density correction. The results show that for slice A-B, which covers rough topographic differences, the Nettleton method fails, while for the other two slices the Nettleton method yields different density values: 2700 kg/m3 for A-C and 2300 kg/m3 for A-D. A-C provides the lowest correlation value and represents the Upper Banten tuff and Gede Mt. volcanic rocks, in accordance with the Quaternary rocks present in the studied area. (author)
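
    As an illustration of the Nettleton criterion described above (choose the trial density whose Bouguer anomaly is least correlated with topography), the sketch below scans a range of densities on synthetic data; the profile, the noise level, and the 0.0419 mGal per (g/cm3·m) Bouguer slab factor are used for illustration only and are not values from the study.

      # Nettleton-style density scan: pick the trial density whose simple Bouguer
      # anomaly is least correlated with topography. Synthetic data, assumed values.
      import numpy as np

      rng = np.random.default_rng(1)
      h = 100.0 + 50.0 * np.sin(np.linspace(0, 3 * np.pi, 200))      # topography [m]
      true_rho = 2.3                                                  # g/cm^3
      free_air = 0.0419 * true_rho * h + rng.normal(0, 0.5, h.size)   # synthetic anomaly [mGal]

      trial_rho = np.arange(1.7, 3.31, 0.05)          # 1700-3300 kg/m3
      corr = [abs(np.corrcoef(free_air - 0.0419 * rho * h, h)[0, 1]) for rho in trial_rho]
      best = trial_rho[int(np.argmin(corr))]
      print(f"Nettleton estimate: {best:.2f} g/cm^3 (true value {true_rho})")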

  3. On the normalization of the minimum free energy of RNAs by sequence length.

    Directory of Open Access Journals (Sweden)

    Edoardo Trotta

    The minimum free energy (MFE) of ribonucleic acids (RNAs) increases at an apparent linear rate with sequence length. Simple indices, obtained by dividing the MFE by the number of nucleotides, have been used for a direct comparison of the folding stability of RNAs of various sizes. Although this normalization procedure has been used in several studies, the relationship between normalized MFE and length has not yet been investigated in detail. Here, we demonstrate that the variation of MFE with sequence length is not linear and is significantly biased by the mathematical formula used for the normalization procedure. For this reason, the normalized MFEs strongly decrease as hyperbolic functions of length and produce unreliable results when used to compare sequences of different sizes. We also propose a simple modification of the normalization formula that corrects the bias, enabling the use of the normalized MFE for RNAs longer than 40 nt. Using the new corrected normalized index, we analyzed the folding free energies of different human RNA families, showing that most of them present an average MFE density more negative than expected for a typical genomic sequence. Furthermore, we found that a well-defined and restricted range of MFE density characterizes each RNA family, suggesting the use of our corrected normalized index to improve RNA prediction algorithms. Finally, in coding and functional human RNAs the MFE density appears scarcely correlated with sequence length, consistent with a negligible role of thermodynamic stability demands in determining RNA size.
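
    The bias described above can be reproduced with a toy model: if the MFE grows roughly linearly with length but with a non-zero intercept, MFE(L) ≈ aL + b, then the naive index MFE/L = a + b/L decays hyperbolically with L even though the per-nucleotide stability a is constant. The sketch below illustrates this together with a simple intercept-corrected index; the coefficients and lengths are assumptions, not the values fitted in the paper.

      # Toy illustration of the length bias of the naive MFE/L index.
      # MFE(L) = a*L + b plus noise; a, b, and the lengths are assumptions.
      import numpy as np

      rng = np.random.default_rng(2)
      a, b = -0.30, 8.0                      # slope (kcal/mol per nt) and intercept
      L = np.arange(40, 2000, 20)
      mfe = a * L + b + rng.normal(0, 2.0, L.size)

      naive = mfe / L                        # behaves like a + b/L: hyperbolic in L
      corrected = (mfe - b) / L              # intercept-corrected index, roughly flat

      print("naive index, shortest vs longest RNA:", naive[0].round(3), naive[-1].round(3))
      print("corrected index, shortest vs longest RNA:", corrected[0].round(3), corrected[-1].round(3))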

  4. Contemporary assumptions on human nature and work and approach to human potential managing

    Directory of Open Access Journals (Sweden)

    Vujić Dobrila

    2006-01-01

    A general problem of this research is to identify whether there is a relationship between assumptions on human nature and work (McGregor, Argyris, Schein, Steers and Porter) and a general organizational model preference, as well as preferred mechanisms of human resource management. This research was carried out in 2005/2006. The sample consisted of 317 subjects (197 managers, 105 highly educated subordinates and 15 entrepreneurs) in 7 big enterprises and in a group of small business enterprises differing in terms of the entrepreneur's structure and the type of activity. The general hypothesis, that assumptions on human nature and work are statistically significantly connected to the preferred approach (model) of work motivation and commitment, has been confirmed. The specific hypotheses have also been confirmed:
    · Assumptions of the human as a rational economic being correlate statistically significantly with only two mechanisms of the traditional model: the mechanism of work-method control and the working-discipline mechanism.
    · Assumptions of the human as a social being correlate statistically significantly with all mechanisms of engaging employees that belong to the human relations model, except the mechanism of introducing adequate types of rewards for all employees independently of working results.
    · Assumptions of the human as a creative being correlate statistically significantly and positively with the preference for two mechanisms belonging to the human resource model: investing in education and training, and creating conditions for the application of knowledge and skills.
    Young subjects with assumptions of the human as a creative being prefer a much broader repertoire of mechanisms belonging to the human resources model than the remaining categories of subjects in the sample. The connection between assumptions on human nature and the preferred models of engagement appears especially in the sub-sample of managers, in the category of young subjects

  5. Limiting assumptions in molecular modeling: electrostatics.

    Science.gov (United States)

    Marshall, Garland R

    2013-02-01

    Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom failed to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires the use of multipole electrostatics and polarizability in molecular modeling.

  6. CORONAL DENSITY STRUCTURE AND ITS ROLE IN WAVE DAMPING IN LOOPS

    Energy Technology Data Exchange (ETDEWEB)

    Cargill, P. J. [Space and Atmospheric Physics, The Blackett Laboratory, Imperial College, London SW7 2BW (United Kingdom); De Moortel, I.; Kiddie, G., E-mail: p.cargill@imperial.ac.uk [School of Mathematics and Statistics, University of St Andrews, St Andrews, Scotland KY16 9SS (United Kingdom)

    2016-05-20

    It has long been established that gradients in the Alfvén speed, and in particular the plasma density, are an essential part of the damping of waves in the magnetically closed solar corona by mechanisms such as resonant absorption and phase mixing. While models of wave damping often assume a fixed density gradient, in this paper the self-consistency of such calculations is assessed by examining the temporal evolution of the coronal density. It is shown conceptually that for some coronal structures, density gradients can evolve in a way that the wave-damping processes are inhibited. For the case of phase mixing we argue that (a) wave heating cannot sustain the assumed density structure and (b) inclusion of feedback of the heating on the density gradient can lead to a highly structured density, although on long timescales. In addition, transport coefficients well in excess of classical are required to maintain the observed coronal density. Hence, the heating of closed coronal structures by global oscillations may face problems arising from the assumption of a fixed density gradient, and the rapid damping of oscillations may have to be accompanied by a separate (non-wave-based) heating mechanism to sustain the required density structuring.

  7. Suppression of cholesterol synthesis in cultured fibroblasts from a patient with homozygous familial hypercholesterolemia by her own low density lipoprotein density fraction. A possible role of apolipoprotein E

    NARCIS (Netherlands)

    Havekes, L.; Vermeer, B.J.; Wit, E. de

    1980-01-01

    The suppression of cellular cholesterol synthesis by low density lipoprotein (LDL) from a normal and from a homozygous familial hypercholesterolemic subject was measured on normal fibroblasts and on fibroblasts derived from the same homozygous familial hypercholesterolemic patient. On normal

  8. Pion condensation and density isomerism in nuclear matter

    International Nuclear Information System (INIS)

    Hecking, P.; Weise, W.

    1979-01-01

    The possible existence of density isomers in nuclear matter, induced by pion condensation, is discussed; the nuclear equation of state is treated within the framework of the sigma model. Repulsive short-range baryon-baryon correlations, the admixture of Δ (1232) isobars and finite-range pion-baryon vertex form factors are taken into account. The strong dependence of density isomerism on the high density extrapolation of the equation of state for normal nuclear matter is also investigated. We find that, once finite range pion-baryon vertices are introduced, the appearance of density isomers becomes unlikely

  9. Quasi-experimental study designs series-paper 7: assessing the assumptions.

    Science.gov (United States)

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-09-01

    Quasi-experimental designs are gaining popularity in epidemiology and health systems research, in particular for the evaluation of health care practice, programs, and policy, because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.
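
    As a concrete example of one of the five designs, the sketch below estimates a Difference-in-Differences effect with the standard two-way interaction regression on simulated data; the variable names, effect sizes, and the use of statsmodels are illustrative assumptions, not part of the paper.

      # Minimal Difference-in-Differences sketch: in y ~ treated * post, the
      # interaction coefficient is the DiD estimate. Data are simulated.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      n = 2000
      treated = rng.integers(0, 2, n)
      post = rng.integers(0, 2, n)
      effect = 1.5                                  # assumed true treatment effect
      y = (1.0 + 0.5 * treated + 0.8 * post
           + effect * treated * post + rng.normal(0, 1, n))

      df = pd.DataFrame({"y": y, "treated": treated, "post": post})
      fit = smf.ols("y ~ treated * post", data=df).fit()
      print("DiD estimate:", round(fit.params["treated:post"], 2))   # close to 1.5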

  10. Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.

    Science.gov (United States)

    Susan J. Alexander

    1991-01-01

    The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...

  11. Validity of the mockwitness paradigm: testing the assumptions.

    Science.gov (United States)

    McQuiston, Dawn E; Malpass, Roy S

    2002-08-01

    Mockwitness identifications are used to provide a quantitative measure of lineup fairness. Some theoretical and practical assumptions of this paradigm have not been studied in terms of mockwitnesses' decision processes and procedural variation (e.g., instructions, lineup presentation method), and the current experiment was conducted to empirically evaluate these assumptions. Four hundred and eighty mockwitnesses were given physical information about a culprit, received 1 of 4 variations of lineup instructions, and were asked to identify the culprit from either a fair or unfair sequential lineup containing 1 of 2 targets. Lineup bias estimates varied as a result of lineup fairness and the target presented. Mockwitnesses generally reported that the target's physical description was their main source of identifying information. Our findings support the use of mockwitness identifications as a useful technique for sequential lineup evaluation, but only for mockwitnesses who selected only 1 lineup member. Recommendations for the use of this evaluation procedure are discussed.

  12. Androgens in women with anorexia nervosa and normal-weight women with hypothalamic amenorrhea.

    Science.gov (United States)

    Miller, K K; Lawson, E A; Mathur, V; Wexler, T L; Meenaghan, E; Misra, M; Herzog, D B; Klibanski, A

    2007-04-01

    Anorexia nervosa and normal-weight hypothalamic amenorrhea are characterized by hypogonadism and hypercortisolemia. However, it is not known whether these endocrine abnormalities result in reductions in adrenal and/or ovarian androgens or androgen precursors in such women, nor is it known whether relative androgen deficiency contributes to abnormalities in bone density and body composition in this population. Our objective was to determine whether endogenous androgen and dehydroepiandrosterone sulfate (DHEAS) levels: 1) are reduced in women with anorexia nervosa and normal-weight hypothalamic amenorrhea, 2) are reduced further by oral contraceptives in women with anorexia nervosa, and 3) are predictors of weight, body composition, or bone density in such women. We conducted a cross-sectional study at a general clinical research center. A total of 217 women were studied: 137 women with anorexia nervosa not receiving oral contraceptives, 32 women with anorexia nervosa receiving oral contraceptives, 21 normal-weight women with hypothalamic amenorrhea, and 27 healthy eumenorrheic controls. Testosterone, free testosterone, DHEAS, bone density, fat-free mass, and fat mass were assessed. Endogenous total and free testosterone, but not DHEAS, were lower in women with anorexia nervosa than in controls. More marked reductions in both free testosterone and DHEAS were observed in women with anorexia nervosa receiving oral contraceptives. In contrast, normal-weight women with hypothalamic amenorrhea had normal androgen and DHEAS levels. Lower free testosterone, total testosterone, and DHEAS levels predicted lower bone density at most skeletal sites measured, and free testosterone was positively associated with fat-free mass. Androgen levels are low, appear to be even further reduced by oral contraceptive use, and are predictors of bone density and fat-free mass in women with anorexia nervosa. Interventional studies are needed to confirm these findings and determine whether

  13. Educational Technology as a Subversive Activity: Questioning Assumptions Related to Teaching and Leading with Technology

    Science.gov (United States)

    Kruger-Ross, Matthew J.; Holcomb, Lori B.

    2012-01-01

    The use of educational technologies is grounded in the assumptions of teachers, learners, and administrators. Assumptions are choices that structure our understandings and help us make meaning. Current advances in Web 2.0 and social media technologies challenge our assumptions about teaching and learning. The intersection of technology and…

  14. Adaptive Convergence Rates of a Dirichlet Process Mixture of Multivariate Normals

    OpenAIRE

    Tokdar, Surya T.

    2011-01-01

    It is shown that a simple Dirichlet process mixture of multivariate normals offers Bayesian density estimation with adaptive posterior convergence rates. Toward this, a novel sieve for non-parametric mixture densities is explored, and its rate adaptability to various smoothness classes of densities in arbitrary dimension is demonstrated. This sieve construction is expected to offer a substantial technical advancement in studying Bayesian non-parametric mixture models based on stick-breaking p...
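
    For readers unfamiliar with the construction, the sketch below draws one random density from a truncated stick-breaking Dirichlet process mixture of normals; the concentration parameter, base measure, kernel width, and truncation level are illustrative assumptions and are not tied to the theory in the paper.

      # One random density from a truncated stick-breaking DP mixture of normals.
      # alpha, the N(0, 3^2) base measure, sigma, and K are assumptions.
      import numpy as np

      rng = np.random.default_rng(4)
      alpha, K, sigma = 1.0, 50, 0.5

      v = rng.beta(1.0, alpha, K)                                   # stick-breaking fractions
      w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))     # mixture weights
      mu = rng.normal(0.0, 3.0, K)                                  # atoms from the base measure

      x = np.linspace(-8, 8, 400)
      dens = sum(wk * np.exp(-0.5 * ((x - mk) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
                 for wk, mk in zip(w, mu))
      print("integral ~", round(float(dens.sum() * (x[1] - x[0])), 3))   # close to 1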

  15. Child Development Knowledge and Teacher Preparation: Confronting Assumptions.

    Science.gov (United States)

    Katz, Lilian G.

    This paper questions the widely held assumption that acquiring knowledge of child development is an essential part of teacher preparation and teaching competence, especially among teachers of young children. After discussing the influence of culture, parenting style, and teaching style on developmental expectations and outcomes, the paper asserts…

  16. High baryon density from relativistic heavy ion collisions

    Energy Technology Data Exchange (ETDEWEB)

    Pang, Y.; Kahana, S.H. [Brookhaven National Lab., Upton, NY (United States); Schlagel, T.J. [Brookhaven National Lab., Upton, NY (United States)]|[State Univ. of New York, Stony Brook, NY (United States)

    1993-10-01

    A quantitative model, based on hadronic physics, is developed and applied to heavy ion collisions at BNL-AGS energies. This model is in excellent agreement with observed particle spectra in heavy ion collisions using Si beams, where baryon densities of three and four times the normal nuclear matter density (ρ0) are reached. For Au on Au collisions, the authors predict the formation of matter at very high densities (up to 10 ρ0).

  17. Patterns of brain structural connectivity differentiate normal weight from overweight subjects.

    Science.gov (United States)

    Gupta, Arpana; Mayer, Emeran A; Sanmiguel, Claudia P; Van Horn, John D; Woodworth, Davis; Ellingson, Benjamin M; Fling, Connor; Love, Aubrey; Tillisch, Kirsten; Labus, Jennifer S

    2015-01-01

    Alterations in the hedonic component of ingestive behaviors have been implicated as a possible risk factor in the pathophysiology of overweight and obese individuals. Neuroimaging evidence from individuals with increasing body mass index suggests structural, functional, and neurochemical alterations in the extended reward network and associated networks. The aim was to apply a multivariate pattern analysis to distinguish normal weight and overweight subjects based on gray and white-matter measurements. Structural images (N = 120, overweight N = 63) and diffusion tensor images (DTI) (N = 60, overweight N = 30) were obtained from healthy control subjects. For the total sample the mean age for the overweight group (females = 32, males = 31) was 28.77 years (SD = 9.76) and for the normal weight group (females = 32, males = 25) was 27.13 years (SD = 9.62). Regional segmentation and parcellation of the brain images was performed using Freesurfer. Deterministic tractography was performed to measure the normalized fiber density between regions. A multivariate pattern analysis approach was used to examine whether brain measures can distinguish overweight from normal weight individuals. 1. White-matter classification: The classification algorithm, based on 2 signatures with 17 regional connections, achieved 97% accuracy in discriminating overweight individuals from normal weight individuals. For both brain signatures, greater connectivity as indexed by increased fiber density was observed in overweight compared to normal weight between the reward network regions and regions of the executive control, emotional arousal, and somatosensory networks. In contrast, the opposite pattern (decreased fiber density) was found between ventromedial prefrontal cortex and the anterior insula, and between thalamus and executive control network regions. 2. Gray-matter classification: The classification algorithm, based on 2 signatures with 42 morphological features, achieved 69
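
    The classification step can be illustrated in spirit (this is not the study's pipeline) with a cross-validated linear classifier on simulated "connectivity" features; the feature count, group shift, and the scikit-learn estimator are assumptions.

      # Illustrative multivariate pattern analysis: cross-validated linear SVM on
      # simulated connectivity features; all settings are assumptions.
      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      n_per_group, n_features = 60, 17                        # e.g., 17 regional connections
      X0 = rng.normal(0.0, 1.0, (n_per_group, n_features))    # "normal weight" group
      X1 = rng.normal(0.4, 1.0, (n_per_group, n_features))    # "overweight" group, shifted mean
      X = np.vstack([X0, X1])
      y = np.array([0] * n_per_group + [1] * n_per_group)

      clf = make_pipeline(StandardScaler(), LinearSVC())
      print("mean 5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))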

  19. The sufficiency assumption of the reasoned approach to action

    Directory of Open Access Journals (Sweden)

    David Trafimow

    2015-12-01

    The reasoned action approach to understanding and predicting behavior includes the sufficiency assumption. Although variables not included in the theory may influence behavior, these variables work through the variables in the theory. Once the reasoned action variables are included in an analysis, the inclusion of other variables will not increase the variance accounted for in behavioral intentions or behavior. Reasoned action researchers are very concerned with testing whether new variables account for variance (or how much variance traditional variables account for), to see whether they are important, in general or with respect to specific behaviors under investigation. But this approach tacitly assumes that accounting for variance is highly relevant to understanding the production of variance, which is what really is at issue. Based on the variance law, I question this assumption.

  20. Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative

    Science.gov (United States)

    Ahmed, Abdelhamid

    2008-01-01

    The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…

  1. Making Predictions about Chemical Reactivity: Assumptions and Heuristics

    Science.gov (United States)

    Maeyer, Jenine; Talanquer, Vicente

    2013-01-01

    Diverse implicit cognitive elements seem to support but also constrain reasoning in different domains. Many of these cognitive constraints can be thought of as either implicit assumptions about the nature of things or reasoning heuristics for decision-making. In this study we applied this framework to investigate college students' understanding of…

  2. Testing Our Fundamental Assumptions

    Science.gov (United States)

    Kohler, Susanna

    2016-06-01

    Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests!
    Explaining Different Arrival Times. [Figure: artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones] Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics:
    · Intrinsic delay: the photons may simply have been emitted at two different times by the astrophysical source.
    · Delay due to Lorentz invariance violation: perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect.
    · Special-relativistic delay: maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong. This, too, would cause photon velocities to be energy-dependent.
    · Delay due to gravitational potential: perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect.
    If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better), we can provide constraints on these

  3. Analysis and experimental study on hydraulic balance characteristics in density lock

    International Nuclear Information System (INIS)

    Gu Haifeng; Yan Changqi; Sun Furong

    2009-01-01

    Using a simplified theoretical model, the hydraulic balance condition that must be met in the density lock when the reactor operates normally and the density lock is closed is obtained. The main parameters influencing this condition are analyzed, and the results show that the hydraulic balance in the density lock is characterized by self-stability within a certain range. Meanwhile, a simulating experimental loop is built and the self-stability characteristic is verified experimentally. Moreover, experiments are carried out on changes in the flow of the working fluids in the primary circuit during stable operation. The experimental results show that, after a change of operating parameters breaks the hydraulic balance, the hydraulic balance in the density lock recovers quickly, owing to the self-stability characteristic, without affecting the sealing performance of the density lock or the normal operation of the reactor. (authors)

  4. Analysis of a Dynamic Viscoelastic Contact Problem with Normal Compliance, Normal Damped Response, and Nonmonotone Slip Rate Dependent Friction

    Directory of Open Access Journals (Sweden)

    Mikaël Barboteu

    2016-01-01

    We consider a mathematical model which describes the dynamic evolution of a viscoelastic body in frictional contact with an obstacle. The contact is modelled with a combination of a normal compliance and a normal damped response law associated with a slip rate-dependent version of Coulomb’s law of dry friction. We derive a variational formulation, and an existence and uniqueness result for the weak solution of the problem is presented. Next, we introduce a fully discrete approximation of the variational problem based on a finite element method and on an implicit time integration scheme. We study this fully discrete approximation scheme and bound the errors of the approximate solutions. Under regularity assumptions imposed on the exact solution, optimal order error estimates are derived for the fully discrete solution. Finally, after recalling the solution of the frictional contact problem, some numerical simulations are provided in order to illustrate both the behavior of the solution related to the frictional contact conditions and the theoretical error estimate result.

  5. Dialogic or Dialectic? The Significance of Ontological Assumptions in Research on Educational Dialogue

    Science.gov (United States)

    Wegerif, Rupert

    2008-01-01

    This article explores the relationship between ontological assumptions and studies of educational dialogue through a focus on Bakhtin's "dialogic". The term dialogic is frequently appropriated to a modernist framework of assumptions, in particular the neo-Vygotskian or sociocultural tradition. However, Vygotsky's theory of education is dialectic,…

  6. Supporting calculations and assumptions for use in WESF safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hey, B.E.

    1997-03-07

    This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.

  7. Evaluating The Markov Assumption For Web Usage Mining

    DEFF Research Database (Denmark)

    Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.

    2003-01-01

    ) model [borges99data]. These techniques typically rely on the Markov assumption with history depth n, i.e., it is assumed that the next requested page is only dependent on the last n pages visited. This is not always valid, i.e. false browsing patterns may be discovered. However, to our...
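
    To make the history-depth idea concrete, the sketch below fits first- and second-order Markov models to a toy click-stream in which the page visited after C depends on the page before it; the sessions are an illustrative assumption, not data from the paper.

      # Order-1 vs order-2 Markov models on a toy click-stream: a depth-1 model is
      # ambiguous about the page after C, while a depth-2 model is not.
      from collections import Counter, defaultdict

      sessions = [list("ACD")] * 3 + [list("XCE")] * 3     # assumed sessions

      def fit(order):
          counts = defaultdict(Counter)
          for s in sessions:
              for i in range(order, len(s)):
                  counts[tuple(s[i - order:i])][s[i]] += 1
          return counts

      m1, m2 = fit(1), fit(2)
      print("depth 1, after C:", dict(m1[("C",)]))         # {'D': 3, 'E': 3} -- ambiguous
      print("depth 2, after A,C:", dict(m2[("A", "C")]))   # {'D': 3}
      print("depth 2, after X,C:", dict(m2[("X", "C")]))   # {'E': 3}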

  8. Ultimate energy density of observable cold baryonic matter.

    Science.gov (United States)

    Lattimer, James M; Prakash, Madappa

    2005-03-25

    We demonstrate that the largest measured mass of a neutron star establishes an upper bound to the energy density of observable cold baryonic matter. An equation of state-independent expression satisfied by both normal neutron stars and self-bound quark matter stars is derived for the largest energy density of matter inside stars as a function of their masses. The largest observed mass sets the lowest upper limit to the density. Implications from existing and future neutron star mass measurements are discussed.

  9. The statistics of maxima in primordial density perturbations

    International Nuclear Information System (INIS)

    Peacock, J.A.; Heavens, A.F.

    1985-01-01

    An investigation has been made of the hypothesis that protogalaxies/protoclusters form at the sites of maxima in a primordial field of normally distributed density perturbations. Using a mixture of analytic and numerical techniques, the properties of the maxima have been studied. The results provide a natural mechanism for biased galaxy formation in which galaxies do not necessarily follow the large-scale density. Methods for obtaining the true autocorrelation function of the density field and implications for Microwave Background studies are discussed. (author)

  10. Robustness to non-normality of various tests for the one-sample location problem

    Directory of Open Access Journals (Sweden)

    Michelle K. McDougall

    2004-01-01

    This paper studies the effect of the normal distribution assumption on the power and size of the sign test, Wilcoxon's signed rank test and the t-test when used in one-sample location problems. Power functions for these tests under various skewness and kurtosis conditions are produced for several sample sizes from simulated data using the g-and-k distribution of MacGillivray and Cannon [5].
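
    A stripped-down version of such a power study is sketched below, using a skewed log-normal population in place of the paper's g-and-k distribution; the sample size, shift, and number of replications are assumptions.

      # Monte Carlo rejection rates of the sign test, Wilcoxon signed-rank test, and
      # t-test for a one-sample location problem under a skewed population.
      # A log-normal stands in for the g-and-k distribution; settings are assumptions.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      n, shift, reps, alpha = 30, 0.3, 2000, 0.05
      rej = {"sign": 0, "wilcoxon": 0, "t": 0}

      for _ in range(reps):
          # log-normal with median 1, recentred so its median equals `shift`;
          # its mean then differs from its median, which is why the tests diverge
          x = rng.lognormal(0.0, 0.75, n) - 1.0 + shift
          if stats.binomtest(int((x > 0).sum()), n, 0.5).pvalue < alpha:
              rej["sign"] += 1
          if stats.wilcoxon(x).pvalue < alpha:
              rej["wilcoxon"] += 1
          if stats.ttest_1samp(x, 0.0).pvalue < alpha:
              rej["t"] += 1

      print({k: round(v / reps, 3) for k, v in rej.items()})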

  11. The frequency-domain approach for apparent density mapping

    Science.gov (United States)

    Tong, T.; Guo, L.

    2017-12-01

    Apparent density mapping is a technique to estimate the density distribution in a subsurface layer from observed gravity data. It has been widely applied for geologic mapping, tectonic study and mineral exploration for decades. Apparent density mapping usually models the density layer as a collection of vertical, juxtaposed prisms in both horizontal directions, whose top and bottom surfaces are assumed to be horizontal or of variable depth, and then inverts or deconvolves the gravity anomalies to determine the density of each prism. Conventionally, the frequency-domain approach, which assumes that both the top and bottom surfaces of the layer are horizontal, is used for fast density mapping. However, this assumption is not always valid in the real world, since either the top surface or the bottom surface may be of variable depth. Here, we present a frequency-domain approach for apparent density mapping that permits both the top and bottom surfaces of the layer to be of variable depth. We first derive the formula for the forward calculation of the gravity anomalies caused by a density layer whose top and bottom surfaces are of variable depth, and the formula for the inversion of the gravity anomalies for the density distribution. We then propose a procedure for density mapping based on both the inversion and forward-calculation formulas. We tested the approach on synthetic data, which verified its effectiveness. We also tested the approach on real Bouguer gravity anomaly data from central South China. The top surface was assumed to be flat and at sea level, and the bottom surface was taken to be the Moho surface. The result presents the crustal density distribution, which coincides well with the basic tectonic features of the study area.

  12. Detecting and accounting for violations of the constancy assumption in non-inferiority clinical trials.

    Science.gov (United States)

    Koopmeiners, Joseph S; Hobbs, Brian P

    2018-05-01

    Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator with the objective of showing either superiority or non-inferiority to the active comparator. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the active comparator as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the active comparator in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV.

  13. Towards New Probabilistic Assumptions in Business Intelligence

    OpenAIRE

    Schumann Andrew; Szelc Andrzej

    2015-01-01

    One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For discovering the physical universe additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, a lot of important variables of economic systems cannot ...

  14. Assumption-versus data-based approaches to summarizing species' ranges.

    Science.gov (United States)

    Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro

    2018-06-01

    For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer data-driven solutions based on a minimum of assumptions that can be evaluated and validated quantitatively to offer a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications. © 2016 Society for Conservation Biology.

  15. Increases in bone density during treatment of men with idiopathic hypogonadotropic hypogonadism

    Energy Technology Data Exchange (ETDEWEB)

    Finkelstein, J.S.; Klibanski, A.; Neer, R.M.; Doppelt, S.H.; Rosenthal, D.I.; Segre, G.V.; Crowley, W.F. Jr. (Massachusetts General Hospital, Boston (USA))

    1989-10-01

    To assess the effects of gonadal steroid replacement on bone density in men with osteoporosis due to severe hypogonadism, we measured cortical bone density in the distal radius by 125I photon absorptiometry and trabecular bone density in the lumbar spine by quantitative computed tomography in 21 men with isolated GnRH deficiency while serum testosterone levels were maintained in the normal adult male range for 12-31 months (mean +/- SE, 23.7 +/- 1.1). In men who initially had fused epiphyses (n = 15), cortical bone density increased from 0.71 +/- 0.02 to 0.74 +/- 0.01 g/cm2 (P less than 0.01), while trabecular bone density did not change (116 +/- 9 compared with 119 +/- 7 mg/cm3). In men who initially had open epiphyses (n = 6), cortical bone density increased from 0.62 +/- 0.01 to 0.70 +/- 0.03 g/cm2 (P less than 0.01), while trabecular bone density increased from 96 +/- 13 to 109 +/- 12 mg/cm3 (P less than 0.01). Cortical bone density increased 0.03 +/- 0.01 g/cm2 in men with fused epiphyses and 0.08 +/- 0.02 g/cm2 in men with open epiphyses (P less than 0.05). Despite these increases, neither cortical nor trabecular bone density returned to normal levels. Histomorphometric analyses of iliac crest bone biopsies demonstrated that most of the men had low turnover osteoporosis, although some men had normal to high turnover osteoporosis. We conclude that bone density increases during gonadal steroid replacement of GnRH-deficient men, particularly in men who are skeletally immature.

  17. Halo-Independent Direct Detection Analyses Without Mass Assumptions

    CERN Document Server

    Anderson, Adam J.; Kahn, Yonatan; McCullough, Matthew

    2015-10-06

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the (m_χ, σ_n) plane. Recently, methods which are independent of the DM halo velocity distribution have been developed which present results in the (v_min, g̃) plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from v_min to nuclear recoil momentum (p_R), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h̃(p_R). The entire family of conventional halo-independent g̃(v_min) plots for all DM masses are directly found from the single h̃(p_R) plot through a simple re...

  18. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
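
    The instrumental variable idea that the authors adapt can be illustrated with a generic just-identified two-stage least squares estimator on simulated data with a mismeasured exposure; this is not the authors' longitudinal estimator, and the simulation settings (a second biomarker used as the instrument, a true effect of 1) are assumptions.

      # Generic IV (2SLS) sketch: classical measurement error attenuates the OLS
      # slope, while using a second error-prone measurement as an instrument does not.
      import numpy as np

      rng = np.random.default_rng(10)
      n = 5000
      true_exposure = rng.normal(size=n)
      z = true_exposure + rng.normal(0, 1, n)        # instrument: second biomarker
      x = true_exposure + rng.normal(0, 1, n)        # error-prone regressor
      y = 1.0 * true_exposure + rng.normal(0, 1, n)  # outcome, true effect = 1

      X = np.column_stack([np.ones(n), x])
      Z = np.column_stack([np.ones(n), z])
      beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
      beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)    # (Z'X)^{-1} Z'y

      print("OLS slope (attenuated):", round(float(beta_ols[1]), 2),
            "| IV slope:", round(float(beta_iv[1]), 2))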

  19. Heterosexual assumptions in verbal and non-verbal communication in nursing.

    Science.gov (United States)

    Röndahl, Gerd; Innala, Sune; Carlsson, Marianne

    2006-11-01

    This paper reports a study of what lesbian women and gay men had to say, as patients and as partners, about their experiences of nursing in hospital care, and what they regarded as important to communicate about homosexuality and nursing. The social life of heterosexual cultures is based on the assumption that all people are heterosexual, thereby making homosexuality socially invisible. Nurses may assume that all patients and significant others are heterosexual, and these heteronormative assumptions may lead to poor communication that affects nursing quality by leading nurses to ask the wrong questions and make incorrect judgements. A qualitative interview study was carried out in the spring of 2004. Seventeen women and 10 men ranging in age from 23 to 65 years from different parts of Sweden participated. They described 46 experiences as patients and 31 as partners. Heteronormativity was communicated in waiting rooms, in patient documents and when registering for admission, and nursing staff sometimes showed perplexity when an informant deviated from this heteronormative assumption. Informants had often met nursing staff who showed fear of behaving incorrectly, which could lead to a sense of insecurity, thereby impeding further communication. As partners of gay patients, informants felt that they had to deal with heterosexual assumptions more than they did when they were patients, and the consequences were feelings of not being accepted as a 'true' relative, of exclusion and neglect. Almost all participants offered recommendations about how nursing staff could facilitate communication. Heterosexual norms communicated unconsciously by nursing staff contribute to ambivalent attitudes and feelings of insecurity that prevent communication and easily lead to misconceptions. Educational and management interventions, as well as increased communication, could make gay people more visible and thereby encourage openness and awareness by hospital staff of the norms that they

  20. Is this the right normalization? A diagnostic tool for ChIP-seq normalization.

    Science.gov (United States)

    Angelini, Claudia; Heller, Ruth; Volkinshtein, Rita; Yekutieli, Daniel

    2015-05-09

    ChIP-seq experiments are becoming a standard approach for genome-wide profiling of protein-DNA interactions, such as detecting transcription factor binding sites, histone modification marks and RNA Polymerase II occupancy. However, when comparing a ChIP sample versus a control sample, such as Input DNA, normalization procedures have to be applied in order to remove experimental sources of bias. Despite the substantial impact that the choice of the normalization method can have on the results of a ChIP-seq data analysis, their assessment is not fully explored in the literature. In particular, there are no diagnostic tools that show whether the applied normalization is indeed appropriate for the data being analyzed. In this work we propose a novel diagnostic tool to examine the appropriateness of the estimated normalization procedure. By plotting the empirical densities of log relative risks in bins of equal read count, along with the estimated normalization constant, after logarithmic transformation, the researcher is able to assess the appropriateness of the estimated normalization constant. We use the diagnostic plot to evaluate the appropriateness of the estimates obtained by CisGenome, NCIS and CCAT on several real data examples. Moreover, we show the impact that the choice of the normalization constant can have on standard tools for peak calling such as MACS or SICER. Finally, we propose a novel procedure for controlling the FDR using sample swapping. This procedure makes use of the estimated normalization constant in order to gain power over the naive choice of constant (used in MACS and SICER), which is the ratio of the total number of reads in the ChIP and Input samples. Linear normalization approaches aim to estimate a scale factor, r, to adjust for different sequencing depths when comparing ChIP versus Input samples. The estimated scaling factor can easily be incorporated in many peak caller algorithms to improve the accuracy of the peak identification. The
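
    The naive scaling factor mentioned at the end, and a bare-bones version of the proposed diagnostic (log relative risks in bins of comparable read count, compared against the log of the estimated constant), can be sketched as follows; the simulated counts, the enrichment pattern, and the binning are assumptions, not the authors' implementation.

      # Naive ChIP/Input scaling factor and a bare-bones binned diagnostic:
      # compare median log relative risks per bin with log(r). Simulated counts.
      import numpy as np

      rng = np.random.default_rng(7)
      n_bins = 10000
      background = rng.gamma(2.0, 5.0, n_bins)
      input_counts = rng.poisson(background)
      chip_counts = rng.poisson(1.5 * background)        # assumed depth ratio ~1.5
      chip_counts[:300] += rng.poisson(50, 300)          # a few enriched regions (peaks)

      r_naive = chip_counts.sum() / input_counts.sum()   # ratio of total read counts
      log_rr = np.log((chip_counts + 0.5) / (input_counts + 0.5))

      order = np.argsort(input_counts)                   # bins of comparable Input count
      medians = [round(float(np.median(log_rr[chunk])), 2) for chunk in np.array_split(order, 5)]
      print("binned median log relative risks:", medians)
      print("log of naive scaling factor:", round(float(np.log(r_naive)), 2))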

  1. The PDF of fluid particle acceleration in turbulent flow with underlying normal distribution of velocity fluctuations

    International Nuclear Information System (INIS)

    Aringazin, A.K.; Mazhitov, M.I.

    2003-01-01

    We describe a formal procedure to obtain and specify the general form of the marginal distribution of the Lagrangian acceleration of a fluid particle in developed turbulent flow, using a Langevin-type equation and the assumption that the velocity fluctuation u follows a normal distribution with zero mean, in accord with the Heisenberg-Yaglom picture. For a particular representation, β=exp[u], of the fluctuating parameter β, we reproduce the underlying log-normal distribution and the associated marginal distribution, which was found to be in very good agreement with the new experimental data by Crawford, Mordant, and Bodenschatz on acceleration statistics. We discuss possibilities for making refinements of the log-normal model
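
    The construction can be mimicked numerically: draw u from a standard normal, set β = exp(u), and sample the acceleration conditionally as a zero-mean Gaussian whose width is set by β; marginalizing over β (here by Monte Carlo) produces the heavy-tailed marginal. The conditional scaling used below is an illustrative assumption rather than the authors' exact parametrization.

      # Monte Carlo marginal of a conditionally Gaussian acceleration with a
      # log-normally fluctuating width beta = exp(u), u ~ N(0, 1).
      import numpy as np

      rng = np.random.default_rng(8)
      n = 200000
      u = rng.normal(0.0, 1.0, n)
      beta = np.exp(u)                         # log-normally distributed parameter
      a = rng.normal(0.0, 1.0, n) * beta       # acceleration: N(0, beta^2) given beta

      # Flatness factor <a^4>/<a^2>^2 is 3 for a Gaussian and much larger here,
      # reflecting the heavy tails of the marginal distribution.
      flatness = np.mean(a ** 4) / np.mean(a ** 2) ** 2
      print("flatness:", round(float(flatness), 1), "(Gaussian value: 3)")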

  2. powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks

    Science.gov (United States)

    Murray, Steven G.

    2018-05-01

    powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.
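
    The underlying idea, independent of powerbox's own interface (which is not reproduced here), is to colour white noise in Fourier space with the square root of the target power spectrum and, for a log-normal mock, exponentiate the resulting Gaussian field; the grid size and power-law spectrum below are illustrative assumptions.

      # Gaussian and log-normal overdensity fields with a power-law power spectrum,
      # built by colouring white noise in Fourier space. NOT the powerbox API.
      import numpy as np

      rng = np.random.default_rng(9)
      n = 256
      kx = np.fft.fftfreq(n)
      k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
      pk = np.zeros_like(k)
      pk[k > 0] = k[k > 0] ** -2.0                         # assumed power-law spectrum

      white = np.fft.fft2(rng.normal(size=(n, n)))         # white noise, Fourier space
      delta = np.real(np.fft.ifft2(white * np.sqrt(pk)))   # Gaussian overdensity field
      delta /= delta.std()

      lognormal = np.exp(delta - delta.var() / 2) - 1.0    # log-normal overdensity, mean ~ 0
      print("log-normal field: mean", round(float(lognormal.mean()), 3),
            "min", round(float(lognormal.min()), 2), "(always >= -1)")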

  3. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  4. Observing gravitational-wave transient GW150914 with minimal assumptions

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwa, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. C.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, M.J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackburn, L.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, A.L.S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, J.G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, T.C; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brocki, P.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderon Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chatterji, S.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Qian; Chua, S. E.; Chung, E.S.; Ciani, G.; Clara, F.; Clark, J. A.; Clark, M.; Cleva, F.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, A.C.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, A.L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Debra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.A.; DeRosa, R. T.; Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M.G.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. 
-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, T. M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.M.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. R.; Flaminio, R.; Fletcher, M; Fournier, J. -D.; Franco, S; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritsche, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; Gonzalez, Idelmis G.; Castro, J. M. Gonzalez; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; de Haas, R.; Hacker, J. J.; Buffoni-Hall, R.; Hall, E. D.; Hammond, G.L.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, P.J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hinder, I.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, D.H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jimenez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.H.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kefelian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.E.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijhunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.M.; King, E. J.; King, P. J.; Kinsey, M.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krolak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Laguna, P.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, R.; Leavey, S.; Lebigot, E. O.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lueck, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.T.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Magana-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R.M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mende, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, J.C.; Moraru, D.; Gutierrez Moreno, M.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P.G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Gutierrez-Neri, M.; Neunzert, A.; Newton-Howes, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J.; Oh, S. H.; Ohme, F.; Oliver, M. B.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Page, J.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prolchorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Puerrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosinska, D.; Rowan, S.; Ruediger, A.; Ruggi, P.; Ryan, K.A.; Sachdev, P.S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schoenbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. 
F.; Scott, J.; Scott, M.S.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shithriar, M. S.; Shaltev, M.; Shao, Z.M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, António Dias da; Simakov, D.; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, R. J. E.; Smith, N.D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, J.R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.D.; Talukder, D.; Tanner, D. B.; Tapai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Toyra, D.; Travasso, F.; Traylor, G.; Trifiro, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlhruch, H.; Vajente, G.; Valdes, G.; Van Bakel, N.; Van Beuzekom, Martin; Van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasuth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, R. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.M.; Wessels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J.L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrozny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. -P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.

    2016-01-01

    The gravitational-wave signal GW150914 was first identified on September 14, 2015, by searches for short-duration gravitational-wave transients. These searches identify time-correlated transients in multiple detectors with minimal assumptions about the signal morphology, allowing them to be

  5. Normalized inverse characterization of sound absorbing rigid porous media.

    Science.gov (United States)

    Zieliński, Tomasz G

    2015-06-01

    This paper presents a methodology for the inverse characterization of sound absorbing rigid porous media, based on standard measurements of the surface acoustic impedance of a porous sample. The model parameters need to be normalized to obtain a robust identification procedure that fits the model-predicted impedance curves to the measured ones. Such a normalization provides a substitute set of dimensionless (normalized) parameters unambiguously related to the original model parameters. Moreover, two scaling frequencies are introduced; however, they are not additional parameters, and for different, yet reasonable, assumptions about their values the identification procedure should eventually lead to the same solution. The proposed identification technique uses measured and computed impedance curves for a porous sample not only in the standard configuration, that is, placed directly against the rigid termination piston in an impedance tube, but also with air gaps of known thicknesses between the sample and the piston. Therefore, all necessary analytical formulas for sound propagation in double-layered media are provided. The methodology is illustrated by one numerical test and by two examples based on the experimental measurements of the acoustic impedance and absorption of porous ceramic samples of different thicknesses and a sample of polyurethane foam.
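
    To make the identification idea above concrete, the following sketch fits a single normalized parameter by minimizing the misfit between "measured" and model-predicted surface impedances for the sample placed against the piston and with two air gaps. The acoustic model here is the simple empirical Delany-Bazley formula (one parameter, the static flow resistivity) used purely as a stand-in for the paper's rigid-porous-medium model; the frequencies, thicknesses, and parameter range are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    RHO0, C0 = 1.213, 343.0             # air density (kg/m^3) and sound speed (m/s)
    FREQS = np.linspace(200, 2000, 60)  # analysis band, Hz
    THICK = 0.024                       # sample thickness, m
    GAPS = [0.0, 0.01, 0.02]            # air gaps behind the sample, m

    def surface_impedance(flow_resistivity, gap, f=FREQS, d=THICK):
        """Normal-incidence surface impedance of a porous layer over an air gap and a rigid wall.

        Uses the empirical Delany-Bazley model as a simple stand-in (one physical
        parameter, the static flow resistivity), not the model of the paper."""
        omega = 2 * np.pi * f
        X = RHO0 * f / flow_resistivity
        Zc = RHO0 * C0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
        kc = omega / C0 * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
        if gap > 0:
            # Impedance seen at the top of the air gap backed by the rigid piston.
            Zb = -1j * RHO0 * C0 / np.tan(omega / C0 * gap)
            return Zc * (Zb + 1j * Zc * np.tan(kc * d)) / (Zc + 1j * Zb * np.tan(kc * d))
        # Sample directly against the rigid piston.
        return -1j * Zc / np.tan(kc * d)

    # Synthetic "measurements" for a material with flow resistivity 12 kPa.s/m^2.
    measured = [surface_impedance(12_000.0, g) for g in GAPS]

    # Identification in a normalized parameter: p in [0, 1] maps onto a fixed physical range.
    LO, HI = 1_000.0, 100_000.0
    def residuals(p):
        sigma = LO * (HI / LO) ** p[0]          # log-scaled de-normalization
        err = [surface_impedance(sigma, g) - m for g, m in zip(GAPS, measured)]
        return np.concatenate([np.concatenate([e.real, e.imag]) for e in err])

    fit = least_squares(residuals, x0=[0.3], bounds=(0.0, 1.0))
    print("identified flow resistivity:", round(LO * (HI / LO) ** fit.x[0], 1), "Pa.s/m^2")
    ```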

  6. THE DARK MATTER DENSITY PROFILE OF THE FORNAX DWARF

    International Nuclear Information System (INIS)

    Jardel, John R.; Gebhardt, Karl

    2012-01-01

    We construct axisymmetric Schwarzschild models to measure the mass profile of the Local Group dwarf galaxy Fornax. These models require no assumptions to be made about the orbital anisotropy of the stars, as is the case for commonly used Jeans models. We test a variety of parameterizations of dark matter density profiles and find cored models with uniform density ρ_c = (1.6 ± 0.1) × 10⁻² M_☉ pc⁻³ fit significantly better than the cuspy halos predicted by cold dark matter simulations. We also construct models with an intermediate-mass black hole, but are unable to make a detection. We place a 1σ upper limit on the mass of a potential intermediate-mass black hole at M_• ≤ 3.2 × 10⁴ M_☉.

  7. Comet Giacobini-Zinner - a normal comet?

    International Nuclear Information System (INIS)

    Cochran, A.L.; Barker, E.S.

    1987-01-01

    Observations of Comet Giacobini-Zinner were obtained during its 1985 apparition using an IDS spectrograph at McDonald Observatory. Column densities and production rates were computed. The production rates were compared to observations of other normal comets. Giacobini-Zinner is shown to be depleted in C2 and C3 relative to CN. These production rates are down by a factor of 5. 12 references

  8. Oil price assumptions in macroeconomic forecasts: should we follow future market expectations?

    International Nuclear Information System (INIS)

    Coimbra, C.; Esteves, P.S.

    2004-01-01

    In macroeconomic forecasting, in spite of its important role in price and activity developments, oil prices are usually taken as an exogenous variable, for which assumptions have to be made. This paper evaluates the forecasting performance of futures market prices against the other popular technical procedure, the carry-over assumption. The results suggest that there is almost no difference between opting for futures market prices or using the carry-over assumption for short-term forecasting horizons (up to 12 months), while, for longer-term horizons, they favour the use of futures market prices. However, as futures market prices reflect market expectations for world economic activity, futures oil prices should be adjusted whenever market expectations for world economic growth are different to the values underlying the macroeconomic scenarios, in order to fully ensure the internal consistency of those scenarios. (Author)

  9. The Weight of Euro Coins: Its Distribution Might Not Be as Normal as You Would Expect

    Science.gov (United States)

    Shkedy, Ziv; Aerts, Marc; Callaert, Herman

    2006-01-01

    Classical regression models, ANOVA models and linear mixed models are just three examples (out of many) in which the normal distribution of the response is an essential assumption of the model. In this paper we use a dataset of 2000 euro coins containing information (up to the milligram) about the weight of each coin, to illustrate that the…

  10. The 'revealed preferences' theory: Assumptions and conjectures

    International Nuclear Information System (INIS)

    Green, C.H.

    1983-01-01

    Being a kind of intuitive psychology, the 'Revealed-Preferences'-theory-based approaches towards determining acceptable risks are a useful method for the generation of hypotheses. In view of the fact that reliability engineering develops faster than methods for the determination of reliability aims, the Revealed-Preferences approach is a necessary preliminary aid. Some of the assumptions on which the 'Revealed-Preferences' theory is based will be identified and analysed and afterwards compared with experimentally obtained results. (orig./DG)

  11. Analysis On Political Speech Of Susilo Bambang Yudhoyono: Common Sense Assumption And Ideology

    Directory of Open Access Journals (Sweden)

    Sayit Abdul Karim

    2015-10-01

    Full Text Available This paper presents an analysis of the political speech of Susilo Bambang Yudhoyono (SBY), the former president of Indonesia, at the Indonesian conference on “Moving towards sustainability: together we must create the future we want”. Ideologies are closely linked to power and language because using language is the commonest form of social behavior, and the form of social behavior where we rely most on ‘common-sense’ assumptions. The objectives of this study are to discuss the common sense assumption and ideology by means of language use in SBY’s political speech, which is mainly grounded in Norman Fairclough’s theory of language and power in critical discourse analysis. There are two main problems of analysis, namely: first, what are the common sense assumption and ideology in Susilo Bambang Yudhoyono’s political speech; and second, how do they relate to each other in the political discourse? The data used in this study was in the form of written text on “moving towards sustainability: together we must create the future we want”. A qualitative descriptive analysis was employed to analyze the common sense assumption and ideology in the written text of Susilo Bambang Yudhoyono’s political speech, which was delivered at the Riocentro Convention Center, Rio de Janeiro, on June 20, 2012. One dimension of ‘common sense’ is the meaning of words. The results showed that the common sense assumption and ideology conveyed through SBY’s specific words or expressions can significantly explain how political discourse is constructed and affected by SBY’s rule and position, life experience, and power relations. He used language as a powerful social tool to present his common sense assumption and ideology to convince his audiences and fellow citizens that the future of sustainability has been an important agenda for all people.

  12. Temperature Dependence Viscosity and Density of Different Biodiesel Blends

    Directory of Open Access Journals (Sweden)

    Vojtěch Kumbár

    2015-01-01

    Full Text Available The main goal of this paper is to assess the effect of rapeseed oil methyl ester (RME) concentration in diesel fuel on its viscosity and density behaviour. The density and dynamic viscosity were observed at various mixing ratios of RME and diesel fuel. All measurements were performed at a constant temperature of 40 °C. An increasing ratio of RME in diesel fuel was reflected in increased density value and dynamic viscosity of the blend. In the case of pure RME, pure diesel fuel, and a blend of both (B30), temperature dependence of dynamic viscosity and density was examined. The temperature range in the experiment was −10 °C to 80 °C. Considerable temperature dependence of dynamic viscosity and density was found and demonstrated for all three samples. This finding is in accordance with theoretical assumptions and reference data. Mathematical models were developed and tested. Temperature dependence of dynamic viscosity was modeled using a 3rd degree polynomial. Correlation coefficients R −0.796, −0.948, and −0.974 between measured and calculated values were found. Temperature dependence of density was modeled using a 2nd degree polynomial. Correlation coefficients R −0.994, −0.979, and −0.976 between measured and calculated values were acquired. The proposed models can be used for flow behaviour prediction of RME, diesel fuel, and their blends.
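
    The polynomial temperature models described above are straightforward to reproduce. The sketch below fits a 3rd degree polynomial to dynamic viscosity and a 2nd degree polynomial to density over the −10 °C to 80 °C range; the data points are hypothetical placeholders, not the measured values from the paper.

    ```python
    import numpy as np

    # Hypothetical (illustrative) measurements for a B30-like blend -- not the paper's data.
    temp_C = np.array([-10, 0, 10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
    visc_mPas = np.array([9.8, 7.6, 6.0, 4.9, 4.0, 3.4, 2.9, 2.5, 2.2, 2.0])  # dynamic viscosity
    dens_kgm3 = np.array([862, 855, 848, 841, 834, 827, 820, 813, 806, 799])  # density

    # 3rd degree polynomial for viscosity, 2nd degree for density, as in the study.
    visc_fit = np.polynomial.Polynomial.fit(temp_C, visc_mPas, deg=3)
    dens_fit = np.polynomial.Polynomial.fit(temp_C, dens_kgm3, deg=2)

    # Correlation between measured and model-predicted values.
    r_visc = np.corrcoef(visc_mPas, visc_fit(temp_C))[0, 1]
    r_dens = np.corrcoef(dens_kgm3, dens_fit(temp_C))[0, 1]
    print(f"viscosity fit R = {r_visc:.3f}, density fit R = {r_dens:.3f}")
    ```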

  13. Normalization of Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.

    2011-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the nonsphericity of their generating central bodies. The gravitational potential of a nonspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities which must be removed in order to generalize the method and solve for any possible orbit, including polar orbits. Three unique algorithms have been developed to eliminate these singularities by Samuel Pines [1], Bill Lear [2], and Robert Gottlieb [3]. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear [2] and Gottlieb [3] algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and associated Legendre functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
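
    As a small illustration of the normalization parameters mentioned above, the sketch below computes the conventional "full" normalization factor for associated Legendre functions, N(n, m) = sqrt((2 − δ_{m0})(2n + 1)(n − m)!/(n + m)!), and applies it via SciPy. This is the generic geodesy-style normalization, not necessarily the exact convention used in the Pines, Lear, or Gottlieb formulations.

    ```python
    from math import factorial, sqrt

    def normalization_factor(n: int, m: int) -> float:
        """Full normalization factor N(n, m) for the associated Legendre function P_{n,m}."""
        delta = 1.0 if m == 0 else 0.0
        return sqrt((2.0 - delta) * (2 * n + 1) * factorial(n - m) / factorial(n + m))

    def normalized_alf(n: int, m: int, x: float) -> float:
        """Normalized associated Legendre function N(n, m) * P_{n,m}(x)."""
        from scipy.special import lpmv
        # lpmv follows the Condon-Shortley convention; the (-1)**m factor removes
        # that phase, as is customary in gravity modelling.
        return normalization_factor(n, m) * (-1) ** m * lpmv(m, n, x)

    if __name__ == "__main__":
        for n in range(2, 5):
            for m in range(n + 1):
                print(n, m, round(normalization_factor(n, m), 6), round(normalized_alf(n, m, 0.5), 6))
    ```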

  14. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    DISCOURSES AND THEORETICAL ASSUMPTIONS IN IT PROJECT PORTFOLIO MANAGEMENT: A REVIEW OF THE LITERATURE These years increasing interest is put on IT project portfolio management (IT PPM). Considering IT PPM an interdisciplinary practice, we conduct a concept-based literature review of relevant...

  15. MicroRNA Array Normalization: An Evaluation Using a Randomized Dataset as the Benchmark

    Science.gov (United States)

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays. PMID:24905456
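
    A minimal sketch of the kind of evaluation described above: quantile-normalize a synthetic, confounded expression matrix and compare the markers called differentially expressed against a known benchmark set, reporting true positives and the empirical false discovery rate. The data, group sizes, and thresholds are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_mirna, n_arrays = 500, 40
    groups = np.array([0] * 20 + [1] * 20)

    # Synthetic expression matrix with 50 true markers and an array-specific (confounding) shift.
    data = rng.normal(8, 1, size=(n_mirna, n_arrays))
    true_markers = np.arange(50)
    data[np.ix_(true_markers, np.where(groups == 1)[0])] += 1.0
    data += rng.normal(0, 0.5, size=n_arrays)

    def quantile_normalize(x):
        """Force every array (column) to share the same empirical distribution."""
        ranks = np.argsort(np.argsort(x, axis=0), axis=0)
        mean_sorted = np.sort(x, axis=0).mean(axis=1)
        return mean_sorted[ranks]

    norm = quantile_normalize(data)

    # Two-sample t-test per miRNA, calling markers at a nominal p < 0.01.
    pvals = stats.ttest_ind(norm[:, groups == 0], norm[:, groups == 1], axis=1).pvalue
    called = np.where(pvals < 0.01)[0]
    tp = len(set(called) & set(true_markers))
    fdr = 1 - tp / max(len(called), 1)
    print(f"called {len(called)}, true positives {tp}, empirical FDR {fdr:.2f}")
    ```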

  16. Reduction of electron density in a plasma by injection of liquids

    Science.gov (United States)

    Sodha, M. S.; Evans, J. S.

    1974-01-01

    In this paper, the authors have investigated the physics of various processes relevant to the reduction of electron density in a plasma by addition of water droplets; two processes have in particular been analyzed in some detail, viz, the electron attachment to charged dielectric droplets and the emission of negative ions by vaporization from these droplets. The results of these analyses have been applied to a study of the kinetics of reduction of electron density and charging of droplets in an initially overionized plasma, after addition of water droplets. A number of simplifying assumptions including uniform size and charge on droplets and negligible change in the radius of the droplet due to evaporation have been made.

  17. Is phonology bypassed in normal or dyslexic development?

    Science.gov (United States)

    Pennington, B F; Lefly, D L; Van Orden, G C; Bookman, M O; Smith, S D

    1987-01-01

    A pervasive assumption in most accounts of normal reading and spelling development is that phonological coding is important early in development but is subsequently superseded by faster, orthographic coding which bypasses phonology. We call this assumption, which derives from dual process theory, the developmental bypass hypothesis. The present study tests four specific predictions of the developmental bypass hypothesis by comparing dyslexics and nondyslexics from the same families in a cross-sectional design. The four predictions are: 1) that phonological coding skill develops early in normal readers and soon reaches asymptote, whereas orthographic coding skill has a protracted course of development; 2) that the correlation of adult reading or spelling performance with phonological coding skill is considerably less than the correlation with orthographic coding skill; 3) that dyslexics who are mainly deficient in phonological coding skill should be able to bypass this deficit and eventually close the gap in reading and spelling performance; and 4) that the greatest differences between dyslexics and developmental controls on measures of phonological coding skill should be observed early rather than late in development. None of the four predictions of the developmental bypass hypothesis were upheld. Phonological coding skill continued to develop in nondyslexics until adulthood. It accounted for a substantial (32-53 percent) portion of the variance in reading and spelling performance in adult nondyslexics, whereas orthographic coding skill did not account for a statistically reliable portion of this variance. The dyslexics differed little across age in phonological coding skill, but made linear progress in orthographic coding skill, surpassing spelling-age (SA) controls by adulthood. Nonetheless, they did not close the gap in reading and spelling performance. Finally, dyslexics were significantly worse than SA (and Reading Age [RA]) controls in phonological coding skill

  18. The Velocity of Density: Can We Build More Sustainable Cities Fast Enough?

    Directory of Open Access Journals (Sweden)

    Markus Moos

    2017-12-01

    Full Text Available Urban planners now commonly advocate for increases in density of the built environment to reduce car dependence and enhance the sustainability of cities. The analysis in this paper asks about the speed at which density as a sustainability policy can be implemented. The Greater Toronto Hamilton Area (GTHA is used as a case study to measure how quickly existing areas could be densified to meet minimum transit supportive density thresholds. Almost 70% of existing residents live in neighborhoods with densities below minimum transit supportive densities. The findings show that increases in minimum densities could be attained roughly within the target time horizon of existing growth plans, but that these increases hinge on assumptions of continuing high growth rates. The sustainability of cities relies on a high ‘velocity of density’, a term proposed in the paper to refer to the speed at which density can be implemented. Density is often slowed or halted by local residents, which could prove problematic if sustainability objectives require speedy implementation, for instance to address climate change. Analysis of the velocity of density suggests that planning for sustainability, and climate change, in cities would benefit from considering a broader set of solutions to car dependence in existing low-density areas than changes to the density of the built form alone.
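
    The 'velocity of density' idea lends itself to a simple back-of-the-envelope calculation: the share of residents living below a minimum transit-supportive density and the compound annual densification rate needed to reach that threshold within a planning horizon. The neighbourhood figures and the threshold below are hypothetical, not GTHA data.

    ```python
    # Hypothetical neighbourhood data: population and gross density (people + jobs per hectare).
    neighbourhoods = [
        {"pop": 42_000, "density": 35.0},
        {"pop": 18_500, "density": 82.0},
        {"pop": 66_000, "density": 48.0},
        {"pop": 24_000, "density": 110.0},
    ]
    TRANSIT_SUPPORTIVE = 80.0      # illustrative minimum threshold, not the GTHA figure
    TARGET_YEARS = 25              # planning horizon

    below = [n for n in neighbourhoods if n["density"] < TRANSIT_SUPPORTIVE]
    share_below = sum(n["pop"] for n in below) / sum(n["pop"] for n in neighbourhoods)
    print(f"share of residents below the threshold: {share_below:.0%}")

    # 'Velocity of density': compound annual densification rate needed to reach the
    # threshold within the planning horizon, for each under-dense neighbourhood.
    for n in below:
        rate = (TRANSIT_SUPPORTIVE / n["density"]) ** (1 / TARGET_YEARS) - 1
        print(f"density {n['density']:.0f} -> needs {rate:.1%} densification per year")
    ```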

  19. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    Directory of Open Access Journals (Sweden)

    Alexander Richard Braczkowski

    Full Text Available Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and a 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or the temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys (although estimates derived using non-spatial methods (7.28-9.28 leopards/100 km²) were considerably higher than estimates from spatially-explicit methods (3.40-3.65 leopards/100 km²)). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that at least in the context of leopard research in productive habitats, the use of lures is not warranted.

  20. Potential misuse of avian density as a conservation metric

    Science.gov (United States)

    Skagen, Susan K.; Yackel Adams, Amy A.

    2011-01-01

    Effective conservation metrics are needed to evaluate the success of management in a rapidly changing world. Reproductive rates and densities of breeding birds (as a surrogate for reproductive rate) have been used to indicate the quality of avian breeding habitat, but the underlying assumptions of these metrics rarely have been examined. When birds are attracted to breeding areas in part by the presence of conspecifics and when breeding in groups influences predation rates, the effectiveness of density and reproductive rate as indicators of habitat quality is reduced. It is beneficial to clearly distinguish between individual- and population-level processes when evaluating habitat quality. We use the term reproductive rate to refer to both levels and further distinguish among levels by using the terms per capita fecundity (number of female offspring per female per year, individual level) and population growth rate (the product of density and per capita fecundity, population level). We predicted how density and reproductive rate interact over time under density-independent and density-dependent scenarios, assuming the ideal free distribution model of how birds settle in breeding habitats. We predicted population density of small populations would be correlated positively with both per capita fecundity and population growth rate due to the Allee effect. For populations in the density-dependent growth phase, we predicted no relation between density and per capita fecundity (because individuals in all patches will equilibrate to the same success rate) and a positive relation between density and population growth rate. Several ecological theories collectively suggest that positive correlations between density and per capita fecundity would be difficult to detect. We constructed a decision tree to guide interpretation of positive, neutral, nonlinear, and negative relations between density and reproductive rates at individual and population levels. © 2010 Society for
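
    The individual- versus population-level distinction drawn above can be illustrated with a toy calculation, shown below: under a hypothetical Allee-type fecundity curve both quantities rise with density, while in the density-dependent phase per capita fecundity is flat but population growth rate (density times per capita fecundity) still increases with density. The functional forms and numbers are illustrative only.

    ```python
    import numpy as np

    densities = np.linspace(0.1, 10.0, 8)           # breeding females per unit area (arbitrary)

    # Toy per-capita fecundity curves (female offspring per female per year).
    allee = 1.8 * densities / (0.5 + densities)     # small populations: fecundity rises with density
    dens_dep = np.full_like(densities, 1.2)         # density-dependent phase: equal success everywhere

    for label, fec in [("Allee (small population)", allee),
                       ("density-dependent phase", dens_dep)]:
        growth = densities * fec                    # population growth rate = density x per-capita fecundity
        slope_fec = np.polyfit(densities, fec, 1)[0]
        slope_growth = np.polyfit(densities, growth, 1)[0]
        print(f"{label:26s} d(fecundity)/d(density) = {slope_fec:+.3f}   "
              f"d(growth rate)/d(density) = {slope_growth:+.3f}")
    ```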

  1. Shattering Man’s Fundamental Assumptions in Don DeLillo’s Falling Man

    OpenAIRE

    Hazim Adnan Hashim; Rosli Bin Talif; Lina Hameed Ali

    2016-01-01

    The present study addresses effects of traumatic events such as the September 11 attacks on victims’ fundamental assumptions. These beliefs or assumptions provide individuals with expectations about the world and their sense of self-worth. Thus, they ground people’s sense of security, stability, and orientation. The September 11 terrorist attacks in the U.S.A. were very tragic for Americans because this fundamentally changed their understandings about many aspects in life. The attacks led man...

  2. Whistler wave trapping in a density crest

    International Nuclear Information System (INIS)

    Sugai, H.; Niki, H.; Inutake, M.; Takeda, S.

    1979-11-01

    The linear trapping process of whistler waves in a field-aligned density crest is investigated theoretically and experimentally below ω = ω_c/2 (half the gyrofrequency). The conditions for crest trapping are derived in terms of the frequency ω/ω_c, the incident wave-normal angle θ_i, and the density ratio n_i/n_0, where n_i and n_0 denote the density at the incidence point and at the ridge, respectively. The oscillation length of the trapped ray path is calculated for a parabolic density profile. The experiment on antenna-excited whistler waves has been performed in a large magnetized plasma with the density crest. The phase and amplitude profile of the whistler wave is measured along and across the crest. The measurement has verified characteristic behaviors of the crest trapping. (author)

  3. Models for waste life cycle assessment: Review of technical assumptions

    DEFF Research Database (Denmark)

    Gentil, Emmanuel; Damgaard, Anders; Hauschild, Michael Zwicky

    2010-01-01

    A number of waste life cycle assessment (LCA) models have been gradually developed since the early 1990s, in a number of countries, usually independently from each other. Large discrepancies in results have been observed among different waste LCA models, although it has also been shown that results...... from different LCA studies can be consistent. This paper is an attempt to identify, review and analyse methodologies and technical assumptions used in various parts of selected waste LCA models. Several criteria were identified, which could have significant impacts on the results......, such as the functional unit, system boundaries, waste composition and energy modelling. The modelling assumptions of waste management processes, ranging from collection, transportation, intermediate facilities, recycling, thermal treatment, biological treatment, and landfilling, are obviously critical when comparing...

  4. Managerial and Organizational Assumptions in the CMM's

    DEFF Research Database (Denmark)

    Rose, Jeremy; Aaen, Ivan; Nielsen, Peter Axel

    2008-01-01

    Thinking about improving the management of software development in software firms is dominated by one approach: the capability maturity model devised and administered at the Software Engineering Institute at Carnegie Mellon University. Though CMM, and its replacement CMMI, are widely known and used...... thinking about large production and manufacturing organisations (particularly in America) in the late industrial age. Many of the difficulties reported with CMMI can be attributed to basing practice on these assumptions in organisations which have different cultures and management traditions, perhaps......

  5. Commentary: Considering Assumptions in Associations Between Music Preferences and Empathy-Related Responding

    Directory of Open Access Journals (Sweden)

    Susan A O'Neill

    2015-09-01

    Full Text Available This commentary considers some of the assumptions underpinning the study by Clark and Giacomantonio (2015). Their exploratory study examined relationships between young people's music preferences and their cognitive and affective empathy-related responses. First, the prescriptive assumption that music preferences can be measured according to how often an individual listens to a particular music genre is considered within axiology or value theory as a multidimensional construct (general, specific, and functional values). This is followed by a consideration of the causal assumption that if we increase young people's empathy through exposure to prosocial song lyrics this will increase their prosocial behavior. It is suggested that the predictive power of musical preferences on empathy-related responding might benefit from a consideration of the larger pattern of psychological and subjective wellbeing within the context of developmental regulation across ontogeny that involves mutually influential individual—context relations.

  6. New Assumptions to Guide SETI Research

    Science.gov (United States)

    Colombano, S. P.

    2018-01-01

    The recent Kepler discoveries of Earth-like planets offer the opportunity to focus our attention on detecting signs of life and technology in specific planetary systems, but I feel we need to become more flexible in our assumptions. The reason is that, while it is still reasonable and conservative to assume that life is most likely to have originated in conditions similar to ours, the vast time differences in potential evolutions render the likelihood of "matching" technologies very slim. In light of these challenges I propose a more "aggressive" approach to future SETI exploration in directions that until now have received little consideration.

  7. Characterization of Yellow Seahorse Hippocampus kuda feeding click sound signals in a laboratory environment: an application of probability density function and power spectral density analyses

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.; Saran, A.K.; Kuncolienker, D.S.; Sreepada, R.A.; Haris, K.; Fernandes, W.A

    based on the assumption of combinations of normal/Gaussian distributions indicate well-fitted multimodal curves generated using MATLAB (Math Works Inc 2005) programs. Out of the twenty-three clicks, four clicks (two clicks of 16 and 18 cm male...
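
    The multimodal curve fitting described above (done with MATLAB's Curve Fitting Toolbox in the original work) can be approximated with a Gaussian mixture fit; the sketch below uses synthetic stand-in samples and selects the number of normal components by BIC. Sample values and component counts are hypothetical.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # Synthetic stand-in for click amplitude samples: two overlapping normal components.
    samples = np.concatenate([rng.normal(-0.2, 0.05, 2000),
                              rng.normal(0.15, 0.08, 1500)]).reshape(-1, 1)

    # Choose the number of components by BIC, then report the fitted mixture.
    best = min((GaussianMixture(k, random_state=0).fit(samples) for k in range(1, 5)),
               key=lambda g: g.bic(samples))
    print("components:", best.n_components)
    print("weights:   ", np.round(best.weights_, 3))
    print("means:     ", np.round(best.means_.ravel(), 3))
    print("std devs:  ", np.round(np.sqrt(best.covariances_.ravel()), 3))
    ```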

  8. Questionable assumptions hampered interpretation of a network meta-analysis of primary care depression treatments.

    Science.gov (United States)

    Linde, Klaus; Rücker, Gerta; Schneider, Antonius; Kriston, Levente

    2016-03-01

    We aimed to evaluate the underlying assumptions of a network meta-analysis investigating which depression treatment works best in primary care and to highlight challenges and pitfalls of interpretation under consideration of these assumptions. We reviewed 100 randomized trials investigating pharmacologic and psychological treatments for primary care patients with depression. Network meta-analysis was carried out within a frequentist framework using response to treatment as outcome measure. Transitivity was assessed by epidemiologic judgment based on theoretical and empirical investigation of the distribution of trial characteristics across comparisons. Homogeneity and consistency were investigated by decomposing the Q statistic. There were important clinical and statistically significant differences between "pure" drug trials comparing pharmacologic substances with each other or placebo (63 trials) and trials including a psychological treatment arm (37 trials). Overall network meta-analysis produced results well comparable with separate meta-analyses of drug trials and psychological trials. Although the homogeneity and consistency assumptions were mostly met, we considered the transitivity assumption unjustifiable. An exchange of experience between reviewers and, if possible, some guidance on how reviewers addressing important clinical questions can proceed in situations where important assumptions for valid network meta-analysis are not met would be desirable. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. A criterion of orthogonality on the assumption and restrictions in subgrid-scale modelling of turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Fang, L. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China); Sun, X.Y. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Liu, Y.W., E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China)

    2016-12-09

    In order to shed light on understanding the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, then show by a generalized derivation that if there are multiple stationary restrictions in a modelling, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on generally guiding the SGS modelling methodology. - Highlights: • The concepts of assumption and restriction in the SGS modelling procedure are defined. • A criterion of orthogonality on the assumption and restrictions is derived. • Numerical tests using one-dimensional nonlinear advection equation are performed to validate this criterion.

  10. Black-Litterman model on non-normal stock return (Case study four banks at LQ-45 stock index)

    Science.gov (United States)

    Mahrivandi, Rizki; Noviyanti, Lienda; Setyanto, Gatot Riwi

    2017-03-01

    The formation of an optimal portfolio is a method that can help investors to minimize risk and optimize profitability. One model for the optimal portfolio is the Black-Litterman (BL) model. The BL model can incorporate historical data and the views of investors to form a new prediction about the return of the portfolio as a basis for constructing the asset weights. The BL model has two fundamental problems: the assumption of normality and the estimation of parameters in the Bayesian prior market framework when the data do not come from a normal distribution. This study provides an alternative solution in which the stock returns and investor views of the BL model are modelled with a non-normal distribution.
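
    For reference, the sketch below implements the standard normal-theory Black-Litterman posterior, combining market-implied equilibrium returns with one investor view; the covariance matrix, weights, view, and scaling constants are hypothetical, and the non-normal extension proposed in the paper is not implemented here.

    ```python
    import numpy as np

    # Hypothetical inputs for four bank stocks (not the paper's LQ-45 data).
    Sigma = np.array([[0.040, 0.012, 0.010, 0.008],
                      [0.012, 0.035, 0.009, 0.007],
                      [0.010, 0.009, 0.030, 0.006],
                      [0.008, 0.007, 0.006, 0.025]])   # covariance of returns
    w_mkt = np.array([0.30, 0.30, 0.25, 0.15])         # market-cap weights
    delta, tau = 2.5, 0.05                             # risk aversion, uncertainty scale

    pi = delta * Sigma @ w_mkt                         # implied equilibrium excess returns

    # One absolute view: asset 0 will return 8%; Omega encodes the view uncertainty.
    P = np.array([[1.0, 0.0, 0.0, 0.0]])
    q = np.array([0.08])
    Omega = np.diag(np.diag(P @ (tau * Sigma) @ P.T))

    # Black-Litterman posterior mean of expected returns.
    inv = np.linalg.inv
    mu_bl = inv(inv(tau * Sigma) + P.T @ inv(Omega) @ P) @ (
            inv(tau * Sigma) @ pi + P.T @ inv(Omega) @ q)

    w_bl = inv(delta * Sigma) @ mu_bl                  # unconstrained optimal weights
    print("posterior returns:", np.round(mu_bl, 4))
    print("BL weights (unnormalized):", np.round(w_bl, 3))
    ```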

  11. Measuring Productivity Change without Neoclassical Assumptions: A Conceptual Analysis

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2008-01-01

    textabstractThe measurement of productivity change (or difference) is usually based on models that make use of strong assumptions such as competitive behaviour and constant returns to scale. This survey discusses the basics of productivity measurement and shows that one can dispense with most if not

  12. Exploring five common assumptions on Attention Deficit Hyperactivity Disorder

    NARCIS (Netherlands)

    Batstra, Laura; Nieweg, Edo H.; Hadders-Algra, Mijna

    The number of children diagnosed with attention deficit hyperactivity disorder (ADHD) and treated with medication is steadily increasing. The aim of this paper was to critically discuss five debatable assumptions on ADHD that may explain these trends to some extent. These are that ADHD (i) causes

  13. The quotient of normal random variables and application to asset price fat tails

    Science.gov (United States)

    Caginalp, Carey; Caginalp, Gunduz

    2018-06-01

    The quotient of random variables with normal distributions is examined and proven to have power law decay, with density f(x) ≃ f₀x⁻², with the coefficient f₀ depending on the means and variances of the numerator and denominator and their correlation. We also obtain the conditional probability densities for each of the four quadrants given by the signs of the numerator and denominator for arbitrary correlation ρ ∈ [−1, 1). For ρ = −1 we obtain a particularly simple closed form solution for all x ∈ ℝ. The results are applied to a basic issue in economics and finance, namely the density of relative price changes. Classical finance stipulates a normal distribution of relative price changes, though empirical studies suggest a power law at the tail end. By considering the supply and demand in a basic price change model, we prove that the relative price change has a density that decays with an x⁻² power law. Various parameter limits are established.
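
    The stated x⁻² tail is easy to check numerically: if the density of the ratio decays like f₀x⁻² in the tails, then P(|X| > x) falls off roughly like 1/x, so x·P(|X| > x) should level off at a constant for large x. The means, variances, and correlation in the sketch below are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 2_000_000
    mu = np.array([1.0, 0.5])
    rho, s1, s2 = 0.3, 1.0, 0.8
    cov = np.array([[s1**2, rho * s1 * s2],
                    [rho * s1 * s2, s2**2]])

    num, den = rng.multivariate_normal(mu, cov, size=n).T
    ratio = num / den

    # If the density decays like f0 * x**-2 in the tails, P(|X| > x) ~ 1/x, so
    # x * P(|X| > x) should approach a constant as x grows.
    for x in (10, 30, 100, 300):
        tail = np.mean(np.abs(ratio) > x)
        print(f"x = {x:4d}   x * P(|X|>x) = {x * tail:.4f}")
    ```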

  14. Comparisons between a new point kernel-based scheme and the infinite plane source assumption method for radiation calculation of deposited airborne radionuclides from nuclear power plants.

    Science.gov (United States)

    Zhang, Xiaole; Efthimiou, George; Wang, Yan; Huang, Meng

    2018-04-01

    Radiation from the deposited radionuclides is indispensable information for environmental impact assessment of nuclear power plants and emergency management during nuclear accidents. Ground shine estimation is related to multiple physical processes, including atmospheric dispersion, deposition, soil and air radiation shielding. It still remains unclear whether the normally adopted "infinite plane" source assumption for the ground shine calculation is accurate enough, especially for the area with highly heterogeneous deposition distribution near the release point. In this study, a new ground shine calculation scheme, which accounts for both the spatial deposition distribution and the properties of air and soil layers, is developed based on the point kernel method. Two sets of "detector-centered" grids are proposed and optimized for both the deposition and radiation calculations to better simulate the results measured by the detectors, which will be beneficial for applications such as source term estimation. The evaluation against the available data of Monte Carlo methods in the literature indicates that the errors of the new scheme are within 5% for the key radionuclides in nuclear accidents. The comparisons between the new scheme and "infinite plane" assumption indicate that the assumption is tenable (relative errors within 20%) for the area located 1 km away from the release source. Within the 1 km range, the assumption mainly causes errors for wet deposition and the errors are independent of rain intensities. The results suggest that the new scheme should be adopted if the detectors are within 1 km from the source under the stable atmosphere (classes E and F), or the detectors are within 500 m under slightly unstable (class C) or neutral (class D) atmosphere. Otherwise, the infinite plane assumption is reasonable since the relative errors induced by this assumption are within 20%. The results here are only based on theoretical investigations. They should
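
    A stripped-down illustration of the point-kernel idea versus the infinite plane assumption is sketched below: an uncollided point-kernel sum over a heterogeneous deposition grid is compared with the classical uniform infinite-plane result for the same mean surface activity. The attenuation coefficient, grid, detector height, and deposition pattern are placeholders, and buildup factors and soil shielding are ignored.

    ```python
    import numpy as np
    from scipy.special import exp1

    MU_AIR = 9.0e-3      # illustrative linear attenuation coefficient in air, 1/m
    H_DET = 1.0          # detector height above ground, m

    def point_kernel_rate(deposit, cell, h=H_DET, mu=MU_AIR):
        """Uncollided flux-like quantity at a detector centred over a deposition grid.

        deposit : 2-D array of surface activity density (arbitrary units per m^2)
        cell    : cell edge length in metres
        """
        ny, nx = deposit.shape
        y = (np.arange(ny) - (ny - 1) / 2) * cell
        x = (np.arange(nx) - (nx - 1) / 2) * cell
        xx, yy = np.meshgrid(x, y)
        r = np.sqrt(xx**2 + yy**2 + h**2)
        return np.sum(deposit * cell**2 * np.exp(-mu * r) / (4 * np.pi * r**2))

    def infinite_plane_rate(sigma, h=H_DET, mu=MU_AIR):
        """Same quantity for a uniform infinite plane source of surface activity sigma."""
        return sigma * exp1(mu * h) / 2.0   # classic uncollided result, no buildup

    # Heterogeneous deposit: activity concentrated in a hot patch near the detector.
    cell = 10.0
    deposit = np.zeros((201, 201))
    deposit[95:106, 95:106] = 1.0                    # 110 m x 110 m patch, sigma = 1
    hetero = point_kernel_rate(deposit, cell)
    plane = infinite_plane_rate(deposit.mean())      # plane with the same average deposition
    print(f"point kernel: {hetero:.3e}   infinite plane (same mean): {plane:.3e}")
    ```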

  15. Departures from local thermodynamic equilibrium in cutting arc plasmas derived from electron and gas density measurements using a two-wavelength quantitative Schlieren technique

    International Nuclear Information System (INIS)

    Prevosto, L.; Mancinelli, B.; Artana, G.; Kelly, H.

    2011-01-01

    A two-wavelength quantitative Schlieren technique that allows inferring the electron and gas densities of axisymmetric arc plasmas without imposing any assumption regarding statistical equilibrium models is reported. This technique was applied to the study of local thermodynamic equilibrium (LTE) departures within the core of a 30 A high-energy density cutting arc. In order to derive the electron and heavy particle temperatures from the inferred density profiles, a generalized two-temperature Saha equation together with the plasma equation of state and the quasineutrality condition were employed. Factors such as arc fluctuations that influence the accuracy of the measurements and the validity of the assumptions used to derive the plasma species temperature were considered. Significant deviations from chemical equilibrium as well as kinetic equilibrium were found at elevated electron temperatures and gas densities toward the arc core edge. An electron temperature profile nearly constant through the arc core with a value of about 14000-15000 K, well decoupled from the heavy particle temperature of about 1500 K at the arc core edge, was inferred.

  16. Midplane neutral density profiles in the National Spherical Torus Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Stotler, D. P., E-mail: dstotler@pppl.gov; Bell, R. E.; Diallo, A.; LeBlanc, B. P.; Podestà, M.; Roquemore, A. L.; Ross, P. W. [Princeton Plasma Physics Laboratory, Princeton University, P. O. Box 451, Princeton, New Jersey 08543-0451 (United States); Scotti, F. [Lawrence Livermore National Laboratory, Livermore, California 94551 (United States)

    2015-08-15

    Atomic and molecular density data in the outer midplane of NSTX [Ono et al., Nucl. Fusion 40, 557 (2000)] are inferred from tangential camera data via a forward modeling procedure using the DEGAS 2 Monte Carlo neutral transport code. The observed Balmer-β light emission data from 17 shots during the 2010 NSTX campaign display no obvious trends with discharge parameters such as the divertor Balmer-α emission level or edge deuterium ion density. Simulations of 12 time slices in 7 of these discharges produce molecular densities near the vacuum vessel wall of 2–8 × 10¹⁷ m⁻³ and atomic densities ranging from 1 to 7 × 10¹⁶ m⁻³; neither has a clear correlation with other parameters. Validation of the technique, begun in an earlier publication, is continued with an assessment of the sensitivity of the simulated camera image and neutral densities to uncertainties in the data input to the model. The simulated camera image is sensitive to the plasma profiles and virtually nothing else. The neutral densities at the vessel wall depend most strongly on the spatial distribution of the source; simulations with a localized neutral source yield densities within a factor of two of the baseline, uniform source, case. The uncertainties in the neutral densities associated with other model inputs and assumptions are ≤50%.

  17. Exploring the relationship between population density and maternal health coverage

    Directory of Open Access Journals (Sweden)

    Hanlon Michael

    2012-11-01

    Full Text Available Abstract Background Delivering health services to dense populations is more practical than to dispersed populations, other factors constant. This engenders the hypothesis that population density positively affects coverage rates of health services. This hypothesis has been tested indirectly for some services at a local level, but not at a national level. Methods We use cross-sectional data to conduct cross-country, OLS regressions at the national level to estimate the relationship between population density and maternal health coverage. We separately estimate the effect of two measures of density on three population-level coverage rates (6 tests in total). Our coverage indicators are the fraction of the maternal population completing four antenatal care visits and the utilization rates of both skilled birth attendants and in-facility delivery. The first density metric we use is the percentage of a population living in an urban area. The second metric, which we denote as a density score, is a relative ranking of countries by population density. The score’s calculation discounts a nation’s uninhabited territory under the assumption those areas are irrelevant to service delivery. Results We find significantly positive relationships between our maternal health indicators and density measures. On average, a one-unit increase in our density score is equivalent to a 0.2% increase in coverage rates. Conclusions Countries with dispersed populations face higher burdens to achieve multinational coverage targets such as the United Nations’ Millennium Development Goals.
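
    The cross-country OLS described above reduces to a one-regressor linear model; the sketch below runs it on synthetic stand-in data (a 0-100 density score and a coverage rate in percent) and reports the slope with its standard error. All values are simulated, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_countries = 120

    # Synthetic stand-ins: a 0-100 density score and skilled-birth-attendance coverage (%).
    density_score = rng.uniform(0, 100, n_countries)
    coverage = 40 + 0.2 * density_score + rng.normal(0, 10, n_countries)

    # Cross-country OLS: coverage ~ intercept + density score.
    X = np.column_stack([np.ones(n_countries), density_score])
    beta, *_ = np.linalg.lstsq(X, coverage, rcond=None)
    resid = coverage - X @ beta
    se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * resid.var(ddof=2))
    print(f"slope = {beta[1]:.3f} (+/- {se[1]:.3f}): each one-unit rise in the "
          f"density score is associated with about {beta[1]:.1f} pp more coverage")
    ```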

  18. Implicit Assumptions in Special Education Policy: Promoting Full Inclusion for Students with Learning Disabilities

    Science.gov (United States)

    Kirby, Moira

    2017-01-01

    Introduction: Everyday millions of students in the United States receive special education services. Special education is an institution shaped by societal norms. Inherent in these norms are implicit assumptions regarding disability and the nature of special education services. The two dominant implicit assumptions evident in the American…

  19. The relationship between fission track length and track density in apatite

    International Nuclear Information System (INIS)

    Laslett, G.M.; Gleadow, A.J.W.; Duddy, I.R.

    1984-01-01

    Fission track dating is based upon an age equation derived from a random line segment model for fission tracks. This equation contains the implicit assumption of a proportional relationship between the true mean length of fission tracks and their track density in an isotropic medium. Previous experimental investigation of this relationship for both spontaneous and induced tracks in apatite during progressive annealment model in an obvious fashion. Corrected equations relating track length and density for apatite, an anisotropic mineral, show that the proportionality in this case is between track density and a length factor which is a generalization of the mean track length combining the actual length and crystallographic orientation of the track. This relationship has been experimentally confirmed for induced tracks in Durango apatite, taking into account bias in sampling of the track lengths, and the effect of the bulk etching velocity. (author)

  20. Normal modes of Bardeen discs

    International Nuclear Information System (INIS)

    Verdaguer, E.

    1983-01-01

    The short wavelength normal modes of self-gravitating rotating polytropic discs in the Bardeen approximation are studied. The discs' oscillations can be seen in terms of two types of modes: the p-modes, whose driving forces are pressure forces, and the r-modes, driven by Coriolis forces. As a consequence of differential rotation, coupling between the two takes place and some mixed modes appear; their properties can be studied under the assumption of weak coupling, and it is seen that they avoid the crossing of the p- and r-modes. The short wavelength analysis provides a basis for the classification of the modes, which can be made by using the properties of their phase diagrams. The classification is applied to the large wavelength modes of differentially rotating discs with strong coupling and to a uniformly rotating sequence with no coupling, which have been calculated in previous papers. Many of the physical properties and qualitative features of these modes are revealed by the analysis. (author)

  1. Using partially labeled data for normal mixture identification with application to class definition

    Science.gov (United States)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples are considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and nonsupervised learning processes.
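
    A compact sketch of the EM idea described above, in one dimension: unlabeled samples contribute responsibilities to every component, while each partially labeled sample has its responsibilities restricted (and renormalized) to the components belonging to its known class. The class-to-component assignment, data, and initial values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Two classes, three mixture components: class 0 -> components {0, 1}, class 1 -> {2}.
    class_of_component = np.array([0, 0, 1])
    true_means, true_stds, true_w = [-2.0, 0.0, 3.0], [0.6, 0.8, 0.7], [0.3, 0.3, 0.4]

    # Unlabeled pool plus partially labeled samples (class known, component unknown).
    comp = rng.choice(3, size=1500, p=true_w)
    x_unl = rng.normal(np.take(true_means, comp), np.take(true_stds, comp))
    x_lab = np.concatenate([rng.normal(-2, 0.6, 100), rng.normal(0, 0.8, 100),
                            rng.normal(3, 0.7, 120)])
    y_lab = np.array([0] * 200 + [1] * 120)

    def normal_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    # Initial guesses, then EM iterations.
    mu, sigma, w = np.array([-1.0, 0.5, 2.0]), np.ones(3), np.ones(3) / 3
    x_all = np.concatenate([x_unl, x_lab])
    for _ in range(200):
        # E-step: responsibilities for every sample and component.
        resp = w * normal_pdf(x_all[:, None], mu, sigma)
        # Partially labeled samples: zero out components of the wrong class, then renormalize.
        mask = (class_of_component[None, :] == y_lab[:, None])
        resp[len(x_unl):] *= mask
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted updates of the mixing weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        w = nk / len(x_all)
        mu = (resp * x_all[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x_all[:, None] - mu) ** 2).sum(axis=0) / nk)

    print("weights:", np.round(w, 2), "means:", np.round(mu, 2), "stds:", np.round(sigma, 2))
    ```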

  2. Assumptions behind size-based ecosystem models are realistic

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Blanchard, Julia L.; Fulton, Elizabeth A.

    2016-01-01

    A recent publication about balanced harvesting (Froese et al., ICES Journal of Marine Science; doi:10.1093/icesjms/fsv122) contains several erroneous statements about size-spectrum models. We refute the statements by showing that the assumptions pertaining to size-spectrum models discussed by Fro...... that there is indeed a constructive role for a wide suite of ecosystem models to evaluate fishing strategies in an ecosystem context...

  3. Effects of Electron Flow Current Density on Flow Impedance of Magnetically Insulated Transmission Lines

    International Nuclear Information System (INIS)

    He Yong; Zou Wen-Kang; Song Sheng-Yi

    2011-01-01

    In modern pulsed power systems, magnetically insulated transmission lines (MITLs) are used to couple power between the driver and the load. The circuit parameters of MITLs are well understood by employing the concept of flow impedance derived from Maxwell's equations and pressure balance across the flow. However, the electron density in an MITL is always taken as constant in the application of flow impedance. Thus effects of electron flow current density (product of electron density and drift velocity) in an MITL are neglected. We calculate the flow impedances of an MITL and compare them under three classical MITL theories, in which the electron density profile and electron flow current density are different from each other. It is found that the assumption of constant electron density profile in the calculation of the flow impedance is not always valid. The electron density profile and the electron flow current density have significant effects on flow impedance of the MITL. The details of the electron flow current density and its effects on the operation impedance of the MITL should be addressed more explicitly by experiments and theories in the future. (nuclear physics)

  4. On the assumption of vanishing temperature fluctuations at the wall for heat transfer modeling

    Science.gov (United States)

    Sommer, T. P.; So, R. M. C.; Zhang, H. S.

    1993-01-01

    Boundary conditions for fluctuating wall temperature are required for near-wall heat transfer modeling. However, their correct specification for arbitrary thermal boundary conditions is not clear. The conventional approach is to assume zero fluctuating wall temperature or zero gradient for the temperature variance at the wall. These are idealized specifications, and the latter condition could lead to an ill-posed problem for fully developed pipe and channel flows. In this paper, the validity and extent of the zero fluctuating wall temperature condition for heat transfer calculations are examined. The approach taken is to assume a Taylor expansion in the wall-normal coordinate for the fluctuating temperature that is general enough to account for both zero and non-zero values at the wall. Turbulent conductivity is calculated from the temperature variance and its dissipation rate. Heat transfer calculations assuming both zero and non-zero fluctuating wall temperature reveal that the zero fluctuating wall temperature assumption is in general valid. The effects of non-zero fluctuating wall temperature are limited to a very small region near the wall.

  5. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    Science.gov (United States)

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., the smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Thus, minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to the previous work.

  6. The electronic structure of normal metal-superconductor bilayers

    Energy Technology Data Exchange (ETDEWEB)

    Halterman, Klaus; Elson, J Merle [Sensor and Signal Sciences Division, Naval Air Warfare Center, China Lake, CA 93355 (United States)

    2003-09-03

    We study the electronic properties of ballistic thin normal metal-bulk superconductor heterojunctions by solving the Bogoliubov-de Gennes equations in the quasiclassical and microscopic 'exact' regimes. In particular, the significance of the proximity effect is examined through a series of self-consistent calculations of the space-dependent pair potential {delta}(r). It is found that self-consistency cannot be neglected for normal metal layer widths smaller than the superconducting coherence length {xi}{sub 0}, revealing its importance through discernible features in the subgap density of states. Furthermore, the exact self-consistent treatment yields a proximity-induced gap in the normal metal spectrum, which vanishes monotonically when the normal metal length exceeds {xi}{sub 0}. Through a careful analysis of the excitation spectra, we find that quasiparticle trajectories with wavevectors oriented mainly along the interface play a critical role in the destruction of the energy gap.

  7. Does Artificial Neural Network Support Connectivism's Assumptions?

    Science.gov (United States)

    AlDahdouh, Alaa A.

    2017-01-01

    Connectivism was presented as a learning theory for the digital age and connectivists claim that recent developments in Artificial Intelligence (AI) and, more specifically, Artificial Neural Network (ANN) support their assumptions of knowledge connectivity. Yet, very little has been done to investigate this brave allegation. Does the advancement…

  8. Anti-Atheist Bias in the United States: Testing Two Critical Assumptions

    Directory of Open Access Journals (Sweden)

    Lawton K Swan

    2012-02-01

    Full Text Available Decades of opinion polling and empirical investigations have clearly demonstrated a pervasive anti-atheist prejudice in the United States. However, much of this scholarship relies on two critical and largely unaddressed assumptions: (a) that when people report negative attitudes toward atheists, they do so because they are reacting specifically to their lack of belief in God; and (b) that survey questions asking about attitudes toward atheists as a group yield reliable information about biases against individual atheist targets. To test these assumptions, an online survey asked a probability-based random sample of American adults (N = 618) to evaluate a fellow research participant (“Jordan”). Jordan garnered significantly more negative evaluations when identified as an atheist than when described as religious or when religiosity was not mentioned. This effect did not differ as a function of labeling (“atheist” versus “no belief in God”), or the amount of individuating information provided about Jordan. These data suggest that both assumptions are tenable: nonbelief—rather than extraneous connotations of the word “atheist”—seems to underlie the effect, and participants exhibited a marked bias even when confronted with an otherwise attractive individual.

  9. Peri and Postparturient Concentrations of Lipid Lipoprotein Insulin and Glucose in Normal Dairy Cows

    OpenAIRE

    BAŞOĞLU, Abdullah; SEVİNÇ, Mutlu; OK, Mahmut

    1998-01-01

    In order to provide unique insight into the metabolic disturbances seen after calving, cholesterol, triglyceride, high density lipoprotein, low density lipoprotein, very low density lipoprotein, glucose and insulin levels in serum were studied before calving (group I) and in early (group II) and late (group III) lactation in 24 normal cows. Serum lipoproteins were separated into various density classes by repeated ultracentrifugation. The results indicate that there was a rise in glucose, trygl...

  10. Sensitivity of the OMI ozone profile retrieval (OMO3PR) to a priori assumptions

    NARCIS (Netherlands)

    Mielonen, T.; De Haan, J.F.; Veefkind, J.P.

    2014-01-01

    We have assessed the sensitivity of the operational OMI ozone profile retrieval (OMO3PR) algorithm to a number of a priori assumptions. We studied the effect of stray light correction, surface albedo assumptions and a priori ozone profiles on the retrieved ozone profile. Then, we studied how to

  11. Radiometric Normalization of Temporal Images Combining Automatic Detection of Pseudo-Invariant Features from the Distance and Similarity Spectral Measures, Density Scatterplot Analysis, and Robust Regression

    Directory of Open Access Journals (Sweden)

    Ana Paula Ferreira de Carvalho

    2013-05-01

    Full Text Available Radiometric precision is difficult to maintain in orbital images due to several factors (atmospheric conditions, Earth-sun distance, detector calibration, illumination, and viewing angles). These unwanted effects must be removed for radiometric consistency among temporal images, leaving only land-leaving radiances, for optimum change detection. A variety of relative radiometric correction techniques were developed for the correction or rectification of images of the same area through the use of reference targets whose reflectance does not change significantly with time, i.e., pseudo-invariant features (PIFs). This paper proposes a new technique for radiometric normalization, which uses three sequential methods for an accurate PIF selection: spectral measures of temporal data (spectral distance and similarity), density scatter plot analysis (ridge method), and robust regression. The spectral measures used are the spectral angle (Spectral Angle Mapper, SAM), spectral correlation (Spectral Correlation Mapper, SCM), and Euclidean distance. The spectral measures between the spectra at times t1 and t2 are calculated for each pixel. After classification using threshold values, it is possible to define points with the same spectral behavior, including PIFs. The distance and similarity measures are complementary and can be calculated together. The ridge method uses a density plot generated from images acquired on different dates for the selection of PIFs. In a density plot, the invariant pixels together form a high-density ridge, while variant pixels (clouds and land cover changes) are spread out with low density, facilitating their exclusion. Finally, the selected PIFs are subjected to a robust regression (M-estimate) between pairs of temporal bands for the detection and elimination of outliers, and to obtain the optimal linear equation for a given set of target points. The robust regression is insensitive to outliers, i.e., observation that appears to deviate
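
    As a rough illustration of the PIF idea described above (not the authors' implementation), the sketch below computes a per-pixel spectral angle between two dates, keeps pixels below an assumed angle threshold as PIF candidates, and fits a robust per-band regression. The Huber M-estimator from scikit-learn stands in for the paper's M-estimate, and all array names, thresholds, and the synthetic data are assumptions.

```python
# Hedged sketch: spectral-angle-based PIF selection plus robust band regression.
import numpy as np
from sklearn.linear_model import HuberRegressor

def spectral_angle(img_t1, img_t2):
    """img_t1, img_t2: (n_pixels, n_bands) reflectance arrays for two dates."""
    dot = np.sum(img_t1 * img_t2, axis=1)
    norms = np.linalg.norm(img_t1, axis=1) * np.linalg.norm(img_t2, axis=1)
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))   # radians, per pixel

def normalize_band(b_t1, b_t2, pif_mask):
    """Fit b_t1 ~ a*b_t2 + c on candidate PIFs; return b_t2 mapped to the t1 scale."""
    model = HuberRegressor().fit(b_t2[pif_mask].reshape(-1, 1), b_t1[pif_mask])
    return model.predict(b_t2.reshape(-1, 1))

# usage with synthetic data: pixels with a small spectral angle are PIF candidates
rng = np.random.default_rng(0)
t1 = rng.uniform(0, 1, (5000, 4))
t2 = 0.9 * t1 + 0.05 + rng.normal(0, 0.01, t1.shape)
pifs = spectral_angle(t1, t2) < 0.05                     # assumed threshold
t2_norm = np.column_stack([normalize_band(t1[:, b], t2[:, b], pifs) for b in range(4)])
```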

  12. THE COMPLEX OF ASSUMPTION CATHEDRAL OF THE ASTRAKHAN KREMLIN

    Directory of Open Access Journals (Sweden)

    Savenkova Aleksandra Igorevna

    2016-08-01

    Full Text Available This article is devoted to an architectural and historical analysis of the constructions forming the complex of the Assumption Cathedral of the Astrakhan Kremlin, which has not previously been considered as a subject of special research. Based on archival sources, photographic materials, publications and on-site investigations of the monuments, the creation history of the complete architectural complex, sustained in a single style of the Muscovite baroque and unique in its composite construction, is considered. Its interpretation in the all-Russian architectural context is offered. Typological features of individual constructions come to light. The Prechistinsky bell tower has an untypical architectural solution: a “hexagonal structure on octagonal and quadrangular structures”. The way of connecting the building of the Cathedral and the chambers by a passage was characteristic of monastic constructions and was exceedingly rare in kremlins, farmsteads and ensembles of city cathedrals. The composite scheme of the Assumption Cathedral includes the Lobnoye Mesto (“the Place of Execution”) located on an axis from the west; it is connected with the main building by a quarter-turn stair with a landing. The only prototype of the structure is the Lobnoye Mesto on Red Square in Moscow. The article also considers the version that the Place of Execution emerged on the basis of an earlier existing construction, a tower called “the Peal”, which is repeatedly mentioned in written sources in connection with S. Razin's revolt. The metropolitan Sampson, trying to preserve the standing of the Astrakhan metropolitanate, built the Assumption Cathedral and the Place of Execution in direct appeal to a capital prototype, to emphasize the continuity and close connection with Moscow.

  13. Physics of collisionless scrape-off-layer plasma during normal and off-normal Tokamak operating conditions

    International Nuclear Information System (INIS)

    Hassanein, A.; Konkashbaev, I.

    1999-01-01

    The structure of a collisionless scrape-off-layer (SOL) plasma in tokamak reactors is being studied to define the electron distribution function and the corresponding sheath potential between the divertor plate and the edge plasma. The collisionless model is shown to be valid during the thermal phase of a plasma disruption, as well as during the newly desired low-recycling normal phase of operation with low-density, high-temperature edge-plasma conditions. An analytical solution is developed by solving the Fokker-Planck equation for electron distribution and balance in the SOL. The solution is in good agreement with numerical studies using Monte-Carlo methods. The analytical solutions provide insight into the role of different physical and geometrical processes in a collisionless SOL during disruptions and during the enhanced phase of normal operation over a wide range of parameters

  14. Contrasting cue-density effects in causal and prediction judgments.

    Science.gov (United States)

    Vadillo, Miguel A; Musca, Serban C; Blanco, Fernando; Matute, Helena

    2011-02-01

    Many theories of contingency learning assume (either explicitly or implicitly) that predicting whether an outcome will occur should be easier than making a causal judgment. Previous research suggests that outcome predictions would depart from normative standards less often than causal judgments, which is consistent with the idea that the latter are based on more numerous and complex processes. However, only indirect evidence exists for this view. The experiment presented here specifically addresses this issue by allowing for a fair comparison of causal judgments and outcome predictions, both collected at the same stage with identical rating scales. Cue density, a parameter known to affect judgments, is manipulated in a contingency learning paradigm. The results show that, if anything, the cue-density bias is stronger in outcome predictions than in causal judgments. These results contradict key assumptions of many influential theories of contingency learning.

  15. I Assumed You Knew: Teaching Assumptions as Co-Equal to Observations in Scientific Work

    Science.gov (United States)

    Horodyskyj, L.; Mead, C.; Anbar, A. D.

    2016-12-01

    Introductory science curricula typically begin with a lesson on the "nature of science". Usually this lesson is short, built with the assumption that students have picked up this information elsewhere and only a short review is necessary. However, when asked about the nature of science in our classes, student definitions were often confused, contradictory, or incomplete. A cursory review of how the nature of science is defined in a number of textbooks is similarly inconsistent and excessively loquacious. With such confusion both from the student and teacher perspective, it is no surprise that students walk away with significant misconceptions about the scientific endeavor, which they carry with them into public life. These misconceptions subsequently result in poor public policy and personal decisions on issues with scientific underpinnings. We will present a new way of teaching the nature of science at the introductory level that better represents what we actually do as scientists. Nature of science lessons often emphasize the importance of observations in scientific work. However, they rarely mention and often hide the importance of assumptions in interpreting those observations. Assumptions are co-equal to observations in building models, which are observation-assumption networks that can be used to make predictions about future observations. The confidence we place in these models depends on whether they are assumption-dominated (hypothesis) or observation-dominated (theory). By presenting and teaching science in this manner, we feel that students will better comprehend the scientific endeavor, since making observations and assumptions and building mental models is a natural human behavior. We will present a model for a science lab activity that can be taught using this approach.

  16. A review and assessment of variable density ground water flow effects on plume formation at UMTRA project sites

    International Nuclear Information System (INIS)

    1995-01-01

    A standard assumption when evaluating the migration of plumes in ground water is that the impacted ground water has the same density as the native ground water. Thus density is assumed to be constant, and does not influence plume migration. This assumption is valid only for water with relatively low total dissolved solids (TDS) or a low difference in TDS between water introduced from milling processes and native ground water. Analyses in the literature suggest that relatively minor density differences can significantly affect plume migration. Density differences as small as 0.3 percent are known to cause noticeable effects on the plume migration path. The primary effect of density on plume migration is deeper migration than would be expected in the arid environments typically present at Uranium Mill Tailings Remedial Action (UMTRA) Project sites, where little or no natural recharge is available to drive the plume into the aquifer. It is also possible that at some UMTRA Project sites, a synergistic effect occurred during milling operations, where the mounding created by tailings drainage (which created a downward vertical gradient) and the density contrast between the process water and native ground water acted together, driving constituents deeper into the aquifer than either process would alone. Numerical experiments were performed with the U.S. Geological Survey saturated-unsaturated transport (SUTRA) model. This is a finite-element model capable of simulating the effects of variable fluid density on ground water flow and solute transport. The simulated aquifer parameters generally are representative of the Shiprock, New Mexico, UMTRA Project site where some of the highest TDS water from processing has been observed

  17. Two-dimensional electrodynamic structure of the normal glow discharge in an axial magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Surzhikov, S. T., E-mail: surg@ipmnet.ru [Russian Academy of Sciences, Institute for Problems in Mechanics (Russian Federation)

    2017-03-15

    Results are presented from numerical simulations of an axisymmetric normal glow discharge in molecular hydrogen and molecular nitrogen in an axial magnetic field. The charged particle densities and averaged azimuthal rotation velocities of electrons and ions are studied as functions of the gas pressure in the range of 1–5 Torr, electric field strength in the range of 100–600 V/cm, and magnetic field in the range of 0.01–0.3 T. It is found that the axial magnetic field does not disturb the normal current density law.

  18. Leakage-Resilient Circuits without Computational Assumptions

    DEFF Research Database (Denmark)

    Dziembowski, Stefan; Faust, Sebastian

    2012-01-01

    Physical cryptographic devices inadvertently leak information through numerous side-channels. Such leakage is exploited by so-called side-channel attacks, which often allow for a complete security breach. A recent trend in cryptography is to propose formal models to incorporate leakage...... on computational assumptions, our results are purely information-theoretic. In particular, we do not make use of public key encryption, which was required in all previous works...... into the model and to construct schemes that are provably secure within them. We design a general compiler that transforms any cryptographic scheme, e.g., a block-cipher, into a functionally equivalent scheme which is resilient to any continual leakage provided that the following three requirements are satisfied...

  19. Being Explicit about Underlying Values, Assumptions and Views when Designing for Children in the IDC Community

    DEFF Research Database (Denmark)

    Skovbjerg, Helle Marie; Bekker, Tilde; Barendregt, Wolmet

    2016-01-01

    In this full-day workshop we want to discuss how the IDC community can make underlying assumptions, values and views regarding children and childhood in making design decisions more explicit. What assumptions do IDC designers and researchers make, and how can they be supported in reflecting......, and intends to share different approaches for uncovering and reflecting on values, assumptions and views about children and childhood in design....

  20. Quantitative measurement of lung density with x-ray CT and positron CT, (2)

    International Nuclear Information System (INIS)

    Ito, Kengo; Ito, Masatoshi; Kubota, Kazuo

    1985-01-01

    Lung density was quantitatively measured in six diseased patients with X-ray CT (XCT) and positron CT (PCT). The findings are as follows: In silicosis, extravascular lung density was found to be remarkably increased compared to normals (0.29 g cm⁻³), but blood volume was in the normal range. In the post-irradiated lung cancers, extravascular lung density increased in the irradiated sites compared to the non-irradiated opposite sites, and blood volume varied in each case. In a patient with chronic heart failure, blood volume decreased (0.11 ml cm⁻³) with increased extravascular lung density (0.23 g cm⁻³). In chronic obstructive pulmonary disease, both extravascular lung density and blood volume decreased (0.11 g cm⁻³ and 0.10 ml cm⁻³, respectively). Lung density measured with XCT was consistently lower than that with PCT in all cases, but changes in the measured lung density values correlated well with each other. In conclusion, the method presented here may clarify the etiology of diffuse pulmonary diseases and be used to differentiate and grade the diseases. (author)

  1. Thyroid Stimulating Hormone and Bone Mineral Density

    DEFF Research Database (Denmark)

    van Vliet, Nicolien A; Noordam, Raymond; van Klinken, Jan B

    2018-01-01

    With population aging, prevalence of low bone mineral density (BMD) and associated fracture risk are increased. To determine whether low circulating thyroid stimulating hormone (TSH) levels within the normal range are causally related to BMD, we conducted a two-sample Mendelian randomization (MR...

  2. Blood Density Is Nearly Equal to Water Density: A Validation Study of the Gravimetric Method of Measuring Intraoperative Blood Loss.

    Science.gov (United States)

    Vitello, Dominic J; Ripper, Richard M; Fettiplace, Michael R; Weinberg, Guy L; Vitello, Joseph M

    2015-01-01

    Purpose. The gravimetric method of weighing surgical sponges is used to quantify intraoperative blood loss. The wet mass minus the dry mass of the gauze equals the volume of blood lost. This method assumes that the density of blood is equivalent to that of water (1 g/mL). This study's purpose was to validate the assumption that the density of blood is equivalent to that of water and to correlate density with hematocrit. Methods. 50 µL of whole blood was weighed from each of eighteen rats. A distilled water control was weighed for each blood sample. The averages of the blood and water masses were compared using Student's unpaired, one-tailed t-test. The masses of the blood samples and the hematocrits were compared using linear regression. Results. The average mass of the eighteen blood samples was 0.0489 g and that of the distilled water controls was 0.0492 g. The t-test showed P = 0.2269 and R² = 0.03154. The hematocrit values ranged from 24% to 48%. The linear regression R² value was 0.1767. Conclusions. The R² value comparing the blood and distilled water masses suggests high correlation between the two populations. Linear regression showed the hematocrit was not proportional to the mass of the blood. The study confirmed that the measured density of blood is similar to water.
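
    The arithmetic of the gravimetric method is simple enough to show directly; the sponge masses below are invented placeholders, and the unit blood density is the assumption the study set out to validate.

```python
# Gravimetric estimate of blood loss: (wet mass - dry mass) / assumed blood density.
dry_g, wet_g = 12.4, 37.9              # hypothetical sponge masses in grams
blood_density_g_per_ml = 1.0           # assumption validated by the study
loss_ml = (wet_g - dry_g) / blood_density_g_per_ml
print(f"Estimated blood loss: {loss_ml:.1f} mL")   # 25.5 mL
```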

  3. Effect of sex hormones on bone density during growth

    International Nuclear Information System (INIS)

    Gilsanz, V.; Roe, T.F.; Wells, T.R.; Senac, M.O. Jr.; Landing, B.; Libaneti, C.; Cann, C.E.; Schulz, E.

    1986-01-01

    The development of special phantoms permitted precise measurement of vertebral mineral content by CT in the very young. The normal standards for spinal trabecular bone of children aged 0-18 years are presented. Although there is no age-related difference in bone density before puberty, there is a significant increase in bone mineral content after puberty. The increase in sex hormones during puberty accounts for the increased density. Longitudinal studies analyzing vertebral density changes in castrated rabbits after testosterone and estradiol administration are discussed

  4. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

    2012-01-01

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  5. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  6. Normal CT measurement of sellar and juxtasellar structures

    International Nuclear Information System (INIS)

    Kim, Bo Hyun; Chung, Jin Wook; Han, Moon Hee; Chang, Kee Hyun

    1988-01-01

    Thorough knowledge of the precise anatomy of the sellar, parasellar and suprasellar regions and of normal computed tomographic (CT) features in those areas is very important, because there are many tiny but essential structures in which pathology makes only minute changes, so meticulous investigation is required. We performed direct coronal thin-section CT scans of the sellar and juxtasellar regions in 58 cases in order to evaluate normal CT features such as the CT densities, shapes and sizes of normal sellar and juxtasellar structures. The results obtained are as follows: 1. The CT densities of the pituitary glands were 87 ± 23 in the anterior lobe and 69 ± 22 in the posterior lobe; the latter was significantly less dense than the former. Posterior lobes could be identified as an oval low-density area on sagittal reconstruction in 18/58 (31%). Mean pituitary height was 6.5 ± 1.5 mm. In young females of childbearing age, mean height was 7.0 ± 1.7 mm. Upper margins of the pituitary glands were flat in 29 cases (50%), upward convex in 16 cases (28%), and upward concave in 13 cases (22%). Upper margins were upward convex in 8/15 (53%) of young females of childbearing age. Pituitary densities were homogeneous in 36 cases (77%) and heterogeneous in 7 cases (15%), and 4 cases (7%) showed a focal pituitary low density greater than 3 mm in diameter. 2. A moderate to severe degree of cisternal herniation was found in 10 cases (17%): only 1 case before the age of 30, and 9 cases after the age of 30. 3. The lateral margins of the cavernous sinus were bilaterally flat in 42 cases (72%), bilaterally convex in 3 cases (5%), unilaterally convex in 12 cases (21%), and unilaterally concave in only 1 case (2%). The third cranial nerves were found as symmetric filling defects in the superolateral aspect of the anterior cavernous sinus in most of the cases, with a maximal size of 2.7 ± 0.9 mm in diameter, not exceeding 3.5 mm. The other cranial nerves were less frequently identified as

  7. Testing a key assumption in animal communication: between-individual variation in female visual systems alters perception of male signals

    Directory of Open Access Journals (Sweden)

    Kelly L. Ronald

    2017-12-01

    Full Text Available Variation in male signal production has been extensively studied because of its relevance to animal communication and sexual selection. Although we now know much about the mechanisms that can lead to variation between males in the properties of their signals, there is still a general assumption that there is little variation in terms of how females process these male signals. Variation between females in signal processing may lead to variation between females in how they rank individual males, meaning that one single signal may not be universally attractive to all females. We tested this assumption in a group of female wild-caught brown-headed cowbirds (Molothrus ater), a species that uses a male visual signal (e.g. a wingspread display) to make its mate-choice decisions. We found that females varied in two key parameters of their visual sensory systems related to chromatic and achromatic vision: cone densities (both total and proportions) and cone oil droplet absorbance. Using visual chromatic and achromatic contrast modeling, we then found that this between-individual variation in visual physiology leads to significant between-individual differences in how females perceive chromatic and achromatic male signals. These differences may lead to variation in female preferences for male visual signals, which would provide a potential mechanism for explaining individual differences in mate-choice behavior.

  8. Testing a key assumption in animal communication: between-individual variation in female visual systems alters perception of male signals.

    Science.gov (United States)

    Ronald, Kelly L; Ensminger, Amanda L; Shawkey, Matthew D; Lucas, Jeffrey R; Fernández-Juricic, Esteban

    2017-12-15

    Variation in male signal production has been extensively studied because of its relevance to animal communication and sexual selection. Although we now know much about the mechanisms that can lead to variation between males in the properties of their signals, there is still a general assumption that there is little variation in terms of how females process these male signals. Variation between females in signal processing may lead to variation between females in how they rank individual males, meaning that one single signal may not be universally attractive to all females. We tested this assumption in a group of female wild-caught brown-headed cowbirds ( Molothrus ater ), a species that uses a male visual signal (e.g. a wingspread display) to make its mate-choice decisions. We found that females varied in two key parameters of their visual sensory systems related to chromatic and achromatic vision: cone densities (both total and proportions) and cone oil droplet absorbance. Using visual chromatic and achromatic contrast modeling, we then found that this between-individual variation in visual physiology leads to significant between-individual differences in how females perceive chromatic and achromatic male signals. These differences may lead to variation in female preferences for male visual signals, which would provide a potential mechanism for explaining individual differences in mate-choice behavior. © 2017. Published by The Company of Biologists Ltd.

  9. A Proposal for Testing Local Realism Without Using Assumptions Related to Hidden Variable States

    Science.gov (United States)

    Ryff, Luiz Carlos

    1996-01-01

    A feasible experiment is discussed which allows us to prove a Bell theorem for two particles without using an inequality. The experiment could be used to test local realism against quantum mechanics without the introduction of additional assumptions related to hidden-variable states. Only assumptions based on direct experimental observation are needed.

  10. A pdf-Free Change Detection Test Based on Density Difference Estimation.

    Science.gov (United States)

    Bu, Li; Alippi, Cesare; Zhao, Dongbin

    2018-02-01

    The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability density function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. Thresholds requested to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness in detection of the proposed method both in terms of detection promptness and accuracy.
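
    A stripped-down sketch of a least-squares density-difference estimate between two windows of a stream is given below; the choice of kernel centres, the kernel width, the regularization constant, and the omission of the reservoir sampling and automatic thresholding described in the record are all simplifying assumptions, so this is an illustration of the estimator's core algebra rather than the proposed test.

```python
# Simplified least-squares density-difference (LSDD) sketch for change detection.
import numpy as np

def lsdd(X, Xp, sigma=1.0, lam=1e-3):
    """X, Xp: (n, d) samples from the reference and test windows."""
    C = np.vstack([X, Xp])                         # kernel centres (assumed: all samples)
    d = C.shape[1]

    def K(A, B):                                   # Gaussian kernel matrix
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))

    # H_ll' = integral of k(x, c_l) k(x, c_l') dx, closed form for Gaussian kernels
    sq_cc = ((C[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    H = (np.pi * sigma ** 2) ** (d / 2) * np.exp(-sq_cc / (4 * sigma ** 2))
    h = K(C, X).mean(axis=1) - K(C, Xp).mean(axis=1)
    theta = np.linalg.solve(H + lam * np.eye(len(C)), h)
    return float(2 * theta @ h - theta @ H @ theta)   # estimated squared L2 difference

# usage idea: a large lsdd(reference_window, current_window) value suggests a change
```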

  11. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    Science.gov (United States)

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.

  12. Origins and Traditions in Comparative Education: Challenging Some Assumptions

    Science.gov (United States)

    Manzon, Maria

    2018-01-01

    This article questions some of our assumptions about the history of comparative education. It explores new scholarship on key actors and ways of knowing in the field. Building on the theory of the social constructedness of the field of comparative education, the paper elucidates how power shapes our scholarly histories and identities.

  13. Thermodynamic bounds for existence of normal shock in compressible fluid flow in pipes

    Directory of Open Access Journals (Sweden)

    SERGIO COLLE

    Full Text Available Abstract The present paper is concerned with the thermodynamic theory of the normal shock in compressible fluid flow in pipes, in the light of the pioneering works of Lord Rayleigh and G. Fanno. The theory of the normal shock in pipes is currently presented in terms of the Rayleigh and Fanno curves, which are shown to cross each other at two points, one corresponding to a subsonic flow and the other corresponding to a supersonic flow. A novel differential identity is proposed in this paper, which relates the energy flux density, the linear momentum flux density, and the entropy, for constant mass flow density. The identity so obtained is used to establish a theorem, which shows that the Rayleigh and Fanno curves become tangent to each other at a single sonic point. At the sonic point the entropy reaches a maximum, either as a function of the pressure and the energy density flux or as a function of the pressure and the linear momentum density flux. A Second Law analysis is also presented, which is fully independent of the Second Law analysis based on the Rankine-Hugoniot adiabatic carried out by Landau and Lifshitz (1959).

  14. Sleep spindle density in narcolepsy

    DEFF Research Database (Denmark)

    Christensen, Julie Anja Engelhard; Nikolic, Miki; Hvidtfelt, Mathias

    2017-01-01

    BACKGROUND: Patients with narcolepsy type 1 (NT1) show alterations in sleep stage transitions, rapid-eye-movement (REM) and non-REM sleep due to the loss of hypocretinergic signaling. However, the sleep microstructure has not yet been evaluated in these patients. We aimed to evaluate whether...... the sleep spindle (SS) density is altered in patients with NT1 compared to controls and patients with narcolepsy type 2 (NT2). METHODS: All-night polysomnographic recordings from 28 NT1 patients, 19 NT2 patients, 20 controls (C) with narcolepsy-like symptoms, but with normal cerebrospinal fluid hypocretin...... levels and multiple sleep latency tests, and 18 healthy controls (HC) were included. Unspecified, slow, and fast SS were automatically detected, and SS densities were defined as number per minute and were computed across sleep stages and sleep cycles. The between-cycle trends of SS densities in N2...

  15. Normal zone soliton in large composite superconductors

    International Nuclear Information System (INIS)

    Kupferman, R.; Mints, R.G.; Ben-Jacob, E.

    1992-01-01

    The study of normal zones of finite size (normal domains) in superconductors has long been a subject of interest in the field of applied superconductivity. It was shown that in homogeneous superconductors normal domains are always unstable, so that if a normal domain nucleates, it will either expand or shrink. While testing the stability of large cryostable composite superconductors, a new phenomenon was found: the existence of stable propagating normal solitons. The formation of these propagating domains was shown to be a result of the high Joule power generated in the superconductor during the relatively long process of current redistribution between the superconductor and the stabilizer. Theoretical studies were performed to investigate the propagation of normal domains in large composite superconductors in the cryostable regime. Huang and Eyssa performed numerical calculations simulating the diffusion of heat and current redistribution in the conductor, and showed the existence of stable propagating normal domains. They compared the velocity of normal domain propagation with the experimental data, obtaining reasonable agreement. Dresner presented an analytical method to solve this problem if the time dependence of the Joule power is given. He performed explicit calculations of the normal domain velocity assuming that the Joule power decays exponentially during the process of current redistribution. In this paper, the authors propose a system of two one-dimensional diffusion equations describing the dynamics of the temperature and the current density distributions along the conductor. Numerical simulations of the equations reconfirm the existence of propagating domains in the cryostable regime, while an analytical investigation supplies an explicit formula for the velocity of the normal domain

  16. The Role of Neutral Atmospheric Dynamics in Cusp Density - 2nd Campaign

    Science.gov (United States)

    2013-12-30

    density enhancement at the CHAMP altitude of 400 km. Then Clemmons et al. (2008) presented observations from ...250 km. This appeared to contradict the CHAMP observations, so Clemmons et al. proposed that heating occurred at an altitude above Streak, caused by...temperatures less than 1000 K. The ion temperatures can be related to the speed of the plasma as shown by St Maurice and Hanson (1982) using the assumption

  17. Normal-metal quasiparticle traps for superconducting qubits

    Energy Technology Data Exchange (ETDEWEB)

    Hosseinkhani, Amin [Peter Grunberg Institute (PGI-2), Forschungszentrum Julich, D-52425 Julich (Germany); JARA-Institute for Quantum Information, RWTH Aachen University, D-52056 Aachen (Germany)

    2016-07-01

    Superconducting qubits are promising candidates to implement quantum computation, and have been a subject of intensive research in the past decade. Excitations of a superconductor, known as quasiparticles, can reduce the qubit performance by causing relaxation; the relaxation rate is proportional to the density of quasiparticles tunneling through the Josephson junction. Here, we consider engineering quasiparticle traps by covering parts of a superconducting device with normal-metal islands. We utilize a phenomenological quasiparticle diffusion model to study both the decay rate of excess quasiparticles and the steady-state profile of the quasiparticle density in the device. We apply the model to various realistic configurations to explore the role of the geometry and location of the traps.

  18. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    Science.gov (United States)

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected

  19. Reward value-based gain control: divisive normalization in parietal cortex.

    Science.gov (United States)

    Louie, Kenway; Grattan, Lauren E; Glimcher, Paul W

    2011-07-20

    The representation of value is a critical component of decision making. Rational choice theory assumes that options are assigned absolute values, independent of the value or existence of other alternatives. However, context-dependent choice behavior in both animals and humans violates this assumption, suggesting that biological decision processes rely on comparative evaluation. Here we show that neurons in the monkey lateral intraparietal cortex encode a relative form of saccadic value, explicitly dependent on the values of the other available alternatives. Analogous to extra-classical receptive field effects in visual cortex, this relative representation incorporates target values outside the response field and is observed in both stimulus-driven activity and baseline firing rates. This context-dependent modulation is precisely described by divisive normalization, indicating that this standard form of sensory gain control may be a general mechanism of cortical computation. Such normalization in decision circuits effectively implements an adaptive gain control for value coding and provides a possible mechanistic basis for behavioral context-dependent violations of rationality.
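
    The normalization described here has a compact form: each option's value signal is divided by the summed value of all available options plus a semisaturation constant. The sketch below is a minimal numerical illustration with assumed parameter values, not the authors' fitted model of the neural data.

```python
# Divisive normalization of value signals: adding an alternative suppresses
# the normalized value of the existing options (context dependence).
import numpy as np

def divisive_normalization(values, sigma=1.0):
    values = np.asarray(values, dtype=float)
    return values / (sigma + values.sum())

print(divisive_normalization([10, 5]))        # two targets
print(divisive_normalization([10, 5, 20]))    # a third target lowers both originals
```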

  20. Monte Carlo neutral density calculations for ELMO Bumpy Torus

    International Nuclear Information System (INIS)

    Davis, W.A.; Colchin, R.J.

    1986-11-01

    The steady-state nature of the ELMO Bumpy Torus (EBT) plasma implies that the neutral density at any point inside the plasma volume will determine the local particle confinement time. This paper describes a Monte Carlo calculation of three-dimensional atomic and molecular neutral density profiles in EBT. The calculation has been done using various models for neutral source points, for launching schemes, for plasma profiles, and for plasma densities and temperatures. Calculated results are compared with experimental observations - principally spectroscopic measurements - both for guidance in normalization and for overall consistency checks. Implications of the predicted neutral profiles for the fast-ion-decay measurement of neutral densities are also addressed

  1. Normal gravity field in relativistic geodesy

    Science.gov (United States)

    Kopeikin, Sergei; Vlasov, Igor; Han, Wen-Biao

    2018-02-01

    Modern geodesy is subject to a dramatic change from the Newtonian paradigm to Einstein's theory of general relativity. This is motivated by the ongoing advance in the development of quantum sensors for applications in geodesy, including quantum gravimeters and gradiometers, atomic clocks and fiber optics for making ultra-precise measurements of the geoid and the multipolar structure of the Earth's gravitational field. At the same time, very long baseline interferometry, satellite laser ranging, and global navigation satellite systems have achieved an unprecedented level of accuracy in measuring 3-d coordinates of the reference points of the International Terrestrial Reference Frame and the world height system. The main geodetic reference standard to which gravimetric measurements of the Earth's gravitational field are referred is a normal gravity field, represented in Newtonian gravity by the field of a uniformly rotating, homogeneous Maclaurin ellipsoid whose mass and quadrupole moment are equal to the total mass and (tide-free) quadrupole moment of the Earth's gravitational field. The present paper extends the concept of the normal gravity field from the Newtonian theory to the realm of general relativity. We focus our attention on the calculation of the post-Newtonian approximation of the normal field that is sufficient for current and near-future practical applications. We show that in general relativity the level surface of a homogeneous and uniformly rotating fluid is no longer described by the Maclaurin ellipsoid in the most general case but represents an axisymmetric spheroid of the fourth order with respect to the geodetic Cartesian coordinates. At the same time, admitting a post-Newtonian inhomogeneity of the mass density in the form of concentric elliptical shells allows one to preserve the level surface of the fluid as an exact ellipsoid of rotation. We parametrize the mass density distribution and the level surface with two parameters which are

  2. The effects of the Boussinesq model to the rising of the explosion clouds

    International Nuclear Information System (INIS)

    Li Xiaoli; Zheng Yi

    2010-01-01

    The rise of explosion clouds in a normal atmosphere is studied using the Boussinesq model and the incompressible model; the numerical model is based on the assumption that the forces affecting the cloud are gravity and buoyancy. By comparing the evolution of clouds of different densities, the conclusion is drawn that the Boussinesq model and the incompressible model agree when the cloud's density is large compared to the density of the environment. (authors)

  3. Robustness to non-normality of common tests for the many-sample location problem

    Directory of Open Access Journals (Sweden)

    Azmeri Khan

    2003-01-01

    Full Text Available This paper studies the effect of deviating from the normal distribution assumption when considering the power of two many-sample location test procedures: ANOVA (parametric) and Kruskal-Wallis (non-parametric). Power functions for these tests under various conditions are produced using simulation, where the simulated data are produced using MacGillivray and Cannon's [10] recently suggested g-and-k distribution. This distribution can provide data with selected amounts of skewness and kurtosis by varying two nearly independent parameters.
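
    A rough sketch of this simulation idea, not the authors' code: draw samples from a g-and-k distribution via its quantile function (g controls skewness, k controls kurtosis) and compare rejection rates of ANOVA and Kruskal-Wallis under a location shift. The parameter values, shift size, group sizes, and replication count below are illustrative assumptions.

```python
# Power comparison of ANOVA vs Kruskal-Wallis under g-and-k distributed data.
import numpy as np
from scipy.stats import f_oneway, kruskal

def g_and_k_sample(n, A=0.0, B=1.0, g=0.5, k=0.3, c=0.8, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    z = rng.standard_normal(n)
    return A + B * (1 + c * np.tanh(g * z / 2)) * z * (1 + z ** 2) ** k

def power(test, shift, n=30, groups=3, reps=2000, alpha=0.05, **gk):
    rng = np.random.default_rng(1)
    rejections = 0
    for _ in range(reps):
        samples = [g_and_k_sample(n, rng=rng, **gk) + i * shift for i in range(groups)]
        rejections += test(*samples).pvalue < alpha
    return rejections / reps

print("ANOVA power:         ", power(f_oneway, shift=0.5, g=0.5, k=0.3))
print("Kruskal-Wallis power:", power(kruskal, shift=0.5, g=0.5, k=0.3))
```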

  4. Validity of the isotropic thermal conductivity assumption in supercell lattice dynamics

    Science.gov (United States)

    Ma, Ruiyuan; Lukes, Jennifer R.

    2018-02-01

    Superlattices and nano phononic crystals have attracted significant attention due to their low thermal conductivities and their potential application as thermoelectric materials. A widely used expression to calculate thermal conductivity, presented by Klemens and expressed in terms of the relaxation time by Callaway and Holland, originates from the Boltzmann transport equation. In its most general form, this expression involves a direct summation of the heat current contributions from individual phonons of all wavevectors and polarizations in the first Brillouin zone. In common practice, the expression is simplified by making an isotropic assumption that converts the summation over wavevector to an integral over wavevector magnitude. The isotropic expression has been applied to superlattices and phononic crystals, but its validity for different supercell sizes has not been studied. In this work, the isotropic and direct summation methods are used to calculate the thermal conductivities of bulk Si, and Si/Ge quantum dot superlattices. The results show that the differences between the two methods increase substantially with the supercell size. These differences arise because the vibrational modes neglected in the isotropic assumption provide an increasingly important contribution to the thermal conductivity for larger supercells. To avoid the significant errors that can result from the isotropic assumption, direct summation is recommended for thermal conductivity calculations in superstructures.
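
    To make the contrast concrete, the sketch below evaluates the relaxation-time thermal conductivity once by direct summation over mode-resolved velocity components and once with the isotropic 1/3 factor; the mode data are random placeholders, not a real supercell dispersion, and the formula shown is the general BTE relaxation-time expression rather than the specific implementation used in the paper.

```python
# k_xx = (1/V) * sum_lambda C_lambda * v_{lambda,x}^2 * tau_lambda   (direct summation)
# k_iso = (1/3V) * sum_lambda C_lambda * |v_lambda|^2 * tau_lambda   (isotropic assumption)
import numpy as np

rng = np.random.default_rng(0)
n_modes = 1000
C = rng.uniform(0.5, 1.5, n_modes) * 1e-23       # mode heat capacities (J/K), placeholder
v = rng.normal(0, 3e3, (n_modes, 3))             # group-velocity vectors (m/s), placeholder
tau = rng.uniform(1e-12, 1e-11, n_modes)         # relaxation times (s), placeholder
V = 1e-24                                        # cell volume (m^3), placeholder

k_direct = (C * v[:, 0] ** 2 * tau).sum() / V
k_iso = (C * (v ** 2).sum(axis=1) * tau).sum() / (3 * V)
# Close here because the placeholder velocities are isotropic; the two evaluations
# diverge when supercell modes have strongly anisotropic group velocities.
print(k_direct, k_iso)
```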

  5. AlN/GaN heterostructures for normally-off transistors

    Energy Technology Data Exchange (ETDEWEB)

    Zhuravlev, K. S., E-mail: zhur@isp.nsc.ru; Malin, T. V.; Mansurov, V. G.; Tereshenko, O. E. [Russian Academy of Sciences, Rzhanov Institute of Semiconductor Physics, Siberian Branch (Russian Federation); Abgaryan, K. K.; Reviznikov, D. L. [Dorodnicyn Computing Centre of the Russian Academy of Sciences (Russian Federation); Zemlyakov, V. E.; Egorkin, V. I. [National Research University of Electronic Technology (MIET) (Russian Federation); Parnes, Ya. M.; Tikhomirov, V. G. [Joint Stock Company “Svetlana-Electronpribor” (Russian Federation); Prosvirin, I. P. [Russian Academy of Sciences, Boreskov Institute of Catalysis, Siberian Branch (Russian Federation)

    2017-03-15

    The structure of AlN/GaN heterostructures with an ultrathin AlN barrier is calculated for normally-off transistors. The molecular-beam epitaxy technology of in situ passivated SiN/AlN/GaN heterostructures with a two-dimensional electron gas is developed. Normally-off transistors with a maximum current density of ~1 A/mm, a saturation voltage of 1 V, a transconductance of 350 mS/mm, and a breakdown voltage of more than 60 V are demonstrated. Gate-lag and drain-lag effects are almost absent in these transistors.

  6. Antioxidant and Hypolipidemic Effects of Olive Oil in Normal and Diabetic Male Rats

    International Nuclear Information System (INIS)

    Alhazza, I. M.

    2007-01-01

    Diabetes mellitus manifests itself in a wide variety of complications and the symptoms of the disease are multifactorial. The lipid hydroperoxide level and lipid profile were investigated in the plasma of normal and alloxan-induced diabetic rats treated with olive oil for six weeks. Diabetic rats exhibited an increase in the levels of hydroperoxides, cholesterol, triglycerides and low density lipoprotein (LDL), and a decrease in the level of high density lipoprotein (HDL). The administration of olive oil improved the lipid profile and decreased the concentration of lipid hydroperoxides in both normal and diabetic rats. The results are discussed in terms of the antioxidant properties of olive oil. (author)

  7. Behavioural assumptions in labour economics: Analysing social security reforms and labour market transitions

    OpenAIRE

    van Huizen, T.M.

    2012-01-01

    The aim of this dissertation is to test behavioural assumptions in labour economics models and thereby improve our understanding of labour market behaviour. The assumptions under scrutiny in this study are derived from an analysis of recent influential policy proposals: the introduction of savings schemes in the system of social security. A central question is how this reform will affect labour market incentives and behaviour. Part I (Chapter 2 and 3) evaluates savings schemes. Chapter 2 exam...

  8. Evaluating growth assumptions using diameter or radial increments in natural even-aged longleaf pine

    Science.gov (United States)

    John C. Gilbert; Ralph S. Meldahl; Jyoti N. Rayamajhi; John S. Kush

    2010-01-01

    When using increment cores to predict future growth, one often assumes future growth is identical to past growth for individual trees. Once this assumption is accepted, a decision has to be made between which growth estimate should be used, constant diameter growth or constant basal area growth. Often, the assumption of constant diameter growth is used due to the ease...

  9. Fair-sampling assumption is not necessary for testing local realism

    International Nuclear Information System (INIS)

    Berry, Dominic W.; Jeong, Hyunseok; Stobinska, Magdalena; Ralph, Timothy C.

    2010-01-01

    Almost all Bell inequality experiments to date have used postselection and therefore relied on the fair sampling assumption for their interpretation. The standard form of the fair sampling assumption is that the loss is independent of the measurement settings, so the ensemble of detected systems provides a fair statistical sample of the total ensemble. This is often assumed to be needed to interpret Bell inequality experiments as ruling out hidden-variable theories. Here we show that it is not necessary; the loss can depend on measurement settings, provided the detection efficiency factorizes as a function of the measurement settings and any hidden variable. This condition implies that Tsirelson's bound must be satisfied for entangled states. On the other hand, we show that it is possible for Tsirelson's bound to be violated while the Clauser-Horne-Shimony-Holt (CHSH)-Bell inequality still holds for unentangled states, and present an experimentally feasible example.

  10. The retest distribution of the visual field summary index mean deviation is close to normal.

    Science.gov (United States)

    Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz

    2016-09-01

    When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test determined any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
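
    A minimal sketch of the kind of check reported here, assuming invented placeholder MD values rather than the study's data: a Shapiro-Wilk test of the retest series plus a bootstrapped confidence interval for excess kurtosis.

```python
# Normality check of repeated MD measurements (placeholder data, not the study's).
import numpy as np
from scipy.stats import shapiro, kurtosis

md = np.array([-0.8, -1.1, -0.6, -0.9, -1.3, -0.7, -1.0, -0.5,
               -1.2, -0.9, -0.8, -1.1, -0.6, -1.0, -0.7, -0.9])  # dB, hypothetical

w, p = shapiro(md)
print(f"Shapiro-Wilk W={w:.3f}, p={p:.3f}")      # p > 0.05: no evidence against normality

rng = np.random.default_rng(0)
boot = [kurtosis(rng.choice(md, size=md.size, replace=True)) for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for excess kurtosis: ({lo:.2f}, {hi:.2f})")  # covers 0 if near-normal
```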

  11. Bank stress testing under different balance sheet assumptions

    OpenAIRE

    Busch, Ramona; Drescher, Christian; Memmel, Christoph

    2017-01-01

    Using unique supervisory survey data on the impact of a hypothetical interest rate shock on German banks, we analyse price and quantity effects on banks' net interest margin components under different balance sheet assumptions. In the first year, the cross-sectional variation of banks' simulated price effect is nearly eight times as large as the one of the simulated quantity effect. After five years, however, the importance of both effects converges. Large banks adjust their balance sheets mo...

  12. Are Prescription Opioids Driving the Opioid Crisis? Assumptions vs Facts.

    Science.gov (United States)

    Rose, Mark Edmund

    2018-04-01

    Sharp increases in opioid prescriptions, and associated increases in overdose deaths in the 2000s, evoked widespread calls to change perceptions of opioid analgesics. Medical literature discussions of opioid analgesics began emphasizing patient and public health hazards. Repetitive exposure to this information may influence physician assumptions. While highly consequential to patients with pain whose function and quality of life may benefit from opioid analgesics, current assumptions about prescription opioid analgesics, including their role in the ongoing opioid overdose epidemic, have not been scrutinized. Information was obtained by searching PubMed, governmental agency websites, and conference proceedings. Opioid analgesic prescribing and associated overdose deaths both peaked around 2011 and are in long-term decline; the sharp overdose increase recorded in 2014 was driven by illicit fentanyl and heroin. Nonmethadone prescription opioid analgesic deaths, in the absence of co-ingested benzodiazepines, alcohol, or other central nervous system/respiratory depressants, are infrequent. Within five years of initial prescription opioid misuse, 3.6% initiate heroin use. The United States consumes 80% of the world opioid supply, but opioid access is nonexistent for 80% and severely restricted for 4.1% of the global population. Many current assumptions about opioid analgesics are ill-founded. Illicit fentanyl and heroin, not opioid prescribing, now fuel the current opioid overdose epidemic. National discussion has often neglected the potentially devastating effects of uncontrolled chronic pain. Opioid analgesic prescribing and related overdoses are in decline, at great cost to patients with pain who have benefited or may benefit from, but cannot access, opioid analgesic therapy.

  13. Atlas-based head modeling and spatial normalization for high-density diffuse optical tomography: in vivo validation against fMRI.

    Science.gov (United States)

    Ferradal, Silvina L; Eggebrecht, Adam T; Hassanpour, Mahlega; Snyder, Abraham Z; Culver, Joseph P

    2014-01-15

    Diffuse optical imaging (DOI) is increasingly becoming a valuable neuroimaging tool when fMRI is precluded. Recent developments in high-density diffuse optical tomography (HD-DOT) overcome previous limitations of sparse DOI systems, providing improved image quality and brain specificity. These improvements in instrumentation prompt the need for advancements in both i) realistic forward light modeling for accurate HD-DOT image reconstruction, and ii) spatial normalization for voxel-wise comparisons across subjects. Individualized forward light models derived from subject-specific anatomical images provide the optimal inverse solutions, but such modeling may not be feasible in all situations. In the absence of subject-specific anatomical images, atlas-based head models registered to the subject's head using cranial fiducials provide an alternative solution. In addition, a standard atlas is attractive because it defines a common coordinate space in which to compare results across subjects. The question therefore arises as to whether atlas-based forward light modeling ensures adequate HD-DOT image quality at the individual and group level. Herein, we demonstrate the feasibility of using atlas-based forward light modeling and spatial normalization methods. Both techniques are validated using subject-matched HD-DOT and fMRI data sets for visual evoked responses measured in five healthy adult subjects. HD-DOT reconstructions obtained with the registered atlas anatomy (i.e. atlas DOT) had an average localization error of 2.7mm relative to reconstructions obtained with the subject-specific anatomical images (i.e. subject-MRI DOT), and 6.6mm relative to fMRI data. At the group level, the localization error of atlas DOT reconstruction was 4.2mm relative to subject-MRI DOT reconstruction, and 6.1mm relative to fMRI. These results show that atlas-based image reconstruction provides a viable approach to individual head modeling for HD-DOT when anatomical imaging is not available

  14. Moving from assumption to observation: Implications for energy and emissions impacts of plug-in hybrid electric vehicles

    International Nuclear Information System (INIS)

    Davies, Jamie; Kurani, Kenneth S.

    2013-01-01

    Plug-in hybrid electric vehicles (PHEVs) are currently for sale in most parts of the United States, Canada, Europe and Japan. These vehicles are promoted as providing distinct consumer and public benefits at the expense of grid electricity. However, the specific benefits or impacts of PHEVs ultimately rely on consumers' purchase and vehicle use patterns. While considerable effort has been dedicated to understanding PHEV impacts on a per-mile basis, few studies have assessed the impacts of PHEVs given actual consumer use patterns or operating conditions. Instead, simplifying assumptions have been made about the types of cars individual consumers will choose to purchase and how they will drive and charge them. Here, we highlight some of these consumer purchase and use assumptions and the studies that have employed them, and compare these assumptions to actual consumer data recorded in a PHEV demonstration project. Using simulation and hypothetical scenarios we discuss the implications for PHEV impact analyses and policy if assumptions about key PHEV consumer use variables such as vehicle choice, home charging frequency, distribution of driving distances, and access to workplace charging were to change. -- Highlights: •The specific benefits or impacts of PHEVs ultimately rely on consumers' purchase and vehicle use patterns. •Simplifying, untested assumptions have been made by prior studies about PHEV consumer driving, charging and vehicle purchase behaviors. •Some simplifying assumptions do not match observed data from a PHEV demonstration project. •Changing the assumptions about PHEV consumer driving, charging, and vehicle purchase behaviors affects estimates of PHEV impacts. •Premature simplification may have lasting consequences for standard setting and performance-based incentive programs which rely on these estimates

  15. Extracting a mix parameter from 2D radiography of variable density flow

    Science.gov (United States)

    Kurien, Susan; Doss, Forrest; Livescu, Daniel

    2017-11-01

    A methodology is presented for extracting quantities related to the statistical description of the mixing state from the 2D radiographic image of a flow. X-ray attenuation through a target flow is given by the Beer-Lambert law which exponentially damps the incident beam intensity by a factor proportional to the density, opacity and thickness of the target. By making reasonable assumptions for the mean density, opacity and effective thickness of the target flow, we estimate the contribution of density fluctuations to the attenuation. The fluctuations thus inferred may be used to form the correlation of density and specific-volume, averaged across the thickness of the flow in the direction of the beam. This correlation function, denoted by b in RANS modeling, quantifies turbulent mixing in variable density flows. The scheme is tested using DNS data computed for variable-density buoyancy-driven mixing. We quantify the deficits in the extracted value of b due to target thickness, Atwood number, and modeled noise in the incident beam. This analysis corroborates the proposed scheme to infer the mix parameter from thin targets at moderate to low Atwood numbers. The scheme is then applied to an image of counter-shear flow obtained from experiments at the National Ignition Facility. US Department of Energy.
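
    To make the inversion step concrete, here is a minimal sketch under assumed values: it inverts the Beer-Lambert law for the path-averaged density at each pixel and then forms one common definition of the density/specific-volume covariance, b = -<ρ'v'> with v = 1/ρ. The intensities, opacity and thickness are hypothetical, and the estimator is schematic rather than the procedure used in the paper.

```python
import numpy as np

def path_averaged_density(intensity, incident_intensity, opacity, thickness):
    """Invert the Beer-Lambert law I = I0 * exp(-opacity * density * thickness)."""
    attenuation = -np.log(intensity / incident_intensity)
    return attenuation / (opacity * thickness)

# Hypothetical pixel intensities and assumed target properties (arbitrary units).
I = np.array([[0.62, 0.58, 0.55],
              [0.57, 0.60, 0.54]])
rho = path_averaged_density(I, incident_intensity=1.0, opacity=2.0, thickness=0.5)

# b = -<rho' v'>, with v = 1/rho and primes denoting fluctuations about the mean.
v = 1.0 / rho
b_estimate = -np.mean((rho - rho.mean()) * (v - v.mean()))
print(f"mean density {rho.mean():.3f}, estimated b {b_estimate:.5f}")
```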

  16. Relative density: the key to stocking assessment in regional analysis—a forest survey viewpoint.

    Science.gov (United States)

    Colin D. MacLean

    1979-01-01

    Relative density is a measure of tree crowding compared to a reference level such as normal density. This stand attribute, when compared to management standards, indicates adequacy of stocking. The Pacific Coast Forest Survey Unit assesses the relative density of each stand sampled by summing the individual density contributions of each tree tallied, thus quantifying...

  17. Exact statistical results for binary mixing and reaction in variable density turbulence

    Science.gov (United States)

    Ristorcelli, J. R.

    2017-02-01

    We report a number of rigorous statistical results on binary active scalar mixing in variable density turbulence. The study is motivated by mixing between pure fluids with very different densities and whose density intensity is of order unity. Our primary focus is the derivation of exact mathematical results for mixing in variable density turbulence, and we point out the potential fields of application of the results. A binary one-step reaction is invoked to derive a metric to assess the state of mixing. The mean reaction rate in variable density turbulent mixing can be expressed, in closed form, using the first order Favre mean variables and the Reynolds averaged density variance ⟨ρ²⟩. We show that the normalized density variance ⟨ρ²⟩ reflects the reduction of the reaction due to mixing and is a mix metric. The result is mathematically rigorous. The result is the variable density analog of the normalized mass fraction variance ⟨c²⟩ used in constant density turbulent mixing. As a consequence, we demonstrate that use of the analogous normalized Favre variance of the mass fraction, ⟨ρc″²⟩/⟨ρ⟩, as a mix metric is not theoretically justified in variable density turbulence. We additionally derive expressions relating various second order moments of the mass fraction, specific volume, and density fields. The central role of the density-specific volume covariance ⟨ρv⟩ is highlighted; it is a key quantity with considerable dynamical significance linking various second order statistics. For laboratory experiments, we have developed exact relations between the Reynolds scalar variance ⟨c²⟩, its Favre analog ⟨ρc″²⟩/⟨ρ⟩, and various second moments including ⟨ρv⟩. For moment closure models that evolve ⟨ρv⟩ and not ⟨ρ²⟩, we provide a novel expression for ⟨ρ²⟩ in terms of a rational function of ⟨ρv⟩ that avoids recourse to Taylor series methods (which do not converge for large density differences). We have derived

  18. Generating log-normal mock catalog of galaxies in redshift space

    Energy Technology Data Exchange (ETDEWEB)

    Agrawal, Aniket; Makiya, Ryu; Saito, Shun; Komatsu, Eiichiro [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany); Chiang, Chi-Ting [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794 (United States); Jeong, Donghui, E-mail: aniket@mpa-garching.mpg.de, E-mail: makiya@mpa-garching.mpg.de, E-mail: chi-ting.chiang@stonybrook.edu, E-mail: djeong@psu.edu, E-mail: ssaito@mpa-garching.mpg.de, E-mail: komatsu@mpa-garching.mpg.de [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)

    2017-10-01

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
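
    A schematic sketch of the two basic steps described above, generating a log-normal density field and Poisson-sampling galaxies from it, is given below; it is not the public code referenced in the record. For brevity the Gaussian field here is uncorrelated white noise, whereas the actual catalog generation imposes a target power spectrum, and the grid size and mean galaxy count per cell are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_grid, nbar_cell = 64, 2.0        # cells per side; mean galaxies per cell (assumed)

# Gaussian field (white noise here; a real mock would use a correlated field).
gauss = rng.normal(0.0, 0.5, size=(n_grid, n_grid, n_grid))

# Log-normal density contrast: 1 + delta = exp(G - var/2) has unit mean.
delta = np.exp(gauss - gauss.var() / 2.0) - 1.0

# Draw galaxies by Poisson-sampling each cell in proportion to the local density.
expected_counts = nbar_cell * (1.0 + delta)
galaxy_counts = rng.poisson(expected_counts)
print(galaxy_counts.sum(), "galaxies drawn; mean density contrast", delta.mean())
```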

  19. Data-driven smooth tests of the proportional hazards assumption

    Czech Academy of Sciences Publication Activity Database

    Kraus, David

    2007-01-01

    Roč. 13, č. 1 (2007), s. 1-16 ISSN 1380-7870 R&D Projects: GA AV ČR(CZ) IAA101120604; GA ČR(CZ) GD201/05/H007 Institutional research plan: CEZ:AV0Z10750506 Keywords : Cox model * Neyman's smooth test * proportional hazards assumption * Schwarz's selection rule Subject RIV: BA - General Mathematics Impact factor: 0.491, year: 2007

  20. Non-primordial origin of the cosmic background radiation and pregalactic density fluctuations

    International Nuclear Information System (INIS)

    Froehlich, H.E.; Mueller, V.; Oleak, H.

    1984-01-01

    Assumptions of a tepid Universe and a smaller primordial contribution to the 3 K background are made to show that Pop III stars may be responsible for the 3 K background and cosmic ray entropy. The 3 K background would be caused by thermalized stellar radiation produced by metallized intergalactic dust formed in first generation stars. A range of mass scales and amplification factors of density perturbations in the early Universe is examined below the Jeans mass for gravitational instabilities. The density perturbations that could have been present at small enough mass scales could have survived and generated sonic modes that propagated through the plasma era and, when combined with additional gravitationally unstable entropy disturbances after recombination, triggered the formation of Pop III stars. 13 references

  1. Lack of biochemical hypogonadism in elderly Arab males with low bone mineral density disease.

    Science.gov (United States)

    Al Attia, Haider M; Jaysundaram, Krishnasamy; Saraj, Fouad

    2010-01-01

    The aim of this study was to examine the relationship between androgen levels and bone mineral density (BMD) in elderly Arab males. Forty-five elderly Arab males underwent dual X-ray absorptiometry for measurement of BMD. The outcomes were defined as per the WHO description. Assays for testosterone (T), gonadotropins (LH and FSH) and estradiol (E2) in the serum were carried out. The ratio of T/LH was used as a surrogate for the cFT assay. We excluded patients receiving hormonal ablation for prostatic neoplasm, patients with chronic liver or renal disease, and patients receiving corticosteroids. Twelve were osteoporotic (26.5%); 22 osteopenic (49%); and 11 (24.5%) had a normal outcome. Osteoporotic patients were significantly older (78.17 +/- 7.59 years) than the osteopenic patients (70.14 +/- 5.92 years). Elderly Arab males had reduced bone density that appears to be independent of androgen levels. Osteoporotics were significantly older than those with osteopenia or normal bone density. Aging seemed to have overridden the effect of normal sex hormones on bone density in these patients. Before considering these results as a possible exception to the widely established role of hypoandrogenemia in male osteoporosis, other potential factors impacting on bone density need to be considered.

  2. Weight loss and bone mineral density.

    Science.gov (United States)

    Hunter, Gary R; Plaisance, Eric P; Fisher, Gordon

    2014-10-01

    Despite evidence that energy deficit produces multiple physiological and metabolic benefits, clinicians are often reluctant to prescribe weight loss in older individuals or those with low bone mineral density (BMD), fearing BMD will be decreased. Confusion exists concerning the effects that weight loss has on bone health. Bone density is more closely associated with lean mass than total body mass and fat mass. Although rapid or large weight loss is often associated with loss of bone density, slower or smaller weight loss is much less apt to adversely affect BMD, especially when it is accompanied with high intensity resistance and/or impact loading training. Maintenance of calcium and vitamin D intake seems to positively affect BMD during weight loss. Although dual energy X-ray absorptiometry is normally used to evaluate bone density, it may overestimate BMD loss following massive weight loss. Volumetric quantitative computed tomography may be more accurate for tracking bone density changes following large weight loss. Moderate weight loss does not necessarily compromise bone health, especially when exercise training is involved. Training strategies that include heavy resistance training and high impact loading that occur with jump training may be especially productive in maintaining, or even increasing bone density with weight loss.

  3. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    Directory of Open Access Journals (Sweden)

    Anne Hsu

    Full Text Available A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.

  4. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

    Science.gov (United States)

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning. PMID:27310576

  5. Investigating Teachers' and Students' Beliefs and Assumptions about CALL Programme at Caledonian College of Engineering

    Science.gov (United States)

    Ali, Holi Ibrahim Holi

    2012-01-01

    This study is set to investigate students' and teachers' perceptions and assumptions about the newly implemented CALL Programme at the School of Foundation Studies, Caledonian College of Engineering, Oman. Two versions of a questionnaire were administered to 24 teachers and 90 students to collect their beliefs and assumptions about the CALL programme. The…

  6. Experimental study on working characteristics of density lock

    International Nuclear Information System (INIS)

    Sun Furong; Yan Changqi; Gu Haifeng

    2011-01-01

    The working principle of the density lock was introduced in this paper, and the experimental loop was built so that the working performance of the density lock in the system could be studied under steady-state operation and pump-trip conditions. The results show that under steady-state operation conditions the density lock can remain closed over long-term operation, separating the passive residual heat removal circuit from the primary circuit. As a result, the passive residual heat removal circuit stays in non-operating conditions, which does not influence the normal operation of the reactor. Under pump-trip conditions the density lock is opened automatically and quickly, connecting the primary and passive residual heat removal systems. Natural circulation is then well established in the two systems and is sufficient to ensure removal of the residual heat. (authors)

  7. The Metatheoretical Assumptions of Literacy Engagement: A Preliminary Centennial History

    Science.gov (United States)

    Hruby, George G.; Burns, Leslie D.; Botzakis, Stergios; Groenke, Susan L.; Hall, Leigh A.; Laughter, Judson; Allington, Richard L.

    2016-01-01

    In this review of literacy education research in North America over the past century, the authors examined the historical succession of theoretical frameworks on students' active participation in their own literacy learning, and in particular the metatheoretical assumptions that justify those frameworks. The authors used "motivation" and…

  8. Electromagnetic considerations for RF current density imaging [MRI technique].

    Science.gov (United States)

    Scott, G C; Joy, M G; Armstrong, R L; Henkelman, R M

    1995-01-01

    Radio frequency current density imaging (RF-CDI) is a recent MRI technique that can image a Larmor frequency current density component parallel to B₀. Because the feasibility of the technique was demonstrated only for homogeneous media, the authors' goal here is to clarify the electromagnetic assumptions and field theory to allow imaging RF currents in heterogeneous media. The complete RF field and current density imaging problem is posed. General solutions are given for measuring lab frame magnetic fields from the rotating frame magnetic field measurements. For the general case of elliptically polarized fields, in which current and magnetic field components are not in phase, one can obtain a modified single rotation approximation. Sufficient information exists to image the amplitude and phase of the RF current density parallel to B₀ if the partial derivative in the B₀ direction of the RF magnetic field (amplitude and phase) parallel to B₀ is much smaller than the corresponding current density component. The heterogeneous extension was verified by imaging conduction and displacement currents in a phantom containing saline and pure water compartments. Finally, the issues required to image eddy currents are presented. Eddy currents within a sample will both distort the transmitter coil reference system and create measurable rotating frame magnetic fields. However, a three-dimensional electromagnetic analysis will be required to determine how the reference system distortion affects computed eddy current images.

  9. Limits on the space density of gamma-ray burst sources

    International Nuclear Information System (INIS)

    Epstein, R.I.

    1985-01-01

    Gamma-ray burst spectra which extend to several MeV without significant steepening indicate that there is negligible degradation due to two-photon pair production. The inferred low rate of photon-photon reactions is used to give upper limits to the distances to the sources and to the intensity of the radiation from the sources. These limits are calculated under the assumptions that the bursters are neutron stars which emit uncollimated gamma rays. The principal results are that the space density of the gamma-ray burst sources exceeds approximately 10⁻⁶ pc⁻³ if the entire surface of the neutron star radiates and exceeds approximately 10⁻³ pc⁻³ if only a small cap or thin strip in the stellar surface radiates. In the former case the density of gamma-ray bursters is approximately 1% of the inferred density of extinct pulsars, and in the latter case the mean mass density of burster sources is a few percent of the density of unidentified dark matter in the solar neighborhood. In both cases the X-ray intensity of the sources is far below the Rayleigh-Jeans limit, and the total flux is at most comparable to the Eddington limit. This implies that low-energy self-absorption near 10 keV is entirely negligible and that radiation-driven explosions are just barely possible

  10. STOCHASTIC PRICING MODEL FOR THE REAL ESTATE MARKET: FORMATION OF LOG-NORMAL GENERAL POPULATION

    Directory of Open Access Journals (Sweden)

    Oleg V. Rusakov

    2015-01-01

    Full Text Available We construct a stochastic model of real estate pricing. The method of the pricing construction is based on a sequential comparison of the supply prices. We prove that under standard assumptions imposed upon the comparison coefficients there exists a unique non-degenerate limit in distribution, and this limit has the log-normal law of distribution. The accordance of empirical distributions of prices with the theoretically obtained log-normal distribution is verified using numerous statistical data on real estate prices from Saint-Petersburg (Russia). For establishing this accordance we essentially apply the efficient and sensitive Kolmogorov-Smirnov goodness-of-fit test. Basing on “The Russian Federal Estimation Standard N2”, we conclude that the most probable price, i.e. the mode of the distribution, is correctly and uniquely defined under the log-normal approximation. Since the mean value of a log-normal distribution exceeds the mode (the most probable value), it follows that prices valued by the mathematical expectation are systematically overstated.
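
    For readers who want to reproduce this kind of check, the sketch below fits a log-normal law to a price sample and applies a Kolmogorov-Smirnov goodness-of-fit test to the log-prices. The prices are synthetic and the parameter values are hypothetical, not estimates from the Saint-Petersburg data; note also that estimating the parameters from the same sample makes the nominal KS p-value only approximate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
prices = rng.lognormal(mean=15.0, sigma=0.4, size=500)   # hypothetical supply prices

# Log-normal fit check: log-prices should be approximately normal.
log_prices = np.log(prices)
mu, sigma = log_prices.mean(), log_prices.std(ddof=1)
ks_stat, p_value = stats.kstest(log_prices, 'norm', args=(mu, sigma))

mode_price = np.exp(mu - sigma**2)        # mode of a log-normal distribution
mean_price = np.exp(mu + sigma**2 / 2)    # the mean exceeds the mode, as noted above
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
print(f"mode = {mode_price:.0f}, mean = {mean_price:.0f}")
```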

  11. Density of liquid Hg(1-x)Cd(x)Te

    Science.gov (United States)

    Chandra, D.; Holland, L. R.

    1983-01-01

    Negative thermal expansion has been established in liquid Hg(1-x)Cd(x)Te for x less than 0.2 employing a pycnometric method. Pure HgTe increases in density from its melting point at 670 C to a maximum value at 750 C, where normal thermal expansion progressively resumes. The dependence of density on temperature for liquid Hg(1-x)Cd(x)Te arises almost exclusively from the HgTe portion of the melt, while CdTe acts as a diluent. The temperature corresponding to the maximum density changes slightly with composition, increasing by about 5 C for x = 0.1.

  12. The law of distribution of light beam direction fluctuations in telescopes. [normal density functions

    Science.gov (United States)

    Divinskiy, M. L.; Kolchinskiy, I. G.

    1974-01-01

    The distribution of deviations from mean star trail directions was studied on the basis of 105 star trails. It was found that about 93% of the trails yield a distribution in agreement with the normal law. About 4% of the star trails agree with the Charlier distribution.

  13. Examining assumptions regarding valid electronic monitoring of medication therapy: development of a validation framework and its application on a European sample of kidney transplant patients

    Directory of Open Access Journals (Sweden)

    Steiger Jürg

    2008-02-01

    Full Text Available Abstract Background Electronic monitoring (EM) is used increasingly to measure medication non-adherence. Unbiased EM assessment requires fulfillment of assumptions. The purpose of this study was to determine assumptions needed for internal and external validity of EM measurement. To test internal validity, we examined (1) if EM equipment functioned correctly, (2) if all EM bottle openings corresponded to actual drug intake, and (3) if EM did not influence a patient's normal adherence behavior. To assess external validity, we examined if there were indications that using EM affected the sample representativeness. Methods We used data from the Supporting Medication Adherence in Renal Transplantation (SMART) study, which included 250 adult renal transplant patients whose adherence to immunosuppressive drugs was measured during 3 months with the Medication Event Monitoring System (MEMS). Internal validity was determined by assessing the prevalence of nonfunctioning EM systems, the prevalence of patient-reported discrepancies between cap openings and actual intakes (using contemporaneous notes and interview at the end of the study), and by exploring whether adherence was initially uncharacteristically high and decreased over time (an indication of a possible EM intervention effect). Sample representativeness was examined by screening for differences between participants and non-participants or drop outs on non-adherence. Results Our analysis revealed that some assumptions were not fulfilled: (1) one cap malfunctioned (0.4%), (2) self-reported mismatches between bottle openings and actual drug intake occurred in 62% of the patients (n = 155), and (3) adherence decreased over the first 5 weeks of the monitoring, indicating that EM had a waning intervention effect. Conclusion The validity assumptions presented in this article should be checked in future studies using EM as a measure of medication non-adherence.

  14. On extending Kohn-Sham density functionals to systems with fractional number of electrons.

    Science.gov (United States)

    Li, Chen; Lu, Jianfeng; Yang, Weitao

    2017-06-07

    We analyze four ways of formulating the Kohn-Sham (KS) density functionals with a fractional number of electrons, through extending the constrained search space from the Kohn-Sham and the generalized Kohn-Sham (GKS) non-interacting v-representable density domain for integer systems to four different sets of densities for fractional systems. In particular, these density sets are (I) ensemble interacting N-representable densities, (II) ensemble non-interacting N-representable densities, (III) non-interacting densities by the Janak construction, and (IV) non-interacting densities whose composing orbitals satisfy the Aufbau occupation principle. By proving the equivalence of the underlying first order reduced density matrices associated with these densities, we show that sets (I), (II), and (III) are equivalent, and all reduce to the Janak construction. Moreover, for functionals with the ensemble v-representable assumption at the minimizer, (III) reduces to (IV) and thus justifies the previous use of the Aufbau protocol within the (G)KS framework in the study of the ground state of fractional electron systems, as defined in the grand canonical ensemble at zero temperature. By further analyzing the Aufbau solution for different density functional approximations (DFAs) in the (G)KS scheme, we rigorously prove that there can be one and only one fractional occupation for the Hartree Fock functional, while there can be multiple fractional occupations for general DFAs in the presence of degeneracy. This has been confirmed by numerical calculations using the local density approximation as a representative of general DFAs. This work thus clarifies important issues on density functional theory calculations for fractional electron systems.

  15. COBE DMR-normalized open inflation cold dark matter cosmogony

    Science.gov (United States)

    Gorski, Krzysztof M.; Ratra, Bharat; Sugiyama, Naoshi; Banday, Anthony J.

    1995-01-01

    A cut-sky orthogonal mode analysis of the 2 year COBE DMR 53 and 90 GHz sky maps (in Galactic coordinates) is used to determine the normalization of an open inflation model based on the cold dark matter (CDM) scenario. The normalized model is compared to measures of large-scale structure in the universe. Although the DMR data alone does not provide sufficient discriminative power to prefer a particular value of the mass density parameter, the open model appears to be reasonably consistent with observations when Ω₀ is approximately 0.3-0.4 and merits further study.

  16. MR demonstration of the meninges: Normal and pathological findings

    International Nuclear Information System (INIS)

    Schoerner, W.; Henkes, H.; Sander, B.; Felix, R.

    1988-01-01

    The MR appearance of normal and pathological meninges was studied in 23 patients. Amongst twelve normals, T1-weighted images demonstrated the meninges as slightly hyperintense density structures (compared with CSF) which increased in signal intensity somewhat after the administration of gadolinium-DTPA. On T2-weighted images, the subarachnoid space and meninges were isointense. In eleven patients with inflammatory disease or tumourous infiltration of the meninges, abnormal findings were evident in the unenhanced images as well as after administration of gadolinium-DTPA. Compared with CT, MR proved greatly superior in the diagnosis of meningeal abnormalities. (orig.)

  17. Testing the rationality assumption using a design difference in the TV game show 'Jeopardy'

    OpenAIRE

    Sjögren Lindquist, Gabriella; Säve-Söderbergh, Jenny

    2006-01-01

    Abstract This paper empirically investigates the rationality assumption commonly applied in economic modeling by exploiting a design difference in the game-show Jeopardy between the US and Sweden. In particular we address the assumption of individuals’ capabilities to process complex mathematical problems to find optimal strategies. The vital difference is that US contestants are given explicit information before they act, while Swedish contestants individually need to calculate the same info...

  18. Assumptions for the Annual Energy Outlook 1992

    International Nuclear Information System (INIS)

    1992-01-01

    This report serves as an auxiliary document to the Energy Information Administration (EIA) publication Annual Energy Outlook 1992 (AEO) (DOE/EIA-0383(92)), released in January 1992. The AEO forecasts were developed for five alternative cases and consist of energy supply, consumption, and price projections by major fuel and end-use sector, which are published at a national level of aggregation. The purpose of this report is to present important quantitative assumptions, including world oil prices and macroeconomic growth, underlying the AEO forecasts. The report has been prepared in response to external requests, as well as analyst requirements for background information on the AEO and studies based on the AEO forecasts

  19. Attained energy densities and neutral pion spectra in nucleus-nucleus collisions at 200 GeV/nucleon

    International Nuclear Information System (INIS)

    Plasil, F.; Albrecht, R.; Awes, T.C.

    1989-01-01

    The main goal of the CERN heavy-ion experiments is the search for an indication that the predicted state of deconfined quarks and gluons, the quark-gluon plasma (QGP), has been produced. The quantity most crucial to the probability of QGP formation is the thermalized energy density attained during the heavy-ion reaction. The amount of energy radiated transverse to the beam direction is the experimental quantity which is believed to be a measure of the amount of energy deposition in the reaction, and hence to reflect the energy density attained. In this presentation we consider the systematics of transverse energy production at CERN SPS energies, and we use the results to make estimates, under various assumptions, of attained energy densities. 18 refs., 2 figs

  20. Investigating assumptions of crown archetypes for modelling LiDAR returns

    NARCIS (Netherlands)

    Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M.

    2013-01-01

    LiDAR has the potential to derive canopy structural information such as tree height and leaf area index (LAI), via models of the LiDAR signal. Such models often make assumptions regarding crown shape to simplify parameter retrieval and crown archetypes are typically assumed to contain a turbid

  1. The approximation of the normal distribution by means of chaotic expression

    International Nuclear Information System (INIS)

    Lawnik, M

    2014-01-01

    The approximation of the normal distribution by means of a chaotic expression is achieved using the Weierstrass function, where, for a certain set of parameters, the density of the derived recurrence renders a good approximation of the bell curve

  2. Long time-scale density peaking in JET

    International Nuclear Information System (INIS)

    Sartori, R.; Saibene, G.; Becoulet, M.

    2002-01-01

    This paper discusses how the proximity to the L-H threshold affects the confinement of ELMy H-modes at high density. The largest reduction in confinement at high density is observed at the transition from the Type I to the Type III ELMy regime. At medium plasma triangularity, δ≅0.3 (where δ is the average triangularity at the separatrix), JET experiments show that, by increasing the margin above the L-H threshold power and maintaining the edge temperature above the critical temperature for the transition to Type III ELMs, it is possible to avoid the degradation of the pedestal pressure with density, normally observed at lower power. As a result, the range of achievable densities (both in the core and in the pedestal) is increased. At high power above the L-H threshold power the core density was equal to the Greenwald limit with H97≅0.9. There is evidence that a mixed regime of Type I and Type II ELMs has been obtained at this intermediate triangularity, possibly as a result of this increase in density. At higher triangularity, δ≅0.5, the power required to achieve similar results is lower. (author)

  3. Proximity effect in normal-superconductor hybrids for quasiparticle traps

    Energy Technology Data Exchange (ETDEWEB)

    Hosseinkhani, Amin [Peter Grunberg Institute (PGI-2), Forschungszentrum Julich, D-52425 Julich (Germany); JARA-Institute for Quantum Information, RWTH Aachen University, D-52056 Aachen (Germany)

    2016-07-01

    Coherent transport of charges in the form of Cooper pairs is the main feature of Josephson junctions which plays a central role in superconducting qubits. However, the presence of quasiparticles in superconducting devices may lead to incoherent charge transfer and limit the coherence time of superconducting qubits. A way around this so-called "quasiparticle poisoning" might be using a normal-metal island to trap quasiparticles; this has motivated us to revisit the proximity effect in normal-superconductor hybrids. Using the semiclassical Usadel equations, we study the density of states (DoS) both within and away from the trap. We find that in the superconducting layer the DoS quickly approaches the BCS form; this indicates that normal-metal traps should be effective at localizing quasiparticles.

  4. Automatic ethics: the effects of implicit assumptions and contextual cues on moral behavior.

    Science.gov (United States)

    Reynolds, Scott J; Leavitt, Keith; DeCelles, Katherine A

    2010-07-01

    We empirically examine the reflexive or automatic aspects of moral decision making. To begin, we develop and validate a measure of an individual's implicit assumption regarding the inherent morality of business. Then, using an in-basket exercise, we demonstrate that an implicit assumption that business is inherently moral impacts day-to-day business decisions and interacts with contextual cues to shape moral behavior. Ultimately, we offer evidence supporting a characterization of employees as reflexive interactionists: moral agents whose automatic decision-making processes interact with the environment to shape their moral behavior.

  5. A critical assessment of the ecological assumptions underpinning compensatory mitigation of salmon-derived nutrients

    Science.gov (United States)

    Collins, Scott F.; Marcarelli, Amy M.; Baxter, Colden V.; Wipfli, Mark S.

    2015-01-01

    We critically evaluate some of the key ecological assumptions underpinning the use of nutrient replacement as a means of recovering salmon populations and a range of other organisms thought to be linked to productive salmon runs. These assumptions include: (1) nutrient mitigation mimics the ecological roles of salmon, (2) mitigation is needed to replace salmon-derived nutrients and stimulate primary and invertebrate production in streams, and (3) food resources in rearing habitats limit populations of salmon and resident fishes. First, we call into question assumption one because an array of evidence points to the multi-faceted role played by spawning salmon, including disturbance via redd-building, nutrient recycling by live fish, and consumption by terrestrial consumers. Second, we show that assumption two may require qualification based upon a more complete understanding of nutrient cycling and productivity in streams. Third, we evaluate the empirical evidence supporting food limitation of fish populations and conclude it has been only weakly tested. On the basis of this assessment, we urge caution in the application of nutrient mitigation as a management tool. Although applications of nutrients and other materials intended to mitigate for lost or diminished runs of Pacific salmon may trigger ecological responses within treated ecosystems, contributions of these activities toward actual mitigation may be limited.

  6. Size, shape, and appearance of the normal female pituitary gland

    International Nuclear Information System (INIS)

    Wolpert, S.M.; Molitch, M.E.; Goldman, J.A.; Wood, J.B.

    1984-01-01

    One hundred seven women aged 18-65 years who were referred for suspected central nervous system disease not related to the pituitary gland or hypothalamus were studied. High-resolution, direct, coronal, contrast-enhanced computed tomography (CT) was used to examine the size, shape, and density of the normal pituitary gland. There were three major conclusions: (1) the height of the normal gland can be as much as 9 mm; (2) the superior margin of the gland may bulge in normal patients; and (3) both large size and convex contour appear to be associated with younger age. It was also found that serum prolactin levels do not appear to correlate with the CT appearances. Noise artifacts inherent in high-detail, thin-section, soft-tissue scanning may be a limiting factor in defining reproducible patterns in different parts of the normal pituitary gland

  7. The Wiedemann—Franz law in a normal metal—superconductor junction

    International Nuclear Information System (INIS)

    Ghanbari R; Rashedi G

    2011-01-01

    In this paper the influence of superconducting correlations on the thermal and charge conductances in a normal metal-superconductor (NS) junction in the clean limit is studied theoretically. First we solve the quasiclassical Eilenberger equations, and using the obtained density of states we compute the thermal and electrical conductances of the NS junction. Then we compare the conductance in the normal region of an NS junction with that in a single layer of normal metal (N). Moreover, we study the Wiedemann-Franz (WF) law for these two cases (N and NS). From our calculations we conclude that the behaviour of the NS junction does not conform to the WF law at all temperatures. The effect of the thickness of the normal metal on the thermal conductivity is also investigated theoretically. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  8. ψ -ontology result without the Cartesian product assumption

    Science.gov (United States)

    Myrvold, Wayne C.

    2018-05-01

    We introduce a weakening of the preparation independence postulate of Pusey et al. [Nat. Phys. 8, 475 (2012), 10.1038/nphys2309] that does not presuppose that the space of ontic states resulting from a product-state preparation can be represented by the Cartesian product of subsystem state spaces. On the basis of this weakened assumption, it is shown that, in any model that reproduces the quantum probabilities, any pair of pure quantum states |ψ⟩, |ϕ⟩ with |⟨ψ|ϕ⟩| ≤ 1/√2 must be ontologically distinct.

  9. The two normalization schemes of factorial moments in high energy collisions and the dependence intermittency degree on average transverse momentum

    International Nuclear Information System (INIS)

    Wu Yuanfnag; Liu Lianshou

    1992-01-01

    The two different normalization schemes for factorial moments are analyzed carefully. It is found that both in the case of fixed multiplicity and in the case of intermittency independent of multiplicity, the intermittency indexes obtained from the two normalization schemes are equal to each other. In the case of non-fixed multiplicity and intermittency depending on multiplicity, the formulae expressing the intermittency indexes from the two different normalization schemes in terms of the dynamical index are given. The experimentally observed dependence of the intermittency degree on the transverse momentum cut is fully recovered by means of the assumption that the intermittency degree depends on the average transverse momentum per event. This confirms the importance of the dependence of intermittency on the average transverse momentum
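
    As a purely illustrative aside (the paper's two specific schemes are not reproduced here), the sketch below computes one standard, horizontally averaged normalized factorial moment F_q from binned multiplicities; for independent Poisson counts, as in this synthetic sample, F_q should come out close to 1.

```python
import numpy as np

rng = np.random.default_rng(3)
n_events, n_bins, q = 5000, 16, 2
counts = rng.poisson(4.0, size=(n_events, n_bins))      # multiplicity per bin per event

def falling_factorial(n, order):
    """n (n-1) ... (n-order+1), the unnormalized factorial moment of a given order."""
    result = np.ones_like(n, dtype=float)
    for k in range(order):
        result *= (n - k)
    return result

numerator = falling_factorial(counts, q).mean(axis=0)    # <n(n-1)...(n-q+1)> per bin
denominator = counts.mean(axis=0) ** q                   # <n>^q per bin
F_q = np.mean(numerator / denominator)                   # average over bins
print(f"F_{q} = {F_q:.3f}  (independent Poisson counts give values near 1)")
```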

  10. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
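
    A minimal sketch of the central modelling step, under assumed numbers, is given below: a chance constraint requiring that a pollutant load stay within a log-normally distributed allowable level with probability alpha is replaced by a deterministic bound at the corresponding log-normal quantile. The parameter values, variable names and single-constraint structure are hypothetical simplifications, not the Erhai Lake model itself.

```python
import numpy as np
from scipy import stats

alpha = 0.9                      # required constraint-satisfaction level (assumed)
mu_log, sigma_log = 2.0, 0.5     # assumed parameters of the log of the allowable level R

# The chance constraint P(load <= R) >= alpha holds if the load stays below the
# (1 - alpha)-quantile of R, which becomes the deterministic right-hand side.
r_bound = stats.lognorm.ppf(1 - alpha, s=sigma_log, scale=np.exp(mu_log))

load_per_hectare = 0.8           # hypothetical pollutant load per hectare of cropland
max_area = r_bound / load_per_hectare
print(f"deterministic bound on the load: {r_bound:.2f}; maximum area: {max_area:.1f} ha")
```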

  11. Quantitative measurement of lung density with x-ray CT and positron CT, (2). Diseased subjects

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Kengo; Ito, Masatoshi; Kubota, Kazuo

    1985-05-01

    Lung density was quantitatively measured in six diseased patients with X-ray CT (XCT) and positron CT (PCT). The findings are as follows. In the silicosis, extravascular lung density was found to be remarkably increased compared to normals (0.29 g/cm³), but blood volume was in the normal range. In the post-irradiated lung cancers, extravascular lung density increased in the irradiated sites compared to the non-irradiated opposite sites, and blood volume varied in each case. In a patient with chronic heart failure, blood volume decreased (0.11 ml/cm³) with increased extravascular lung density (0.23 g/cm³). In the chronic obstructive pulmonary disease, both extravascular lung density and blood volume decreased (0.11 g/cm³ and 0.10 ml/cm³, respectively). Lung density measured with XCT was constantly lower than that with PCT in all cases, but the changes in the measured values of lung density correlated well with each other. In conclusion, the method presented here may clarify the etiology of the diffuse pulmonary diseases, and be used to differentiate and grade the diseases.

  12. Using Contemporary Art to Challenge Cultural Values, Beliefs, and Assumptions

    Science.gov (United States)

    Knight, Wanda B.

    2006-01-01

    Art educators, like many other educators born or socialized within the main-stream culture of a society, seldom have an opportunity to identify, question, and challenge their cultural values, beliefs, assumptions, and perspectives because school culture typically reinforces those they learn at home and in their communities (Bush & Simmons, 1990).…

  13. Low density in liver of idiopathic portal hypertension. A computed tomographic observation with possible diagnostic significance

    Energy Technology Data Exchange (ETDEWEB)

    Ishito, Hiroyuki

    1988-01-01

    In order to evaluate the diagnostic value of low density in liver on computed tomography (CT), CT scans of 11 patients with idiopathic portal hypertension (IPH) were compared with those from 22 cirrhotic patients, two patients with scarred liver and 16 normal subjects. Low densities on plain CT scans in patients with IPH were distinctly different from those observed in normal liver. Some of the low densities had irregular shape with unclear margin and were scattered near the liver surface, and others had vessel-like structures with unclear margin and extended as far as near the liver surface. Ten of the 11 patients with IPH had low densities mentioned above, while none of the 22 cirrhotic patients had such low densities. The present results suggest that the presence of low densities in liver on plain CT scan is clinically beneficial in diagnosis of IPH.

  14. Spectroscopic studies (FT-IR, FT-Raman, UV-Visible), normal co-ordinate analysis, first-order hyperpolarizability and HOMO, LUMO studies of 3,4-dichlorobenzophenone by using Density Functional Methods.

    Science.gov (United States)

    Venkata Prasad, K; Samatha, K; Jagadeeswara Rao, D; Santhamma, C; Muthu, S; Mark Heron, B

    2015-01-01

    The vibrational frequencies of 3,4-dichlorobenzophenone (DCLBP) were obtained from the FT-IR and Raman spectral data, and evaluated based on the Density Functional Theory using the standard method B3LYP with 6-311+G(d,p) as the basis set. On the basis of potential energy distribution together with the normal-co-ordinate analysis and following the scaled quantum mechanical force methodology, the assignments for the various frequencies were described. The values of the electric dipole moment (μ) and the first-order hyperpolarizability (β) of the molecule were computed. The UV-absorption spectrum was also recorded to study the electronic transitions. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. The NBO analysis, to study the intramolecular hyperconjugative interactions, was carried out. Mulliken's net charges were evaluated. The MEP and thermodynamic properties were also calculated. The electron density-based local reactivity descriptor, such as Fukui functions, was calculated to explain the chemical selectivity or reactivity site in 3,4-dichlorobenzophenone. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Assumptions and Challenges of Open Scholarship

    Directory of Open Access Journals (Sweden)

    George Veletsianos

    2012-10-01

    Full Text Available Researchers, educators, policymakers, and other education stakeholders hope and anticipate that openness and open scholarship will generate positive outcomes for education and scholarship. Given the emerging nature of open practices, educators and scholars are finding themselves in a position in which they can shape and/or be shaped by openness. The intention of this paper is (a) to identify the assumptions of the open scholarship movement and (b) to highlight challenges associated with the movement’s aspirations of broadening access to education and knowledge. Through a critique of technology use in education, an understanding of educational technology narratives and their unfulfilled potential, and an appreciation of the negotiated implementation of technology use, we hope that this paper helps spark a conversation for a more critical, equitable, and effective future for education and open scholarship.

  16. Normal versus anomalous self-diffusion in two-dimensional fluids: Memory function approach and generalized asymptotic Einstein relation

    Science.gov (United States)

    Shin, Hyun Kyung; Choi, Bongsik; Talkner, Peter; Lee, Eok Kyun

    2014-12-01

    Based on the generalized Langevin equation for the momentum of a Brownian particle a generalized asymptotic Einstein relation is derived. It agrees with the well-known Einstein relation in the case of normal diffusion but continues to hold for sub- and super-diffusive spreading of the Brownian particle's mean square displacement. The generalized asymptotic Einstein relation is used to analyze data obtained from molecular dynamics simulations of a two-dimensional soft disk fluid. We mainly concentrated on medium densities for which we found super-diffusive behavior of a tagged fluid particle. At higher densities a range of normal diffusion can be identified. The motion presumably changes to sub-diffusion for even higher densities.
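
    The generalized relation itself is not reproduced here, but a short sketch of the basic diagnostic used in such studies may help: compute the mean square displacement from particle trajectories and fit MSD(t) ~ t^alpha, with alpha = 1 for normal diffusion and alpha below or above 1 for sub- or super-diffusion. The trajectories below are synthetic Brownian paths, not the soft disk simulation data of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n_particles, n_steps, dt = 200, 1000, 0.01
steps = rng.normal(0.0, np.sqrt(dt), size=(n_particles, n_steps, 2))
trajectories = np.cumsum(steps, axis=1)          # ordinary Brownian paths here

lags = np.arange(1, 200)
msd = np.array([np.mean(np.sum((trajectories[:, lag:] - trajectories[:, :-lag])**2,
                               axis=-1)) for lag in lags])

# Power-law fit on log-log axes: the slope is the diffusion exponent alpha.
alpha, _ = np.polyfit(np.log(lags * dt), np.log(msd), 1)
print(f"fitted exponent alpha = {alpha:.2f}  (1 corresponds to normal diffusion)")
```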

  17. World assumptions, posttraumatic stress and quality of life after a natural disaster: A longitudinal study

    Science.gov (United States)

    2012-01-01

    Background Changes in world assumptions are a fundamental concept within theories that explain posttraumatic stress disorder. The objective of the present study was to gain a greater understanding of how changes in world assumptions are related to quality of life and posttraumatic stress symptoms after a natural disaster. Methods A longitudinal study of 574 Norwegian adults who survived the Southeast Asian tsunami in 2004 was undertaken. Multilevel analyses were used to identify which factors at six months post-tsunami predicted quality of life and posttraumatic stress symptoms two years post-tsunami. Results Good quality of life and posttraumatic stress symptoms were negatively related. However, major differences in the predictors of these outcomes were found. Females reported significantly higher quality of life and more posttraumatic stress than men. The association between level of exposure to the tsunami and quality of life seemed to be mediated by posttraumatic stress. Negative perceived changes in the assumption “the world is just” were related to adverse outcome in both quality of life and posttraumatic stress. Positive perceived changes in the assumptions “life is meaningful” and “feeling that I am a valuable human” were associated with higher levels of quality of life but not with posttraumatic stress. Conclusions Quality of life and posttraumatic stress symptoms demonstrate differences in their etiology. World assumptions may be less specifically related to posttraumatic stress than has been postulated in some cognitive theories. PMID:22742447

  18. Lunar occultation of Saturn. II - The normal reflectance of Rhea, Titan, and Iapetus

    Science.gov (United States)

    Elliot, J. L.; Dunham, E. W.; Veverka, J.; Goguen, J.

    1978-01-01

    An inversion procedure to obtain the reflectance of the central region of a satellite's disk from lunar occultation data is presented. The scheme assumes that the limb darkening of the satellite depends only on the radial distance from the center of the disk. Given this assumption, normal reflectances can be derived that are essentially independent of the limb darkening and the diameter of the satellite. The procedure has been applied to our observations of the March 1974 lunar occultation of Tethys, Dione, Rhea, Titan, and Iapetus. In the V passband we derive the following normal reflectances: Rhea (0.97 ± 0.20), Titan (0.24 ± 0.03), Iapetus, bright face (0.60 ± 0.14). For Tethys and Dione the values derived have large uncertainties, but are consistent with our result for Rhea.

  19. Bone mineral density in children with Down's syndrome detected by dual photon absorptiometry

    International Nuclear Information System (INIS)

    Kao, C.H.; Chen, C.C.; Wang, S.J.; Yeh, S.H.

    1992-01-01

    Bone mineral density (BMD) in ten children with Down's syndrome (seven boys, three girls; aged 10-16 years) was measured by dual photon absorptiometry (DPA) using an M and SE Osteo Tech 300 scanner. The BMD of the 2nd to 4th lumbar vertebrae was measured and the mean density presented as g cm⁻². The BMD of the children with Down's syndrome was compared with the BMD of normal Chinese children of the same age group. The results showed that the BMD in Down's syndrome was significantly lower than that found in normal children. The decrease in BMD is 8.47 ± 2.69% (mean ± 1 S.E.M.) in Down's syndrome compared to normal children of the same age group. The distribution curve of BMD against age in Down's syndrome shows a delay of 2.3 ± 0.5 (mean ± 1 S.E.M.) years compared to normal children. In conclusion, children with Down's syndrome have lower BMD than normal children of the same age group. (Author)

  20. Oil production, oil prices, and macroeconomic adjustment under different wage assumptions

    International Nuclear Information System (INIS)

    Harvie, C.; Maleka, P.T.

    1992-01-01

    In a previous paper one of the authors developed a simple model to try to identify the possible macroeconomic adjustment processes arising in an economy experiencing a temporary period of oil production, under alternative wage adjustment assumptions, namely nominal and real wage rigidity. Certain assumptions were made regarding the characteristics of actual production, the permanent revenues generated from that oil production, and the net exports/imports of oil. The role of the price of oil, and possible changes in that price was essentially ignored. Here we attempt to incorporate the price of oil, as well as changes in that price, in conjunction with the production of oil, the objective being to identify the contribution which the price of oil, and changes in it, make to the adjustment process itself. The emphasis in this paper is not given to a mathematical derivation and analysis of the model's dynamics of adjustment or its comparative statics, but rather to the derivation of simulation results from the model, for a specific assumed case, using a numerical algorithm program, conducive to the type of theoretical framework utilized here. The results presented suggest that although the adjustment profiles of the macroeconomic variables of interest, for either wage adjustment assumption, remain fundamentally the same, the magnitude of these adjustments is increased. Hence to derive a more accurate picture of the dimensions of adjustment of these macroeconomic variables, it is essential to include the price of oil as well as changes in that price. (Author)

  1. Improved Topographic Normalization for Landsat TM Images by Introducing the MODIS Surface BRDF

    Directory of Open Access Journals (Sweden)

    Yanli Zhang

    2015-05-01

    In rugged terrain, the accuracy of surface reflectance estimations is compromised by atmospheric and topographic effects. We propose a new method to simultaneously eliminate atmospheric and terrain effects in Landsat Thematic Mapper (TM) images based on a 30 m digital elevation model (DEM) and Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric products. Moreover, we define a normalized factor of the Bidirectional Reflectance Distribution Function (BRDF) to convert the sloping-pixel reflectance into a flat-pixel reflectance by using the Ross Thick-Li Sparse BRDF model (Ambrals algorithm) and MODIS BRDF/albedo kernel coefficient products. Atmospheric correction and topographic normalization were performed for TM images in the upper stream of the Heihe River Basin. The results show that using MODIS atmospheric products can effectively remove atmospheric effects compared with the Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) model and the Landsat Climate Data Record (CDR). Moreover, superior topographic effect removal can be achieved by considering the surface BRDF when compared with the surface Lambertian assumption of topographic normalization.
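
    The following Python sketch illustrates, under stated assumptions, the kind of kernel-driven normalization the abstract describes: a sloped-pixel reflectance is rescaled by the ratio of the modeled BRDF for flat geometry to that for the actual sloped geometry. The function names, kernel coefficients and kernel values are hypothetical placeholders, not values from the paper or the MODIS product.

      def brdf(f_iso, f_vol, f_geo, k_vol, k_geo):
          """Kernel-driven BRDF: isotropic + volumetric + geometric kernel terms."""
          return f_iso + f_vol * k_vol + f_geo * k_geo

      def normalize_to_flat(rho_slope, coeffs, kernels_slope, kernels_flat):
          """Rescale an observed sloped-pixel reflectance by the ratio of the
          modeled BRDF for flat geometry to that for the sloped geometry."""
          f_iso, f_vol, f_geo = coeffs              # MODIS-style kernel weights
          r_slope = brdf(f_iso, f_vol, f_geo, *kernels_slope)
          r_flat = brdf(f_iso, f_vol, f_geo, *kernels_flat)
          return rho_slope * (r_flat / r_slope)

      # Illustrative numbers only:
      print(round(normalize_to_flat(0.18, (0.05, 0.02, 0.01),
                                    kernels_slope=(-0.3, -1.2),
                                    kernels_flat=(-0.1, -0.9)), 3))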

  2. Packing Density Approach for Sustainable Development of Concrete

    Directory of Open Access Journals (Sweden)

    Sudarshan Dattatraya KORE

    2017-12-01

    This paper deals with the details of optimized mix design for normal strength concrete using the particle packing density method. The concrete mixes were also designed as per BIS: 10262-2009. Different water-cement ratios were used and kept the same in both design methods. An attempt has been made to obtain a sustainable and cost-effective concrete product by use of the particle packing density method. Parameters such as workability, compressive strength, cost analysis and carbon dioxide emission are discussed. The results of the study showed that the compressive strength of the concrete produced by the packing density method is close to the design compressive strength of the BIS code method. Adopting the packing density method for the design of concrete mixes resulted in an 11% cost saving with a 12% reduction in carbon dioxide emission.
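
    As a minimal sketch of the packing-density idea (assumed relations and illustrative numbers, not the mix proportions used in the paper), the packing density of a compacted aggregate blend can be estimated as its bulk density divided by the solid density implied by the constituents' specific gravities, with the void content following as the complement:

      def packing_density(bulk_density, mass_fractions, specific_gravities):
          """Packing density of a compacted aggregate blend: bulk density divided
          by the solid density of the blend (densities in kg/m3)."""
          solid_density = 1.0 / sum(f / (sg * 1000.0)
                                    for f, sg in zip(mass_fractions, specific_gravities))
          phi = bulk_density / solid_density
          return phi, 1.0 - phi                     # packing density, void content

      # Illustrative values: 60% coarse (SG 2.7) and 40% fine (SG 2.6) aggregate
      phi, voids = packing_density(2050.0, [0.6, 0.4], [2.7, 2.6])
      print(f"packing density = {phi:.2f}, voids = {voids:.2f}")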

  3. Effect of crop density on competition by wheat and barley with Agrostemma githago and other weeds

    DEFF Research Database (Denmark)

    Doll, H.; Holm, U.; Søgaard, B.

    1995-01-01

    The effect of Agrostemma githago L. and other naturally occurring weeds on biomass production and grain yield was studied in winter wheat and winter barley. Naturally occurring weeds had only a negligible effect on barley, but reduced wheat grain yield by 10% at a quarter of normal crop density....... The interaction between the cereals and A. githago was studied in additive series employing different crop densities. Growth of this weed species was strongly dependent on crop density, which was more important for controlling weed growth than it was for obtaining a normal grain yield. Wheat and especially barley...

  4. Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors.

    Directory of Open Access Journals (Sweden)

    Umair Khalil

    Exponential Smooth Transition Autoregressive (ESTAR) models can capture non-linear adjustment of the deviations from equilibrium conditions which may explain the economic behavior of many variables that appear non-stationary from a linear viewpoint. Many researchers employ the Kapetanios test, which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained critical values for both the finite-variance and infinite-variance cases. However, Cook's test statistics are oversized. Researchers have found that using conventional tests is dangerous, though the best performance among these is obtained with a heteroscedasticity-consistent covariance matrix estimator (HCCME). The oversizing of LM tests can be reduced by employing fixed-design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper the size of the Kapetanios test statistic employing heteroscedasticity-consistent covariance matrices is derived, and results are reported for various sample sizes in which the size distortion is reduced. The properties of estimates of ESTAR models are investigated when errors are assumed to be non-normal. We compare the results obtained by fitting nonlinear least squares with those from quantile regression fitting in the presence of outliers, with the error distribution taken to be a t-distribution, for various sample sizes.
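
    A hedged sketch of the kind of test discussed above is given below: a Kapetanios-style auxiliary regression of Δy_t on y_{t-1}³ whose t-statistic is referred to a fixed-design wild bootstrap rather than to normal-theory critical values. The exact specification in the paper (lag augmentation, HCCME variant, bootstrap design) is not reproduced; the code is an illustrative Python implementation under those simplifying assumptions.

      import numpy as np

      def kss_statistic(y, demean=True):
          """t-statistic of delta in the auxiliary regression dy_t = delta*y_{t-1}**3 + e_t
          (Kapetanios-style nonlinear unit root test); critical values are nonstandard."""
          y = np.asarray(y, dtype=float)
          if demean:
              y = y - y.mean()
          dy, x = np.diff(y), y[:-1] ** 3
          delta = (x @ dy) / (x @ x)
          resid = dy - delta * x
          se = np.sqrt(resid @ resid / (len(dy) - 1) / (x @ x))
          return delta / se

      def wild_bootstrap_pvalue(y, n_boot=499, seed=0):
          """Fixed-design wild bootstrap: rebuild the series under the unit-root null
          from sign-flipped increments (Rademacher weights) to respect heteroscedasticity."""
          rng = np.random.default_rng(seed)
          stat = kss_statistic(y)
          dy = np.diff(np.asarray(y, dtype=float))
          boot = []
          for _ in range(n_boot):
              signs = rng.choice([-1.0, 1.0], size=dy.size)
              y_star = np.concatenate(([y[0]], y[0] + np.cumsum(dy * signs)))
              boot.append(kss_statistic(y_star))
          return stat, float(np.mean(np.array(boot) <= stat))   # left-tailed test

      y = np.cumsum(np.random.default_rng(1).standard_normal(300))  # unit-root series
      print(wild_bootstrap_pvalue(y))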

  5. Spatial Angular Compounding for Elastography without the Incompressibility Assumption

    OpenAIRE

    Rao, Min; Varghese, Tomy

    2005-01-01

    Spatial-angular compounding is a new technique that enables the reduction of noise artifacts in ultrasound elastography. Previous results using spatial angular compounding, however, were based on the use of the tissue incompressibility assumption. Compounded elastograms were obtained from a spatially-weighted average of local strain estimated from radiofrequency echo signals acquired at different insonification angles. In this paper, we present a new method for reducing the noise artifacts in...

  6. Dosimetric precision requirements and quantities for characterizing the response of tumors and normal tissues

    Energy Technology Data Exchange (ETDEWEB)

    Brahme, A [Karolinska Inst., Stockholm (Sweden). Dept. of Radiation Physics

    1996-08-01

    Based on simple radiobiological models, the effect of the distribution of absorbed dose in therapy beams on the radiation response of tumor and normal tissue volumes is investigated. Under the assumption that the dose variation in the treated volume is small, it is shown that the response of the tissue to radiation is determined mainly by the mean dose to the tumor or normal tissue volume in question. Quantitative expressions are also given for the increased probability of normal tissue complications and the decreased probability of tumor control as a function of increasing dose variations around the mean dose level to these tissues. When the dose variations are large, the minimum tumor dose (to cm³-size volumes) will generally be better related to tumor control, and the highest dose to significant portions of normal tissue correlates best to complications. In order not to lose more than one out of 20 curable patients (95% of the highest possible treatment outcome), the required accuracy in the dose distribution delivered to the target volume should be 2.5% (1σ) for a mean dose response gradient γ in the range 2-3. For more steeply responding tumors and normal tissues even stricter requirements may be desirable. (author). 15 refs, 6 figs.
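
    The quoted 2.5% (1σ) requirement can be understood from the linearized relation ΔP ≈ γ·ΔD/D, i.e. a relative dose error translates into a change in response probability of roughly γ times that error. A minimal Python check, assuming this linearization (illustrative only):

      def response_change(relative_dose_error, gamma):
          """Linearized change in response probability (percentage points) for a
          relative dose error, using the normalized dose-response gradient gamma."""
          return gamma * relative_dose_error * 100.0

      # With gamma = 2 (lower end of the quoted range), a 2.5% dose uncertainty
      # corresponds to roughly a 5-point change in tumor control probability,
      # i.e. about one of 20 otherwise curable patients.
      print(response_change(0.025, gamma=2))   # -> 5.0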

  7. The Effect of Obesity on Bone Mineral Density in Primary Fibromyalgia Cases - Original Investigation

    Directory of Open Access Journals (Sweden)

    Bahadır Yesevi

    2005-12-01

    Fibromyalgia is a chronic musculoskeletal disease of unknown etiology, characterized by tender points in various areas of the body and widespread musculoskeletal pain, in which metabolic, immunologic and neuroendocrine abnormalities are seen. In this study, 45 female patients were enrolled according to the 1990 ACR fibromyalgia criteria. They were divided into 3 groups of 15 patients each (normal, preobese and obese) according to body mass index. Bone mineral density of the lumbar spine and femur was measured using dual-energy X-ray absorptiometry. The presence of depression was assessed with the Hamilton Depression Scale. The bone mineral density of the L1-4 region in normal-weight fibromyalgia patients was within the normal range, and there was no statistically significant difference between the groups. In contrast, femoral bone mineral density values were statistically significantly osteopenic compared with the obese groups. There was a negative statistical correlation between depression and lumbar bone mineral density, whereas in the femur bone mineral density was preserved in preobese and obese fibromyalgia patients. The number of studies on this subject is not sufficient, and the number of patients in current studies is low. Further studies with larger patient numbers and more detailed protocols are needed. (Osteoporoz Dünyasından 2005; 4: 148-150)

  8. Electron mobility in supercritical pentanes as a function of density and temperature

    International Nuclear Information System (INIS)

    Itoh, Kengo; Nakagawa, Kazumichi; Nishikawa, Masaru

    1988-01-01

    The excess electron mobility in supercritical n-, iso- and neopentane was measured isothermally as a function of density. The density-normalized mobility μN in all three isomers goes through a minimum at a density below the respective critical densities, and the mobility is quite temperature-dependent in this region, then goes through a minimum. The μN behavior around the minimum in n-pentane is well accounted for by the Cohen-Lekner model with the structure factor S(K) estimated from the speed of sound, while that in iso- and neopentane is not. (author)

  9. Normal Contacts of Lubricated Fractal Rough Surfaces at the Atomic Scale

    NARCIS (Netherlands)

    Solhjoo, Soheil; Vakis, Antonis I.

    The friction of contacting interfaces is a function of surface roughness and applied normal load. Under boundary lubrication, this frictional behavior changes as a function of lubricant wettability, viscosity, and density, by practically decreasing the possibility of dry contact. Many studies on

  10. Determining the Local Dark Matter Density with SDSS G-dwarf data

    Science.gov (United States)

    Silverwood, Hamish; Sivertsson, Sofia; Read, Justin; Bertone, Gianfranco; Steger, Pascal

    2018-04-01

    We present a determination of the local dark matter density derived using the integrated Jeans equation method presented in Silverwood et al. (2016) applied to SDSS-SEGUE G-dwarf data processed by Büdenbender et al. (2015). For our analysis we construct models for the tracer density, dark matter and baryon distribution, and tilt term (linking radial and vertical motions), and then calculate the vertical velocity dispersion using the integrated Jeans equation. These models are then fit to the data using MultiNest, and a posterior distribution for the local dark matter density is derived. We find the most reliable determination to come from the α-young population presented in Büdenbender et al. (2015), yielding a result of ρDM = 0.46 (+0.07, −0.09) GeV cm⁻³ = 0.012 (+0.001, −0.002) M⊙ pc⁻³. Our results also illuminate the path ahead for future analyses using Gaia DR2 data, highlighting which quantities will need to be determined and which assumptions could be relaxed.
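
    As a quick consistency check of the quoted numbers (not part of the paper's analysis), the conversion between the two sets of units follows from standard constants; the constant values below are rounded reference values:

      GEV_TO_GRAM = 1.78266e-24     # mass equivalent of 1 GeV/c^2 in grams
      PARSEC_CM = 3.0857e18         # 1 parsec in centimetres
      MSUN_G = 1.989e33             # solar mass in grams

      def gev_cm3_to_msun_pc3(rho_gev_cm3):
          """Convert a mass density from GeV/cm^3 to solar masses per cubic parsec."""
          return rho_gev_cm3 * GEV_TO_GRAM * PARSEC_CM**3 / MSUN_G

      print(round(gev_cm3_to_msun_pc3(0.46), 4))   # ~0.012 Msun/pc^3, as quoted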

  11. Postfragmentation density function for bacterial aggregates in laminar flow.

    Science.gov (United States)

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M

    2011-04-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society

  12. Reference Priors for the General Location-Scale Model

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1997-01-01

    The reference prior algorithm (Berger and Bernardo 1992) is applied to multivariate location-scale models with any regular sampling density, where we establish the irrelevance of the usual assumption of Normal sampling if our interest is in either the location or the scale. This result immediately

  13. First assumptions and overlooking competing causes of death

    DEFF Research Database (Denmark)

    Leth, Peter Mygind; Andersen, Anh Thao Nguyen

    2014-01-01

    Determining the most probable cause of death is important, and it is sometimes tempting to assume an obvious cause of death, when it readily presents itself, and stop looking for other competing causes of death. The case story presented in the article illustrates this dilemma. The first assumption...... of cause of death, which was based on results from bacteriology tests, proved to be wrong when the results from the forensic toxicology testing became available. This case also illustrates how post mortem computed tomography (PMCT) findings of radio opaque material in the stomach alerted the pathologist...

  14. Radiation hormesis and the linear-no-threshold assumption

    CERN Document Server

    Sanders, Charles L

    2009-01-01

    Current radiation protection standards are based upon the application of the linear no-threshold (LNT) assumption, which considers that even very low doses of ionizing radiation can cause cancer. The radiation hormesis hypothesis, by contrast, proposes that low-dose ionizing radiation is beneficial. In this book, the author examines all facets of radiation hormesis in detail, including the history of the concept and mechanisms, and presents comprehensive, up-to-date reviews for major cancer types. It is explained how low-dose radiation can in fact decrease all-cause and all-cancer mortality an

  15. Size-density relations in dark clouds: Non-LTE effects

    International Nuclear Information System (INIS)

    Maloney, P.

    1986-01-01

    One of the major goals of molecular astronomy has been to understand the physics and dynamics of dense interstellar clouds. Because the interpretation of observations of giant molecular clouds is complicated by their very complex structure and the dynamical effects of star formation, a number of studies have concentrated on dark clouds. Leung, Kutner and Mead (1982) (hereafter LKM) and Myers (1983), in studies of CO and NH₃ emission, concluded that dark clouds exhibit significant correlations between linewidth and cloud radius of the form Δv ∝ R^0.5 and between mean density and radius of the form n ∝ R⁻¹, as originally suggested by Larson (1981). This result suggests that these objects are in virial equilibrium. However, the mean densities inferred from the CO data of LKM are based on a local thermodynamic equilibrium (LTE) analysis of their ¹³CO data. At the very low mean densities inferred by LKM for the larger clouds in their samples, the assumption of LTE becomes very questionable. As most of the range in R in the density-size correlation comes from the clouds observed in CO, it seems worthwhile to examine how non-LTE effects will influence the derived densities. Microturbulent models of inhomogeneous clouds of varying central concentration with the linewidth-size and mean density-size relations found by Myers show sub-thermal excitation of the ¹³CO line in the larger clouds, with the result that LTE analysis considerably underestimates the actual column density. A more general approach, which doesn't require detailed modeling of the clouds, is to consider whether the observed T_R*(¹³CO)/T_R*(¹²CO) ratios in the clouds studied by LKM are in the range where the LTE-derived optical depths can be seriously in error due to sub-thermal excitation of the ¹³CO molecule.

  16. Electron scattering by nuclei and transition charge densities

    International Nuclear Information System (INIS)

    Gul'karov, I.S.

    1988-01-01

    Transition charge densities for states of electric type, for nuclei with A ≤ 40-50, as obtained from data on inelastic electron scattering, are studied. The formalism of electroexcitation of nuclei is considered, together with various models (macroscopic and microscopic) used to calculate form factors, transition charge densities, and the moments of these densities: B(Eλ) and R_λ. The macroscopic models are derived microscopically, and it is shown that the model-independent sum rules lead to the same transition densities as calculations based on various hydrodynamic models. The sum rules with and without allowance for the Skyrme exchange interaction are discussed. The results of the calculations are compared with the experimental form factors of electron scattering by nuclei from ¹²C to ⁴⁸Ca with excitation in them of normal-parity states with Iπ = 0⁺, 1⁻, 2⁺, 3⁻, 4⁺, 5⁻ and T = 0. The model-independent transition charge densities for the weakly collectivized excitations differ strongly from the model-dependent densities. The influence of neutrons on the transition charge densities of the nuclear isotopes ¹⁶,¹⁸O, ³²,³⁴S, and ⁴⁰,⁴⁸Ca is considered.

  17. Design Considerations and Validation of Tenth Value Layer Used for a Medical Linear Accelerator Bunker Using High Density Concrete

    International Nuclear Information System (INIS)

    Peet, Deborah; Horton, Patrick; Jones, Matthew; Ramsdale, Malcolm

    2006-01-01

    A bunker for the containment and medical use of 10 MV and 6 MV X-rays from a linear accelerator was designed to be added on to four existing bunkers. Space was limited and the walls of the bunker were built using Magnadense, a high density aggregate mined in Sweden and imported into the UK by Minelco Minerals Ltd. The density was specified by the user to be a minimum of 3800 kg/m³. This reduced the thickness of primary and secondary shielding over that required using standard concrete. Standard concrete (density 2350 kg/m³) was used for the roof of the bunker. No published data for the tenth value layer (T.V.L.) of the high density concrete were available and values of T.V.L. were derived from those for standard concrete using the ratio of density. Calculations of wall thickness along established principles using normal assumptions and dose constraints resulted in a design with minimum primary wall barriers of 1500 mm and secondary barriers of between 800 mm and 1000 mm of high density concrete. Following construction, measurements were made of the dose rates outside the shielding, thereby allowing estimates of the T.V.L. of the material for 6 and 10 MV X-rays. The instantaneous dose rates outside the primary barrier walls were calculated to be less than 6 × 10⁻⁶ Sv/hr but on measurement were found to be more than a factor of 4 lower than this. Calculations were reviewed and the T.V.L. was found to be 12% greater than that required to achieve the measured dose rate. On the roof, the instantaneous dose rate at the primary barrier was measured to be within 3% of that predicted using the published values of T.V.L. for standard concrete. Sample cubes of standard and high density concrete poured during construction showed that the density of the standard concrete in the roof was close to that used in the design whereas the physical density of Magnadense concrete was on average 5% higher than that specified. In conclusion, values of T.V.L. for the high density
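
    The design logic described above can be summarized in a short sketch: scale the standard-concrete T.V.L. by the inverse ratio of the physical densities, then size the barrier from the number of tenth-value layers needed to meet the dose-rate constraint. The Python example below uses hypothetical input values, not the published barrier data:

      import math

      def scaled_tvl(tvl_standard_mm, density_standard=2350.0, density_heavy=3800.0):
          """Estimate the T.V.L. of high-density concrete by scaling the
          standard-concrete value with the inverse ratio of physical densities."""
          return tvl_standard_mm * density_standard / density_heavy

      def barrier_thickness(dose_rate_unshielded, dose_rate_constraint, tvl_mm):
          """Thickness = number of tenth-value layers times the T.V.L., where the
          number of layers is log10(unshielded dose rate / constraint)."""
          return math.log10(dose_rate_unshielded / dose_rate_constraint) * tvl_mm

      # Hypothetical inputs (not the published data):
      tvl_heavy = scaled_tvl(tvl_standard_mm=410.0)        # assumed 10 MV T.V.L.
      print(round(tvl_heavy), "mm per T.V.L.")
      print(round(barrier_thickness(60.0, 6e-6, tvl_heavy)), "mm total")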

  18. High density dispersion fuel

    International Nuclear Information System (INIS)

    Hofman, G.L.

    1996-01-01

    A fuel development campaign that results in an aluminum plate-type fuel of unlimited LEU burnup capability with a uranium loading of 9 g per cm³ of meat should be considered an unqualified success. The current worldwide approved and accepted highest loading is 4.8 g cm⁻³ with U₃Si₂ as fuel. High-density uranium compounds offer no real density advantage over U₃Si₂ and have less desirable fabrication and performance characteristics as well. Of the higher-density compounds, U₃Si has approximately a 30% higher uranium density but the density of the U₆X compounds would yield the factor 1.5 needed to achieve 9 g cm⁻³ uranium loading. Unfortunately, irradiation tests proved these peritectic compounds have poor swelling behavior. It is for this reason that the authors are turning to uranium alloys. The reason pure uranium was not seriously considered as a dispersion fuel is mainly due to its high rate of growth and swelling at low temperatures. This problem was solved, at least for relatively low burnup application, in non-dispersion fuel elements with small additions of Si, Fe, and Al. This so-called adjusted uranium has nearly the same density as pure α-uranium and it seems prudent to reconsider this alloy as a dispersant. Further modifications of uranium metal to achieve higher burnup swelling stability involve stabilization of the cubic γ phase at low temperatures where normally the α phase exists. Several low neutron capture cross section elements such as Zr, Nb, Ti and Mo accomplish this in various degrees. The challenge is to produce a suitable form of fuel powder and develop a plate fabrication procedure, as well as obtain high burnup capability through irradiation testing

  19. A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.

    Science.gov (United States)

    Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven

    2003-01-01

    Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)

  20. Investigating Darcy-scale assumptions by means of a multiphysics algorithm

    Science.gov (United States)

    Tomin, Pavel; Lunati, Ivan

    2016-09-01

    Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. This suggests that macroscopic constitutive relationships (e.g., the relative

  1. Consequences of Violated Equating Assumptions under the Equivalent Groups Design

    Science.gov (United States)

    Lyren, Per-Erik; Hambleton, Ronald K.

    2011-01-01

    The equal ability distribution assumption associated with the equivalent groups equating design was investigated in the context of a selection test for admission to higher education. The purpose was to assess the consequences for the test-takers in terms of receiving improperly high or low scores compared to their peers, and to find strong…

  2. Effect of normal impurities on anisotropic superconductors with variable density of states

    Science.gov (United States)

    Whitmore, M. D.; Carbotte, J. P.

    1982-06-01

    We develop a generalized BCS theory of impure superconductors with an anisotropic electron-electron interaction represented by the factorizable model introduced by Markowitz and Kadanoff, and a variable electronic density of states N(ɛ), assumed to peak at the Fermi energy, which is modeled by a Lorentzian superimposed on a uniform background. As the impurity scattering is increased, the enhancement of T c by both the anisotropy and the peak in N(ɛ) is washed out. The reduction is investigated for different values of the anisotropy and different peak heights and widths. It is concluded that the effects of anisotropy and the peak are reduced together in such a way that any effect due to anisotropy is not easily distinguishable from that due to the peak.
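
    A minimal sketch of the density-of-states model described above (a Lorentzian peak at the Fermi energy superimposed on a uniform background) is given below; the parameter names and numerical values are illustrative assumptions, not those of the paper:

      import numpy as np

      def dos(eps, background=1.0, peak_height=0.5, half_width=5.0):
          """Model density of states: uniform background plus a Lorentzian peak
          centred at the Fermi energy (eps = 0); units and values are illustrative."""
          return background * (1.0 + peak_height * half_width**2
                               / (eps**2 + half_width**2))

      eps = np.linspace(-50.0, 50.0, 5)
      print(dos(eps))   # peaks at eps = 0 and tends to the background far away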

  3. Associations Between Changes in Normal Personality Traits and Borderline Personality Disorder Symptoms over 16 years

    Science.gov (United States)

    Wright, Aidan G.C.; Hopwood, Christopher J.; Zanarini, Mary C.

    2014-01-01

    There has been significant movement toward conceptualizing borderline personality disorder (BPD) with normal personality traits. However one critical assumption underlying this transition, that longitudinal trajectories of BPD symptoms and normal traits track together, has not been tested. We evaluated the prospective longitudinal associations of changes in five-factor model traits and BPD symptoms over the course of 16 years using parallel process latent growth curve models in 362 patients with BPD (N=290) or other PDs (N=72). Moderate to strong cross-sectional and longitudinal associations were observed between BPD symptoms and Neuroticism, Extraversion, Agreeableness, and Conscientiousness. This study is the first to demonstrate a longitudinal link between changes in BPD symptoms and changes in traits over an extended interval in a clinical sample. These findings imply that changes in BPD symptoms occur in concert with changes in normal traits, and support the proposed transition to conceptualizing BPD, at least in part, with trait dimensions. PMID:25364942

  4. Basic concepts and assumptions behind the new ICRP recommendations

    International Nuclear Information System (INIS)

    Lindell, B.

    1979-01-01

    A review is given of some of the basic concepts and assumptions behind the current recommendations by the International Commission on Radiological Protection in ICRP Publications 26 and 28, which form the basis for the revision of the Basic Safety Standards jointly undertaken by IAEA, ILO, NEA and WHO. Special attention is given to the assumption of a linear, non-threshold dose-response relationship for stochastic radiation effects such as cancer and hereditary harm. The three basic principles of protection are discussed: justification of practice, optimization of protection and individual risk limitation. In the new ICRP recommendations particular emphasis is given to the principle of keeping all radiation doses as low as is reasonably achievable. A consequence of this is that the ICRP dose limits are now given as boundary conditions for the justification and optimization procedures rather than as values that should be used for purposes of planning and design. The fractional increase in total risk at various ages after continuous exposure near the dose limits is given as an illustration. The need for taking other sources, present and future, into account when applying the dose limits leads to the use of the commitment concept. This is briefly discussed as well as the new quantity, the effective dose equivalent, introduced by ICRP. (author)

  5. Propulsion Physics Under the Changing Density Field Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a space faring race, future spaceflight systems will require new propulsion physics. Specifically, a propulsion physics model is needed that does not require mass ejection, without limiting the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. In 2004, Khoury and Weltman produced a density-dependent cosmology theory they called Chameleon Cosmology because, by its nature, it is hidden within known physics. This theory represents a scalar field within and about an object, even in the vacuum. These scalar fields can be viewed as vacuum energy fields with definable densities that permeate all matter, with implications for dark matter/energy and the acceleration of the universe, and implying a new force mechanism for propulsion physics. Using Chameleon Cosmology, the author has developed a new propulsion physics model, called the Changing Density Field (CDF) Model. This model relates the density changes in these density fields to the acceleration of matter within an object. These density changes in turn change how an object couples to the surrounding density fields. Thrust is achieved by causing a differential in the coupling to these density fields about an object. Since the model indicates that the density of the density field in an object can be changed by internal mass acceleration, even without exhausting mass, the CDF model implies a new propellant-less propulsion physics model

  6. Has the "Equal Environments" assumption been tested in twin studies?

    Science.gov (United States)

    Eaves, Lindon; Foley, Debra; Silberg, Judy

    2003-12-01

    A recurring criticism of the twin method for quantifying genetic and environmental components of human differences is the necessity of the so-called "equal environments assumption" (EEA) (i.e., that monozygotic and dizygotic twins experience equally correlated environments). It has been proposed to test the EEA by stratifying twin correlations by indices of the amount of shared environment. However, relevant environments may also be influenced by genetic differences. We present a model for the role of genetic factors in niche selection by twins that may account for variation in indices of the shared twin environment (e.g., contact between members of twin pairs). Simulations reveal that stratification of twin correlations by amount of contact can yield spurious evidence of large shared environmental effects in some strata and even give false indications of genotype x environment interaction. The stratification approach to testing the equal environments assumption may be misleading and the results of such tests may actually be consistent with a simpler theory of the role of genetic factors in niche selection.

  7. Bell violation using entangled photons without the fair-sampling assumption.

    Science.gov (United States)

    Giustina, Marissa; Mech, Alexandra; Ramelow, Sven; Wittmann, Bernhard; Kofler, Johannes; Beyer, Jörn; Lita, Adriana; Calkins, Brice; Gerrits, Thomas; Nam, Sae Woo; Ursin, Rupert; Zeilinger, Anton

    2013-05-09

    The violation of a Bell inequality is an experimental observation that forces the abandonment of a local realistic viewpoint--namely, one in which physical properties are (probabilistically) defined before and independently of measurement, and in which no physical influence can propagate faster than the speed of light. All such experimental violations require additional assumptions depending on their specific construction, making them vulnerable to so-called loopholes. Here we use entangled photons to violate a Bell inequality while closing the fair-sampling loophole, that is, without assuming that the sample of measured photons accurately represents the entire ensemble. To do this, we use the Eberhard form of Bell's inequality, which is not vulnerable to the fair-sampling assumption and which allows a lower collection efficiency than other forms. Technical improvements of the photon source and high-efficiency transition-edge sensors were crucial for achieving a sufficiently high collection efficiency. Our experiment makes the photon the first physical system for which each of the main loopholes has been closed, albeit in different experiments.

  8. Kernel density estimation-based real-time prediction for respiratory motion

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
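
    A hedged sketch of the core idea, not the authors' implementation: estimate the joint density of (covariate, response) with Gaussian kernels and read off the conditional distribution at the observed covariate. With Gaussian kernels the conditional mean reduces to a kernel-weighted average of the training responses; the toy respiratory-like trace, bandwidth and covariate length below are assumptions for illustration.

      import numpy as np

      def kde_conditional(x_train, y_train, x_query, bandwidth=1.0):
          """Gaussian-kernel estimate of the joint pdf of (covariate, response),
          read off as a conditional distribution at x_query; returns the conditional
          mean and variance (the full conditional pdf could be retained instead)."""
          d2 = np.sum((x_train - x_query) ** 2, axis=1)
          w = np.exp(-0.5 * d2 / bandwidth**2)
          w /= w.sum()
          mean = np.dot(w, y_train)
          var = np.dot(w, (y_train - mean) ** 2) + bandwidth**2   # kernel spread in y
          return mean, var

      # Toy respiratory-like trace: predict a future sample from 3 past samples.
      rng = np.random.default_rng(0)
      t = np.arange(0, 60, 0.2)
      signal = np.sin(2 * np.pi * t / 4.0) + 0.05 * rng.standard_normal(t.size)
      X = np.stack([signal[i:i + 3] for i in range(len(signal) - 4)])   # covariates
      y = signal[4:]                                                    # responses
      mean, var = kde_conditional(X[:-1], y[:-1], X[-1], bandwidth=0.2)
      print(f"predicted {mean:.3f} +/- {np.sqrt(var):.3f}, observed {y[-1]:.3f}")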

  9. Representation and validation of liquid densities for pure compounds and mixtures

    DEFF Research Database (Denmark)

    Diky, Vladimir; O'Connell, John P.; Abildskov, Jens

    2015-01-01

    Reliable correlation and prediction of liquid densities are important for designing chemical processes at normal and elevated pressures. A corresponding-states model from molecular theory was extended to yield a robust method for quality testing of experimental data that also provides predicted...... values at unmeasured conditions. The model has been shown to successfully represent and validate the pressure and temperature dependence of liquid densities greater than 1.5 of the critical density for pure compounds, binary mixtures, and ternary mixtures from the triple to critical temperatures...

  10. Size-density relations in dark clouds: Non-LTE effects

    Science.gov (United States)

    Maloney, P.

    1986-01-01

    One of the major goals of molecular astronomy has been to understand the physics and dynamics of dense interstellar clouds. Because the interpretation of observations of giant molecular clouds is complicated by their very complex structure and the dynamical effects of star formation, a number of studies have concentrated on dark clouds. Leung, Kutner and Mead (1982) (hereafter LKM) and Myers (1983), in studies of CO and NH₃ emission, concluded that dark clouds exhibit significant correlations between linewidth and cloud radius of the form Δv ∝ R^0.5 and between mean density and radius of the form n ∝ R⁻¹, as originally suggested by Larson (1981). This result suggests that these objects are in virial equilibrium. However, the mean densities inferred from the CO data of LKM are based on a local thermodynamic equilibrium (LTE) analysis of their ¹³CO data. At the very low mean densities inferred by LKM for the larger clouds in their samples, the assumption of LTE becomes very questionable. As most of the range in R in the density-size correlation comes from the clouds observed in CO, it seems worthwhile to examine how non-LTE effects will influence the derived densities. One way to assess the validity of LTE-derived densities is to construct cloud models and then to interpret them in the same way as the observed data. Microturbulent models of inhomogeneous clouds of varying central concentration with the linewidth-size and mean density-size relations found by Myers show sub-thermal excitation of the ¹³CO line in the larger clouds, with the result that LTE analysis considerably underestimates the actual column density. A more general approach, which doesn't require detailed modeling of the clouds, is to consider whether the observed T_R*(¹³CO)/T_R*(¹²CO) ratios in the clouds studied by LKM are in the range where the LTE-derived optical depths (and hence column densities) can be seriously in error due to sub-thermal excitation of the ¹³CO

  11. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely more applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions on model outcomes and thus ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at the both the field and watershed scales.

  12. Magnetic flux density in the heliosphere through several solar cycles

    Energy Technology Data Exchange (ETDEWEB)

    Erdős, G. [Wigner Research Centre for Physics, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest (Hungary); Balogh, A., E-mail: erdos.geza@wigner.mta.hu [The Blackett Laboratory, Imperial College London, London SW7 2BZ (United Kingdom)

    2014-01-20

    We studied the magnetic flux density carried by solar wind to various locations in the heliosphere, covering a heliospheric distance range of 0.3-5.4 AU and a heliolatitudinal range from 80° south to 80° north. Distributions of the radial component of the magnetic field, B_R, were determined over long intervals from the Helios, ACE, STEREO, and Ulysses missions, as well as from using the 1 AU OMNI data set. We show that at larger distances from the Sun, the fluctuations of the magnetic field around the average Parker field line distort the distribution of B_R to such an extent that the determination of the unsigned, open solar magnetic flux density from the average |B_R| is no longer justified. We analyze in detail two methods for reducing the effect of fluctuations. The two methods are tested using magnetic field and plasma velocity measurements in the OMNI database and in the Ulysses observations, normalized to 1 AU. It is shown that without such corrections for the fluctuations, the magnetic flux density measured by Ulysses around the aphelion phase of the orbit is significantly overestimated. However, the matching between the in-ecliptic magnetic flux density at 1 AU (OMNI data) and the off-ecliptic, more distant, normalized flux density by Ulysses is remarkably good if corrections are made for the fluctuations using either method. The main finding of the analysis is that the magnetic flux density in the heliosphere is fairly uniform, with no significant variations having been observed either in heliocentric distance or heliographic latitude.
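
    The distortion described above can be illustrated with a short simulation (hypothetical field and fluctuation values, not the mission data): once fluctuations around the mean radial field are large enough to flip its sign, the average of |B_R| exceeds the magnitude of the underlying mean field, overestimating the open flux density.

      import numpy as np

      rng = np.random.default_rng(42)
      b_parker = 0.4                      # assumed mean radial field, arbitrary units
      for sigma in (0.1, 0.5, 1.0):       # increasing fluctuation levels
          b_r = b_parker + sigma * rng.standard_normal(100_000)
          # |<B_R>| recovers the underlying field, while <|B_R|> is inflated once
          # fluctuations are large enough to flip the sign of B_R.
          print(f"sigma={sigma:.1f}  |<B_R>|={abs(b_r.mean()):.3f}  "
                f"<|B_R|>={np.abs(b_r).mean():.3f}")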

  13. Calculation of the spectrum of γ rays connecting superdeformed and normally deformed nuclear states

    Energy Technology Data Exchange (ETDEWEB)

    Dossing, T.; Khoo, T.L.; Lauritsen, T. [and others

    1995-08-01

    The decay out of superdeformed states occurs by coupling to compound nuclear states of normal deformation. The coupling is very weak, resulting in mixing of the SD state with one or two normal compound states. With a high energy available for decay, a statistical spectrum ensues. The shape of this statistical spectrum contains information on the level densities of the excited states below the SD level. The level densities are sensitively affected by the pair correlations. Thus decay-out of a SD state (which presents us with a means to start a statistical cascade from a highly-excited sharp state) provides a method for investigating the reduction of pairing with increasing thermal excitation energy.

  14. A Patch Density Recommendation based on Convergence Studies for Vehicle Panel Vibration Response resulting from Excitation by a Diffuse Acoustic Field

    Science.gov (United States)

    Smith, Andrew; LaVerde, Bruce; Jones, Douglas; Towner, Robert; Waldon, James; Hunt, Ron

    2013-01-01

    Fluid-structure interaction estimates of panel vibration from an applied pressure field excitation are quite dependent on the spatial correlation of the pressure field. There is a danger of either overestimating the low-frequency response or underpredicting the broadband panel response in the more modally dense bands if the pressure field spatial correlation is not accounted for adequately. It is a useful practice to simulate the spatial correlation of the applied pressure field over a 2D surface using a matrix of small patch-area regions on a finite element model (FEM). Use of a fitted function for the spatial correlation between patch centers can result in error if the choice of patch density is not fine enough to represent the more continuous spatial correlation function throughout the intended frequency range of interest. Several patch density assumptions used to approximate the fitted spatial correlation function are first evaluated using both qualitative and quantitative illustrations. The actual response of a typical vehicle panel system FEM is then examined in a convergence study where the patch density assumptions are varied over the same model. The convergence study results illustrate the impacts possible from a poor choice of patch density on the analytical response estimate. The fitted correlation function used in this study represents a diffuse acoustic field (DAF) excitation of the panel to produce the vibration response.
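
    Assuming the commonly used diffuse-acoustic-field correlation sin(kr)/(kr) between points a distance r apart (a standard form, though the exact fitted function of the study is not reproduced here), a quick Python check of whether a candidate patch spacing resolves that function at the top of the frequency band might look like the following; the spacing and frequency are hypothetical:

      import numpy as np

      def daf_correlation(r, freq_hz, c=343.0):
          """Spatial correlation of a diffuse acoustic field between points a
          distance r apart: sin(kr)/(kr), with k the acoustic wavenumber."""
          k = 2.0 * np.pi * freq_hz / c
          return np.sinc(k * r / np.pi)          # np.sinc(x) = sin(pi*x)/(pi*x)

      def patches_per_wavelength(patch_spacing_m, freq_hz, c=343.0):
          """Rule-of-thumb check: patch centers sampling one acoustic wavelength
          at the highest frequency of interest."""
          return c / freq_hz / patch_spacing_m

      # Hypothetical 50 mm patch spacing checked at 2 kHz:
      print(patches_per_wavelength(0.05, 2000.0))              # ~3.4 per wavelength
      print(daf_correlation(np.array([0.0, 0.05, 0.10]), 2000.0))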

  15. Normal Isocurvature Surfaces and Special Isocurvature Circles (SIC)

    Science.gov (United States)

    Manoussakis, Gerassimos; Delikaraoglou, Demitris

    2010-05-01

    An isocurvature surface of a gravity field is a surface on which the value of the plumblines' curvature is constant. Here we study the isocurvature surfaces of the Earth's normal gravity field. The normal gravity field is a symmetric gravity field; therefore the isocurvature surfaces are surfaces of revolution. But even in this case the necessary relations for their study are not simple at all. Therefore, to study an isocurvature surface we make special assumptions to form a vector equation which holds only for a small coordinate patch of the isocurvature surface. Yet from the definition of the isocurvature surface and the properties of the normal gravity field it is possible to express very interesting global geometrical properties of these surfaces without resorting to surface differential calculus. The gradient of the plumblines' curvature function is perpendicular to an isocurvature surface. If P is a point of an isocurvature surface and Φ is the angle of the gradient of the plumblines' curvature with the equatorial plane, then this direction is the direction along which the curvature of the plumbline decreases/increases the most, and is therefore related to the strength of the normal gravity field. We will show that this direction is constant along a line of curvature of the isocurvature surface and that this line is an isocurvature circle. In addition, we will show that on each isocurvature surface there is at least one isocurvature circle along which the direction of the maximum variation of the plumblines' curvature function is parallel to the equatorial plane of the ellipsoid of revolution. This circle is defined as a Special Isocurvature Circle (SIC). Finally, we shall prove that all these SICs lie on a special surface of revolution, the so-called SIC surface. That is to say, a SIC is not an isolated curve in three-dimensional space.

  16. Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, E.; Denholm, P.; Margolis, R.

    2013-01-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.

  17. THE ELECTRON DENSITY IN EXPLOSIVE TRANSITION REGION EVENTS OBSERVED BY IRIS

    Energy Technology Data Exchange (ETDEWEB)

    Doschek, G. A.; Warren, H. P. [Space Science Division, Naval Research Laboratory, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States); Young, P. R. [College of Science, George Mason University, 4400 University Drive, Fairfax, VA 22030 (United States)

    2016-11-20

    We discuss the intensity ratio of the O iv line at 1401.16 Å to the Si iv line at 1402.77 Å in Interface Region Imaging Spectrograph (IRIS) spectra. This intensity ratio is important if it can be used to measure high electron densities that cannot be measured using line intensity ratios of two different O iv lines from the multiplet within the IRIS wavelength range. Our discussion is in terms of considerably earlier observations made from the Skylab manned space station and other spectrometers on orbiting spacecraft. The earlier data on the O iv and Si iv ratio and other intersystem line ratios not available to IRIS are complementary to IRIS data. In this paper, we adopt a simple interpretation based on electron density. We adopt a set of assumptions and calculate the electron density as a function of velocity in the Si iv line profiles of two explosive events. At zero velocity the densities are about 2–3 × 10¹¹ cm⁻³, and near 200 km s⁻¹ outflow speed the densities are about 10¹² cm⁻³. The densities increase with outflow speed up to about 150 km s⁻¹, after which they level off. Because of the difference in the temperature of formation of the two lines and other possible effects such as non-ionization equilibrium, these density measurements do not have the precision that would be available if there were some additional lines near the formation temperature of O iv.

  18. Low bone mass density is associated with hemolysis in Brazilian patients with sickle cell disease

    Directory of Open Access Journals (Sweden)

    Gabriel Baldanzi

    2011-01-01

    OBJECTIVES: To determine whether kidney disease and hemolysis are associated with bone mass density in a population of adult Brazilian patients with sickle cell disease. INTRODUCTION: Bone involvement is a frequent clinical manifestation of sickle cell disease, and it has multiple causes; however, there are few consistent clinical associations between bone involvement and sickle cell disease. METHODS: Patients over 20 years of age with sickle cell disease who were regularly followed at the Hematology and Hemotherapy Center of Campinas, Brazil, were sorted into three groups, including those with normal bone mass density, those with osteopenia, and those with osteoporosis, according to the World Health Organization criteria. The clinical data of the patients were compared using statistical analyses. RESULTS: In total, 65 patients were included in this study: 12 (18.5%) with normal bone mass density, 37 (57%) with osteopenia and 16 (24.5%) with osteoporosis. Overall, 53 patients (81.5%) had bone mass densities below normal standards. Osteopenia and osteoporosis patients had increased lactate dehydrogenase levels and reticulocyte counts compared to patients with normal bone mass density (p<0.05). Osteoporosis patients also had decreased hemoglobin levels (p<0.05). Hemolysis was significantly increased in patients with osteoporosis compared with patients with osteopenia, as indicated by increased lactate dehydrogenase levels and reticulocyte counts as well as decreased hemoglobin levels. Osteoporosis patients were older, with lower glomerular filtration rates than patients with osteopenia. There was no significant difference between the groups with regard to gender, body mass index, serum creatinine levels, estimated creatinine clearance, or microalbuminuria. CONCLUSION: A high prevalence of reduced bone mass density that was associated with hemolysis was found in this population, as indicated by the high lactate dehydrogenase levels, increased

  19. Comparison of serum lipid profiles between normal controls and breast cancer patients

    Directory of Open Access Journals (Sweden)

    Pikul Laisupasin

    2013-01-01

    Background: Researchers have reported associations of plasma/serum lipids and lipoproteins with different cancers. Increased levels of circulating lipids and lipoproteins have been associated with breast cancer risk. Aim: The aim of this study is to compare serum lipid profiles: total cholesterol (T-CHOL), triglyceride (TG), high-density lipoprotein-cholesterol (HDL-C), low-density lipoprotein-cholesterol (LDL-C) and very-low-density lipoprotein-cholesterol (VLDL-C), between breast cancer patients and normal participants. Materials and Methods: A total of 403 women in this study were divided into two groups during the period May 2006-April 2007. Blood samples were collected from 249 patients with early stage breast cancer and 154 normal controls for serum lipid profile (T-CHOL, TG, HDL-C, LDL-C and VLDL-C) analysis using a Hitachi 717 Autoanalyzer (Roche Diagnostics GmbH, Germany). TG, LDL-C and VLDL-C levels in the breast cancer group were significantly increased as compared with the normal control group (P < 0.001), whereas HDL-C and T-CHOL levels were not. Results: The results of this study suggest that increased serum lipid profiles may be associated with breast cancer risk in Thai women. Further studies that group important factors, including cancer stage, type of cancer, parity, and menopausal status, which may affect lipid profiles in breast cancer patients, along with investigation of new lipid profiles to clarify which lipid factors may be involved in breast cancer development, are needed.

  20. Options and pitfalls of normal tissues complication probability models

    International Nuclear Information System (INIS)

    Dorr, Wolfgang

    2011-01-01

    Technological improvements in the physical administration of radiotherapy have led to increasing conformation of the treatment volume (TV) with the planning target volume (PTV) and of the irradiated volume (IV) with the TV. In this process of improvement of the physical quality of radiotherapy, the total volumes of organs at risk exposed to significant doses have significantly decreased, resulting in increased inhomogeneities in the dose distributions within these organs. This has resulted in a need to identify and quantify volume effects in different normal tissues. Today, irradiated volume must be considered a 6th 'R' of radiotherapy, in addition to the 5 'Rs' defined by Withers and Steel in the mid-to-late 1980s. The current status of knowledge of these volume effects has recently been summarized for many organs and tissues by the QUANTEC (Quantitative Analysis of Normal Tissue Effects in the Clinic) initiative [Int. J. Radiat. Oncol. Biol. Phys. 76 (3) Suppl., 2010]. However, the concept of using dose-volume histogram parameters as a basis for dose constraints, even without applying any models for normal tissue complication probabilities (NTCP), is based on (some) assumptions that are not met in clinical routine treatment planning. First, and most important, dose-volume histogram (DVH) parameters are usually derived from a single, 'snap-shot' CT scan, without considering physiological (urinary bladder, intestine) or radiation-induced (edema, patient weight loss) changes during radiotherapy. Also, individual variations, or different institutional strategies for delineating organs at risk, are rarely considered. Moreover, the reduction of the 3-dimensional dose distribution into a 2-dimensional DVH parameter implies that the localization of the dose within an organ is irrelevant; there are ample examples that this assumption is not justified. Routinely used dose constraints also do not take into account that the residual function of an organ may be
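
    For context, one widely used NTCP formulation in this area is the Lyman-Kutcher-Burman model, in which the DVH is reduced to a generalized equivalent uniform dose and mapped through a probit curve. The sketch below is a generic illustration with hypothetical DVH bins and parameter values, not a model or parameter set endorsed by the text above:

      import math

      def geud(doses_gy, volume_fractions, n):
          """Generalized equivalent uniform dose of a DVH; n is the volume-effect parameter."""
          a = 1.0 / n
          return sum(v * d**a for d, v in zip(doses_gy, volume_fractions)) ** (1.0 / a)

      def lkb_ntcp(doses_gy, volume_fractions, td50, m, n):
          """Lyman-Kutcher-Burman NTCP: probit of (gEUD - TD50) / (m * TD50)."""
          t = (geud(doses_gy, volume_fractions, n) - td50) / (m * td50)
          return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

      # Hypothetical differential DVH (dose bins in Gy, fractional volumes) and parameters:
      dvh_dose = [10.0, 30.0, 50.0, 66.0]
      dvh_volume = [0.40, 0.30, 0.20, 0.10]
      print(round(lkb_ntcp(dvh_dose, dvh_volume, td50=65.0, m=0.14, n=0.35), 3))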