WorldWideScience

Sample records for iii likelihood analysis

  1. Unbinned likelihood analysis of EGRET observations

    International Nuclear Information System (INIS)

    Digel, Seth W.

    2000-01-01

    We present a newly developed likelihood analysis method for EGRET data that defines the likelihood function without binning the photon data or averaging the instrumental response functions. The standard likelihood analysis applied to EGRET data requires the photons to be binned spatially and in energy, and the point-spread functions to be averaged over energy and inclination angle. The full width at half maximum of the point-spread function increases by about 40% from on-axis to 30° inclination, and depending on the binning in energy it can vary by more than that within a single energy bin. The new unbinned method avoids the loss of information that binning and averaging cause, and it can properly analyze regions where EGRET viewing periods overlap and photons with different inclination angles would otherwise be combined in the same bin. In the poster, we describe the unbinned analysis method and compare its sensitivity with binned analysis for detecting point sources in EGRET data.
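    The contrast the abstract draws can be sketched with a toy extended unbinned log-likelihood: the data enter as individual photon positions, never as bin contents. The Gaussian point-spread shape, count levels, and observation window below are invented for illustration and are not the EGRET instrument response.

```python
import numpy as np

def unbinned_loglike(photons, rate, expected_total):
    """Extended unbinned log-likelihood: the sum of log intensities
    evaluated at each individual photon position, minus the total
    expected count. No spatial or energy binning is involved."""
    return np.sum(np.log(rate(photons))) - expected_total

# Toy model: a point source with a Gaussian PSF on top of a flat background.
rng = np.random.default_rng(0)
photons = rng.normal(0.0, 1.0, size=200)   # photon offsets from the source (degrees)
source, bkg, window = 50.0, 1.0, 20.0      # expected source counts, bkg density, window width
rate = lambda x: source * np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi) + bkg

ll = unbinned_loglike(photons, rate, source + bkg * window)
```

    Because each photon keeps its own coordinates, the same expression handles overlapping viewing periods with different inclination-dependent responses simply by using a per-photon rate function.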

  2. Phylogenetic analysis using parsimony and likelihood methods.

    Science.gov (United States)

    Yang, Z

    1996-02-01

    The assumptions underlying the maximum-parsimony (MP) method of phylogenetic tree reconstruction were intuitively examined by studying the way the method works. Computer simulations were performed to corroborate the intuitive examination. Parsimony appears to involve very stringent assumptions concerning the process of sequence evolution, such as constancy of substitution rates between nucleotides, constancy of rates across nucleotide sites, and equal branch lengths in the tree. For practical data analysis, the requirement of equal branch lengths means similar substitution rates among lineages (the existence of an approximate molecular clock), relatively long interior branches, and also few species in the data. However, a small amount of evolution is neither a necessary nor a sufficient requirement of the method. The difficulties involved in the application of current statistical estimation theory to tree reconstruction were discussed, and it was suggested that the approach proposed by Felsenstein (1981, J. Mol. Evol. 17: 368-376) for topology estimation, as well as its many variations and extensions, differs fundamentally from the maximum likelihood estimation of a conventional statistical parameter. Evidence was presented showing that the Felsenstein approach does not share the asymptotic efficiency of the maximum likelihood estimator of a statistical parameter. Computer simulations were performed to study the probability that MP recovers the true tree under a hierarchy of models of nucleotide substitution; its performance relative to the likelihood method was especially noted. The results appeared to support the intuitive examination of the assumptions underlying MP. When a simple model of nucleotide substitution was assumed to generate data, the probability that MP recovers the true topology could be as high as, or even higher than, that for the likelihood method. When the assumed model became more complex and realistic, e.g., when substitution rates were

  3. Improved EDELWEISS-III sensitivity for low-mass WIMPs using a profile likelihood approach

    Energy Technology Data Exchange (ETDEWEB)

    Hehn, L. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Armengaud, E.; Boissiere, T. de; Gros, M.; Navick, X.F.; Nones, C.; Paul, B. [CEA Saclay, DSM/IRFU, Gif-sur-Yvette Cedex (France); Arnaud, Q. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Queen's University, Kingston (Canada); Augier, C.; Billard, J.; Cazes, A.; Charlieux, F.; Jesus, M. de; Gascon, J.; Juillard, A.; Queguiner, E.; Sanglard, V.; Vagneron, L. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Benoit, A.; Camus, P. [Institut Neel, CNRS/UJF, Grenoble (France); Berge, L.; Chapellier, M.; Dumoulin, L.; Giuliani, A.; Le-Sueur, H.; Marnieros, S.; Olivieri, E.; Poda, D. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Bluemer, J. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Broniatowski, A. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Eitel, K.; Kozlov, V.; Siebenborn, B. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Foerster, N.; Heuermann, G.; Scorza, S. [Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Jin, Y. [Laboratoire de Photonique et de Nanostructures, CNRS, Route de Nozay, Marcoussis (France); Kefelian, C. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Kleifges, M.; Tcherniakhovski, D.; Weber, M. 
[Karlsruher Institut fuer Technologie, Institut fuer Prozessdatenverarbeitung und Elektronik, Karlsruhe (Germany); Kraus, H. [University of Oxford, Department of Physics, Oxford (United Kingdom); Kudryavtsev, V.A. [University of Sheffield, Department of Physics and Astronomy, Sheffield (United Kingdom); Pari, P. [CEA Saclay, DSM/IRAMIS, Gif-sur-Yvette (France); Piro, M.C. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Rensselaer Polytechnic Institute, Troy, NY (United States); Rozov, S.; Yakushev, E. [JINR, Laboratory of Nuclear Problems, Dubna, Moscow Region (Russian Federation); Schmidt, B. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2016-10-15

    We report on a dark matter search for a Weakly Interacting Massive Particle (WIMP) in the mass range m{sub χ} element of [4, 30] GeV/c{sup 2} with the EDELWEISS-III experiment. A 2D profile likelihood analysis is performed on data from eight selected detectors with the lowest energy thresholds leading to a combined fiducial exposure of 496 kg-days. External backgrounds from γ- and β-radiation, recoils from {sup 206}Pb and neutrons as well as detector intrinsic backgrounds were modelled from data outside the region of interest and constrained in the analysis. The basic data selection and most of the background models are the same as those used in a previously published analysis based on boosted decision trees (BDT) [1]. For the likelihood approach applied in the analysis presented here, a larger signal efficiency and a subtraction of the expected background lead to a higher sensitivity, especially for the lowest WIMP masses probed. No statistically significant signal was found and upper limits on the spin-independent WIMP-nucleon scattering cross section can be set with a hypothesis test based on the profile likelihood test statistics. The 90 % C.L. exclusion limit set for WIMPs with m{sub χ} = 4 GeV/c{sup 2} is 1.6 x 10{sup -39} cm{sup 2}, which is an improvement of a factor of seven with respect to the BDT-based analysis. For WIMP masses above 15 GeV/c{sup 2} the exclusion limits found with both analyses are in good agreement. (orig.)
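    The limit-setting logic described in the abstract can be illustrated with a minimal single-bin counting sketch using the asymptotic profile-likelihood test statistic for upper limits; the observed counts, background expectation, and scan step below are invented and unrelated to the EDELWEISS data.

```python
import math

def logpois(n, lam):
    """Poisson log-probability of n counts with mean lam."""
    return n * math.log(lam) - lam - math.lgamma(n + 1)

n_obs, bkg = 3, 4.0                 # observed counts and expected background (toy numbers)
mu_hat = max(0.0, n_obs - bkg)      # MLE of the signal strength, bounded at zero

def q(mu):
    """Profile-likelihood ratio test statistic for an upper limit on mu."""
    return 2.0 * (logpois(n_obs, mu_hat + bkg) - logpois(n_obs, mu + bkg))

# Scan upward until q(mu) crosses the one-sided 90% CL asymptotic threshold.
threshold = 1.2816 ** 2             # (Phi^-1(0.90))^2
mu_limit = mu_hat
while q(mu_limit) < threshold:
    mu_limit += 0.001
```

    In the underfluctuation case shown (fewer counts observed than expected background), the profile likelihood automatically subtracts the expected background, which is the mechanism behind the gain in sensitivity over a cut-based analysis.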

  4. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King's College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

    We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ{sup 0}{sub 1}, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m{sub χ{sup 0}{sub 1}} ≲ 3 TeV after the inclusion of Sommerfeld enhancement in its annihilations. We find that the measured value of the Higgs mass favours a limited range of tan β ≲ 5 (and also tan β ∼ 45 for μ < 0), but the scalar mass m{sub 0} is poorly constrained. In the wino-LSP case, m{sub 3/2} is constrained to about 900 TeV and m{sub χ{sup 0}{sub 1}} to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case m{sub 3/2} has just a lower limit ≳ 650 TeV (≳ 480 TeV) and m{sub χ{sup 0}{sub 1}} is constrained to 1.12 (1.13) ± 0.02 TeV in the μ > 0 (μ < 0) scenario. In neither case can the anomalous magnetic moment of the muon, (g-2){sub μ}, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the χ{sup 0}{sub 1} contributes only a fraction of the cold DM density, future LHC E{sub T}-based searches for gluinos, squarks and heavier chargino and neutralino states as well as disappearing track searches in the wino-like LSP region will be relevant, and interference effects enable BR(B{sub s,d} → μ{sup +}μ{sup -}) to agree with the data better than in the SM in the case of wino-like DM with μ > 0. (orig.)

  5. Likelihood Analysis of Supersymmetric SU(5) GUTs

    CERN Document Server

    Bagnaschi, E.

    2017-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...

  6. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A.; De Vries, K.J. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Sakurai, K. [University of Durham, Science Laboratories, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Chobanova, V.; Lucio, M.; Martinez Santos, D. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Parkville (Australia); Ellis, J.R. [King's College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Theoretical Physics Department, CERN, Geneva 23 (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Cantoblanco, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Isidori, G. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Olive, K.A. [University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)

    2017-02-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R} - χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC. (orig.)

  7. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY, Hamburg (Germany); Costa, J.C. [Imperial College, London (United Kingdom). Blackett Lab.; Sakurai, K. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomonology; Warsaw Univ. (Poland). Inst. of Theoretical Physics; Collaboration: MasterCode Collaboration; and others

    2016-10-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets+E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R}-χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.

  8. Likelihood analysis of parity violation in the compound nucleus

    International Nuclear Information System (INIS)

    Bowman, D.; Sharapov, E.

    1993-01-01

    We discuss the determination of the root-mean-squared matrix element M of the parity-violating interaction between compound-nuclear states using likelihood analysis. We briefly review the relevant features of the statistical model of the compound nucleus and the formalism of likelihood analysis. We then discuss the application of likelihood analysis to data on parity-violating longitudinal asymmetries. The reliability of the extracted value of the matrix element, and of the errors assigned to it, is stressed. Using experimental data and Monte Carlo techniques, we treat both the situation where the spins of the p-wave resonances are known and the situation where they are not. We conclude that likelihood analysis provides a reliable way to determine M and its confidence interval. We briefly discuss some problems associated with the normalization of the likelihood function.
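    A minimal sketch of such a likelihood, assuming the known-spin case in which each measured asymmetry is Gaussian with zero mean and variance (M·A_p)² + σ_p²; the asymmetries, errors, and amplitudes A_p below are invented, and the unknown-spin case would replace each term with a mixture over possible spin assignments.

```python
import math

def loglike_M(M, asym, sigma, amp):
    """Log-likelihood for the rms parity-violating matrix element M:
    each asymmetry is Gaussian with variance (M*amp)^2 + sigma^2, where
    amp absorbs the resonance-dependent spectroscopic factors."""
    ll = 0.0
    for a, s, A in zip(asym, sigma, amp):
        var = (M * A) ** 2 + s ** 2
        ll += -0.5 * (a ** 2 / var + math.log(2.0 * math.pi * var))
    return ll

# Invented data: three asymmetries, their errors, and kinematic amplitudes.
asym  = [0.05, -0.12, 0.30]
sigma = [0.02, 0.04, 0.05]
amp   = [1.0, 2.0, 3.0]

# Grid maximization and a likelihood-based confidence interval (delta lnL <= 1/2).
grid = [i * 0.001 for i in range(1, 501)]
lls = [loglike_M(M, asym, sigma, amp) for M in grid]
M_hat = grid[lls.index(max(lls))]
interval = [M for M, ll in zip(grid, lls) if ll >= max(lls) - 0.5]
```

    The confidence interval read off the likelihood curve is the quantity whose reliability the abstract emphasizes.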

  9. On Maximum Likelihood Estimation for Left Censored Burr Type III Distribution

    Directory of Open Access Journals (Sweden)

    Navid Feroze

    2015-12-01

    Burr type III is an important distribution used to model failure time data. The paper addresses the problem of estimation of the parameters of the Burr type III distribution by maximum likelihood estimation (MLE) when the samples are left censored. As a closed-form expression for the MLEs of the parameters cannot be derived, approximate solutions are obtained through iterative procedures. An extensive simulation study has been carried out to investigate the performance of the estimators with respect to sample size, censoring rate and true parameter values. A real-life example is also presented. The study revealed that the proposed estimators are consistent and capable of providing efficient results under small to moderate samples.
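    Since the MLEs have no closed form, a numerical maximization is needed. The sketch below, with invented true parameters and censoring point, draws from the Burr III distribution F(x) = (1 + x^(-c))^(-k) by inverse-CDF sampling and uses a crude grid search in place of the paper's iterative procedures; censored observations enter the likelihood through F(T).

```python
import numpy as np

def burr3_cdf(x, c, k):
    """Burr type III CDF: F(x) = (1 + x^(-c))^(-k) for x > 0."""
    return (1.0 + x ** (-c)) ** (-k)

def burr3_logpdf(x, c, k):
    """Log-density obtained by differentiating the CDF."""
    return (np.log(c) + np.log(k) - (c + 1.0) * np.log(x)
            - (k + 1.0) * np.log1p(x ** (-c)))

def censored_loglike(c, k, observed, n_cens, T):
    """Left-censored log-likelihood: exact observations contribute the
    density; the n_cens values known only to lie below T contribute F(T)."""
    return burr3_logpdf(observed, c, k).sum() + n_cens * np.log(burr3_cdf(T, c, k))

# Simulate from Burr III (true c = 3, k = 2) by inverse-CDF sampling,
# then left-censor everything below T = 0.5.
rng = np.random.default_rng(1)
u = rng.uniform(size=500)
x = (u ** (-1.0 / 2.0) - 1.0) ** (-1.0 / 3.0)
T = 0.5
observed, n_cens = x[x >= T], int((x < T).sum())

# Crude grid search stands in for the paper's iterative maximization.
grid = np.linspace(0.5, 5.0, 46)
ll = np.array([[censored_loglike(c, k, observed, n_cens, T) for k in grid] for c in grid])
c_hat, k_hat = (grid[i] for i in np.unravel_index(ll.argmax(), ll.shape))
```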

  10. Constraint likelihood analysis for a network of gravitational wave detectors

    International Nuclear Information System (INIS)

    Klimenko, S.; Rakhmanov, M.; Mitselmakher, G.; Mohanty, S.

    2005-01-01

    We propose a coherent method for detection and reconstruction of gravitational wave signals with a network of interferometric detectors. The method is derived by using the likelihood ratio functional for unknown signal waveforms. In the likelihood analysis, the global maximum of the likelihood ratio over the space of waveforms is used as the detection statistic. We identify a problem with this approach. In the case of an aligned pair of detectors, the detection statistic depends on the cross correlation between the detectors as expected, but this dependence disappears even for infinitesimally small misalignments. We solve the problem by applying constraints on the likelihood functional and obtain a new class of statistics. The resulting method can be applied to data from a network consisting of any number of detectors with arbitrary detector orientations. The method allows reconstruction of the source coordinates and the waveforms of the two polarization components of a gravitational wave. We study the performance of the method with numerical simulations and find the reconstruction of the source coordinates to be more accurate than in the standard likelihood method.

  11. Likelihood-based Dynamic Factor Analysis for Measurement and Forecasting

    NARCIS (Netherlands)

    Jungbacker, B.M.J.P.; Koopman, S.J.

    2015-01-01

    We present new results for the likelihood-based analysis of the dynamic factor model. The latent factors are modelled by linear dynamic stochastic processes. The idiosyncratic disturbance series are specified as autoregressive processes with mutually correlated innovations. The new results lead to ...

  12. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    Science.gov (United States)

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  13. Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood

    Directory of Open Access Journals (Sweden)

    Olli Saarela

    2012-01-01

    Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.

  14. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

    This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  15. On the likelihood of detecting gravitational waves from Population III compact object binaries

    Science.gov (United States)

    Belczynski, Krzysztof; Ryu, Taeho; Perna, Rosalba; Berti, Emanuele; Tanaka, Takamitsu L.; Bulik, Tomasz

    2017-11-01

    We study the contribution of binary black hole (BH-BH) mergers from the first, metal-free stars in the Universe (Pop III) to gravitational wave detection rates. Our study combines initial conditions for the formation of Pop III stars based on N-body simulations of binary formation (including rates, binary fraction, initial mass function, orbital separation and eccentricity distributions) with an updated model of stellar evolution specific for Pop III stars. We find that the merger rate of these Pop III BH-BH systems is relatively small (≲ 0.1 Gpc-3 yr-1) at low redshifts (z ≲ 2), allowing us to exclude a significant (> 1 per cent) contribution of these stars to low-redshift BH-BH mergers. However, it remains to be tested whether (and at what level) rapidly spinning Pop III stars in the homogeneous evolution scenario can contribute to BH-BH mergers in the local Universe.

  16. Affective mapping: An activation likelihood estimation (ALE) meta-analysis.

    Science.gov (United States)

    Kirby, Lauren A J; Robinson, Jennifer L

    2017-11-01

    Functional neuroimaging has the spatial resolution to explain the neural basis of emotions. Activation likelihood estimation (ALE), as opposed to traditional qualitative meta-analysis, quantifies convergence of activation across studies within affective categories. Others have used ALE to investigate a broad range of emotions, but without the convenience of the BrainMap database. We used the BrainMap database and analysis resources to run separate meta-analyses on coordinates reported for anger, anxiety, disgust, fear, happiness, humor, and sadness. Resultant ALE maps were compared to determine areas of convergence between emotions, as well as to identify affect-specific networks. Five out of the seven emotions demonstrated consistent activation within the amygdala, whereas all emotions consistently activated the right inferior frontal gyrus, which has been implicated as an integration hub for affective and cognitive processes. These data provide the framework for models of affect-specific networks, as well as emotional processing hubs, which can be used for future studies of functional or effective connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap data are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
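    The Laplace approximation at the heart of this algorithm can be shown on a one-dimensional toy factor of a Poisson mixed-model marginal likelihood; the model and values are invented, and a brute-force quadrature serves only as a check.

```python
import numpy as np

def laplace_integral(h, b0=0.0):
    """Laplace approximation to the integral of exp(h(b)) db: locate the
    mode of h by Newton iterations with numerical derivatives, then apply
    the Gaussian (second-order) approximation around it."""
    eps = 1e-5
    b = b0
    for _ in range(50):
        g = (h(b + eps) - h(b - eps)) / (2 * eps)           # h'(b)
        H = (h(b + eps) - 2 * h(b) + h(b - eps)) / eps**2   # h''(b)
        step = g / H
        b -= step
        if abs(step) < 1e-10:
            break
    return np.exp(h(b)) * np.sqrt(2 * np.pi / -H)

# Toy factor: one Poisson observation y with a log link and random
# intercept b ~ N(0, 1); the marginal-likelihood integrand is
# exp(y*b - e^b - b^2/2) up to constants.
y = 4
h = lambda b: y * b - np.exp(b) - 0.5 * b ** 2

approx = laplace_integral(h)
bs = np.linspace(-10.0, 10.0, 4001)
brute = np.sum(np.exp(h(bs))) * (bs[1] - bs[0])   # brute-force check
```

    The log of the approximated integral is exactly the maximized log-likelihood contribution the abstract mentions as a by-product useful for model selection.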

  18. Maximal information analysis: I - various Wayne State plots and the most common likelihood principle

    International Nuclear Information System (INIS)

    Bonvicini, G.

    2005-01-01

    Statistical analysis using all moments of the likelihood L(y|α) (y being the data and α being the fit parameters) is presented. The relevant plots for various data-fitting situations are presented. The goodness-of-fit (GOF) parameter (currently the χ{sup 2}) is redefined as the isoprobability level in a multidimensional space. Many useful properties of statistical analysis are summarized in a new statistical principle, which states that the most common likelihood, and not the tallest, is the best possible likelihood when comparing experiments or hypotheses.

  19. Likelihood functions for the analysis of single-molecule binned photon sequences

    Energy Technology Data Exchange (ETDEWEB)

    Gopich, Irina V., E-mail: irinag@niddk.nih.gov [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD 20892 (United States)

    2012-03-02

    Graphical abstract: Folding of a protein with attached fluorescent dyes, the underlying conformational trajectory of interest, and the observed binned photon trajectory. Highlights: (1) A sequence of photon counts can be analyzed using a likelihood function. (2) The exact likelihood function for a two-state kinetic model is provided. (3) Several approximations are considered for an arbitrary kinetic model. (4) Improved likelihood functions are obtained to treat sequences of FRET efficiencies. Abstract: We consider the analysis of a class of experiments in which the number of photons in consecutive time intervals is recorded. Sequences of photon counts or, alternatively, of FRET efficiencies can be studied using likelihood-based methods. For a kinetic model of the conformational dynamics and state-dependent Poisson photon statistics, the formalism to calculate the exact likelihood that this model describes such sequences of photons or FRET efficiencies is developed. Explicit analytic expressions for the likelihood function for a two-state kinetic model are provided. The important special case when conformational dynamics are so slow that at most a single transition occurs in a time bin is considered. By making a series of approximations, we eventually recover the likelihood function used in hidden Markov models. In this way, not only is insight gained into the range of validity of this procedure, but also an improved likelihood function can be obtained.
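    In the slow-dynamics limit the abstract mentions, the likelihood of a binned photon sequence reduces to a hidden-Markov-style forward recursion; the sketch below assumes per-bin state transitions and state-dependent Poisson emissions with invented rates, not the exact expressions of the paper.

```python
import math
import numpy as np

def sequence_loglike(counts, rates, trans, pi):
    """Scaled forward algorithm for a binned photon sequence: hidden
    states follow a Markov chain with per-bin transition matrix `trans`,
    and each bin emits a Poisson photon count with state-dependent mean."""
    def emis(n):
        return np.array([math.exp(n * math.log(r) - r - math.lgamma(n + 1))
                         for r in rates])
    alpha = pi * emis(counts[0])
    ll = math.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for n in counts[1:]:
        alpha = (alpha @ trans) * emis(n)
        s = alpha.sum()
        ll += math.log(s)
        alpha = alpha / s
    return ll

# Simulate a two-state trajectory with distinct brightnesses (invented values).
rng = np.random.default_rng(2)
trans = np.array([[0.95, 0.05], [0.10, 0.90]])
rates = [2.0, 10.0]
state, counts = 0, []
for _ in range(500):
    counts.append(rng.poisson(rates[state]))
    state = rng.choice(2, p=trans[state])

pi = np.array([0.5, 0.5])
ll_two_state = sequence_loglike(counts, rates, trans, pi)
ll_one_state = sequence_loglike(counts, [6.0, 6.0], trans, pi)
```

    Comparing the two-state model against a single effective brightness shows how the likelihood discriminates between kinetic models of the same photon record.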

  20. LikelihoodLib - Fitting, Function Maximization, and Numerical Analysis

    CERN Document Server

    Smirnov, I B

    2001-01-01

    A new class library is designed for function maximization, minimization, solution of equations, and other problems related to the mathematical analysis of multi-parameter functions by numerical iterative methods. When searching for the maximum or another special point of a function, we may change and fit all parameters simultaneously, sequentially, recursively, or by any combination of these methods. The discussion is focused on the first and most complicated method, although the others are also supported by the library. For this method we apply: control of precision by interval computations; calculation of derivatives either by differential arithmetic or by the method of finite differences, with step lengths chosen to suppress the influence of numerical noise; possible synchronization of the objective-function calls with minimization of the number of iterations; and competitive application of various methods for step calculation, converging to the solution by many trajectories.

  21. Generalized linear models with random effects: unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  2. Steady state likelihood ratio sensitivity analysis for stiff kinetic Monte Carlo simulations.

    Science.gov (United States)

    Núñez, M; Vlachos, D G

    2015-01-28

    Kinetic Monte Carlo simulation is an integral tool in the study of complex physical phenomena present in applications ranging from heterogeneous catalysis to biological systems to crystal growth and atmospheric sciences. Sensitivity analysis is useful for identifying important parameters and rate-determining steps, but the finite-difference application of sensitivity analysis is computationally demanding. Techniques based on the likelihood ratio method reduce the computational cost of sensitivity analysis by obtaining all gradient information in a single run. However, we show that disparity in time scales of microscopic events, which is ubiquitous in real systems, introduces drastic statistical noise into derivative estimates for parameters affecting the fast events. In this work, the steady-state likelihood ratio sensitivity analysis is extended to singularly perturbed systems by invoking partial equilibration for fast reactions, that is, by working on the fast and slow manifolds of the chemistry. Derivatives on each time scale are computed independently and combined into the desired sensitivity coefficients, considerably reducing the noise in derivative estimates for stiff systems. The approach is demonstrated in an analytically solvable linear system.
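
The likelihood ratio trick the abstract relies on, obtaining a parameter derivative from a single run by averaging an observable against the score function, can be illustrated on a toy model; the exponential distribution and the parameter values here are hypothetical, not the paper's kinetic system:

```python
import random
random.seed(0)

theta = 2.0
N = 200_000
samples = [random.expovariate(theta) for _ in range(N)]

# Observable A(x) = x, so E[A] = 1/theta and dE[A]/dtheta = -1/theta**2.
# Score of the exponential density p(x) = theta * exp(-theta * x):
#   d/dtheta log p(x) = 1/theta - x
# LR estimator: average A(x) * score(x) over the same trajectory, which
# yields the gradient without re-running at a perturbed parameter.
lr_grad = sum(x * (1.0 / theta - x) for x in samples) / N
exact = -1.0 / theta**2
print(lr_grad, exact)
```

The statistical noise of this estimator grows with the variance of the score, which is the effect the paper mitigates for stiff systems by separating time scales.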

  3. Neandertal admixture in Eurasia confirmed by maximum-likelihood analysis of three genomes.

    Science.gov (United States)

    Lohse, Konrad; Frantz, Laurent A F

    2014-04-01

    Although there has been much interest in estimating histories of divergence and admixture from genomic data, it has proved difficult to distinguish recent admixture from long-term structure in the ancestral population. Thus, recent genome-wide analyses based on summary statistics have sparked controversy about the possibility of interbreeding between Neandertals and modern humans in Eurasia. Here we derive the probability of full mutational configurations in nonrecombining sequence blocks under both admixture and ancestral structure scenarios. Dividing the genome into short blocks gives an efficient way to compute maximum-likelihood estimates of parameters. We apply this likelihood scheme to triplets of human and Neandertal genomes and compare the relative support for a model of admixture from Neandertals into Eurasian populations after their expansion out of Africa against a history of persistent structure in their common ancestral population in Africa. Our analysis allows us to conclusively reject a model of ancestral structure in Africa and instead reveals strong support for Neandertal admixture in Eurasia at a higher rate (3.4-7.3%) than suggested previously. Using analysis and simulations we show that our inference is more powerful than previous summary statistics and robust to realistic levels of recombination.

  4. Maximum likelihood-based analysis of photon arrival trajectories in single-molecule FRET

    Energy Technology Data Exchange (ETDEWEB)

    Waligorska, Marta [Adam Mickiewicz University, Faculty of Chemistry, Grunwaldzka 6, 60-780 Poznan (Poland); Molski, Andrzej, E-mail: amolski@amu.edu.pl [Adam Mickiewicz University, Faculty of Chemistry, Grunwaldzka 6, 60-780 Poznan (Poland)

    2012-07-25

    Highlights: • We study model selection and parameter recovery from single-molecule FRET experiments. • We examine the maximum likelihood-based analysis of two-color photon trajectories. • The number of observed photons determines the performance of the method. • For long trajectories, one can extract mean dwell times that are comparable to inter-photon times. -- Abstract: When two fluorophores (donor and acceptor) are attached to an immobilized biomolecule, anti-correlated fluctuations of the donor and acceptor fluorescence caused by Förster resonance energy transfer (FRET) report on the conformational kinetics of the molecule. Here we assess the maximum likelihood-based analysis of donor and acceptor photon arrival trajectories as a method for extracting the conformational kinetics. Using computer generated data we quantify the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We find that the number of observed photons is the key parameter determining parameter estimation and model selection. For long trajectories, one can extract mean dwell times that are comparable to inter-photon times.

  5. Likelihood ratio meta-analysis: New motivation and approach for an old method.

    Science.gov (United States)

    Dormuth, Colin R; Filion, Kristian B; Platt, Robert W

    2016-03-01

    A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. Copyright © 2016 Elsevier Inc. All rights reserved.
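
The pooling step described above, summing per-study log-likelihood-ratio curves and reading off the peak and a support interval, can be sketched as follows. The study estimates, the normal approximation to each study's likelihood, and the 1/e² support cutoff are illustrative assumptions, not the paper's exact definition of the intrinsic interval:

```python
# Per-study summary data (hypothetical log hazard ratios and standard errors).
studies = [(-0.15, 0.08), (-0.05, 0.10), (-0.20, 0.12)]

def combined_loglr(theta):
    # Normal-approximation log-likelihood-ratio of each study against the
    # null theta = 0, summed across studies.
    return sum(-(theta - est)**2 / (2 * se**2) + est**2 / (2 * se**2)
               for est, se in studies)

# Grid search for the pooled estimate and a 1/e^2 support interval.
grid = [i / 10000.0 for i in range(-5000, 5000)]
vals = [combined_loglr(t) for t in grid]
peak = max(vals)
best = grid[vals.index(peak)]
support = [t for t, v in zip(grid, vals) if peak - v <= 2.0]
lo, hi = support[0], support[-1]

# Closed-form check: maximising the summed curves recovers the
# inverse-variance weighted mean of the study estimates.
w = [1 / se**2 for _, se in studies]
ivw = sum(wi * est for wi, (est, _) in zip(w, studies)) / sum(w)
print(best, ivw, (lo, hi))
```

Because the curves are summed on the log scale, the pooled point estimate coincides with the familiar fixed-effect estimate; only the interval construction differs.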

  6. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.

    2002-01-01

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well

  7. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
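
As a minimal illustration of the simplest case named above (the ingredient for a confidence region for a univariate mean under IID sampling), the empirical log-likelihood ratio can be computed by solving for the Lagrange multiplier; the data and the bisection solver are illustrative:

```python
import math

def el_log_ratio(xs, mu, iters=200):
    """-2 log empirical likelihood ratio for the mean mu (sketch; assumes
    min(xs) < mu < max(xs) so the mean constraint is feasible)."""
    n = len(xs)
    d = [x - mu for x in xs]
    # Solve sum d_i / (1 + lam*d_i) = 0 for lam by bisection; the left-hand
    # side is strictly decreasing on the feasible interval for lam.
    lo = -1.0 / max(d) + 1e-12
    hi = -1.0 / min(d) - 1e-12
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        g = sum(di / (1.0 + lam * di) for di in d)
        if g > 0:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    # Optimal weights are w_i = 1/(n*(1+lam*d_i)), so
    # -2 log R(mu) = 2 * sum log(1 + lam*d_i).
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

xs = [1.1, 2.3, 0.7, 1.9, 1.4, 2.8, 0.9, 1.6]
xbar = sum(xs) / len(xs)
print(el_log_ratio(xs, xbar))        # ~0: uniform weights are optimal at the mean
print(el_log_ratio(xs, xbar + 0.4))  # > 0: the constraint costs likelihood
```

A confidence region for the mean is then the set of mu values for which this statistic stays below a chi-squared quantile, which is how the data determine the region's shape.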

  8. Approximate Likelihood

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated to systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...

  9. Analysis of the maximum likelihood channel estimator for OFDM systems in the presence of unknown interference

    Science.gov (United States)

    Dermoune, Azzouz; Simon, Eric Pierre

    2017-12-01

    This paper is a theoretical analysis of the maximum likelihood (ML) channel estimator for orthogonal frequency-division multiplexing (OFDM) systems in the presence of unknown interference. The following theoretical results are presented. Firstly, the uniqueness of the ML solution for practical applications, i.e., when thermal noise is present, is analytically demonstrated when the number of transmitted OFDM symbols is strictly greater than one. The ML solution is then derived from the iterative conditional ML (CML) algorithm. Secondly, it is shown that the channel estimate can be described as an algebraic function whose inputs are the initial value and the means and variances of the received samples. Thirdly, it is theoretically demonstrated that the channel estimator is not biased. The second and the third results are obtained by employing oblique projection theory. Furthermore, these results are confirmed by numerical results.

  10. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    Science.gov (United States)

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  11. Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.

    NARCIS (Netherlands)

    Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.

    1996-01-01

    Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate

  12. Pain anticipation: an activation likelihood estimation meta-analysis of brain imaging studies.

    Science.gov (United States)

    Palermo, Sara; Benedetti, Fabrizio; Costa, Tommaso; Amanzio, Martina

    2015-05-01

    The anticipation of pain has been investigated in a variety of brain imaging studies. Importantly, there is as yet no clear overall picture of the areas involved across studies, and the exact role of these regions in pain expectation remains largely unexplored. To address this issue, we used activation likelihood estimation meta-analysis to analyze pain anticipation across several neuroimaging studies. A total of 19 functional magnetic resonance imaging studies were included in the analysis to search for the cortical areas involved in pain anticipation in human experimental models. During anticipation, activated foci were found in the dorsolateral prefrontal, midcingulate and anterior insula cortices, medial and inferior frontal gyri, inferior parietal lobule, middle and superior temporal gyrus, thalamus, and caudate. Deactivated foci were found in the anterior cingulate, superior frontal gyrus, parahippocampal gyrus and in the claustrum. The results of the meta-analytic connectivity analysis provide an overall view of the brain responses triggered by the anticipation of a noxious stimulus. Such a highly distributed perceptual set of self-regulation may prime brain regions to process information where emotion, action and perception as well as their related subcategories play a central role. Not only do these findings provide important information on the neural events when anticipating pain, but also they may give a perspective into nocebo responses, whereby negative expectations may lead to pain worsening. © 2014 Wiley Periodicals, Inc.
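
At its core, the ALE score used in meta-analyses like this one is a union of per-study modeled-activation probabilities. A 1D toy version might look like the following; the kernel width, the peak probability of 0.5, and the coordinates are arbitrary illustrative choices, not GingerALE's empirically derived kernels:

```python
import math

grid = [i * 0.5 for i in range(0, 101)]   # 1D stand-in for the brain volume
foci = [9.5, 10.0, 10.5, 40.0]            # reported peaks from 4 studies
sigma = 2.0                               # smoothing kernel width (assumed)

def ma_map(focus):
    # Modeled activation map for one study: a Gaussian kernel scaled to a
    # peak probability of 0.5 (illustrative choice only).
    return [0.5 * math.exp(-(v - focus)**2 / (2 * sigma**2)) for v in grid]

maps = [ma_map(f) for f in foci]
# ALE score per voxel: probability that at least one study activates it,
# i.e. the union of independent modeled-activation probabilities.
ale = [1.0 - math.prod(1.0 - m[i] for m in maps) for i in range(len(grid))]

peak_voxel = grid[ale.index(max(ale))]
print(peak_voxel)   # near 10, where the three foci converge
```

The three clustered foci reinforce each other through the union formula, so the ALE map peaks where studies agree rather than at any single study's strongest focus.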

  13. EPR spectrum deconvolution and dose assessment of fossil tooth enamel using maximum likelihood common factor analysis

    International Nuclear Information System (INIS)

    Vanhaelewyn, G.; Callens, F.; Gruen, R.

    2000-01-01

    In order to determine the components which give rise to the EPR spectrum around g = 2 we have applied Maximum Likelihood Common Factor Analysis (MLCFA) on the EPR spectra of enamel sample 1126 which has previously been analysed by continuous wave and pulsed EPR as well as EPR microscopy. MLCFA yielded agreeing results on three sets of X-band spectra and the following components were identified: an orthorhombic component attributed to CO₂⁻, an axial component CO₃³⁻, as well as four isotropic components, three of which could be attributed to SO₂⁻, a tumbling CO₂⁻ and a central line of a dimethyl radical. The X-band results were confirmed by analysis of Q-band spectra where three additional isotropic lines were found; however, these three components could not be attributed to known radicals. The orthorhombic component was used to establish dose response curves for the assessment of the past radiation dose, D_E. The results appear to be more reliable than those based on conventional peak-to-peak EPR intensity measurements or simple Gaussian deconvolution methods

  14. Hypnosis and pain perception: An Activation Likelihood Estimation (ALE) meta-analysis of functional neuroimaging studies.

    Science.gov (United States)

    Del Casale, Antonio; Ferracuti, Stefano; Rapinesi, Chiara; De Rossi, Pietro; Angeletti, Gloria; Sani, Gabriele; Kotzalidis, Georgios D; Girardi, Paolo

    2015-12-01

    Several studies reported that hypnosis can modulate pain perception and tolerance by affecting cortical and subcortical activity in brain regions involved in these processes. We conducted an Activation Likelihood Estimation (ALE) meta-analysis on functional neuroimaging studies of pain perception under hypnosis to identify brain activation-deactivation patterns occurring during hypnotic suggestions aiming at pain reduction, including hypnotic analgesic, pleasant, or depersonalization suggestions (HASs). We searched the PubMed, Embase and PsycInfo databases; we included papers published in peer-reviewed journals dealing with functional neuroimaging and hypnosis-modulated pain perception. The ALE meta-analysis encompassed data from 75 healthy volunteers reported in 8 functional neuroimaging studies. HASs during experimentally-induced pain compared to control conditions correlated with significant activations of the right anterior cingulate cortex (Brodmann's Area [BA] 32), left superior frontal gyrus (BA 6), and right insula, and deactivation of right midline nuclei of the thalamus. HASs during experimental pain impact both cortical and subcortical brain activity. The anterior cingulate, left superior frontal, and right insular cortices activation increases could induce a thalamic deactivation (top-down inhibition), which may correlate with reductions in pain intensity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    Science.gov (United States)

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven, making it easier to use for both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  16. Parameter estimation in astronomy through application of the likelihood ratio. [satellite data analysis techniques

    Science.gov (United States)

    Cash, W.

    1979-01-01

    Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
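
The likelihood-based fit statistic introduced in this paper for photon counting data is commonly written, up to a data-only constant, as C = 2 Σ (m_i − n_i ln m_i), where n_i are observed counts and m_i the model's predicted counts. A minimal demonstration on hypothetical counts:

```python
import math

counts = [3, 5, 2, 4, 6, 3, 4, 5]   # photons per bin (hypothetical data)

def cash(rate, counts):
    # Cash statistic for Poisson data, up to a model-independent constant:
    # C = 2 * sum(m_i - n_i * ln(m_i)), here with a constant-rate model m_i = rate.
    return 2.0 * sum(rate - n * math.log(rate) for n in counts)

# Grid-fit the constant-rate model; the minimum-C rate is the maximum
# likelihood estimate, which for this model is just the sample mean.
grid = [0.5 + 0.01 * i for i in range(1000)]
best = min(grid, key=lambda r: cash(r, counts))
print(best, sum(counts) / len(counts))
```

Differences in C between parameter values behave asymptotically like differences in chi-squared, which is what makes the statistic usable for confidence intervals in the low-count regime where Gaussian chi-squared fitting breaks down.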

  17. The likelihood ratio as a random variable for linked markers in kinship analysis.

    Science.gov (United States)

    Egeland, Thore; Slooten, Klaas

    2016-11-01

    The likelihood ratio is the fundamental quantity that summarizes the evidence in forensic cases. Therefore, it is important to understand the theoretical properties of this statistic. This paper is the last in a series of three, and the first to study linked markers. We show that for all non-inbred pairwise kinship comparisons, the expected likelihood ratio in favor of a type of relatedness depends on the allele frequencies only via the number of alleles, also for linked markers, and also if the true relationship is another one than is tested for by the likelihood ratio. Exact expressions for the expectation and variance are derived for all these cases. Furthermore, we show that the expected likelihood ratio is a non-increasing function if the recombination rate increases between 0 and 0.5 when the actual relationship is the one investigated by the LR. Besides being of theoretical interest, exact expressions such as obtained here can be used for software validation as they allow one to verify the correctness up to arbitrary precision. The paper also presents results and advice of practical importance. For example, we argue that the logarithm of the likelihood ratio behaves in a fundamentally different way than the likelihood ratio itself in terms of expectation and variance, in agreement with its interpretation as weight of evidence. Equipped with the results presented and freely available software, one may check calculations and software and also do power calculations.

  18. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    Science.gov (United States)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
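
Model selection by BIC of the kind described above can be sketched on a deliberately simplified example: inter-photon times drawn from two well-separated intensity levels, with the segmentation assumed known. This sidesteps the actual Markov-modulated Poisson machinery, and all rates and sample sizes are illustrative:

```python
import math
import random
random.seed(1)

# Synthetic inter-photon times from a molecule that switches emission
# intensity: 300 intervals at rate 2, then 300 at rate 20 (hypothetical;
# segmentation assumed known for simplicity).
slow = [random.expovariate(2.0) for _ in range(300)]
fast = [random.expovariate(20.0) for _ in range(300)]
times = slow + fast

def exp_loglik(xs, rate):
    # Log-likelihood of exponentially distributed inter-photon times.
    return sum(math.log(rate) - rate * x for x in xs)

n = len(times)
# Model 1: a single rate (k = 1 free parameter). BIC = -2 ln L + k ln n.
r1 = 1.0 / (sum(times) / n)
bic1 = -2.0 * exp_loglik(times, r1) + 1 * math.log(n)
# Model 2: one rate per segment (k = 2).
ra = 1.0 / (sum(slow) / len(slow))
rb = 1.0 / (sum(fast) / len(fast))
bic2 = -2.0 * (exp_loglik(slow, ra) + exp_loglik(fast, rb)) + 2 * math.log(n)
print(bic1, bic2)   # the lower BIC wins
```

With well-separated intensity levels and enough photons, the two-rate model's likelihood gain easily outweighs the ln(n) complexity penalty, mirroring the abstract's observation that photon count drives model selection.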

  20. Extended maximum likelihood analysis of apparent flattenings of S0 and spiral galaxies

    International Nuclear Information System (INIS)

    Okamura, Sadanori; Takase, Bunshiro; Hamabe, Masaru; Nakada, Yoshikazu; Kodaira, Keiichi.

    1981-01-01

    Apparent flattenings of S0 and spiral galaxies compiled by Sandage et al. (1970) and van den Bergh (1977), and those listed in the Second Reference Catalogue (RC2), are analyzed by means of the extended maximum likelihood method recently developed in information theory for statistical model identification. Emphasis is placed on the possible difference in the distribution of intrinsic flattenings between S0's and spirals as a group, and on the apparent disagreements in previous results. The present analysis shows that: (1) One cannot conclude on the basis of the data in the Reference Catalogue of Bright Galaxies (RCBG) that the distribution of intrinsic flattenings of spirals is almost identical to that of S0's; spirals have a wider dispersion than S0's, and there are more round systems among spirals than among S0's. (2) The distributions of intrinsic flattenings of S0's and spirals derived from the data in RC2 again differ significantly from each other. (3) The distribution of intrinsic flattenings of S0's exhibits different characteristics depending upon the surface-brightness level: a one-component distribution is obtained from the data at the RCBG level (23.5 mag arcsec⁻²) and a two-component distribution at the RC2 level (25 mag arcsec⁻²). (author)

  1. Speech perception in autism spectrum disorder: An activation likelihood estimation meta-analysis.

    Science.gov (United States)

    Tryfon, Ana; Foster, Nicholas E V; Sharda, Megha; Hyde, Krista L

    2018-02-15

    Autism spectrum disorder (ASD) is often characterized by atypical language profiles and auditory and speech processing. These can contribute to aberrant language and social communication skills in ASD. The study of the neural basis of speech perception in ASD can serve as a potential early neurobiological marker of ASD, but mixed results across studies make it difficult to find a reliable neural characterization of speech processing in ASD. To this end, the present study examined the functional neural basis of speech perception in ASD versus typical development (TD) using an activation likelihood estimation (ALE) meta-analysis of 18 qualifying studies. The present study included separate analyses for TD and ASD, which allowed us to examine patterns of within-group brain activation as well as both common and distinct patterns of brain activation across the ASD and TD groups. Overall, ASD and TD showed mostly common brain activation of speech processing in bilateral superior temporal gyrus (STG) and left inferior frontal gyrus (IFG). However, the results revealed trends for some distinct activation in the TD group showing additional activation in higher-order brain areas including left superior frontal gyrus (SFG), left medial frontal gyrus (MFG), and right IFG. These results provide a more reliable neural characterization of speech processing in ASD relative to previous single neuroimaging studies and motivate future work to investigate how these brain signatures relate to behavioral measures of speech processing in ASD. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Analysis of Pairwise Interactions in a Maximum Likelihood Sense to Identify Leaders in a Group

    Directory of Open Access Journals (Sweden)

    Violet Mwaffo

    2017-07-01

    Collective motion in animal groups manifests itself in the form of highly coordinated maneuvers determined by local interactions among individuals. A particularly critical question in understanding the mechanisms behind such interactions is to detect and classify leader–follower relationships within the group. In the technical literature of coupled dynamical systems, several methods have been proposed to reconstruct interaction networks, including linear correlation analysis, transfer entropy, and event synchronization. While these analyses have been helpful in reconstructing network models from neuroscience to public health, rules on the most appropriate method to use for a specific dataset are lacking. Here, we demonstrate the possibility of detecting leaders in a group from raw positional data in a model-free approach that combines multiple methods in a maximum likelihood sense. We test our framework on synthetic data of groups of self-propelled Vicsek particles, where a single agent acts as a leader and both the size of the interaction region and the level of inherent noise are systematically varied. To assess the feasibility of detecting leaders in real-world applications, we study a synthetic dataset of fish shoaling, generated by using a recent data-driven model for social behavior, and an experimental dataset of pharmacologically treated zebrafish. Not only does our approach offer a robust strategy to detect leaders in synthetic data but it also allows for exploring the role of psychoactive compounds on leader–follower relationships.

  3. Statistical analysis of COMPTEL maximum likelihood-ratio distributions: evidence for a signal from previously undetected AGN

    International Nuclear Information System (INIS)

    Williams, O. R.; Bennett, K.; Much, R.; Schoenfelder, V.; Blom, J. J.; Ryan, J.

    1997-01-01

    The maximum likelihood-ratio method is frequently used in COMPTEL analysis to determine the significance of a point source at a given location. In this paper we do not consider whether the likelihood-ratio at a particular location indicates a detection, but rather whether distributions of likelihood-ratios derived from many locations depart from that expected for source free data. We have constructed distributions of likelihood-ratios by reading values from standard COMPTEL maximum-likelihood ratio maps at positions corresponding to the locations of different categories of AGN. Distributions derived from the locations of Seyfert galaxies are indistinguishable, according to a Kolmogorov-Smirnov test, from those obtained from "random" locations, but differ slightly from those obtained from the locations of flat spectrum radio loud quasars, OVVs, and BL Lac objects. This difference is not due to known COMPTEL sources, since regions near these sources are excluded from the analysis. We suggest that it might arise from a number of sources with fluxes below the COMPTEL detection threshold

  4. Anatomical likelihood estimation meta-analysis of grey and white matter anomalies in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Thomas P. DeRamus

    2015-01-01

    Autism spectrum disorders (ASD) are characterized by impairments in social communication and restrictive, repetitive behaviors. While behavioral symptoms are well-documented, investigations into the neurobiological underpinnings of ASD have not resulted in firm biomarkers. Variability in findings across structural neuroimaging studies has contributed to difficulty in reliably characterizing the brain morphology of individuals with ASD. These inconsistencies may also arise from the heterogeneity of ASD and the wider age range of participants included in MRI studies and in previous meta-analyses. To address this, the current study used coordinate-based anatomical likelihood estimation (ALE) analysis of 21 voxel-based morphometry (VBM) studies examining high-functioning individuals with ASD, resulting in a meta-analysis of 1055 participants (506 with ASD and 549 typically developing individuals). Results consisted of grey, white, and global differences in cortical matter between the groups. Modeled anatomical maps consisting of concentration, thickness, and volume metrics of grey and white matter revealed clusters suggesting age-related decreases in grey and white matter in parietal and inferior temporal regions of the brain in ASD, and age-related increases in grey matter in frontal and anterior-temporal regions. White matter alterations included fiber tracts thought to play key roles in information processing and sensory integration. Many current theories of the pathobiology of ASD suggest that the brains of individuals with ASD may have less-functional long-range (anterior-to-posterior) connections. Our findings of decreased cortical matter in parietal–temporal and occipital regions, and thickening in frontal cortices in older adults with ASD, may entail altered cortical anatomy and neurodevelopmental adaptations.

  5. Can Asperger syndrome be distinguished from autism? An anatomic likelihood meta-analysis of MRI studies.

    Science.gov (United States)

    Yu, Kevin K; Cheung, Charlton; Chua, Siew E; McAlonan, Gráinne M

    2011-11-01

    The question of whether Asperger syndrome can be distinguished from autism has attracted much debate and may even incur delay in diagnosis and intervention. Accordingly, there has been a proposal for Asperger syndrome to be subsumed under autism in the forthcoming Diagnostic and Statistical Manual of Mental Disorders, fifth edition, in 2013. One approach to resolve this question has been to adopt the criterion of absence of clinically significant language or cognitive delay--essentially, the "absence of language delay." To our knowledge, this is the first meta-analysis of magnetic resonance imaging (MRI) studies of people with autism to compare absence with presence of language delay. It capitalizes on the voxel-based morphometry (VBM) approach to systematically explore the whole brain for anatomic correlates of delay and no delay in language acquisition in people with autism spectrum disorders. We conducted a systematic search for VBM MRI studies of grey matter volume in people with autism. Studies with a majority (at least 70%) of participants with autism diagnoses and a history of language delay were assigned to the autism group (n = 151, control n = 190). Those with a majority (at least 70%) of individuals with autism diagnoses and no language delay were assigned to the Asperger syndrome group (n = 149, control n = 214). We entered study coordinates into anatomic likelihood estimation meta-analysis software with sampling-size weighting to compare grey matter summary maps driven by Asperger syndrome or autism. The summary autism grey matter map showed lower volumes in the cerebellum, right uncus, dorsal hippocampus and middle temporal gyrus compared with controls; grey matter volumes were greater in the bilateral caudate, prefrontal lobe and ventral temporal lobe. The summary Asperger syndrome map indicated lower grey matter volumes in the bilateral amygdala/hippocampal gyrus and prefrontal lobe, left occipital gyrus, right cerebellum, putamen and precuneus.

  6. Likelihood of Suicidality at Varying Levels of Depression Severity: A Re-Analysis of NESARC Data

    Science.gov (United States)

    Uebelacker, Lisa A.; Strong, David; Weinstock, Lauren M.; Miller, Ivan W.

    2010-01-01

    Although it is clear that increasing depression severity is associated with more risk for suicidality, less is known about at what levels of depression severity the risk for different suicide symptoms increases. We used item response theory to estimate the likelihood of endorsing suicide symptoms across levels of depression severity in an…

  7. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the G^0 law. This paper deals with amplitude data, so the G_A^0 distribution is used. The literature reports that techniques for obtaining estimates of the parameters of the G_A^0 distribution (maximum likelihood, based on moments, and based on order statistics) require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternating optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data are successfully analyzed with the proposed alternating procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.
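    The alternating (coordinate-wise) optimization idea can be sketched in a few lines. A gamma distribution stands in here for the paper's G_A^0 law purely for illustration, and the sample size, starting values, and bounds are assumptions:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
# Small sample, mimicking the small-sample regime discussed in the paper.
data = rng.gamma(shape=3.0, scale=2.0, size=30)

def nll(shape, scale):
    """Negative log-likelihood of the gamma stand-in distribution."""
    return -np.sum(stats.gamma.logpdf(data, a=shape, scale=scale))

# Alternating maximisation of the likelihood: optimise one parameter
# at a time while holding the other fixed, then swap, and iterate.
shape_hat, scale_hat = 1.0, 1.0
for _ in range(50):
    shape_hat = optimize.minimize_scalar(
        lambda a: nll(a, scale_hat), bounds=(0.1, 50), method="bounded").x
    scale_hat = optimize.minimize_scalar(
        lambda s: nll(shape_hat, s), bounds=(0.1, 50), method="bounded").x

print(f"shape = {shape_hat:.2f}, scale = {scale_hat:.2f}")
```

    Each one-dimensional sub-problem is well conditioned even when the joint optimization over both parameters is numerically fragile, which is the motivation the abstract gives for the alternating scheme.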

  8. Neural Networks Involved in Adolescent Reward Processing: An Activation Likelihood Estimation Meta-Analysis of Functional Neuroimaging Studies

    Science.gov (United States)

    Silverman, Merav H.; Jedd, Kelly; Luciana, Monica

    2015-01-01

    Behavioral responses to, and the neural processing of, rewards change dramatically during adolescence and may contribute to observed increases in risk-taking during this developmental period. Functional MRI (fMRI) studies suggest differences between adolescents and adults in neural activation during reward processing, but findings are contradictory, and effects have been found in non-predicted directions. The current study uses an activation likelihood estimation (ALE) approach for quantitative meta-analysis of functional neuroimaging studies to: 1) confirm the network of brain regions involved in adolescents’ reward processing, 2) identify regions involved in specific stages (anticipation, outcome) and valence (positive, negative) of reward processing, and 3) identify differences in activation likelihood between adolescent and adult reward-related brain activation. Results reveal a subcortical network of brain regions involved in adolescent reward processing similar to that found in adults with major hubs including the ventral and dorsal striatum, insula, and posterior cingulate cortex (PCC). Contrast analyses find that adolescents exhibit greater likelihood of activation in the insula while processing anticipation relative to outcome and greater likelihood of activation in the putamen and amygdala during outcome relative to anticipation. While processing positive compared to negative valence, adolescents show increased likelihood for activation in the posterior cingulate cortex (PCC) and ventral striatum. Contrasting adolescent reward processing with the existing ALE of adult reward processing (Liu et al., 2011) reveals increased likelihood for activation in limbic, frontolimbic, and striatal regions in adolescents compared with adults. Unlike adolescents, adults also activate executive control regions of the frontal and parietal lobes. These findings support hypothesized elevations in motivated activity during adolescence. PMID:26254587

  10. Biomechanical analysis of effects of neuromusculoskeletal training for older adults on the likelihood of slip-induced falls.

    OpenAIRE

    Kim, Sukwon

    2006-01-01

    Overview of the Study. Title: Biomechanical Analysis for Effects of Neuromusculoskeletal Training for Older Adults on Outcomes of Slip-induced Falls. Research Objectives: The objective of this study was to evaluate whether neuromusculoskeletal training (i.e., weight and balance training) for older adults could reduce the likelihood of slip-induced fall accidents. The study focused on evaluating biomechanics among the elderly at pre- and post-training stages during processes associated w...

  11. A longitudinal analysis of the impact of hospital service line profitability on the likelihood of readmission.

    Science.gov (United States)

    Navathe, Amol S; Volpp, Kevin G; Konetzka, R Tamara; Press, Matthew J; Zhu, Jingsan; Chen, Wei; Lindrooth, Richard C

    2012-08-01

    Quality of care may be linked to the profitability of admissions in addition to level of reimbursement. Prior policy reforms reduced payments that differentially affected the average profitability of various admission types. The authors estimated a Cox competing risks model, controlling for the simultaneous risk of mortality post discharge, to determine whether the average profitability of hospital service lines to which a patient was admitted was associated with the likelihood of readmission within 30 days. The sample included 12,705,933 Medicare Fee for Service discharges from 2,438 general acute care hospitals during 1997, 2001, and 2005. There was no evidence of an association between changes in average service line profitability and changes in readmission risk, even when controlling for risk of mortality. These findings are reassuring in that the profitability of patients' admissions did not affect readmission rates, and together with other evidence may suggest that readmissions are not an unambiguous quality indicator for in-hospital care.

  12. Elaboration Likelihood Model and an Analysis of the Contexts of Its Application

    Directory of Open Access Journals (Sweden)

    Aslıhan Kıymalıoğlu

    2014-12-01

    The Elaboration Likelihood Model (ELM), which posits two routes to persuasion, the central and the peripheral route, has been one of the major models of persuasion. As the number of studies in the Turkish literature on ELM is limited, a detailed explanation of the model together with a comprehensive literature review was considered a contribution toward filling this gap. The findings of the review reveal that the model was mostly used in marketing and advertising research, that the concept most frequently used in the elaboration process was involvement, and that argument quality and endorser credibility were the factors most often employed in measuring their effect on the dependent variables. The review provides valuable insights as it presents a holistic view of the model and the variables used in it.

  13. An Activation Likelihood Estimation Meta-Analysis Study of Simple Motor Movements in Older and Young Adults

    Science.gov (United States)

    Turesky, Ted K.; Turkeltaub, Peter E.; Eden, Guinevere F.

    2016-01-01

    The functional neuroanatomy of finger movements has been characterized with neuroimaging in young adults. However, less is known about the aging motor system. Several studies have contrasted movement-related activity in older versus young adults, but there is inconsistency among their findings. To address this, we conducted an activation likelihood estimation (ALE) meta-analysis on within-group data from older adults and young adults performing regularly paced right-hand finger movement tasks in response to external stimuli. We hypothesized that older adults would show a greater likelihood of activation in right cortical motor areas (i.e., ipsilateral to the side of movement) compared to young adults. ALE maps were examined for conjunction and between-group differences. Older adults showed overlapping likelihoods of activation with young adults in left primary sensorimotor cortex (SM1), bilateral supplementary motor area, bilateral insula, left thalamus, and right anterior cerebellum. Their ALE map differed from that of the young adults in right SM1 (extending into dorsal premotor cortex), right supramarginal gyrus, medial premotor cortex, and right posterior cerebellum. The finding that older adults uniquely use ipsilateral regions for right-hand finger movements and show age-dependent modulations in regions recruited by both age groups provides a foundation by which to understand age-related motor decline and motor disorders. PMID:27799910

  14. Climate reconstruction analysis using coexistence likelihood estimation (CRACLE): a method for the estimation of climate using vegetation.

    Science.gov (United States)

    Harbert, Robert S; Nixon, Kevin C

    2015-08-01

    • Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate. • Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate. • Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods. • CRACLE validates long-hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.
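    The core CRACLE idea, finding the climate that maximizes the joint likelihood over all coexisting species' tolerance curves, can be sketched as follows. The four tolerance curves (mean and spread of mean annual temperature per species) are invented, and normal kernels stand in for the method's actual tolerance characterization:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Hypothetical climate tolerances (mean annual temperature, °C) for
# four coexisting species; values are illustrative only.
tolerances = [(12.0, 4.0), (15.0, 3.0), (14.0, 5.0), (16.0, 2.5)]

def neg_joint_loglik(temp):
    # Joint likelihood of coexistence: the climate most consistent with
    # all species' tolerances maximises the sum of log densities.
    return -sum(norm.logpdf(temp, mu, sd) for mu, sd in tolerances)

best = minimize_scalar(neg_joint_loglik, bounds=(-10, 40), method="bounded")
print(f"inferred mean annual temperature = {best.x:.1f} °C")
```

    With normal kernels the maximizer is the precision-weighted mean of the species optima, which is why the joint estimate is pulled toward the narrow-tolerance species.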

  15. PROCOV: maximum likelihood estimation of protein phylogeny under covarion models and site-specific covarion pattern analysis

    Directory of Open Access Journals (Sweden)

    Wang Huai-Chun

    2009-09-01

    Full Text Available Abstract Background The covarion hypothesis of molecular evolution holds that selective pressures on a given amino acid or nucleotide site are dependent on the identity of other sites in the molecule that change throughout time, resulting in changes of evolutionary rates of sites along the branches of a phylogenetic tree. At the sequence level, covarion-like evolution at a site manifests as conservation of nucleotide or amino acid states among some homologs where the states are not conserved in other homologs (or groups of homologs. Covarion-like evolution has been shown to relate to changes in functions at sites in different clades, and, if ignored, can adversely affect the accuracy of phylogenetic inference. Results PROCOV (protein covarion analysis is a software tool that implements a number of previously proposed covarion models of protein evolution for phylogenetic inference in a maximum likelihood framework. Several algorithmic and implementation improvements in this tool over previous versions make computationally expensive tree searches with covarion models more efficient and analyses of large phylogenomic data sets tractable. PROCOV can be used to identify covarion sites by comparing the site likelihoods under the covarion process to the corresponding site likelihoods under a rates-across-sites (RAS process. Those sites with the greatest log-likelihood difference between a 'covarion' and an RAS process were found to be of functional or structural significance in a dataset of bacterial and eukaryotic elongation factors. Conclusion Covarion models implemented in PROCOV may be especially useful for phylogenetic estimation when ancient divergences between sequences have occurred and rates of evolution at sites are likely to have changed over the tree. It can also be used to study lineage-specific functional shifts in protein families that result in changes in the patterns of site variability among subtrees.

  16. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    Science.gov (United States)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

    In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions, NSE, Generalized Error Distribution with BC (BC-GED), and Skew Generalized Error Distribution with BC (BC-SGED), are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, in which large errors have low probability but small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
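    The equivalence claimed in step (1) can be checked numerically: both NSE and the Gaussian i.i.d. log-likelihood (with the MLE residual variance plugged in) are monotone in the sum of squared errors, so they rank candidate simulations identically. The toy series below are invented:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / spread of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gauss_loglik(obs, sim):
    """Gaussian i.i.d. log-likelihood with the MLE residual variance."""
    res = np.asarray(obs, float) - np.asarray(sim, float)
    n = res.size
    sigma2 = np.mean(res ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

obs = np.array([3.1, 4.0, 5.2, 4.8, 3.9])
good = np.array([3.0, 4.1, 5.0, 4.9, 4.0])   # close simulation
bad = np.array([2.0, 5.0, 4.0, 6.0, 3.0])    # poor simulation

# Both criteria prefer the better simulation.
print(f"NSE: {nse(obs, good):.3f} vs {nse(obs, bad):.3f}")
print(f"logL: {gauss_loglik(obs, good):.2f} vs {gauss_loglik(obs, bad):.2f}")
```
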

  17. Detection of COL III in Parchment by Amino Acid Analysis

    DEFF Research Database (Denmark)

    Vestergaard Poulsen Sommer, Dorte; Larsen, René

    2016-01-01

    Cultural heritage parchments made from the reticular dermis of animals have been subject to studies of deterioration and conservation by amino acid analysis. The reticular dermis contains a varying mixture of collagen I and III (COL I and III). When dealing with the results of the amino acid analyses, the COL III content has until now not been taken into account. Based on the available amino acid sequences, we present a method for determining the amount of COL III in the reticular dermis of new and historical parchments, calculated from the ratio of Ile/Val. We find COL III contents between 7 and 32 % in new parchments and between 0.2 and 40 % in the historical parchments. This is consistent with results in the literature. The varying content of COL III has a significant influence on the uncertainty of the amino acid analysis. Although we have not found a simple correlation between the COL...

  18. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods

    Directory of Open Access Journals (Sweden)

    Leandro de Jesus Benevides

    Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed at performing a phylogenetic study on apo E-carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results with a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The agreement between the results from the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups.

  19. Statistical analysis of maximum likelihood estimator images of human brain FDG PET studies

    International Nuclear Information System (INIS)

    Llacer, J.; Veklerov, E.; Hoffman, E.J.; Nunez, J.; Coakley, K.J.

    1993-01-01

    The work presented in this paper evaluates the statistical characteristics of regional bias and expected error in reconstructions of real PET data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task that the authors have investigated is that of quantifying radioisotope uptake in regions of interest (ROIs). They first describe a robust methodology for the use of the MLE method with clinical data which contains only one adjustable parameter: the kernel size for a Gaussian filtering operation that determines final resolution and expected regional error. Simulation results are used to establish the fundamental characteristics of the reconstructions obtained by our methodology, corresponding to the case in which the transition matrix is perfectly known. Then, data from 72 independent human brain FDG scans from four patients are used to show that the results obtained from real data are consistent with the simulation, although the quality of the data and of the transition matrix have an effect on the final outcome.

  20. Use of COMCAN III in system design and reliability analysis

    International Nuclear Information System (INIS)

    Rasmuson, D.M.; Shepherd, J.C.; Marshall, N.H.; Fitch, L.R.

    1982-03-01

    This manual describes the COMCAN III computer program and its use. COMCAN III is a tool that can be used by the reliability analyst performing a probabilistic risk assessment or by the designer of a system seeking improved performance and efficiency. COMCAN III can be used to determine the minimal cut sets of a fault tree, to calculate system reliability characteristics, and to perform qualitative common cause failure analysis.

  1. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    Science.gov (United States)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft applications are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in correctly identifying the mode of a failure may arise. These issues are closely examined, as is the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.

  2. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
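    The time-domain covariance formulation described above can be sketched for the simplest power-law case, a random walk (spectral index 2) plus white noise; the generalized least-squares rate and the Gaussian log-likelihood follow from a Cholesky factorization of the data covariance. Series length, noise amplitudes, and the fixed noise parameters are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(2)
n = 200
t = np.arange(n)

# Synthetic series: linear trend + white noise + random walk.
rate_true = 0.05
y = rate_true * t + rng.normal(0, 1.0, n) + np.cumsum(rng.normal(0, 0.2, n))

def neg_loglik_and_rate(sig_w, sig_rw):
    """GLS rate estimate and Gaussian NLL for given noise amplitudes."""
    # Covariance: white noise + random-walk term cov(y_i, y_j) ∝ min(i, j)+1.
    C = sig_w**2 * np.eye(n) + sig_rw**2 * np.minimum.outer(t + 1, t + 1)
    cf = cho_factor(C)
    A = np.column_stack([np.ones(n), t])            # design: intercept + rate
    beta = np.linalg.solve(A.T @ cho_solve(cf, A), A.T @ cho_solve(cf, y))
    r = y - A @ beta                                # GLS residuals
    logdet = 2 * np.sum(np.log(np.diag(cf[0])))
    ll = -0.5 * (logdet + r @ cho_solve(cf, r) + n * np.log(2 * np.pi))
    return -ll, beta[1]

nll, rate_hat = neg_loglik_and_rate(1.0, 0.2)
print(f"estimated rate = {rate_hat:.3f} (true {rate_true})")
```

    In a full MLE the negative log-likelihood returned here would be minimized over the noise amplitudes; the efficiency gains discussed in the abstract come from avoiding the dense Cholesky step.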

  3. [DIN-compatible vision assessment of increased reproducibility using staircase measurement and maximum likelihood analysis].

    Science.gov (United States)

    Weigmann, U; Petersen, J

    1996-08-01

    Visual acuity determination according to DIN 58,220 does not make full use of the information received about the patient, in contrast to the staircase method. Thus, testing the same number of optotypes, the staircase method should yield more reproducible acuity results. On the other hand, the staircase method gives systematically higher acuity values because it converges on the 48% point of the psychometric function (for Landolt rings in eight positions) and not on the 65% probability, as DIN 58,220 with criterion 3/5 does. This bias can be avoided by means of a modified evaluation. Using the staircase data we performed a maximum likelihood estimate of the psychometric function as a whole and computed the acuity value for 65% probability of correct answers. We determined monocular visual acuity in 102 persons with widely differing visual performance. Each subject underwent four tests in random order, two according to DIN 58,220 and two using the modified staircase method (Landolt rings in eight positions scaled by a factor 1.26; PC monitor with 1024 x 768 pixels; distance 4.5 m). Each test was performed with 25 optotypes. The two procedures provide the same mean visual acuity values (difference less than 0.02 acuity steps). The test-retest results match in 30.4% of DIN repetitions but in 50% of the staircases. The standard deviation of the test-retest difference is 1.41 (DIN) and 1.06 (modified staircase) acuity steps. Thus the standard deviation of the single test is 1.0 (DIN) and 0.75 (modified staircase) acuity steps. The new method provides visual acuity values identical to DIN 58,220 but is superior with respect to reproducibility.
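    The modified evaluation, fitting the whole psychometric function by maximum likelihood and reading the acuity off at 65% correct rather than at the staircase's own convergence point, might be sketched as follows. The logistic shape, the simulated trial data, and all numeric parameters are assumptions; only the 1/8 guessing floor (Landolt rings in eight positions) comes from the record:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
GUESS = 1 / 8  # chance level for Landolt rings in eight positions

def pcorrect(x, mu, s):
    """Logistic psychometric function with a 1/8 guessing floor."""
    return GUESS + (1 - GUESS) / (1 + np.exp(-(x - mu) / s))

# Simulated staircase-style data: optotype size (log units) and responses.
levels = np.repeat(np.linspace(-2, 2, 9), 12)
correct = rng.random(levels.size) < pcorrect(levels, 0.0, 0.5)

def nll(params):
    mu, s = params
    p = np.clip(pcorrect(levels, mu, abs(s) + 1e-9), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(correct, np.log(p), np.log(1 - p)))

mu_hat, s_hat = minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead").x
s_hat = abs(s_hat)

# Read the acuity off at 65% correct (the DIN 58,220 criterion),
# not at the staircase's ~48% convergence point.
target = (0.65 - GUESS) / (1 - GUESS)
x65 = mu_hat + s_hat * np.log(target / (1 - target))  # inverse logistic
print(f"65% threshold = {x65:.2f}")
```

    Fitting the full function uses every response rather than only the reversal points, which is the source of the improved reproducibility the study reports.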

  4. An empirical likelihood ratio test robust to individual heterogeneity for differential expression analysis of RNA-seq.

    Science.gov (United States)

    Xu, Maoqi; Chen, Liang

    2018-01-01

    Individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice analysis power, although they are distribution-free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex diseases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Quasi-likelihood generalized linear regression analysis of fatality risk data.

    Science.gov (United States)

    2009-01-01

    Transportation-related fatality risk is a function of many interacting human, vehicle, and environmental factors. Statistically valid analysis of such data is challenged both by the complexity of plausible structural models relating fatality rates t...

  6. Altered sensorimotor activation patterns in idiopathic dystonia-an activation likelihood estimation meta-analysis of functional brain imaging studies

    DEFF Research Database (Denmark)

    Løkkegaard, Annemette; Herz, Damian M; Haagensen, Brian Numelin

    2016-01-01

    Dystonia is characterized by sustained or intermittent muscle contractions causing abnormal, often repetitive, movements or postures. Functional neuroimaging studies have yielded abnormal task-related sensorimotor activation in dystonia, but the results appear to be rather variable across studies. Further, study sizes were usually small and included different types of dystonia. Here we performed an activation likelihood estimation (ALE) meta-analysis of functional neuroimaging studies in patients with primary dystonia to test for convergence of dystonia-related alterations in task-related activity ... postcentral gyrus, right superior temporal gyrus and dorsal midbrain. Apart from the midbrain cluster, all between-group differences in task-related activity were retrieved in a sub-analysis including only the 14 studies on patients with focal dystonia. For focal dystonia, an additional cluster of increased...

  7. An automated land-use mapping comparison of the Bayesian maximum likelihood and linear discriminant analysis algorithms

    Science.gov (United States)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    The Bayesian maximum likelihood parametric classifier has been tested against the data-based formulation designated 'linear discriminant analysis', using the 'GLIKE' decision and 'CLASSIFY' classification algorithms in the Landsat Mapping System. Identical supervised training sets, USGS land use/land cover classes, and various combinations of Landsat image and ancillary geodata variables were used to compare the algorithms' thematic mapping accuracy on a single-date summer subscene, with a cellularized USGS land use map of the same time frame furnishing the ground truth reference. CLASSIFY, which accepts a priori class probabilities, is found to be more accurate than GLIKE, which assumes equal class occurrences, for all three mapping variable sets and both levels of detail. These results may be generalized to direct accuracy, time, cost, and flexibility advantages of linear discriminant analysis over Bayesian methods.

  8. Likelihood analysis of the chalcone synthase genes suggests the role of positive selection in morning glories (Ipomoea).

    Science.gov (United States)

    Yang, Ji; Gu, Hongya; Yang, Ziheng

    2004-01-01

    Chalcone synthase (CHS) is a key enzyme in the biosynthesis of flavonoids, which are important for the pigmentation of flowers and act as attractants to pollinators. Genes encoding CHS constitute a multigene family in which the copy number varies among plant species and functional divergence appears to have occurred repeatedly. In morning glories (Ipomoea), five functional CHS genes (A-E) have been described. Phylogenetic analysis of the Ipomoea CHS gene family revealed that CHS A, B, and C experienced accelerated rates of amino acid substitution relative to CHS D and E. To examine whether the CHS genes of the morning glories underwent adaptive evolution, maximum-likelihood models of codon substitution were used to analyze the functional sequences in the Ipomoea CHS gene family. These models used the nonsynonymous/synonymous rate ratio (omega = dN/dS) as an indicator of selective pressure and allowed the ratio to vary among lineages or sites. Likelihood ratio tests suggested significant variation in selection pressure among amino acid sites, with a small proportion of sites detected to be under positive selection along the branches ancestral to CHS A, B, and C. Positive Darwinian selection appears to have promoted the divergence of subfamily ABC and subfamily DE and is at least partially responsible for a rate increase following gene duplication.
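The likelihood ratio tests used in studies like this one compare nested codon models by twice the difference in log-likelihood against a chi-square distribution. A hedged sketch in Python (the log-likelihood values and the 2-degree-of-freedom comparison, e.g. a null model without positive selection versus an alternative allowing it, are illustrative assumptions, not numbers from this study):

```python
from scipy.stats import chi2

def lrt(lnL_null, lnL_alt, df):
    """Likelihood ratio test for nested models.

    2 * (lnL_alt - lnL_null) is asymptotically chi-square distributed,
    with df equal to the difference in the number of free parameters.
    """
    stat = 2.0 * (lnL_alt - lnL_null)
    return stat, chi2.sf(stat, df)

# Hypothetical log-likelihoods for a null (no positive selection) and an
# alternative (positive selection) codon model differing by 2 parameters:
stat, p = lrt(-2345.6, -2339.1, df=2)
```

A small p-value rejects the null model, supporting an omega class of sites under positive selection.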

  9. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis : A Comparison of Maximum Likelihood and Bayesian Estimation

    NARCIS (Netherlands)

    Can, Seda; van de Schoot, Rens|info:eu-repo/dai/nl/304833207; Hox, Joop|info:eu-repo/dai/nl/073351431

    2015-01-01

    Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the

  10. Performance and Complexity Analysis of Blind FIR Channel Identification Algorithms Based on Deterministic Maximum Likelihood in SIMO Systems

    DEFF Research Database (Denmark)

    De Carvalho, Elisabeth; Omar, Samir; Slock, Dirk

    2013-01-01

    We analyze two algorithms that have been introduced previously for Deterministic Maximum Likelihood (DML) blind estimation of multiple FIR channels. The first one is a modification of the Iterative Quadratic ML (IQML) algorithm. IQML gives biased estimates of the channel and performs poorly at low... to the initialization. Its asymptotic performance does not reach the DML performance though. The second strategy, called Pseudo-Quadratic ML (PQML), is naturally denoised. The denoising in PQML is furthermore more efficient than in DIQML: PQML yields the same asymptotic performance as DML, as opposed to DIQML..., but requires a consistent initialization. We furthermore compare DIQML and PQML to the strategy of alternating minimization w.r.t. symbols and channel for solving DML (AQML). An asymptotic performance analysis, a complexity evaluation and simulation results are also presented. The proposed DIQML and PQML...

  11. Maximum likelihood estimation of dose-response parameters for therapeutic operating characteristic (TOC) analysis of carcinoma of the nasopharynx

    International Nuclear Information System (INIS)

    Metz, C.E.; Tokars, R.P.; Kronman, H.B.; Griem, M.L.

    1982-01-01

    A Therapeutic Operating Characteristic (TOC) curve for radiation therapy plots, for all possible treatment doses, the probability of tumor ablation as a function of the probability of radiation-induced complication. Application of this analysis to actual therapeutic situations requires that dose-response curves for ablation and for complication be estimated from clinical data. We describe an approach in which 'maximum likelihood estimates' of these dose-response curves are made, and we apply this approach to data collected on responses to radiotherapy for carcinoma of the nasopharynx. TOC curves constructed from the estimated dose-response curves are subject to moderately large uncertainties because of the limitations of available data. These TOC curves suggest, however, that treatment doses greater than 1800 rem may substantially increase the probability of tumor ablation with little increase in the risk of radiation-induced cervical myelopathy, especially for T1 and T2 tumors.
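The TOC construction described above can be sketched directly: take (or fit) one dose-response curve for ablation and one for complication, then sweep dose so that each dose yields one point of ablation probability against complication probability. The logistic form and all parameter values below are hypothetical placeholders, not the paper's fitted estimates:

```python
import math

def logistic(dose, d50, slope):
    """Sigmoid dose-response: probability of the endpoint at a given dose."""
    return 1.0 / (1.0 + math.exp(-slope * (dose - d50)))

def toc_curve(doses, ablation=(1700.0, 0.004), complication=(2100.0, 0.003)):
    """One (P_complication, P_ablation) point per candidate treatment dose.

    Each endpoint is parameterized by a hypothetical (d50, slope) pair.
    """
    return [(logistic(d, *complication), logistic(d, *ablation))
            for d in doses]

# Sweep doses (in rem, matching the abstract's units) to trace the curve.
points = toc_curve(range(1000, 2600, 100))
```

Because both dose-response curves rise with dose, the traced points move monotonically up and to the right; the clinical question is where along the curve the ablation gain outpaces the complication risk.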

  12. Further Evaluation of Covariate Analysis using Empirical Bayes Estimates in Population Pharmacokinetics: the Perception of Shrinkage and Likelihood Ratio Test.

    Science.gov (United States)

    Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose

    2017-01-01

    Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayesian estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches from a previous report and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate impacting on clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause for decrease in power or inflated false positive rate although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis compared to LRT. We proposed a three-step covariate modeling approach for population PK analysis to utilize the advantages of EBEs while overcoming their shortcomings, which allows not only markedly reducing the run time for population PK analysis, but also providing more accurate covariate tests.

  13. Social Analysis Systems (SAS2) - Phase III

    International Development Research Centre (IDRC) Digital Library (Canada)

    Scaling Up the International Impact of Action Research: Social Analysis ... up the international impact of action research: SAS phase 3; final technical report ... 000 Canadians abroad to work at the local level on various development issues.

  14. Problems in mathematical analysis III integration

    CERN Document Server

    Kaczor, W J

    2003-01-01

    We learn by doing. We learn mathematics by doing problems. This is the third volume of Problems in Mathematical Analysis. The topic here is integration for real functions of one real variable. The first chapter is devoted to the Riemann and the Riemann-Stieltjes integrals. Chapter 2 deals with Lebesgue measure and integration. The authors include some famous, and some not so famous, integral inequalities related to Riemann integration. Many of the problems for Lebesgue integration concern convergence theorems and the interchange of limits and integrals. The book closes with a section on Fourier series, with a concentration on Fourier coefficients of functions from particular classes and on basic theorems for convergence of Fourier series. The book is primarily geared toward students in analysis, as a study aid, for problem-solving seminars, or for tutorials. It is also an excellent resource for instructors who wish to incorporate problems into their lectures. Solutions for the problems are provided in the boo...

  15. Demonstration sensitivity analysis for RADTRAN III

    International Nuclear Information System (INIS)

    Neuhauser, K.S.; Reardon, P.C.

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose to combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability consequence curves

  16. Effects of stimulus type and strategy on mental rotation network:an Activation Likelihood Estimation meta-analysis

    Directory of Open Access Journals (Sweden)

    Barbara eTomasino

    2016-01-01

    We can predict how an object would look if we were to see it from a different viewpoint. The brain network governing mental rotation (MR) has been studied using a variety of stimuli and task instructions. Using activation likelihood estimation (ALE) meta-analysis, we tested whether different MR networks can be modulated by the type of stimulus (body vs. non-body parts) or by the type of task instructions (motor imagery-based vs. non-motor imagery-based MR instructions). Testing for the bodily vs. non-bodily stimulus axis revealed a bilateral sensorimotor activation for bodily-related as compared to non-bodily-related stimuli, and a posterior right-lateralized activation for non-bodily-related as compared to bodily-related stimuli. A top-down modulation of the network was exerted by the MR task instructions, with a bilateral, preferentially left, sensorimotor network for motor imagery- vs. non-motor imagery-based MR instructions, the latter activating a preferentially right posterior occipito-temporal-parietal network. The present quantitative meta-analysis summarizes and amends previous descriptions of the brain network related to MR and shows how it is modulated by top-down and bottom-up experimental factors.
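ALE, the method used in several of the meta-analyses listed here, treats each experiment's smoothed foci as a per-voxel modeled activation (MA) probability map and combines the maps as a probabilistic union. A minimal sketch of that combination step (the toy two-voxel maps are ours; a real ALE analysis first smooths each reported focus with a sample-size-dependent Gaussian kernel and then tests the map against a null distribution):

```python
import numpy as np

def ale_map(ma_maps):
    """Combine per-experiment modeled activation maps into an ALE map.

    Each MA map gives, per voxel, the probability that the experiment
    activates that voxel; ALE is their union: 1 - prod_i(1 - MA_i).
    """
    ma = np.asarray(ma_maps, dtype=float)
    return 1.0 - np.prod(1.0 - ma, axis=0)

# Two toy experiments over two voxels:
ale = ale_map([[0.5, 0.0],
               [0.5, 0.2]])  # -> [0.75, 0.2]
```

A voxel activated with moderate probability by several experiments can thus score higher than one activated strongly by a single experiment, which is what makes ALE a test of convergence across studies.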

  17. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    Science.gov (United States)

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.

  18. Event-related fMRI studies of false memory: An Activation Likelihood Estimation meta-analysis.

    Science.gov (United States)

    Kurkela, Kyle A; Dennis, Nancy A

    2016-01-29

    Over the last two decades, a wealth of research in the domain of episodic memory has focused on understanding the neural correlates mediating false memories, or memories for events that never happened. While several recent qualitative reviews have attempted to synthesize this literature, methodological differences amongst the empirical studies and a focus on only a sub-set of the findings has limited broader conclusions regarding the neural mechanisms underlying false memories. The current study performed a voxel-wise quantitative meta-analysis using activation likelihood estimation to investigate commonalities within the functional magnetic resonance imaging (fMRI) literature studying false memory. The results were broken down by memory phase (encoding, retrieval), as well as sub-analyses looking at differences in baseline (hit, correct rejection), memoranda (verbal, semantic), and experimental paradigm (e.g., semantic relatedness and perceptual relatedness) within retrieval. Concordance maps identified significant overlap across studies for each analysis. Several regions were identified in the general false retrieval analysis as well as multiple sub-analyses, indicating their ubiquitous, yet critical role in false retrieval (medial superior frontal gyrus, left precentral gyrus, left inferior parietal cortex). Additionally, several regions showed baseline- and paradigm-specific effects (hit/perceptual relatedness: inferior and middle occipital gyrus; CRs: bilateral inferior parietal cortex, precuneus, left caudate). With respect to encoding, analyses showed common activity in the left middle temporal gyrus and anterior cingulate cortex. No analysis identified a common cluster of activation in the medial temporal lobe. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Sensitivity, specificity and likelihood ratios of PCR in the diagnosis of syphilis: a systematic review and meta-analysis.

    Science.gov (United States)

    Gayet-Ageron, Angèle; Lautenschlager, Stephan; Ninet, Béatrice; Perneger, Thomas V; Combescure, Christophe

    2013-05-01

    To systematically review and estimate pooled sensitivity and specificity of the polymerase chain reaction (PCR) technique compared to recommended reference tests in the diagnosis of suspected syphilis at various stages and in various biological materials. Systematic review and meta-analysis. Search of three electronic bibliographic databases from January 1990 to January 2012 and the abstract books of five congresses specialized in the infectious diseases field (1999-2011). Search key terms included syphilis, Treponema pallidum or neurosyphilis and molecular amplification, polymerase chain reaction or PCR. We included studies that used both reference tests to diagnose syphilis plus PCR, and we present pooled estimates of PCR sensitivity, specificity, and positive and negative likelihood ratios (LR) by syphilis stage and biological material. Of 1160 identified abstracts, 69 were selected and 46 studies used adequate reference tests to diagnose syphilis. Sensitivity was highest in the swabs from primary genital or anal chancres (78.4%; 95% CI: 68.2-86.0) and in blood from neonates with congenital syphilis (83.0%; 55.0-95.2). Most pooled specificities were ∼95%, except those in blood. A positive PCR is highly informative, with a positive LR around 20 in ulcers or skin lesions. In the blood, the positive LR was lower than for syphilis diagnosis in lesions. PCR is a useful diagnostic tool in ulcers, especially when serology is still negative and in medical settings with a high prevalence of syphilis.
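The likelihood ratios pooled above follow directly from sensitivity and specificity. A small sketch using the abstract's pooled sensitivity for primary-chancre swabs (78.4%) together with an assumed specificity of 95% (the abstract reports most pooled specificities as roughly 95%, so the exact LR values here are illustrative):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative diagnostic likelihood ratios.

    LR+ = sens / (1 - spec): how much a positive result raises the odds.
    LR- = (1 - sens) / spec: how much a negative result lowers the odds.
    """
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.784, 0.95)  # LR+ ~ 15.7, LR- ~ 0.23
```

An LR+ in the double digits, as reported for ulcer specimens, means a positive PCR shifts the post-test odds of syphilis substantially.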

  20. Is there a critical lesion site for unilateral spatial neglect? A meta-analysis using activation likelihood estimation.

    Directory of Open Access Journals (Sweden)

    Pascal eMolenberghs

    2012-04-01

    The critical lesion site responsible for the syndrome of unilateral spatial neglect has been debated for more than a decade. Here we performed an activation likelihood estimation (ALE) analysis to provide, for the first time, an objective quantitative index of the consistency of lesion sites across anatomical group studies of spatial neglect. The analysis revealed several distinct regions in which damage has consistently been associated with spatial neglect symptoms. Lesioned clusters were located in several cortical and subcortical regions of the right hemisphere, including the middle and superior temporal gyrus, inferior parietal lobule, intraparietal sulcus, precuneus, middle occipital gyrus, caudate nucleus and posterior insula, as well as in the white matter pathway corresponding to the posterior part of the superior longitudinal fasciculus. Further analyses suggested that separate lesion sites are associated with impairments in different behavioural tests, such as line bisection and target cancellation. Similarly, specific subcomponents of the heterogeneous neglect syndrome, such as extinction and allocentric and personal neglect, are associated with distinct lesion sites. Future progress in delineating the neuropathological correlates of spatial neglect will depend upon the development of more refined measures of perceptual and cognitive functions than those currently available in the clinical setting.

  1. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Oliver, Margaret A. [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Walker, Allan [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); Wood, Martin [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom)

    2009-05-15

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m, however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.

  2. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    Price, Oliver R.; Oliver, Margaret A.; Walker, Allan; Wood, Martin

    2009-01-01

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m, however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.

  3. Mapping grey matter reductions in schizophrenia: an anatomical likelihood estimation analysis of voxel-based morphometry studies.

    Science.gov (United States)

    Fornito, A; Yücel, M; Patti, J; Wood, S J; Pantelis, C

    2009-03-01

    Voxel-based morphometry (VBM) is a popular tool for mapping neuroanatomical changes in schizophrenia patients. Several recent meta-analyses have identified the brain regions in which patients most consistently show grey matter reductions, although they have not examined whether such changes reflect differences in grey matter concentration (GMC) or grey matter volume (GMV). These measures assess different aspects of grey matter integrity, and may therefore reflect different pathological processes. In this study, we used the Anatomical Likelihood Estimation procedure to analyse significant differences reported in 37 VBM studies of schizophrenia patients, incorporating data from 1646 patients and 1690 controls, and compared the findings of studies using either GMC or GMV to index grey matter differences. Analysis of all studies combined indicated that grey matter reductions in a network of frontal, temporal, thalamic and striatal regions are among the most frequently reported in the literature. GMC reductions were generally larger and more consistent than GMV reductions, and were more frequent in the insula, medial prefrontal, medial temporal and striatal regions. GMV reductions were more frequent in dorso-medial frontal cortex, and lateral and orbital frontal areas. These findings support the primacy of frontal, limbic, and subcortical dysfunction in the pathophysiology of schizophrenia, and suggest that the grey matter changes observed with MRI may not necessarily result from a unitary pathological process.

  4. Statistical analysis of the BOIL program in RSYST-III

    International Nuclear Information System (INIS)

    Beck, W.; Hausch, H.J.

    1978-11-01

    The paper describes a statistical analysis in the RSYST-III program system. Using the example of the BOIL program, it is shown how the effects of inaccurate input data on the output data can be discovered. The existing possibilities of data generation, data handling, and data evaluation are outlined. (orig.) [de

  5. Rethinking ASME III seismic analysis for piping operability evaluations

    International Nuclear Information System (INIS)

    Adams, T.M.; Stevenson, J.D.

    1994-01-01

    It has been recognized since the mid-1980s that there are very large seismic margins to failure for nuclear piping systems designed using current industry practice, design criteria, and methods. As a result, there are or have been approximately eighteen initiatives within ASME Boiler and Pressure Vessel Code Section III, Division 1, in the form of proposed code cases and proposed code text changes intended to reduce these failure margins to more realistic values. For the most part these initiatives have concentrated on reclassifying seismic inertia stresses in the piping as secondary and increasing the allowable stress limits permitted by Section III of the ASME Boiler and Pressure Vessel Code. This paper focuses on the application of non-linear spectral analysis methods as a way to reduce the input seismic demand and thereby reduce the seismic failure margins. The approach is evaluated using the ASME Boiler and Pressure Vessel Code Section III Subgroup on Design benchmark procedure proposed by the Subgroup's Special Task Group on Integrated Piping Criteria. Using this procedure, criteria are compared to current code criteria and analysis methods, and to several of the other currently proposed Boiler and Pressure Vessel Code, Section III, changes. Finally, the applicability of non-linear spectral analysis to continued Safe Operation Evaluations is reviewed and discussed

  6. An Elaboration Likelihood Model Based Longitudinal Analysis of Attitude Change during the Process of IT Acceptance via Education Program

    Science.gov (United States)

    Lee, Woong-Kyu

    2012-01-01

    The principal objective of this study was to gain insight into attitude changes occurring during IT acceptance from the perspective of elaboration likelihood model (ELM). In particular, the primary target of this study was the process of IT acceptance through an education program. Although the Internet and computers are now quite ubiquitous, and…

  7. Parametric Sensitivity Analysis of the WAVEWATCH III Model

    Directory of Open Access Journals (Sweden)

    Beng-Chun Lee

    2009-01-01

    The parameters in numerical wave models need to be calibrated before a model can be applied to a specific region. In this study, we selected the 8 most important parameters from the source terms of the WAVEWATCH III model and subjected them to sensitivity analysis, to evaluate how sensitive the WAVEWATCH III model is to each of them, to determine how many of these parameters merit further consideration, and to establish a significance priority for each parameter. After ranking each parameter by sensitivity and assessing their cumulative impact, we adopted the ARS method to search for the optimal values of those parameters to which the WAVEWATCH III model is most sensitive, by comparing modeling results with observed data at two data buoys off the coast of northeastern Taiwan, the goal being to find optimal parameter values for improved modeling of wave development. Adopting the optimal parameters in wave simulations did improve the accuracy of the WAVEWATCH III model in comparison to default runs, based on field observations at the two buoys.

  8. Likelihood analysis of the sub-GUT MSSM in light of LHC 13-TeV data

    Science.gov (United States)

    Costa, J. C.; Bagnaschi, E.; Sakurai, K.; Borsato, M.; Buchmueller, O.; Citron, M.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flächer, H.; Heinemeyer, S.; Lucio, M.; Santos, D. Martínez; Olive, K. A.; Richards, A.; Weiglein, G.

    2018-02-01

    We describe a likelihood analysis using MasterCode of variants of the MSSM in which the soft supersymmetry-breaking parameters are assumed to have universal values at some scale M_in below the supersymmetric grand unification scale M_GUT, as can occur in mirage mediation and other models. In addition to M_in, such `sub-GUT' models have the 4 parameters of the CMSSM, namely a common gaugino mass m_{1/2}, a common soft supersymmetry-breaking scalar mass m_0, a common trilinear mixing parameter A and the ratio of MSSM Higgs vevs tan β, assuming that the Higgs mixing parameter μ > 0. We take into account constraints on strongly- and electroweakly-interacting sparticles from ~36/fb of LHC data at 13 TeV and the LUX and 2017 PICO, XENON1T and PandaX-II searches for dark matter scattering, in addition to the previous LHC and dark matter constraints as well as full sets of flavour and electroweak constraints. We find a preference for M_in ~ 10^5 to 10^9 GeV, with M_in ~ M_GUT disfavoured by Δχ² ~ 3 due to the BR(B_{s,d} → μ⁺μ⁻) constraint. The lower limits on strongly-interacting sparticles are largely determined by LHC searches, and similar to those in the CMSSM. We find a preference for the LSP to be a Bino or Higgsino with m_{χ̃₁⁰} ~ 1 TeV, with annihilation via heavy Higgs bosons H/A and stop coannihilation, or chargino coannihilation, bringing the cold dark matter density into the cosmological range. We find that spin-independent dark matter scattering is likely to be within reach of the planned LUX-Zeplin and XENONnT experiments. We probe the impact of the (g−2)_μ constraint, finding similar results whether or not it is included.

  9. Likelihood analysis of the sub-GUT MSSM in light of LHC 13-TeV data

    Energy Technology Data Exchange (ETDEWEB)

    Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Sakurai, K. [University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Instituto Galego de Fisica de Altas Enerxias, Santiago de Compostela (Spain); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, School of Physics, ARC Centre of Excellence for Particle Physics at the Terascale, Parkville (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); National Institute of Chemical Physics and Biophysics, Tallinn (Estonia); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM + CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Olive, K.A. [University of Minnesota, School of Physics and Astronomy, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2018-02-15

    We describe a likelihood analysis using MasterCode of variants of the MSSM in which the soft supersymmetry-breaking parameters are assumed to have universal values at some scale M_in below the supersymmetric grand unification scale M_GUT, as can occur in mirage mediation and other models. In addition to M_in, such 'sub-GUT' models have the 4 parameters of the CMSSM, namely a common gaugino mass m_{1/2}, a common soft supersymmetry-breaking scalar mass m_0, a common trilinear mixing parameter A and the ratio of MSSM Higgs vevs tan β, assuming that the Higgs mixing parameter μ > 0. We take into account constraints on strongly- and electroweakly-interacting sparticles from ~36/fb of LHC data at 13 TeV and the LUX and 2017 PICO, XENON1T and PandaX-II searches for dark matter scattering, in addition to the previous LHC and dark matter constraints as well as full sets of flavour and electroweak constraints. We find a preference for M_in ~ 10^5 to 10^9 GeV, with M_in ~ M_GUT disfavoured by Δχ² ~ 3 due to the BR(B_{s,d} → μ⁺μ⁻) constraint. The lower limits on strongly-interacting sparticles are largely determined by LHC searches, and similar to those in the CMSSM. We find a preference for the LSP to be a Bino or Higgsino with m_{χ̃₁⁰} ~ 1 TeV, with annihilation via heavy Higgs bosons H/A and stop coannihilation, or chargino coannihilation, bringing the cold dark matter density into the cosmological range. We find that spin-independent dark matter scattering is likely to be within reach of the planned LUX-Zeplin and XENONnT experiments. We probe the impact of the (g−2)_μ constraint, finding similar results whether or not it is included. (orig.)

  10. An analysis on Public Service Announcements (PSA) within the scope of Elaboration Likelihood Model: Orange and Hazelnut Consumption Samples

    OpenAIRE

    Bical, Adil; Yılmaz, R. Ayhan

    2018-01-01

    The purpose of the study is to reveal how persuasion works in public service announcements, taking the PSAs on hazelnut and orange consumption broadcast in Turkey as examples. According to Petty and Cacioppo, the Elaboration Likelihood Model explains the process of persuasion through two routes: central and peripheral. In-depth interviews were conducted to achieve the goal of the study. Respondents were asked whether they processed the message of the PSA centrally or peripherally. Advertisements on consumption of hazelnu...

  11. Analysis of hourly crash likelihood using unbalanced panel data mixed logit model and real-time driving environmental big data.

    Science.gov (United States)

    Chen, Feng; Chen, Suren; Ma, Xiaoxiang

    2018-06-01

    Driving environment, including road surface conditions and traffic states, often changes over time and considerably influences crash probability. Traditional crash frequency models, developed at large temporal scales, struggle to capture the time-varying characteristics of these factors, which may cause substantial loss of critical driving environmental information for crash prediction. Crash prediction models with refined temporal data (hourly records) are developed here to characterize the time-varying nature of these contributing factors. Unbalanced panel data mixed logit models are developed to analyze hourly crash likelihood of highway segments. The refined temporal driving environmental data, including road surface and traffic conditions, obtained from the Road Weather Information System (RWIS), are incorporated into the models. Model estimation results indicate that traffic speed, traffic volume, curvature and a chemically wet road surface indicator are better modeled as random parameters. The estimation results of the mixed logit models based on unbalanced panel data show that a number of factors are related to crash likelihood on I-25. Specifically, a weekend indicator, a November indicator, a low speed limit and a long remaining service life of rutting indicator are found to increase crash likelihood, while a 5-am indicator and the number of merging ramps per lane per mile are found to decrease crash likelihood. The study underscores and confirms the unique and significant impacts on crashes imposed by real-time weather, road surface, and traffic conditions. With the unbalanced panel data structure, the rich information from real-time driving environmental big data can be well incorporated. Copyright © 2018 National Safety Council and Elsevier Ltd. All rights reserved.
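The random-parameters (mixed) logit structure described above can be illustrated with a small simulated-likelihood sketch. The Python below is a hedged illustration, not the authors' model: the function names, the single covariate, and the normal mixing distribution are assumptions for exposition. The choice probability is approximated by averaging ordinary logit probabilities over draws of the random coefficient; with zero standard deviation it collapses to a fixed-parameter logit.

```python
import math
import random

def logistic(u):
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-u))

def mixed_logit_prob(x, beta_mean, beta_sd, beta0=0.0, n_draws=2000, seed=0):
    """Simulated probability of the outcome (e.g. a crash) for covariate x
    when the slope is random, beta ~ Normal(beta_mean, beta_sd): average the
    logit probability over draws of beta. beta_sd = 0 recovers the ordinary
    fixed-parameter logit."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        beta = beta_mean + beta_sd * rng.gauss(0.0, 1.0)
        total += logistic(beta0 + beta * x)
    return total / n_draws
```

In estimation, such simulated probabilities are multiplied across the panel observations of each segment before taking logs, which is what makes the unbalanced panel structure tractable.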

  12. Comparison of least-squares vs. maximum likelihood estimation for standard spectrum technique of β−γ coincidence spectrum analysis

    International Nuclear Information System (INIS)

    Lowrey, Justin D.; Biegalski, Steven R.F.

    2012-01-01

    The spectrum deconvolution analysis tool (SDAT) software code was written and tested at The University of Texas at Austin utilizing the standard spectrum technique to determine activity levels of Xe-131m, Xe-133m, Xe-133, and Xe-135 in β–γ coincidence spectra. SDAT was originally written to utilize the method of least-squares to calculate the activity of each radionuclide component in the spectrum. Recently, maximum likelihood estimation was also incorporated into the SDAT tool. This is a robust statistical technique to determine the parameters that maximize the Poisson distribution likelihood function of the sample data. In this case it is used to parameterize the activity level of each of the radioxenon components in the spectra. A new test dataset was constructed utilizing Xe-131m placed on a Xe-133 background to compare the robustness of the least-squares and maximum likelihood estimation methods for low counting statistics data. The Xe-131m spectra were collected independently from the Xe-133 spectra and added to generate the spectra in the test dataset. The true independent counts of Xe-131m and Xe-133 are known, as they were calculated before the spectra were added together. Spectra with both high and low counting statistics are analyzed. Studies are also performed by analyzing only the 30 keV X-ray region of the β–γ coincidence spectra. Results show that maximum likelihood estimation slightly outperforms least-squares for low counting statistics data.
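The comparison described above can be sketched numerically. The following Python is illustrative only: the spectrum shapes and activity levels are invented, and the multiplicative ML-EM update is one standard way to maximize a Poisson likelihood for a non-negative linear mixture, not necessarily the SDAT implementation. Observed counts are modeled as a linear combination of two standard spectra and fit once by least squares and once by Poisson maximum likelihood.

```python
import math
import random

def lsq_fit(counts, s, g):
    """Least-squares estimate of (a, b) in counts ≈ a*s + b*g (normal equations)."""
    Sss = sum(x * x for x in s)
    Sgg = sum(x * x for x in g)
    Ssg = sum(x * y for x, y in zip(s, g))
    Sys = sum(y * x for y, x in zip(counts, s))
    Syg = sum(y * x for y, x in zip(counts, g))
    det = Sss * Sgg - Ssg * Ssg
    return (Sys * Sgg - Syg * Ssg) / det, (Syg * Sss - Sys * Ssg) / det

def poisson_mle_fit(counts, s, g, iters=2000):
    """Maximize the Poisson likelihood of counts with mean a*s + b*g using
    multiplicative ML-EM updates (activities stay non-negative; the
    log-likelihood is concave in (a, b), so the iteration converges)."""
    a = b = 1.0
    Ss, Sg = sum(s), sum(g)
    for _ in range(iters):
        mu = [a * si + b * gi + 1e-12 for si, gi in zip(s, g)]
        a *= sum(y * si / m for y, si, m in zip(counts, s, mu)) / Ss
        b *= sum(y * gi / m for y, gi, m in zip(counts, g, mu)) / Sg
    return a, b
```

At high counts the two estimators agree closely; the abstract's point is that at low counting statistics the Poisson MLE respects the asymmetric count distribution that least squares ignores.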

  13. Sensitivity analysis of the terrestrial food chain model FOOD III

    International Nuclear Information System (INIS)

    Zach, Reto.

    1980-10-01

    As a first step in constructing a terrestrial food chain model suitable for long-term waste management situations, a numerical sensitivity analysis of FOOD III was carried out to identify important model parameters. The analysis involved 42 radionuclides, four pathways, 14 food types, 93 parameters and three percentages of parameter variation. We also investigated the importance of radionuclides, pathways and food types. The analysis involved a simple contamination model to render results from individual pathways comparable. The analysis showed that radionuclides vary greatly in their dose contribution to each of the four pathways, but relative contributions to each pathway are very similar. Man's and animals' drinking water pathways are much more important than the leaf and root pathways. However, this result depends on the contamination model used. All the pathways contain unimportant food types. Considering the number of parameters involved, FOOD III has too many different food types. Many of the parameters of the leaf and root pathways are important. However, this is true for only a few of the parameters of the animals' drinking water pathway, and for neither of the two parameters of man's drinking water pathway. The radiological decay constant increases the variability of these results. The dose factor is consistently the most important variable, and it explains most of the variability of radionuclide doses within pathways. Consideration of the variability of dose factors is important in contemporary as well as long-term waste management assessment models, if realistic estimates are to be made. (auth)

  14. Logic of likelihood

    International Nuclear Information System (INIS)

    Wall, M.J.W.

    1992-01-01

    The notion of "probability" is generalized to that of "likelihood," and a natural logical structure is shown to exist for any physical theory which predicts likelihoods. Two physically based axioms are given for this logical structure to form an orthomodular poset, with an order-determining set of states. The results strengthen the basis of the quantum logic approach to axiomatic quantum theory. 25 refs

  15. The phylogenetic likelihood library.

    Science.gov (United States)

    Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A

    2015-03-01

    We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL). © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
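The likelihood computations that the PLL optimizes follow Felsenstein's pruning algorithm. Below is a minimal pure-Python sketch of that algorithm for a single alignment site under the Jukes-Cantor model; it illustrates the recursion only and does not reflect the PLL's actual C API or data structures.

```python
import math

STATES = "ACGT"

def jc_probs(t):
    """Jukes-Cantor transition probabilities for branch length t:
    returns (P(same state), P(each particular different state))."""
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e, 0.25 - 0.25 * e

def leaf_partials(base):
    """Conditional likelihood vector for an observed leaf state."""
    return [1.0 if b == base else 0.0 for b in STATES]

def prune(left, t_left, right, t_right):
    """One Felsenstein pruning step: partial likelihoods at the parent of
    two children with partials `left`, `right` and the given branch lengths."""
    same_l, diff_l = jc_probs(t_left)
    same_r, diff_r = jc_probs(t_right)
    out = []
    for i in range(4):
        lsum = sum((same_l if i == j else diff_l) * left[j] for j in range(4))
        rsum = sum((same_r if i == j else diff_r) * right[j] for j in range(4))
        out.append(lsum * rsum)
    return out

def site_likelihood(root_partials):
    """Sum over root states weighted by the stationary frequencies (1/4 each)."""
    return 0.25 * sum(root_partials)
```

For a two-leaf tree the result must agree with the closed form 0.25 * P_xy(t1 + t2) by time reversibility, a handy correctness check; production implementations additionally rescale the partials per site to prevent exactly the floating-point underflow the abstract mentions.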

  16. Final Safety Analysis Report (FSAR) for Building 332, Increment III

    Energy Technology Data Exchange (ETDEWEB)

    Odell, B. N.; Toy, Jr., A. J.

    1977-08-31

    This Final Safety Analysis Report (FSAR) supplements the Preliminary Safety Analysis Report (PSAR), dated January 18, 1974, for Building 332, Increment III of the Plutonium Materials Engineering Facility located at the Lawrence Livermore Laboratory (LLL). The FSAR, in conjunction with the PSAR, shows that the completed increment provides facilities for safely conducting the operations as described. These documents satisfy the requirements of ERDA Manual Appendix 6101, Annex C, dated April 8, 1971. The format and content of this FSAR comply with the basic requirements of the letter of request from ERDA San to LLL, dated March 10, 1972. Included as appendices in support of the FSAR are the Building 332 Operational Safety Procedure and the LLL Disaster Control Plan.

  17. VIPR III VADR SPIDER Structural Design and Analysis

    Science.gov (United States)

    Li, Wesley; Chen, Tony

    2016-01-01

    In support of the National Aeronautics and Space Administration (NASA) Vehicle Integrated Propulsion Research (VIPR) Phase III team to evaluate the volcanic ash environment effects on the Pratt & Whitney F117-PW-100 turbofan engine, NASA Armstrong Flight Research Center has successfully performed structural design and analysis on the Volcanic Ash Distribution Rig (VADR) and the Structural Particulate Integration Device for Engine Research (SPIDER) for the ash ingestion test. Static and dynamic load analyses were performed to ensure no structural failure would occur during the test. Modal analysis was conducted, and the results were used to develop engine power setting avoidance zones. These engine power setting avoidance zones were defined to minimize the dwell time when the natural frequencies of the VADR/SPIDER system coincided with the excitation frequencies of the engine which was operating at various revolutions per minute. Vortex-induced vibration due to engine suction air flow during the ingestion test was also evaluated, but was not a concern.

  18. Should I Text or Call Here? A Situation-Based Analysis of Drivers' Perceived Likelihood of Engaging in Mobile Phone Multitasking.

    Science.gov (United States)

    Oviedo-Trespalacios, Oscar; Haque, Md Mazharul; King, Mark; Washington, Simon

    2018-05-29

    This study investigated how situational characteristics typically encountered in the transport system influence drivers' perceived likelihood of engaging in mobile phone multitasking. The impacts of mobile phone tasks, perceived environmental complexity/risk, and drivers' individual differences were evaluated as relevant individual predictors within the behavioral adaptation framework. An innovative questionnaire, which includes randomized textual and visual scenarios, was administered to collect data from a sample of 447 drivers in South East Queensland-Australia (66% females; n = 296). The likelihood of engaging in a mobile phone task across various scenarios was modeled by a random parameters ordered probit model. Results indicated that drivers who are female, are frequent users of phones for texting/answering calls, have less favorable attitudes towards safety, and are highly disinhibited were more likely to report stronger intentions of engaging in mobile phone multitasking. However, more years with a valid driving license, self-efficacy toward self-regulation in demanding traffic conditions and police enforcement, texting tasks, and demanding traffic conditions were negatively related to self-reported likelihood of mobile phone multitasking. The unobserved heterogeneity warned of riskier groups among female drivers and participants who need a lot of convincing to believe that multitasking while driving is dangerous. This research concludes that behavioral adaptation theory is a robust framework explaining self-regulation of distracted drivers. © 2018 Society for Risk Analysis.

  19. Computational Analysis of the G-III Laminar Flow Glove

    Science.gov (United States)

    Malik, Mujeeb R.; Liao, Wei; Lee-Rausch, Elizabeth M.; Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan

    2011-01-01

    Under NASA's Environmentally Responsible Aviation Project, flight experiments are planned with the primary objective of demonstrating the Discrete Roughness Elements (DRE) technology for passive laminar flow control at chord Reynolds numbers relevant to transport aircraft. In this paper, we present a preliminary computational assessment of the Gulfstream-III (G-III) aircraft wing-glove designed to attain natural laminar flow for a leading-edge sweep angle of 34.6°. Analysis for a flight Mach number of 0.75 shows that it should be possible to achieve natural laminar flow for twice the transition Reynolds number ever achieved at this sweep angle. However, the wing-glove needs to be redesigned to effectively demonstrate passive laminar flow control using DREs. As a by-product of the computational assessment, the effect of surface curvature on stationary crossflow disturbances is found to be strongly stabilizing for the current design, and it is suggested that convex surface curvature could be used as a control parameter for natural laminar flow design, provided transition occurs via stationary crossflow disturbances.

  20. Vent clearing analysis of a Mark III pressure suppression containment

    International Nuclear Information System (INIS)

    Quintana, R.

    1979-01-01

    An analysis of the vent clearing transient in a Mark III pressure suppression containment after a hypothetical LOCA is carried out. A two-dimensional numerical model solving the transient fluid dynamic equations is used. The geometry of the pressure suppression pool is represented and the pressure and velocity fields in the pool are obtained from the moment the LOCA occurs until the first vent in the drywell wall clears. The results are compared to those obtained with the one-dimensional model used for containment design, with special interest in two-dimensional effects. Some conclusions concerning the effect of the water discharged into the suppression pool through the vents on submerged structures are obtained. Future improvements to the model are suggested. (orig.)

  1. A Nuclear Ribosomal DNA Phylogeny of Acer Inferred with Maximum Likelihood, Splits Graphs, and Motif Analysis of 606 Sequences

    Science.gov (United States)

    Grimm, Guido W.; Renner, Susanne S.; Stamatakis, Alexandros; Hemleben, Vera

    2007-01-01

    The multi-copy internal transcribed spacer (ITS) region of nuclear ribosomal DNA is widely used to infer phylogenetic relationships among closely related taxa. Here we use maximum likelihood (ML) and splits graph analyses to extract phylogenetic information from ~ 600 mostly cloned ITS sequences, representing 81 species and subspecies of Acer, and both species of its sister Dipteronia. Additional analyses compared sequence motifs in Acer and several hundred Anacardiaceae, Burseraceae, Meliaceae, Rutaceae, and Sapindaceae ITS sequences in GenBank. We also assessed the effects of using smaller data sets of consensus sequences with ambiguity coding (accounting for within-species variation) instead of the full (partly redundant) original sequences. Neighbor-nets and bipartition networks were used to visualize conflict among character state patterns. Species clusters observed in the trees and networks largely agree with morphology-based classifications; of de Jong’s (1994) 16 sections, nine are supported in neighbor-net and bipartition networks, and ten by sequence motifs and the ML tree; of his 19 series, 14 are supported in networks, motifs, and the ML tree. Most nodes had higher bootstrap support with matrices of 105 or 40 consensus sequences than with the original matrix. Within-taxon ITS divergence did not differ between diploid and polyploid Acer, and there was little evidence of differentiated parental ITS haplotypes, suggesting that concerted evolution in Acer acts rapidly. PMID:19455198

  2. A Nuclear Ribosomal DNA Phylogeny of Acer Inferred with Maximum Likelihood, Splits Graphs, and Motif Analysis of 606 Sequences

    Directory of Open Access Journals (Sweden)

    Guido W. Grimm

    2006-01-01

    Full Text Available The multi-copy internal transcribed spacer (ITS) region of nuclear ribosomal DNA is widely used to infer phylogenetic relationships among closely related taxa. Here we use maximum likelihood (ML) and splits graph analyses to extract phylogenetic information from ~ 600 mostly cloned ITS sequences, representing 81 species and subspecies of Acer, and both species of its sister Dipteronia. Additional analyses compared sequence motifs in Acer and several hundred Anacardiaceae, Burseraceae, Meliaceae, Rutaceae, and Sapindaceae ITS sequences in GenBank. We also assessed the effects of using smaller data sets of consensus sequences with ambiguity coding (accounting for within-species variation) instead of the full (partly redundant) original sequences. Neighbor-nets and bipartition networks were used to visualize conflict among character state patterns. Species clusters observed in the trees and networks largely agree with morphology-based classifications; of de Jong’s (1994) 16 sections, nine are supported in neighbor-net and bipartition networks, and ten by sequence motifs and the ML tree; of his 19 series, 14 are supported in networks, motifs, and the ML tree. Most nodes had higher bootstrap support with matrices of 105 or 40 consensus sequences than with the original matrix. Within-taxon ITS divergence did not differ between diploid and polyploid Acer, and there was little evidence of differentiated parental ITS haplotypes, suggesting that concerted evolution in Acer acts rapidly.

  3. From reads to genes to pathways: differential expression analysis of RNA-Seq experiments using Rsubread and the edgeR quasi-likelihood pipeline.

    Science.gov (United States)

    Chen, Yunshun; Lun, Aaron T L; Smyth, Gordon K

    2016-01-01

    In recent years, RNA sequencing (RNA-seq) has become a very widely used technology for profiling gene expression. One of the most common aims of RNA-seq profiling is to identify genes or molecular pathways that are differentially expressed (DE) between two or more biological conditions. This article demonstrates a computational workflow for the detection of DE genes and pathways from RNA-seq data by providing a complete analysis of an RNA-seq experiment profiling epithelial cell subsets in the mouse mammary gland. The workflow uses R software packages from the open-source Bioconductor project and covers all steps of the analysis pipeline, including alignment of read sequences, data exploration, differential expression analysis, visualization and pathway analysis. Read alignment and count quantification is conducted using the Rsubread package and the statistical analyses are performed using the edgeR package. The differential expression analysis uses the quasi-likelihood functionality of edgeR.
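edgeR's quasi-likelihood tests build on the negative binomial model for read counts. The Python snippet below is a conceptual sketch of that distributional assumption only (edgeR itself is an R package, and none of its API appears here): the NB log-probability with an edgeR-style dispersion phi, under which the variance is mu + phi*mu², and phi → 0 recovers the Poisson model.

```python
import math

def nb_log_pmf(y, mu, phi):
    """Negative binomial log-probability for count y with mean mu and
    dispersion phi (variance = mu + phi * mu**2). phi == 0 is the Poisson limit."""
    if phi == 0.0:
        return -mu + y * math.log(mu) - math.lgamma(y + 1)
    r = 1.0 / phi  # the NB "size" parameter
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(r / (r + mu)) + y * math.log(mu / (r + mu)))
```

Gene-wise dispersions of this kind are what edgeR shrinks toward a common trend, and the quasi-likelihood F-test additionally allows gene-specific variability around that trend.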

  4. Measuring the performance of vaccination programs using cross-sectional surveys: a likelihood framework and retrospective analysis.

    Directory of Open Access Journals (Sweden)

    Justin Lessler

    2011-10-01

    Full Text Available The performance of routine and supplemental immunization activities is usually measured by the administrative method: dividing the number of doses distributed by the size of the target population. This method leads to coverage estimates that are sometimes impossible (e.g., vaccination of 102% of the target population), and are generally inconsistent with the proportion found to be vaccinated in Demographic and Health Surveys (DHS). We describe a method that estimates the fraction of the population accessible to vaccination activities, as well as within-campaign inefficiencies, thus providing a consistent estimate of vaccination coverage. We developed a likelihood framework for estimating the effective coverage of vaccination programs using cross-sectional surveys of vaccine coverage combined with administrative data. We applied our method to measles vaccination in three African countries: Ghana, Madagascar, and Sierra Leone, using data from each country's most recent DHS survey and administrative coverage data reported to the World Health Organization. We estimate that 93% (95% CI: 91, 94) of the population in Ghana was ever covered by any measles vaccination activity, 77% (95% CI: 78, 81) in Madagascar, and 69% (95% CI: 67, 70) in Sierra Leone. "Within-activity" inefficiencies were estimated to be low in Ghana, and higher in Sierra Leone and Madagascar. Our model successfully fits age-specific vaccination coverage levels seen in DHS data, which differ markedly from those predicted by naïve extrapolation from country-reported and World Health Organization-adjusted vaccination coverage. Combining administrative data with survey data substantially improves estimates of vaccination coverage.
Estimates of the inefficiency of past vaccination activities and the proportion not covered by any activity allow us to more accurately predict the results of future activities and provide insight into the ways in which vaccination programs are failing to meet their
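The idea of combining an accessible fraction with a per-activity success probability can be sketched as a toy likelihood. The model below is a simplification invented for illustration (it is not the authors' likelihood): a fraction f of the population is ever accessible, each campaign reaches an accessible person independently with probability e, so after k campaigns P(vaccinated) = f*(1-(1-e)^k); f and e are then fit by maximizing a binomial likelihood over survey cohorts with a crude grid search.

```python
import math
from itertools import product

def coverage(f, e, k):
    """P(ever vaccinated) after k campaigns: accessible fraction f,
    per-campaign success probability e for accessible people."""
    return f * (1.0 - (1.0 - e) ** k)

def log_likelihood(f, e, cohorts):
    """Binomial log-likelihood; cohorts = [(k_campaigns, n_surveyed, n_vaccinated)]."""
    ll = 0.0
    for k, n, v in cohorts:
        p = min(max(coverage(f, e, k), 1e-9), 1.0 - 1e-9)
        ll += v * math.log(p) + (n - v) * math.log(1.0 - p)
    return ll

def fit(cohorts, step=0.01):
    """Crude grid-search MLE over (f, e)."""
    grid = [i * step for i in range(1, round(1.0 / step))]
    return max(product(grid, grid), key=lambda fe: log_likelihood(*fe, cohorts))
```

Because the saturation level of coverage with age identifies f while the rate of approach identifies e, cohorts exposed to different numbers of campaigns make both parameters estimable from a single cross-sectional survey, which is the essence of the approach described above.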

  5. Quantitative risk analysis of the pipeline GASDUC III - solutions

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Edmilson P.; Bettoni, Izabel Cristina [PETROBRAS, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In this work, the quantitative risk analysis for the external public of the pipeline Cabiunas - REDUC (GASDUC III), 180 km long, linking the municipalities of Macae and Duque de Caxias - RJ, was performed by PETROBRAS and ITSEMAP do Brasil. The pipeline has a large diameter (38 inches) and a high operating pressure (100 kgf/cm{sup 2}), transporting natural gas through several densely populated areas. Initially, the individual risk contours were calculated without considering mitigating measures, obtaining as a result individual risk contours with frequencies of 1x10{sup -06} per year involving sensitive occupations, therefore considered unacceptable when compared with the INEA criterion. The societal risk was calculated for eight densely populated areas; the respective FN-curves were situated below the advised limit established by INEA, except for two areas that required additional mitigating measures to reduce the societal risk. Regarding societal risk, the FN-curve should lie below the advised limit presented in the Technical Instruction of INEA. The individual and societal risks were reassessed incorporating some mitigating measures, the results fell below the advised limits established by INEA, and PETROBRAS obtained the license for installation of the pipeline. (author)

  6. Absorption spectra analysis of hydrated uranium(III) complex chlorides

    Science.gov (United States)

    Karbowiak, M.; Gajek, Z.; Drożdżyński, J.

    2000-11-01

    Absorption spectra of powdered samples of hydrated uranium(III) complex chlorides of the formulas NH4UCl4·4H2O and CsUCl4·3H2O have been recorded at 4.2 K in the 4000-26 000 cm⁻¹ range. The analysis of the spectra enabled the determination of crystal-field parameters and the assignment of 83 and 77 crystal-field levels for the tetrahydrate and trihydrate, respectively. The energies of the levels were computed by applying a simplified angular overlap model (AOM) as well as a semiempirical Hamiltonian representing the combined atomic and crystal-field interactions. Ab initio calculations enabled the application of a simplified parameterization and the determination of the starting values of the AOM parameters. The results show that the AOM approach can predict quite well both the structure of the ground multiplet and the positions of the crystal-field levels in the 17 000-25 000 cm⁻¹ range, usually obscured by strong f-d bands.

  7. CFHTLenS: a Gaussian likelihood is a sufficient approximation for a cosmological analysis of third-order cosmic shear statistics

    Science.gov (United States)

    Simon, P.; Semboloni, E.; van Waerbeke, L.; Hoekstra, H.; Erben, T.; Fu, L.; Harnois-Déraps, J.; Heymans, C.; Hildebrandt, H.; Kilbinger, M.; Kitching, T. D.; Miller, L.; Schrabback, T.

    2015-05-01

    We study the correlations of the shear signal between triplets of sources in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopt a non-Gaussian model of the data likelihood which is supported by our simulations of the survey. We find that for state-of-the-art surveys, similar to CFHTLenS, a Gaussian likelihood analysis is a reasonable approximation, albeit small differences in the parameter constraints are already visible. For future surveys we expect that a Gaussian model becomes inaccurate. Our algorithm for a refined non-Gaussian analysis and data compression is then of great utility, especially because it is not much more elaborate if simulated data are available. Applying this algorithm to the third-order correlations of shear alone in a blind analysis, we find good agreement with the standard cosmological model: Σ_8 = σ_8(Ω_m/0.27)^{0.64} = 0.79^{+0.08}_{-0.11} for a flat Λ cold dark matter cosmology with h = 0.7 ± 0.04 (68 per cent credible interval). Nevertheless our models provide only moderately good fits, as indicated by χ²/dof = 2.9, including a 20 per cent rms uncertainty in the predicted signal amplitude. The models cannot explain a signal drop on scales around 15 arcmin, which may be caused by systematics. It is unclear whether the discrepancy can be fully explained by residual point spread function systematics, of which we find evidence at least on scales of a few arcmin. Therefore we need a better understanding of higher-order correlations of cosmic shear and their systematics to confidently apply them as cosmological probes.

  8. QUALIS PERIODIC EVALUATION: ANALYSIS OF QUALIS UPGRADE IN MEDICINE III.

    Science.gov (United States)

    Jukemura, José; Diniz, Márcio Augusto

    2015-01-01

    To evaluate the preliminary results for Medicine III of the journal-classification upgrade offered by Capes to the programs of all agency areas. The area documents of Medicine I, II and III were used, along with other relevant material on the topic available online at the Capes site between 2009 and 2013. The research focused on two questions: 1) is the Qualis stratification similar in the three areas of medicine? and 2) was the evolution of Qualis in Medicine III greater? Medicine III showed an increase in its Qualis classification and is publishing in journals with higher impact factors, virtually the same as Medicine I and II. The area showed the strongest growth in the most recent three-year periods.

  9. Application of a stratum-specific likelihood ratio analysis in a screen for depression among a community-dwelling population in Japan

    Directory of Open Access Journals (Sweden)

    Sugawara N

    2017-09-01

    Full Text Available Norio Sugawara,1,2 Ayako Kaneda,2 Ippei Takahashi,3 Shigeyuki Nakaji,3 Norio Yasui-Furukori2 1Department of Clinical Epidemiology, Translational Medical Center, National Center of Neurology and Psychiatry, Kodaira, Tokyo, 2Department of Neuropsychiatry, Hirosaki University School of Medicine, Hirosaki, 3Department of Social Medicine, Hirosaki University School of Medicine, Hirosaki, Japan Background: Efficient screening for depression is important in community mental health. In this study, we applied a stratum-specific likelihood ratio (SSLR) analysis, which is independent of the prevalence of the target disease, to screen for depression among community-dwelling individuals. Method: The Center for Epidemiologic Studies Depression Scale (CES-D) and the Mini International Neuropsychiatric Interview (MINI) were administered to 789 individuals (19–87 years of age) who participated in the Iwaki Health Promotion Project 2011. Major depressive disorder (MDD) was assessed using the MINI. Results: For MDD, the SSLRs were 0.13 (95% CI 0.04–0.40), 3.68 (95% CI 1.37–9.89), and 24.77 (95% CI 14.97–40.98) for CES-D scores of 0–16, 17–20, and 21 and above, respectively. Conclusion: The validity of the CES-D is confirmed, and SSLR analysis is recommended for its practical value for the detection of individuals at risk of MDD in the Japanese community. Keywords: screening, depression, Center for Epidemiologic Studies Depression Scale, stratum-specific likelihood ratio
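A stratum-specific likelihood ratio is straightforward to compute from counts. The sketch below uses the standard definition and an approximate log-scale 95% CI; the counts in the usage example are invented for illustration and are not the study's data.

```python
import math

def sslr(stratum_cases, total_cases, stratum_controls, total_controls, z=1.96):
    """Stratum-specific likelihood ratio:
    P(score in stratum | disease) / P(score in stratum | no disease),
    with an approximate 95% CI computed on the log scale."""
    lr = (stratum_cases / total_cases) / (stratum_controls / total_controls)
    se = math.sqrt(1.0 / stratum_cases - 1.0 / total_cases
                   + 1.0 / stratum_controls - 1.0 / total_controls)
    return lr, lr * math.exp(-z * se), lr * math.exp(z * se)
```

Because each stratum's LR multiplies the pre-test odds directly, SSLRs remain valid whatever the prevalence of depression in the screened population, which is the property the abstract highlights.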

  10. Neural signatures of social conformity: A coordinate-based activation likelihood estimation meta-analysis of functional brain imaging studies.

    Science.gov (United States)

    Wu, Haiyan; Luo, Yi; Feng, Chunliang

    2016-12-01

    People often align their behaviors with group opinions, known as social conformity. Many neuroscience studies have explored the neuropsychological mechanisms underlying social conformity. Here we employed a coordinate-based meta-analysis on neuroimaging studies of social conformity with the purpose to reveal the convergence of the underlying neural architecture. We identified a convergence of reported activation foci in regions associated with normative decision-making, including ventral striatum (VS), dorsal posterior medial frontal cortex (dorsal pMFC), and anterior insula (AI). Specifically, consistent deactivation of VS and activation of dorsal pMFC and AI are identified when people's responses deviate from group opinions. In addition, the deviation-related responses in dorsal pMFC predict people's conforming behavioral adjustments. These are consistent with current models that disagreement with others might evoke "error" signals, cognitive imbalance, and/or aversive feelings, which are plausibly detected in these brain regions as control signals to facilitate subsequent conforming behaviors. Finally, group opinions result in altered neural correlates of valuation, manifested as stronger responses of VS to stimuli endorsed than disliked by others. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
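The consistency test sketched above compares the joint Poisson log-likelihood of the observed bin counts with the likelihoods of catalogs simulated from the forecast itself. Below is a hedged Python sketch of that general idea (the rates and counts are invented, and this is the generic likelihood-test logic, not the RELM implementation).

```python
import math
import random

def poisson_log_lik(rates, counts):
    """Joint Poisson log-likelihood of observed bin counts given forecast rates.
    Assumes every bin rate is strictly positive."""
    return sum(-lam + n * math.log(lam) - math.lgamma(n + 1)
               for lam, n in zip(rates, counts))

def l_test_quantile(rates, observed, n_sim=2000, seed=1):
    """Fraction of forecast-consistent simulated catalogs whose log-likelihood
    is <= that of the observed catalog; values near 0 reject the forecast."""
    rng = random.Random(seed)

    def sample(lam):  # Knuth's Poisson sampler, adequate for small rates
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    obs_ll = poisson_log_lik(rates, observed)
    hits = sum(poisson_log_lik(rates, [sample(lam) for lam in rates]) <= obs_ll
               for _ in range(n_sim))
    return hits / n_sim
```

An observed catalog that looks typical of the forecast yields a non-extreme quantile, while one the forecast cannot explain yields a quantile near zero; pairwise model comparisons apply the same machinery to likelihood ratios.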

  12. Likelihood analysis of the pMSSM11 in light of LHC 13-TeV data

    International Nuclear Information System (INIS)

    Bagnaschi, E.; Sakurai, K.; Borsato, M.

    2017-11-01

    We use MasterCode to perform a frequentist analysis of the constraints on a phenomenological MSSM model with 11 parameters, the pMSSM11, including constraints from ≈ 36/fb of LHC data at 13 TeV and PICO, XENON1T and PandaX-II searches for dark matter scattering, as well as previous accelerator and astrophysical measurements, presenting fits both with and without the (g-2)_μ constraint. The pMSSM11 is specified by the following parameters: 3 gaugino masses M_{1,2,3}, a common mass for the first- and second-generation squarks m_q and a distinct third-generation squark mass m_q3, a common mass for the first- and second-generation sleptons m_l and a distinct third-generation slepton mass m_τ, a common trilinear mixing parameter A, the Higgs mixing parameter μ, the pseudoscalar Higgs mass M_A and tan β. In the fit including (g-2)_μ, a Bino-like χ^0_1 is preferred, whereas a Higgsino-like χ^0_1 is favoured when the (g-2)_μ constraint is dropped. We identify the mechanisms that operate in different regions of the pMSSM11 parameter space to bring the relic density of the lightest neutralino, χ^0_1, into the range indicated by cosmological data. In the fit including (g-2)_μ, coannihilations with χ^0_2 and the Wino-like χ^±_1 or with nearly-degenerate first- and second-generation sleptons are favoured, whereas coannihilations with the χ^0_2 and the Higgsino-like χ^±_1 or with first- and second-generation squarks may be important when the (g-2)_μ constraint is dropped. Prospects remain for discovering strongly-interacting sparticles at the LHC as well as for discovering electroweakly-interacting sparticles at a future linear e^+e^- collider such as the ILC or CLIC.

  13. Likelihood analysis of the pMSSM11 in light of LHC 13-TeV data

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY, Hamburg (Germany); Sakurai, K. [Warsaw Univ. (Poland). Inst. of Theoretical Physics; Borsato, M. [Santiago de Compostela Univ. (Spain). Inst. Galego de Fisica de Altas Enerxias; and others

    2017-11-15

    We use MasterCode to perform a frequentist analysis of the constraints on a phenomenological MSSM model with 11 parameters, the pMSSM11, including constraints from ≈ 36/fb of LHC data at 13 TeV and PICO, XENON1T and PandaX-II searches for dark matter scattering, as well as previous accelerator and astrophysical measurements, presenting fits both with and without the (g-2)_μ constraint. The pMSSM11 is specified by the following parameters: 3 gaugino masses M_{1,2,3}, a common mass for the first- and second-generation squarks m_q and a distinct third-generation squark mass m_q3, a common mass for the first- and second-generation sleptons m_l and a distinct third-generation slepton mass m_τ, a common trilinear mixing parameter A, the Higgs mixing parameter μ, the pseudoscalar Higgs mass M_A and tan β. In the fit including (g-2)_μ, a Bino-like χ^0_1 is preferred, whereas a Higgsino-like χ^0_1 is favoured when the (g-2)_μ constraint is dropped. We identify the mechanisms that operate in different regions of the pMSSM11 parameter space to bring the relic density of the lightest neutralino, χ^0_1, into the range indicated by cosmological data. In the fit including (g-2)_μ, coannihilations with χ^0_2 and the Wino-like χ^±_1 or with nearly-degenerate first- and second-generation sleptons are favoured, whereas coannihilations with the χ^0_2 and the Higgsino-like χ^±_1 or with first- and second-generation squarks may be important when the (g-2)_μ constraint is dropped. Prospects remain for discovering strongly-interacting sparticles at the LHC as well as for discovering electroweakly-interacting sparticles at a future linear e^+e^- collider such as the ILC or CLIC.

  14. Sequence analysis of mitochondrial DNA hypervariable region III of ...

    African Journals Online (AJOL)

    The aims of this research were to study mitochondrial DNA hypervariable region III and establish the degree of variation characteristic of a fragment. The mitochondrial DNA (mtDNA) is a small circular genome located within the mitochondria in the cytoplasm of the cell, and a smaller 1.2 kb fragment, called the control ...

  15. SMORN-III benchmark test on reactor noise analysis methods

    International Nuclear Information System (INIS)

    Shinohara, Yoshikuni; Hirota, Jitsuya

    1984-02-01

    A computational benchmark test was performed in conjunction with the Third Specialists Meeting on Reactor Noise (SMORN-III), which was held in Tokyo, Japan in October 1981. This report summarizes the results of the test as well as the work done in preparation for the test. (author)

  16. Confirmatory Factor Analysis of the WISC-III with Child Psychiatric Inpatients.

    Science.gov (United States)

    Tupa, David J.; Wright, Margaret O'Dougherty; Fristad, Mary A.

    1997-01-01

    Factor models of the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) for one, two, three, and four factors were tested using confirmatory factor analysis with a sample of 177 child psychiatric inpatients. The four-factor model proposed in the WISC-III manual provided the best fit to the data. (SLD)

  17. [Cephalometric analysis in cases with Class III malocclusions].

    Science.gov (United States)

    Rak, D

    1989-01-01

    Various orthodontic class III anomalies, classified into several experimental groups, and eugnathic occlusions serving as controls were studied by roentgencephalometry. The objective of the study was to detect possible distinctions in the quantitative values of the variables chosen and to select the variables which most significantly discriminate the group of class III orthodontic anomalies. Attempts were made to ascertain whether or not there were sex-related differences. The teleroentgenograms of 269 examinees, aged 10-18 years, of both sexes were analyzed. The experimental group consisted of 89 examinees with class III orthodontic anomalies. The control group consisted of 180 examinees with eugnathic occlusion. Latero-lateral skull roentgenograms were taken observing the rules of roentgenocephalometry. Using acetate paper, the drawings of profile teleroentgenograms were elaborated and the reference points and lines were entered. A total of 38 variables were analyzed, of which 10 were linear, 19 angular, and 8 were obtained by mathematical calculation; the age variable was also analyzed. In the statistical analyses an electronic computer was used. The results are presented in tables and graphs. The results obtained showed that: --compared to the findings in the control group, the subjects in the experimental group displayed significant changes in the following craniofacial characteristics: a negative difference in the position of the apical base of the jaw, manifest concavity of the osseous profile and diminished convexity of the profile of soft parts, retroinclination of the lower incisors, mandibular prognathism, increased mandibular angle and increased mandibular proportion compared to maxillary and the anterior cranial base; --with regard to the sex of the examinees, only four linear variables of significantly discriminating character were selected, so that it can be concluded that there were no significant sex differences among the morphological

  18. Revised and extended analysis of doubly ionized selenium: Se III

    International Nuclear Information System (INIS)

    Tauheed, A; Hala

    2012-01-01

    The spectrum of selenium was recorded on a 3 m normal incidence vacuum spectrograph of the Antigonish laboratory (Canada) in the wavelength region 300-2080 Å using a triggered spark source. The theoretical structure of doubly ionized selenium (Se III) was predicted by Cowan's multi-configuration interaction code. The ground configuration of Se III is 4s^2 4p^2 and the excited configurations are of the type 4s^2 4p nd (n≥4), 4s^2 4p ns (n≥5); the inner-shell excitation gives rise to the 4s4p^3 configuration. The 4s^2 4p^2 - [4s4p^3 + 4s^2 4p (4d+5d+6d+7d+5s+6s+7s+8s)] transition array has been analyzed. Several earlier reported levels have been revised and four new configurations have been added. All the levels of these configurations have been established. More than 180 spectral lines have been identified in this spectrum. A total of 75 energy levels belonging to the above-mentioned configurations have been established. Least-squares fitted parametric and Hartree-Fock calculations were used to interpret the observed spectrum. Excellent agreement with theoretical calculations was noticed. The standard deviation of the least-squares fit is only 110 cm^-1. The ionization potential of Se III was found to be 255 650 ± 150 cm^-1 (31.696 ± 0.018 eV). The accuracy of our wavelengths for sharp lines is better than ±0.005 Å.

  19. Analysis of a retrieved delta III total shoulder prosthesis.

    Science.gov (United States)

    Nyffeler, R W; Werner, C M L; Simmen, B R; Gerber, C

    2004-11-01

    A reversed Delta III total shoulder prosthesis was retrieved post-mortem, eight months after implantation. A significant notch was evident at the inferior pole of the scapular neck which extended beyond the inferior fixation screw. This bone loss was associated with a corresponding, erosive defect of the polyethylene cup. Histological examination revealed a chronic foreign-body reaction in the joint capsule. There were, however, no histological signs of loosening of the glenoid base plate and the stability of the prosthetic articulation was only slightly reduced by the eroded rim of the cup.

  20. Implementation and assessment of a likelihood ratio approach for the evaluation of LA-ICP-MS evidence in forensic glass analysis.

    Science.gov (United States)

    van Es, Andrew; Wiarda, Wim; Hordijk, Maarten; Alberink, Ivo; Vergeer, Peter

    2017-05-01

    For the comparative analysis of glass fragments, a method using Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) is in use at the NFI, giving measurements of the concentration of 18 elements. An important question is how to evaluate the results as evidence that a glass sample originates from a known glass source or from an arbitrary different glass source. One approach is the use of matching criteria e.g. based on a t-test or overlap of confidence intervals. An important drawback of this method is the fact that the rarity of the glass composition is not taken into account. A similar match can have widely different evidential values. In addition the use of fixed matching criteria can give rise to a "fall off the cliff" effect. Small differences may result in a match or a non-match. In this work a likelihood ratio system is presented, largely based on the two-level model as proposed by Aitken and Lucy [1], and Aitken, Zadora and Lucy [2]. Results show that the output from the two-level model gives good discrimination between same and different source hypotheses, but a post-hoc calibration step is necessary to improve the accuracy of the likelihood ratios. Subsequently, the robustness and performance of the LR system are studied. Results indicate that the output of the LR system is robust to the sample properties of the dataset used for calibration. Furthermore, the empirical upper and lower bound method [3], designed to deal with extrapolation errors in the density models, results in minimum and maximum values of the LR outputted by the system of 3.1×10^-3 and 3.4×10^4. Calibration of the system, as measured by empirical cross-entropy, shows good behavior over the complete prior range. Rates of misleading evidence are small: for same-source comparisons, 0.3% of LRs support a different-source hypothesis; for different-source comparisons, 0.2% supports a same-source hypothesis. The authors use the LR system in reporting of glass cases to
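    Why a similar match can carry widely different evidential value is easiest to see in the simplest one-dimensional version of a two-level normal model (single measurement per item, Lindley-style), rather than the full 18-element multivariate model used at the NFI. All parameter values below are invented for illustration: sigma is the within-source spread, mu and tau the between-source mean and spread.

```python
import math

def norm_pdf(x, mu, sd):
    """Density of a normal distribution with mean mu and standard deviation sd."""
    z = (x - mu) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def glass_lr(x, y, mu, tau, sigma):
    """Univariate two-level-model likelihood ratio for single measurements
    x (control) and y (recovered).  Numerator: same-source density of the
    difference and the mean; denominator: independent draws from the
    between-source population."""
    num = (norm_pdf(x - y, 0.0, math.sqrt(2.0) * sigma)
           * norm_pdf(0.5 * (x + y), mu, math.sqrt(tau ** 2 + 0.5 * sigma ** 2)))
    den = (norm_pdf(x, mu, math.sqrt(tau ** 2 + sigma ** 2))
           * norm_pdf(y, mu, math.sqrt(tau ** 2 + sigma ** 2)))
    return num / den
```

    A close pair near the population mean yields only a modest LR, while the same closeness at a rare concentration is far more probative; the post-hoc calibration and the empirical LR bounds discussed in the abstract would then be applied to this raw output.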

  1. Neuroanatomical substrates of action perception and understanding: an anatomic likelihood estimation meta-analysis of lesion-symptom mapping studies in brain injured patients.

    Directory of Open Access Journals (Sweden)

    Cosimo eUrgesi

    2014-05-01

    Several neurophysiologic and neuroimaging studies suggested that motor and perceptual systems are tightly linked along a continuum rather than providing segregated mechanisms supporting different functions. Using correlational approaches, these studies demonstrated that action observation activates not only visual but also motor brain regions. On the other hand, brain stimulation and brain lesion evidence allows tackling the critical question of whether our action representations are necessary to perceive and understand others’ actions. In particular, recent neuropsychological studies have shown that patients with temporal, parietal and frontal lesions exhibit a number of possible deficits in the visual perception and the understanding of others’ actions. The specific anatomical substrates of such neuropsychological deficits, however, are still a matter of debate. Here we review the existing literature on this issue and perform an anatomic likelihood estimation meta-analysis of studies using lesion-symptom mapping methods on the causal relation between brain lesions and non-linguistic action perception and understanding deficits. The meta-analysis encompassed data from 361 patients tested in 11 studies and identified regions in the inferior frontal cortex, the inferior parietal cortex and the middle/superior temporal cortex, whose damage is consistently associated with poor performance in action perception and understanding tasks across studies. Interestingly, these areas correspond to the three nodes of the action observation network that are strongly activated in response to visual action perception in neuroimaging research and that have been targeted in previous brain stimulation studies. Thus, brain lesion mapping research provides converging causal evidence that premotor, parietal and temporal regions play a crucial role in action recognition and understanding.

  2. Likelihood devices in spatial statistics

    NARCIS (Netherlands)

    Zwet, E.W. van

    1999-01-01

    One of the main themes of this thesis is the application to spatial data of modern semi- and nonparametric methods. Another, closely related theme is maximum likelihood estimation from spatial data. Maximum likelihood estimation is not common practice in spatial statistics. The method of moments

  3. Morphometric Analysis of the Mandible in Subjects with Class III Malocclusion

    Directory of Open Access Journals (Sweden)

    Jin-Yun Pan

    2006-07-01

    This study evaluated the deformations that contribute to Class III mandibular configuration, employing geometric morphometric analysis. Lateral cephalograms of male and female groups of 100 young adults and 70 children with Class III malocclusion were compared to those of counterparts with normal occlusion. The sample included an equal number of both genders. The cephalographs were traced, and 12 homologous landmarks were identified and digitized. Average mandibular geometries were generated by means of Procrustes analysis. Thin-plate spline analysis was then applied to mandibular configurations to determine local form differences in male and female groups of adults and children with normal occlusion and Class III malocclusion. The mandibular morphology was significantly different between these two groups of male and female adults, and children (p < 0.0001). This spline analysis revealed an anteroposterior elongation of the mandible along the condylion-gnathion axis, showing an extension in the regions of the mandibular condyle and ramus, and of the anteroinferior portion of the mandibular symphysis in Class III groups. More extension was evident in Class III adults. The deformations in subjects with Class III malocclusion may represent a developmental elongation of the mandible anteroposteriorly, which leads to the appearance of a prognathic mandibular profile.
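    The Procrustes step used above to average landmark configurations can be sketched as follows: each configuration is centered, scaled to unit size, and rotated onto a reference via a singular value decomposition, after which residual shape differences (the input to thin-plate splines) can be measured. The landmark count and coordinates below are arbitrary illustrations, not cephalometric data.

```python
import numpy as np

def procrustes_align(A, B):
    """Superimpose landmark configuration B onto A (translation, scale,
    rotation), returning the aligned B and the Procrustes distance.
    Note: the SVD solution may include a reflection; a determinant check
    would be needed to restrict to pure rotations."""
    A0 = A - A.mean(axis=0)              # center both configurations
    B0 = B - B.mean(axis=0)
    A0 = A0 / np.linalg.norm(A0)         # scale to unit centroid size
    B0 = B0 / np.linalg.norm(B0)
    U, s, Vt = np.linalg.svd(A0.T @ B0)  # optimal orthogonal map via SVD
    R = (U @ Vt).T
    B_aligned = s.sum() * (B0 @ R)       # s.sum() is the optimal scale
    d = np.linalg.norm(A0 - B_aligned)   # Procrustes distance
    return B_aligned, d
```

    Averaging the aligned configurations landmark-by-landmark then yields the mean geometry for each group.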

  4. Sulphate analysis in uranium leach iron(III) chloride solutions by inductively coupled argon plasma spectrometry

    International Nuclear Information System (INIS)

    Nirdosh, I.; Lakhani, S.; Yunus, M.Z.M.

    1993-01-01

    Inductively coupled Argon Plasma Spectrometry is used for the indirect determination of sulphate in iron(III) chloride leach solution of Elliot Lake uranium ores via addition of a known amount of barium ions and analysis for the excess of barium. The ore contains ∼7 wt% pyrite, FeS_2, as the major mineral, which oxidizes to generate sulphate during leaching with Fe(III). The effects of pH, the concentrations of Fe(III) and chloride ions, and the presence of ethanol in the test samples on the accuracy of analysis are studied. It is found that, unlike the Rhodizonate method, removal of iron(III) from, or addition of ethanol to, the test sample prior to analysis is not required. Linear calibration curves are obtained. (author)

  5. Application of the subchannel analysis code COBRA III C for liquid sodium

    International Nuclear Information System (INIS)

    Nissen, K.L.

    1981-01-01

    The subchannel-analysis code COBRA III C was developed to gain knowledge of mass flow and temperature distribution in rod bundles of light water reactors. A comparison of experimental results for the temperature distribution in a 19 rod bundle with calculations done by the computer program shows the capability of COBRA III C to handle liquid sodium cooling. The code needs sodium properties as well as changed correlations for turbulent mixing and heat transfer at the rod. (orig.) [de

  6. Full likelihood analysis of genetic risk with variable age at onset disease--combining population-based registry data and demographic information.

    Directory of Open Access Journals (Sweden)

    Janne Pitkäniemi

    BACKGROUND: In genetic studies of rare complex diseases it is common to ascertain familial data from population based registries through all incident cases diagnosed during a pre-defined enrollment period. Such an ascertainment procedure is typically taken into account in the statistical analysis of the familial data by constructing either a retrospective or prospective likelihood expression, which conditions on the ascertainment event. Both of these approaches lead to a substantial loss of valuable data. METHODOLOGY AND FINDINGS: Here we consider instead the possibilities provided by a Bayesian approach to risk analysis, which also incorporates the ascertainment procedure and reference information concerning the genetic composition of the target population to the considered statistical model. Furthermore, the proposed Bayesian hierarchical survival model does not require the considered genotype or haplotype effects be expressed as functions of corresponding allelic effects. Our modeling strategy is illustrated by a risk analysis of type 1 diabetes mellitus (T1D) in the Finnish population, based on the HLA-A, HLA-B and DRB1 human leucocyte antigen (HLA) information available for both ascertained sibships and a large number of unrelated individuals from the Finnish bone marrow donor registry. The heterozygous genotype DR3/DR4 at the DRB1 locus was associated with the lowest predictive probability of T1D free survival to the age of 15, the estimate being 0.936 (95% credible interval 0.926-0.945) compared to the average population T1D free survival probability of 0.995. SIGNIFICANCE: The proposed statistical method can be modified to other population-based family data ascertained from a disease registry provided that the ascertainment process is well documented, and that external information concerning the sizes of birth cohorts and a suitable reference sample are available. We confirm the earlier findings from the same data concerning the HLA-DR3

  7. Comparative genomic analysis of the WRKY III gene family in populus, grape, arabidopsis and rice.

    Science.gov (United States)

    Wang, Yiyi; Feng, Lin; Zhu, Yuxin; Li, Yuan; Yan, Hanwei; Xiang, Yan

    2015-09-08

    WRKY III genes have significant functions in regulating plant development and resistance. The WRKY gene family has been studied in many plant species; however, a comprehensive analysis of WRKY III genes is still lacking for the woody species poplar. Three representative lineages of flowering plants are incorporated in most analyses: Arabidopsis (a model annual herbaceous dicot), grape (a model perennial dicot) and rice (Oryza sativa, a model monocot). In this study, we identified 10, 6, 13 and 28 WRKY III genes in the genomes of Populus trichocarpa, grape (Vitis vinifera), Arabidopsis thaliana and rice (Oryza sativa), respectively. Phylogenetic analysis revealed that the WRKY III proteins could be divided into four clades. By microsynteny analysis, we found that the duplicated regions were more conserved between poplar and grape than with Arabidopsis or rice. We dated the duplications by Ks analysis of Populus WRKY III genes and demonstrated that all the blocks were formed after the divergence of monocots and dicots. Strong purifying selection has played a key role in the maintenance of WRKY III genes in Populus. Tissue expression analysis of the WRKY III genes in Populus revealed that five were most highly expressed in the xylem. We also performed quantitative real-time reverse transcription PCR analysis of WRKY III genes in Populus treated with salicylic acid, abscisic acid and polyethylene glycol to explore their stress-related expression patterns. This study highlighted the duplication and diversification of the WRKY III gene family in Populus and provided a comprehensive analysis of this gene family in the Populus genome. Our results indicated that the majority of WRKY III genes of Populus were expanded by large-scale gene duplication. The expression patterns of PtrWRKYIII genes indicate that they play important roles in the xylem during poplar growth and development, and may play a crucial role in defense against drought

  8. Photovoltaic venture analysis. Final report. Volume III. Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Costello, D.; Posner, D.; Schiffel, D.; Doane, J.; Bishop, C.

    1978-07-01

    This appendix contains a brief summary of a detailed description of alternative future energy scenarios which provide an overall backdrop for the photovoltaic venture analysis. Also included is a summary of a photovoltaic market/demand workshop, a summary of a photovoltaic supply workshop which used cross-impact analysis, and a report on photovoltaic array and system prices in 1982 and 1986. The results of a sectorial demand analysis for photovoltaic power systems used in the residential sector (single family homes), the service, commercial, and institutional sector (schools), and in the central power sector are presented. An analysis of photovoltaics in the electric utility market is given, and a report on the industrialization of photovoltaic systems is included. A DOE information memorandum regarding ''A Strategy for a Multi-Year Procurement Initiative on Photovoltaics (ACTS No. ET-002)'' is also included. (WHK)

  9. Control Analysis of Hazards Potential in Crude Distiller Unit III PT. Pertamina (Persero) Refinery Unit III Plaju Tahun 2011

    OpenAIRE

    Matariani, Ade; Hasyim, Hamzah; Faisya, Achmad Fickry

    2012-01-01

    Background: Activities in CDU III carry a high risk from various potential hazards; controlling these hazard potentials is therefore essential to reduce accidents and occupational diseases. The aim of this study was to analyze the controlling of hazards potential in CDU III PT. Pertamina (Persero) RU III Plaju in 2011. Method: This study was a qualitative study. The methods of data collection were in-depth interview and observation. The total of informants in this...

  10. Synthesis and physicochemical analysis of Sm (II, III) acetylacetone chelate complexes

    International Nuclear Information System (INIS)

    Kostyuk, N.N.; Dik, T.A.; Trebnikov, A.G.

    2004-01-01

    Sm (II, III) acetylacetone chelate complexes were synthesized by an electrochemical method. It was shown that anode dissolution of metallic samarium in acetylacetone leads to formation of the Sm (II, III) chelate complexes: xSm(acac)2 · ySm(acac)3 · zH(acac). Factors x, y and z depend on the quantity of electricity which flowed through the electrolysis cell. The compositions of the obtained substances were confirmed by physicochemical analysis (ultimate analysis, IR- and mass spectroscopy, and thermal analysis (thermogravimetry, isothermal warming-up and differential scanning calorimetry)). (Authors)

  11. Nondestructive neutron activation analysis of mineral materials. III

    International Nuclear Information System (INIS)

    Randa, Z.; Benada, J.; Kuncir, J.; Vobecky, M.

    1979-01-01

    A description is presented of sampling, calibration standards, the method of activation and measurement, activation product identification, the respective nuclear reactions, interfering admixtures, and pre-activation operations. The analysis is described of sulphides, halogenides, oxides, sulphates, carbonates, phosphates, silicates, aluminosilicates, composite minerals containing lanthanides, rocks, tektites, meteorites, and plant materials. The method allows determining mainly F, Mg, Al, Ti, V, Nb, Rh, and I, which cannot be determined by long-term activation (LTA). It is more sensitive than LTA in determining Ca, Cu, In, and Dy. The analysis takes less time, and irradiation and measurement are less costly. The main mineral components are quickly found. (M.K.)

  12. Morphometric Analysis of Mandibular Growth in Skeletal Class III Malocclusion

    Directory of Open Access Journals (Sweden)

    Jenny Zwei-Chieng Chang

    2006-01-01

    Conclusion: We conclude that thin-plate spline analysis and the finite element morphometric method are efficient for the localization and quantification of size and shape changes that occur during mandibular growth. Plots of maximum and minimum principal directions can provide useful information about the trends of growth changes.

  13. Adiabatic analysis of collisions. III. Remarks on the spin model

    International Nuclear Information System (INIS)

    Fano, U.

    1979-01-01

    Analysis of a spin-rotation model illustrates how transitions between adiabatic channel states stem from the second, rather than from the first, rate of change of these states, provided that appropriate identification of channels and scaling of the independent variable are used. These remarks, like the earlier development of a post-adiabatic approach, aim at elucidating the surprising success of approximate separation of variables in the treatment of complex mechanical systems

  14. Data assimilation and uncertainty analysis of environmental assessment problems--an application of Stochastic Transfer Function and Generalised Likelihood Uncertainty Estimation techniques

    International Nuclear Information System (INIS)

    Romanowicz, Renata; Young, Peter C.

    2003-01-01

    Stochastic Transfer Function (STF) and Generalised Likelihood Uncertainty Estimation (GLUE) techniques are outlined and applied to an environmental problem concerned with marine dose assessment. The goal of both methods in this application is the estimation and prediction of the environmental variables, together with their associated probability distributions. In particular, they are used to estimate the amount of radionuclides transferred to marine biota from a given source: the British Nuclear Fuel Ltd (BNFL) repository plant in Sellafield, UK. The complexity of the processes involved, together with the large dispersion and scarcity of observations regarding radionuclide concentrations in the marine environment, require efficient data assimilation techniques. In this regard, the basic STF methods search for identifiable, linear model structures that capture the maximum amount of information contained in the data with a minimal parameterisation. They can be extended for on-line use, based on recursively updated Bayesian estimation and, although applicable to only constant or time-variable parameter (non-stationary) linear systems in the form used in this paper, they have the potential for application to non-linear systems using recently developed State Dependent Parameter (SDP) non-linear STF models. The GLUE-based methods, on the other hand, formulate the problem of estimation using a more general Bayesian approach, usually without prior statistical identification of the model structure. As a result, they are applicable to almost any linear or non-linear stochastic model, although they are much less efficient both computationally and in their use of the information contained in the observations. As expected in this particular environmental application, it is shown that the STF methods give much narrower confidence limits for the estimates due to their more efficient use of the information contained in the data. Exploiting Monte Carlo Simulation (MCS) analysis
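    The GLUE idea can be sketched with a toy first-order decay model standing in for the transfer function: parameter values are sampled from a broad prior, scored with an informal likelihood measure, thresholded into a "behavioural" set, and the retained samples' normalized weights drive estimation and prediction. The model, likelihood measure, and threshold below are arbitrary illustrations, not those of the marine dose application.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the environmental model: first-order decay,
# c(t) = c0 * exp(-k * t).  Synthetic observations use k = 0.3 plus noise.
t = np.linspace(0.0, 10.0, 25)
obs = 5.0 * np.exp(-0.3 * t) + rng.normal(0.0, 0.1, t.size)

def model(k):
    """Predicted concentration time series for decay rate k."""
    return 5.0 * np.exp(-k * t)

# GLUE: Monte Carlo sample the parameter from a broad uniform prior,
# score each sample, keep the behavioural ones, and weight them.
ks = rng.uniform(0.01, 1.0, 5000)
sse = np.array([np.sum((model(k) - obs) ** 2) for k in ks])
lik = np.exp(-sse / sse.min())      # informal likelihood measure
behavioural = lik > 0.01            # acceptance threshold (a subjective choice)
w = lik[behavioural] / lik[behavioural].sum()
k_mean = float(np.sum(w * ks[behavioural]))
```

    Predictive uncertainty bounds follow by propagating every behavioural parameter set through the model and taking likelihood-weighted quantiles of the resulting ensemble at each time step, which is where GLUE's wider limits relative to STF become visible.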

  15. Birmingham COPD Cohort: a cross-sectional analysis of the factors associated with the likelihood of being in paid employment among people with COPD

    Directory of Open Access Journals (Sweden)

    Rai KK

    2017-01-01

    Kiran K Rai,1 Rachel E Jordan,1 W Stanley Siebert,2 Steven S Sadhra,3 David A Fitzmaurice,1 Alice J Sitch,1 Jon G Ayres,1,3 Peymané Adab1 1Institute of Applied Health Research, 2The Department of Business and Labour Economics, 3Institute of Clinical Sciences, University of Birmingham, Edgbaston, Birmingham, UK Background: Employment rates among those with chronic obstructive pulmonary disease (COPD) are lower than those without COPD, but little is known about the factors that affect COPD patients’ ability to work. Methods: Multivariable analysis of the Birmingham COPD Cohort Study baseline data was used to assess the associations between lifestyle, clinical, and occupational characteristics and likelihood of being in paid employment among working-age COPD patients. Results: In total, 608 of 1,889 COPD participants were of working age, of whom 248 (40.8% were in work. Older age (60–64 years vs 30–49 years: odds ratio [OR] =0.28; 95% confidence interval [CI] =0.12–0.65, lower educational level (no formal qualification vs degree/higher level: OR =0.43; 95% CI =0.19–0.97, poorer prognostic score (highest vs lowest quartile of modified body mass index, airflow obstruction, dyspnea, and exercise (BODE score: OR =0.10; 95% CI =0.03–0.33, and history of high occupational exposure to vapors, gases, dusts, or fumes (VGDF; high VGDF vs no VGDF exposure: OR =0.32; 95% CI =0.12–0.85 were associated with a lower probability of being employed. Only the degree of breathlessness of BODE was significantly associated with employment. Conclusion: This is the first study to comprehensively assess the characteristics associated with employment in a community sample of people with COPD. Future interventions should focus on managing breathlessness and reducing occupational exposures to VGDF to improve the work capability among those with COPD. Keywords: chronic obstructive pulmonary disease, work, employed, breathlessness, severity, VGDF, UK

  16. Extended likelihood inference in reliability

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Beckman, R.J.; Waller, R.A.

    1978-10-01

    Extended likelihood methods of inference are developed in which subjective information in the form of a prior distribution is combined with sampling results by means of an extended likelihood function. The extended likelihood function is standardized for use in obtaining extended likelihood intervals. Extended likelihood intervals are derived for the mean of a normal distribution with known variance, the failure-rate of an exponential distribution, and the parameter of a binomial distribution. Extended second-order likelihood methods are developed and used to solve several prediction problems associated with the exponential and binomial distributions. In particular, such quantities as the next failure-time, the number of failures in a given time period, and the time required to observe a given number of failures are predicted for the exponential model with a gamma prior distribution on the failure-rate. In addition, six types of life testing experiments are considered. For the binomial model with a beta prior distribution on the probability of nonsurvival, methods are obtained for predicting the number of nonsurvivors in a given sample size and for predicting the required sample size for observing a specified number of nonsurvivors. Examples illustrate each of the methods developed. Finally, comparisons are made with Bayesian intervals in those cases where these are known to exist
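    The flavor of these binomial predictions can be sketched: with a Beta(a, b) prior on the probability of nonsurvival, the predictive distribution of the number of nonsurvivors in a future sample of size n is beta-binomial. The prior parameters and sample size below are invented for illustration, and this is only the standard Bayesian predictive calculation, not the extended-likelihood machinery of the report itself.

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    """Logarithm of the Beta function, via log-gamma for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def pred_nonsurvivors(k, n, a, b):
    """Predictive (beta-binomial) probability of k nonsurvivors among n
    future units when the nonsurvival probability has a Beta(a, b) prior."""
    return comb(n, k) * exp(log_beta(a + k, b + n - k) - log_beta(a, b))

# Illustrative prior Beta(2, 8) (prior mean 0.2); predict the number of
# nonsurvivors among n = 10 future units.
pmf = [pred_nonsurvivors(k, 10, 2.0, 8.0) for k in range(11)]
mean = sum(k * p for k, p in enumerate(pmf))  # equals n * a / (a + b)
```

    The inverse problem, predicting the sample size required to observe a specified number of nonsurvivors, follows from the same predictive distribution by accumulating its tail probabilities.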

  17. Predicting Porosity and Permeability for the Canyon Formation, SACROC Unit (Kelly-Snyder Field), Using the Geologic Analysis via Maximum Likelihood System

    International Nuclear Information System (INIS)

    Reinaldo Gonzalez; Scott R. Reeves; Eric Eslinger

    2007-01-01

    , with high vertical resolution, could be generated for many wells. This procedure makes it possible to populate any well location with core-scale estimates of porosity and permeability and rock types, facilitating the application of geostatistical characterization methods. The first step was to discriminate rock types of similar depositional environment and/or reservoir quality (RQ) using a specific clustering technique. The approach utilized a model-based, probabilistic clustering procedure called GAMLS (Geologic Analysis via Maximum Likelihood System), which is based on maximum likelihood principles. During clustering, samples (data at each digitized depth from each well) are probabilistically assigned to a previously specified number of clusters with a fractional probability that varies between zero and one
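
    GAMLS itself is not shown in the record; as a minimal stand-in, the fractional cluster probabilities described above can be illustrated with an expectation-maximization fit of a two-component Gaussian mixture to synthetic one-dimensional "log" data (invented for illustration):

```python
import numpy as np

# Minimal sketch (not GAMLS itself): model-based probabilistic clustering via
# EM for a two-component 1-D Gaussian mixture. Each sample receives a
# fractional membership probability between zero and one, as described above.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(6.0, 1.0, 200)])

mu = np.array([x.min(), x.max()])   # crude initialization at the extremes
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):                 # EM iterations
    # E-step: responsibilities (fractional cluster probabilities per sample)
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update mixture parameters from the weighted samples
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

print("cluster means:", np.sort(mu))
```

    Each row of `resp` holds the fractional probabilities with which one sample belongs to the two clusters; in GAMLS these play the role of the probabilistic rock-type assignments.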

  18. Computational analysis of the SRS Phase III salt disposition alternatives

    International Nuclear Information System (INIS)

    Dimenna, R.A.

    2000-01-01

    In late 1997, the In-Tank Precipitation (ITP) facility was shut down, and an evaluation of alternative methods to process the liquid high-level waste stored in the Savannah River Site High-Level Waste storage tanks was begun. The objective was to determine whether another process might avoid the operational difficulties encountered with ITP for a lower cost than modifying the existing facility. A structured approach was used to evaluate the proposed alternatives on a common basis and identify the best one. Results from the computational analysis were a key part of the input used to select a primary and a secondary salt disposition alternative. This paper describes the process by which the computational needs were identified, addressed, and accomplished with a limited staff under stringent schedule constraints

  19. NEW SUNS IN THE COSMOS. III. MULTIFRACTAL SIGNATURE ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, D. B. de; Nepomuceno, M. M. F.; Junior, P. R. V. de Moraes; Chagas, M. L. Das; Bravo, J. P.; Costa, A. D.; Martins, B. L. Canto; Medeiros, J. R. De [Departamento de Física, Universidade Federal do Rio Grande do Norte, 59072-970 Natal, RN (Brazil); Lopes, C. E. F. [SUPA Wide-Field Astronomy Unit, Institute for Astronomy, School of Physics and Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ (United Kingdom); Leão, I. C. [European Southern Observatory, Karl-Schwarzschild-Str. 2, D-85748 Garching (Germany)

    2016-11-01

    In the present paper, we investigate the multifractality signatures in hourly time series extracted from the CoRoT spacecraft database. Our analysis is intended to highlight the possibility that astrophysical time series can be members of a particular class of complex and dynamic processes, which require several photometric variability diagnostics to characterize their structural and topological properties. To achieve this goal, we search for contributions due to nonlinear temporal correlation and effects caused by heavier tails than the Gaussian distribution, using a detrending moving average algorithm for one-dimensional multifractal signals (MFDMA). We observe that the correlation structure is the main source of multifractality, while the heavy-tailed distribution plays a minor role in generating the multifractal effects. Our work also reveals that the rotation period of stars is inherently scaled by the degree of multifractality. As a result, analyzing the multifractal degree of these series, we uncover an evolution of multifractality from shorter to longer periods.
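
    As a hedged illustration of the scaling step inside a detrending moving average analysis (only the monofractal, q = 2 core; a full MFDMA sweeps many moments q), one can detrend a simulated Brownian profile with a centered moving average and read the scaling exponent from the log-log slope:

```python
import numpy as np

# Simplified sketch of the q = 2 core of a detrending moving average (DMA)
# analysis: detrend the profile with a centered moving average, measure the
# fluctuation F(n) at several window sizes n, and estimate the scaling
# exponent from the slope of log F(n) versus log n. The input is a simulated
# Brownian profile, for which the exponent should be close to 0.5.
rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(20_000))

def dma_fluctuation(y, n):
    kernel = np.ones(n) / n
    trend = np.convolve(y, kernel, mode="valid")
    start = (n - 1) // 2                      # align the centered window
    resid = y[start : start + len(trend)] - trend
    return np.sqrt(np.mean(resid ** 2))

windows = np.array([16, 32, 64, 128, 256, 512])
F = np.array([dma_fluctuation(y, n) for n in windows])
H = np.polyfit(np.log(windows), np.log(F), 1)[0]
print(f"estimated scaling exponent: {H:.2f}")
```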

  20. The behavior of the likelihood ratio test for testing missingness

    OpenAIRE

    Hens, Niel; Aerts, Marc; Molenberghs, Geert; Thijs, Herbert

    2003-01-01

    To assess the sensitivity of conclusions to model choices in the context of selection models for non-random dropout, one can compare the different missingness mechanisms with each other, e.g., by likelihood ratio tests. The finite-sample behavior of the null distribution and the power of the likelihood ratio test are studied under a variety of missingness mechanisms. Keywords: missing data; sensitivity analysis; likelihood ratio test; missingness mechanisms
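
    The flavor of such a finite-sample study can be sketched in a toy setting (ordinary normal-mean testing, not the selection-model framework of the paper): simulate the likelihood ratio statistic under the null and compare its empirical size with the asymptotic chi-square reference.

```python
import numpy as np
from scipy import stats

# Illustrative sketch (not the paper's setting): finite-sample null
# distribution of a likelihood ratio statistic for H0: mu = 0 vs H1: mu free
# in a normal model with known unit variance, compared against the
# asymptotic chi-square(1) critical value.
rng = np.random.default_rng(2)
n, reps = 30, 5_000
lrt = np.empty(reps)
for i in range(reps):
    x = rng.standard_normal(n)       # data generated under H0
    lrt[i] = n * x.mean() ** 2       # -2 log LR for known unit variance

empirical = np.mean(lrt > stats.chi2.ppf(0.95, df=1))
print(f"empirical size at nominal 5%: {empirical:.3f}")
```

    An empirical size close to the nominal 5% indicates the asymptotic reference is adequate; in the missingness setting the paper studies, such agreement can fail.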

  1. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    accuracy, demographic parameters from three simulated data sets that vary in the magnitude of a founder event and a skew in the effective population size of the X chromosome relative to the autosomes. The behavior of the Markov chain is also examined and shown to converge to its stationary distribution, while also showing high levels of parameter mixing. The analysis of three pairwise comparisons of sub-Saharan African human populations with non-African human populations does not provide unequivocal support for a strong non-African founder event from these nuclear data. The estimates do, however, suggest a skew in the ratio of X chromosome to autosome effective population size that is greater than one. However, in all three cases, the 95% highest posterior density interval for this ratio does include three-fourths, the value expected under an equal breeding sex ratio. Conclusion: The implementation of composite and approximate likelihood methods in a framework that includes MCMCMC demographic parameter estimation shows great promise for being flexible and computationally efficient enough to scale up to the level of whole-genome polymorphism and divergence analysis. Further work must be done to characterize the effects of the assumption of linkage equilibrium among genomic regions that is crucial to the validity of applying the composite likelihood method.

  2. Ethnicity and skeletal Class III morphology: a pubertal growth analysis using thin-plate spline analysis.

    Science.gov (United States)

    Alkhamrah, B; Terada, K; Yamaki, M; Ali, I M; Hanada, K

    2001-01-01

    A longitudinal retrospective study using thin-plate spline analysis was used to investigate skeletal Class III etiology in Japanese female adolescents. Headfilms of 40 subjects were chosen from the archives of the Orthodontic department at Niigata University Dental Hospital, and were traced at IIIB and IVA Hellman dental ages. Twenty-eight homologous landmarks, representing hard and soft tissue, were digitized. These were used to reproduce a consensus for the profilogram, craniomaxillary complex, mandible, and soft tissue for each age and skeletal group. Generalized least-squares analysis revealed a significant shape difference between age-matched groups. The maxillary total spline and partial warps (PW) 3 and 2 showed a maxillary retrusion at stage IIIB opposite an acute cranial base at stage IVA. The mandibular total spline and PW4 and PW5 showed changes affecting most landmarks and their spatial interrelationship, especially a stretch along the articulare-pogonion axis. In soft tissue analysis, PW8 showed large and local changes which paralleled the underlying hard tissue components. Allometry of the mandible and anisotropy of the cranial base, the maxilla, and the mandible asserted the complexity of craniofacial growth and the difficulty of predicting its outcome.
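
    A minimal sketch of the thin-plate spline warping that underlies this kind of landmark analysis, using SciPy's RBFInterpolator with invented landmark coordinates rather than the study's cephalometric data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sketch of the deformation idea behind thin-plate spline analysis: warp one
# landmark configuration onto another with a thin-plate radial basis
# function. Landmark coordinates below are invented for illustration.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], dtype=float)
dst = src + np.array([[0.05, 0.0], [0.0, 0.1], [-0.05, 0.0],
                      [0.0, -0.1], [0.1, 0.1]])   # target landmarks

# One interpolator per output coordinate gives the 2-D warping map
warp_x = RBFInterpolator(src, dst[:, 0], kernel="thin_plate_spline")
warp_y = RBFInterpolator(src, dst[:, 1], kernel="thin_plate_spline")

mapped = np.column_stack([warp_x(src), warp_y(src)])
print("max landmark error:", np.abs(mapped - dst).max())
```

    With zero smoothing the warp reproduces the target landmarks exactly; the partial warps discussed above decompose such a deformation by spatial scale.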

  3. Factors Affecting Adjuvant Therapy in Stage III Pancreatic Cancer—Analysis of the National Cancer Database

    Directory of Open Access Journals (Sweden)

    Mridula Krishnan

    2017-08-01

    Full Text Available Background: Adjuvant therapy after curative resection is associated with a survival benefit in stage III pancreatic cancer. We analyzed the factors affecting the outcome of adjuvant therapy in stage III pancreatic cancer and compared overall survival with different modalities of adjuvant treatment. Methods: This is a retrospective study of patients with stage III pancreatic cancer listed in the National Cancer Database (NCDB) who were diagnosed between 2004 and 2012. Patients were stratified based on the adjuvant therapy they received. Unadjusted Kaplan-Meier and multivariable Cox regression analyses were performed. Results: We analyzed a cohort of 1,731 patients who were recipients of adjuvant therapy for stage III pancreatic cancer within the limits of our database. Patients who received adjuvant chemoradiation had the longest postdiagnosis survival time, followed by patients who received adjuvant chemotherapy, and finally patients who received no adjuvant therapy. On multivariate analysis, advancing age and Medicaid coverage were associated with worse survival, whereas Spanish origin and a lower Charlson comorbidity score were associated with better survival. Conclusions: Our study is the largest analysis using the NCDB to address the effects of adjuvant therapy specifically in stage III pancreatic cancer. Within the limits of our study, the survival benefit with adjuvant therapy was more apparent with longer duration from the date of diagnosis.

  4. Stability of Tl(III) in the context of speciation analysis of thallium in plants.

    Science.gov (United States)

    Sadowska, Monika; Biaduń, Ewa; Krasnodębska-Ostręga, Beata

    2016-02-01

    The paper presents both "good" and "bad" results obtained during speciation analysis of thallium in plant tissues of a hyperaccumulator of this metal. The object was white mustard - Sinapis alba L. Traces of trivalent thallium were found in this plant. The crucial point of this study (especially in the case of such an unstable thallium form as Tl(III)) was to prove that the presence of Tl(III) was not caused by the procedure of sample preparation itself, and that the whole analytical method provides reliable results. Choosing the method for conservation of the initial speciation, achieving extraction with the highest efficiency, and proving the correctness of the obtained data were the most difficult parts of the presented study. It was found that: both freezing and drying cause significant changes in the speciation of thallium; quantitative analysis could be performed only with fresh tissues of mustard plants; only short-term storage of an extract from fresh plant tissues is possible; the methodology is not the source of thallium(III); only the presence of DTPA can greatly limit the reduction of Tl(III) to Tl(I) (up to 1-3%); and UV irradiation results in disintegration of the Tl(III)-DTPA complex in the presence of plant matrix (reduction up to 90%). Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Analysis of gold(I/III)-complexes by HPLC-ICP-MS demonstrates gold(III) stability in surface waters.

    Science.gov (United States)

    Ta, Christine; Reith, Frank; Brugger, Joël; Pring, Allan; Lenehan, Claire E

    2014-05-20

    Understanding the form in which gold is transported in surface- and groundwaters underpins our understanding of gold dispersion and (bio)geochemical cycling. Yet, to date, there are no direct techniques capable of identifying the oxidation state and complexation of gold in natural waters. We present a reversed phase ion-pairing HPLC-ICP-MS method for the separation and determination of aqueous gold(III)-chloro-hydroxyl, gold(III)-bromo-hydroxyl, gold(I)-thiosulfate, and gold(I)-cyanide complexes. Detection limits for the gold species range from 0.05 to 0.30 μg L⁻¹. The [Au(CN)₂]⁻ gold cyanide complex was detected in five of six waters from tailings and adjacent monitoring bores of working gold mines. Contrary to thermodynamic predictions, evidence was obtained for the existence of Au(III)-complexes in circumneutral, hypersaline waters of a natural lake overlying a gold deposit in Western Australia. This first direct evidence for the existence and stability of Au(III)-complexes in natural surface waters suggests that Au(III)-complexes may be important for the transport and biogeochemical cycling of gold in surface environments. Overall, these results show that near-μg L⁻¹ enrichments of Au in environmental waters result from metastable ligands (e.g., CN⁻) as well as kinetically controlled redox processes leading to the stability of highly soluble Au(III)-complexes.

  6. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.

  7. CLONING, EXPRESSION, AND MUTATIONAL ANALYSIS OF RAT S-ADENOSYL-L-METHIONINE: ARSENIC (III) METHYLTRANSFERASE

    Science.gov (United States)

    CLONING, EXPRESSION, AND MUTATIONAL ANALYSIS OF RAT S-ADENOSYL-L-METHIONINE: ARSENIC(III) METHYLTRANSFERASEStephen B. Waters, Ph.D., Miroslav Styblo, Ph.D., Melinda A. Beck, Ph.D., University of North Carolina at Chapel Hill; David J. Thomas, Ph.D., U.S. Environmental...

  8. Obtaining reliable Likelihood Ratio tests from simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed param...

  9. Thin-plate spline analysis of the cranial base in subjects with Class III malocclusion.

    Science.gov (United States)

    Singh, G D; McNamara, J A; Lozanoff, S

    1997-08-01

    The role of the cranial base in the emergence of Class III malocclusion is not fully understood. This study determines deformations that contribute to a Class III cranial base morphology, employing thin-plate spline analysis on lateral cephalographs. A total of 73 children of European-American descent aged between 5 and 11 years with Class III malocclusion were compared with an equivalent group of subjects with a normal, untreated, Class I molar occlusion. The cephalographs were traced, checked and subdivided into seven age- and sex-matched groups. Thirteen points on the cranial base were identified and digitized. The datasets were scaled to an equivalent size, and statistical analysis indicated significant differences between average Class I and Class III cranial base morphologies for each group. Thin-plate spline analysis indicated that both affine (uniform) and non-affine transformations contribute toward the total spline for each average cranial base morphology at each age group analysed. For non-affine transformations, Partial warps 10, 8 and 7 had high magnitudes, indicating large-scale deformations affecting Bolton point, basion, pterygo-maxillare, Ricketts' point and articulare. In contrast, high eigenvalues associated with Partial warps 1-3, indicating localized shape changes, were found at tuberculum sellae, sella, and the frontonasomaxillary suture. It is concluded that large spatial-scale deformations affect the occipital complex of the cranial base and sphenoidal region, in combination with localized distortions at the frontonasal suture. These deformations may contribute to reduced orthocephalization or deficient flattening of the cranial base antero-posteriorly that, in turn, leads to the formation of a Class III malocclusion.

  10. Analysis of Prognostic Factors and Patterns of Recurrence in Patients With Pathologic Stage III Endometrial Cancer

    International Nuclear Information System (INIS)

    Patel, Samir; Portelance, Lorraine; Gilbert, Lucy; Tan, Leonard; Stanimir, Gerald; Duclos, Marie; Souhami, Luis

    2007-01-01

    Purpose: To retrospectively assess prognostic factors and patterns of recurrence in patients with pathologic Stage III endometrial cancer. Methods and Materials: Between 1989 and 2003, 107 patients with pathologic International Federation of Gynecology and Obstetrics Stage III endometrial adenocarcinoma confined to the pelvis were treated at our institution. Adjuvant radiotherapy (RT) was delivered to 68 patients (64%). The influence of multiple patient- and treatment-related factors on pelvic and distant control and overall survival (OS) was evaluated. Results: Median follow-up for patients at risk was 41 months. Five-year actuarial OS was significantly improved in patients treated with adjuvant RT (68%) compared with those with resection alone (50%; p = 0.029). Age, histology, grade, uterine serosal invasion, adnexal involvement, number of extrauterine sites, and treatment with adjuvant RT predicted for improved survival in univariate analysis. Multivariate analysis revealed that grade, uterine serosal invasion, and treatment with adjuvant RT were independent predictors of survival. Five-year actuarial pelvic control was improved significantly with the delivery of adjuvant RT (74% vs. 49%; p = 0.011). Depth of myometrial invasion and treatment with adjuvant RT were independent predictors of pelvic control in multivariate analysis. Conclusions: Multiple prognostic factors predicting for the outcome of pathologic Stage III endometrial cancer patients were identified in this analysis. In particular, delivery of adjuvant RT seems to be a significant independent predictor for improved survival and pelvic control, suggesting that pelvic RT should be routinely considered in the management of these patients
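
    The actuarial (Kaplan-Meier) survival estimates referred to above can be sketched as follows; the follow-up times and censoring flags are invented for illustration (1 = death observed, 0 = censored):

```python
import numpy as np

# Minimal sketch of the Kaplan-Meier estimator behind actuarial survival
# curves. Times and event flags are invented for illustration.
time = np.array([6, 12, 21, 27, 32, 39, 43, 55, 60, 64], dtype=float)
event = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 0])

order = np.argsort(time)
time, event = time[order], event[order]

surv = 1.0
curve = []
for i, (t, d) in enumerate(zip(time, event)):
    at_risk = len(time) - i          # subjects still under observation at t
    if d == 1:
        surv *= (at_risk - 1) / at_risk   # step down at each observed event
    curve.append((t, surv))
print(curve[-1])                     # survival estimate at the last time
```

    Censored subjects leave the risk set without stepping the curve down; a log-rank test or Cox model (as in the study) would then compare such curves between treatment groups.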

  11. Ego involvement increases doping likelihood.

    Science.gov (United States)

    Ring, Christopher; Kavussanu, Maria

    2018-08-01

    Achievement goal theory provides a framework to help understand how individuals behave in achievement contexts, such as sport. Evidence concerning the role of motivation in the decision to use banned performance enhancing substances (i.e., doping) is equivocal. The extant literature shows that dispositional goal orientation has been weakly and inconsistently associated with doping intention and use. It is possible that goal involvement, which describes the situational motivational state, is a stronger determinant of doping intention. Accordingly, the current study used an experimental design to examine the effects of goal involvement, manipulated using direct instructions and reflective writing, on doping likelihood in hypothetical situations among college athletes. The ego-involving goal increased doping likelihood compared to no goal and a task-involving goal. The present findings provide the first evidence that ego involvement can sway the decision to use doping to improve athletic performance.

  12. Further Analysis on the Mystery of the Surveyor III Dust Deposits

    Science.gov (United States)

    Metzger, Philip; Hintze, Paul; Trigwell, Steven; Lane, John

    2012-01-01

    The Apollo 12 lunar module (LM) landing near the Surveyor III spacecraft at the end of 1969 has remained the primary experimental verification of the predicted physics of plume ejecta effects from a rocket engine interacting with the surface of the moon. This was made possible by the return of the Surveyor III camera housing by the Apollo 12 astronauts, allowing detailed analysis of the composition of dust deposited by the LM plume. It was soon realized after the initial analysis of the camera housing that the LM plume tended to remove more dust than it had deposited. In the present study, coupons from the camera housing have been reexamined. In addition, plume effects recorded in landing videos from each Apollo mission have been studied for possible clues.

  13. Analysis of excess reactivity of JOYO MK-III performance test core

    International Nuclear Information System (INIS)

    Maeda, Shigetaka; Yokoyama, Kenji

    2003-10-01

    JOYO is currently being upgraded to the high-performance irradiation test bed 'JOYO MK-III core'. The MK-III core is divided into two fuel regions with different plutonium contents. To obtain a higher neutron flux, the active core height was reduced from 55 cm to 50 cm. The reflector subassemblies were replaced by shielding subassemblies in the outer two rows. Twenty of the MK-III outer core fuel subassemblies in the performance test core were partially burned in the transition core. Four irradiation test rigs, which do not contain any fuel material, were loaded in the center of the performance test core. In order to evaluate the excess reactivity of the MK-III performance test core accurately, we evaluated it by applying not only the JOYO MK-II core management code system MAGI, but also the MK-III core management code system HESTIA, the JUPITER standard analysis method and the Monte Carlo method with the JFS-3-J3.2R content set. The excess reactivity evaluations obtained by the JUPITER standard analysis method were corrected to results based on transport theory with zero mesh size in space and angle. A bias factor based on the MK-II 35th core, whose sensitivity was similar to that of the MK-III performance test core, was also applied, except in the case where an adjusted nuclear cross-section library was used. Exact three-dimensional, pin-by-pin geometry and continuous-energy cross sections were used in the Monte Carlo calculation. The estimated error components associated with cross-sections, method correction factors and the bias factor were combined based on Takeda's theory. The independently calculated values agree well and range from 2.8 to 3.4%Δk/kk'. The calculation result of the MK-III core management code system HESTIA was 3.13%Δk/kk'. The estimated errors for the bias method range from 0.1 to 0.2%Δk/kk'. The error in the case using the adjusted cross-section library was 0.3%Δk/kk'. (author)

  14. Analysis of gas-liquid metal two-phase flows using a reactor safety analysis code SIMMER-III

    International Nuclear Information System (INIS)

    Suzuki, Tohru; Tobita, Yoshiharu; Kondo, Satoru; Saito, Yasushi; Mishima, Kaichiro

    2003-01-01

    SIMMER-III, a safety analysis code for liquid-metal fast reactors (LMFRs), includes a momentum exchange model based on conventional correlations for ordinary gas-liquid flows, such as an air-water system. From the viewpoint of safety evaluation of core disruptive accidents (CDAs) in LMFRs, we need to confirm that the code can predict the two-phase flow behaviors with high liquid-to-gas density ratios formed during a CDA. In the present study, the momentum exchange model of SIMMER-III was assessed and improved using experimental data of two-phase flows containing liquid metal, for which fundamental information, such as bubble shapes, void fractions and velocity fields, has been lacking. It was found that the original SIMMER-III can suitably represent high liquid-to-gas density ratio flows including the ellipsoidal bubbles seen at lower gas fluxes. In addition, the adoption of Kataoka-Ishii's correlation has improved the accuracy of SIMMER-III for gas-liquid metal flows with the cap-shaped bubbles identified at higher gas fluxes. Moreover, a new procedure, in which an appropriate drag coefficient can be automatically selected according to bubble shape, was developed. Through this work, the reliability and precision of SIMMER-III with regard to bubbly flows have been substantially improved for various liquid-to-gas density ratios

  15. A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure.

    Science.gov (United States)

    Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L

    2018-01-01

    We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.

  16. Factors Associated with Young Adults’ Pregnancy Likelihood

    Science.gov (United States)

    Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan

    2014-01-01

    OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849
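
    A hedged sketch of how odds ratios like those reported above arise from a logistic regression, fit here by Newton-Raphson on simulated data (not the survey's) with a single binary predictor:

```python
import numpy as np

# Sketch: fit a logistic regression by Newton-Raphson and exponentiate a
# coefficient to obtain an odds ratio. Data are simulated for illustration;
# x is a single binary risk factor with a true odds ratio of 3.
rng = np.random.default_rng(3)
n = 4_000
x = rng.integers(0, 2, n).astype(float)
logit = -1.0 + np.log(3.0) * x
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([np.ones(n), x])         # intercept + predictor
beta = np.zeros(2)
for _ in range(25):                          # Newton-Raphson iterations
    mu = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted probabilities
    W = mu * (1.0 - mu)
    grad = X.T @ (y - mu)                    # score vector
    hess = X.T @ (X * W[:, None])            # observed information
    beta += np.linalg.solve(hess, grad)

print("estimated odds ratio:", np.exp(beta[1]))
```

    With several predictors, each exponentiated coefficient is an adjusted odds ratio, which is how profiles like "attended religious services frequently (OR, 3.0)" are read.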

  17. From reads to genes to pathways: differential expression analysis of RNA-Seq experiments using Rsubread and the edgeR quasi-likelihood pipeline [version 2; referees: 5 approved]

    Directory of Open Access Journals (Sweden)

    Yunshun Chen

    2016-08-01

    Full Text Available In recent years, RNA sequencing (RNA-seq has become a very widely used technology for profiling gene expression. One of the most common aims of RNA-seq profiling is to identify genes or molecular pathways that are differentially expressed (DE between two or more biological conditions. This article demonstrates a computational workflow for the detection of DE genes and pathways from RNA-seq data by providing a complete analysis of an RNA-seq experiment profiling epithelial cell subsets in the mouse mammary gland. The workflow uses R software packages from the open-source Bioconductor project and covers all steps of the analysis pipeline, including alignment of read sequences, data exploration, differential expression analysis, visualization and pathway analysis. Read alignment and count quantification is conducted using the Rsubread package and the statistical analyses are performed using the edgeR package. The differential expression analysis uses the quasi-likelihood functionality of edgeR.

  18. Attitude towards, and likelihood of, complaining in the banking ...

    African Journals Online (AJOL)

    aims to determine customers' attitudes towards complaining as well as their likelihood of voicing a .... is particularly powerful and impacts greatly on customer satisfaction and retention. ...... 'Cross-national analysis of hotel customers' attitudes ...

  19. Three-dimensional heat transfer analysis of the Doublet III beamline calorimeter

    International Nuclear Information System (INIS)

    Kamperschroer, J.H.; Pipkins, J.F.

    1979-10-01

    A general three-dimensional analysis has been formulated to study the flow of heat in a neutral beam calorimeter. The boundary value problem with an arbitrary incident heat flux has been solved using Fourier analysis and Laplace transform techniques. A general solution has been obtained and subsequently studied using numerical techniques as applied to the particular geometry and incident heat flux conditions of the Doublet III injection system. Negligible errors result in unfolding the incident heat flux through the use of thermocouples located near the rear surface, if data taking is initiated at the proper time and proceeds at a sufficiently rapid rate
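
    A simplified one-dimensional analogue of the calorimeter problem (the study itself solved the 3-D boundary value problem with Fourier and Laplace techniques): explicit finite differences for a slab heated by a constant flux on the front face, with the temperature sampled at the rear face. All material properties and dimensions are assumed values, not those of the Doublet III system.

```python
import numpy as np

# 1-D explicit finite-difference heat conduction sketch: constant incident
# flux q on the front face, insulated rear face. All numbers are assumed.
alpha = 1e-5                      # thermal diffusivity, m^2/s (assumed)
L, nx = 0.02, 51                  # slab thickness (m) and grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha          # stable explicit time step (r = 0.4 < 0.5)
q, k = 1e6, 40.0                  # flux W/m^2 and conductivity W/(m K)

T = np.zeros(nx)                  # temperature rise above initial state
for _ in range(2_000):
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    T[0] = T[1] + q * dx / k      # heated front face (flux boundary)
    T[-1] = T[-2]                 # insulated rear face

print(f"front rise {T[0]:.1f} K, rear rise {T[-1]:.1f} K")
```

    The delay and attenuation of the rear-face response is exactly what makes unfolding the incident flux from near-rear thermocouples sensitive to when data taking starts and how fast it proceeds.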

  20. Determination of As(III) and As(V) in waters by chronopotentiometric stripping analysis

    Directory of Open Access Journals (Sweden)

    Švarc-Gajić Jaroslava V.

    2006-01-01

    Full Text Available Arsenic is a naturally occurring toxic and carcinogenic element. The degree of the toxicity depends on its chemical form and the concentration. Application of a sensitive, selective, simple and rapid method for detection and monitoring of different oxidation states of arsenic in waters is of great importance because the main route of population exposure is through drinking water. In this work chronopotentiometric stripping analysis (CSA) was used for the determination of As(III) and As(V) in tap, well, river and rain waters from Vojvodina (Serbia). A gold film electrode on a glassy carbon support was used as the working electrode. The experimental parameters of the technique were investigated and optimized. The detection limit of the method for an electrolysis time of 600 s was 2 μg/dm³ of As(III).

  1. Safety systems and safety analysis of the Qinshan phase III CANDU nuclear power plant

    International Nuclear Information System (INIS)

    Cai Jianping; Shen Sen; Barkman, N.

    1999-01-01

    The author introduces the Canadian nuclear reactor safety philosophy and the Qinshan Phase III CANDU NPP safety systems and safety analysis, which are designed and performed according to this philosophy. The concept of 'defence-in-depth' is a key element of the Canadian nuclear reactor safety philosophy. The design concepts of redundancy, diversity, separation, equipment qualification, quality assurance, and use of appropriate design codes and standards are adopted in the design. Four special safety systems as well as a set of reliable safety support systems are incorporated in the design of Qinshan phase III CANDU for accident mitigation. The assessment results for safety systems performance show that the fundamental safety criteria for public dose, and integrity of fuel, channels and the reactor building, are satisfied

  2. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    Science.gov (United States)

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.
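
The profile likelihood idea used above can be illustrated in a few lines: fix the parameter of interest on a grid, maximize the likelihood over the remaining (nuisance) parameters at each grid point, and read a confidence interval off the profiled curve via a chi-squared threshold. The Gaussian model and data below are illustrative assumptions, not the metabolic network model of the paper.

```python
import math

def neg_log_lik(mu, sigma, xs):
    # Gaussian negative log-likelihood (additive constants dropped)
    return len(xs) * math.log(sigma) + sum((x - mu) ** 2 for x in xs) / (2 * sigma ** 2)

def profile_nll(mu, xs):
    # Profile out the nuisance parameter: for fixed mu the ML sigma is closed-form
    sigma_hat = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return neg_log_lik(mu, sigma_hat, xs)

xs = [4.1, 5.0, 5.3, 4.6, 5.8, 4.9]
grid = [3.0 + 0.01 * i for i in range(400)]
nlls = [profile_nll(m, xs) for m in grid]
best = min(nlls)
mu_hat = grid[nlls.index(best)]
# Approximate 95% CI: keep mu with 2*(profile_nll - min) <= 3.84 (chi-square, 1 dof)
ci = [m for m, v in zip(grid, nlls) if 2 * (v - best) <= 3.84]
print(mu_hat, ci[0], ci[-1])
```

For the metabolic model the inner maximization has no closed form and must be done numerically, but the interval construction from the profiled curve is the same.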

  3. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël; Davison, Anthony C.; Genton, Marc G.

    2015-01-01

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
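
The logistic model named above is one of the few max-stable models with a closed-form bivariate distribution function, which makes it a convenient test bed for comparing likelihood estimators. A minimal sketch (unit Fréchet margins and a dependence parameter α ∈ (0, 1] are the standard parameterization; the numerical values are illustrative):

```python
import math

def logistic_cdf(z1, z2, alpha):
    # Bivariate logistic max-stable CDF with unit Frechet margins:
    # G(z1, z2) = exp(-(z1**(-1/alpha) + z2**(-1/alpha))**alpha)
    v = (z1 ** (-1.0 / alpha) + z2 ** (-1.0 / alpha)) ** alpha
    return math.exp(-v)

# alpha = 1 gives independence: the CDF factorises into its Frechet margins
indep = logistic_cdf(2.0, 3.0, 1.0)
product = math.exp(-1.0 / 2.0) * math.exp(-1.0 / 3.0)

# Stronger dependence (smaller alpha) raises the joint probability
strong = logistic_cdf(2.0, 2.0, 0.3)
weak = logistic_cdf(2.0, 2.0, 1.0)

def extremal_coefficient(alpha):
    # 2**alpha: 2 means independence, 1 means complete dependence of maxima
    return 2.0 ** alpha
```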

  4. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  5. Maximum likelihood of phylogenetic networks.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2006-11-01

    Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Besides the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf
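
Phylogeny-based likelihood criteria of this kind build on per-site substitution likelihoods. As a hedged illustration (the Jukes-Cantor model and toy sequences below are assumptions for brevity, not the authors' HGT criteria), the likelihood of a branch length separating two aligned sequences can be computed and maximized directly:

```python
import math

def jc_site_probs(t):
    # Jukes-Cantor: P(site identical) and P(site differs, per alternative base)
    # after branch length t (expected substitutions per site)
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e, 0.25 - 0.25 * e

def log_lik(t, seq_a, seq_b):
    # Log-likelihood of branch length t for two aligned sequences;
    # 0.25 is the stationary base frequency at the root
    p_same, p_diff = jc_site_probs(t)
    return sum(math.log(0.25 * (p_same if a == b else p_diff))
               for a, b in zip(seq_a, seq_b))

seq_a = "ACGTACGTACGTACGTACGT"
seq_b = "ACGTACGAACGTACGTACCT"  # 2 mismatches in 20 sites
grid = [0.001 * i for i in range(1, 1000)]
t_hat = max(grid, key=lambda t: log_lik(t, seq_a, seq_b))
# t_hat approximates the analytic JC distance -3/4 * ln(1 - 4p/3) with p = 0.1
```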

  6. Development and Performance of Detectors for the Cryogenic Dark Matter Search Experiment with an Increased Sensitivity Based on a Maximum Likelihood Analysis of Beta Contamination

    Energy Technology Data Exchange (ETDEWEB)

    Driscoll, Donald D [Case Western Reserve Univ., Cleveland, OH (United States)

    2004-05-01

    of a beta-eliminating cut based on a maximum-likelihood characterization described above.

  7. Maximum likelihood analysis of bioassay data from long-term follow-up of two refractory PuO2 inhalation cases.

    Science.gov (United States)

    Avtandilashvili, Maia; Brey, Richard; James, Anthony C

    2012-07-01

    The U.S. Transuranium and Uranium Registries' tissue donors 0202 and 0407 are the two most highly exposed of the 18 registrants who were involved in the 1965 plutonium fire accident at a defense nuclear facility. Material released during the fire was well characterized as "high fired" refractory plutonium dioxide with 0.32-μm mass median diameter. The extensive bioassay data from long-term follow-up of these two cases were used to evaluate the applicability of the Human Respiratory Tract Model presented by the International Commission on Radiological Protection in Publication 66 and its revision proposed by Gregoratto et al. in order to account for the observed long-term retention of insoluble material in the lungs. The maximum likelihood method was used to calculate the point estimates of intake and tissue doses and to examine the effect of different lung clearance, blood absorption, and systemic models on the goodness-of-fit and estimated dose values. With appropriate adjustments, the Gregoratto et al. particle transport model coupled with the customized blood absorption parameters yielded a credible fit to the bioassay data for both cases and predicted the Case 0202 liver and skeletal activities measured postmortem. PuO2 particles produced by the plutonium fire are extremely insoluble. About 1% of this material is absorbed from the respiratory tract relatively rapidly, at a rate of about 1 to 2 d⁻¹ (half-time about 8 to 16 h). The remainder (99%) is absorbed extremely slowly, at a rate of about 5 × 10⁻⁶ d⁻¹ (half-time about 400 y). When considering this situation, it appears that doses to other body organs are negligible in comparison to those to tissues of the respiratory tract. About 96% of the total committed weighted dose equivalent is contributed by the lungs. Doses absorbed by these workers' lungs were high: 3.2 Gy to AI and 6.5 Gy to LNTH for Case 0202 (18 y post-intake) and 3.2 Gy to AI and 55.5 Gy to LNTH for Case 0407 (43 y post-intake). This evaluation
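
The two-component absorption behaviour quoted above can be written down directly. The sketch below is an illustrative reconstruction, not the full HRTM/Gregoratto model: the half-times are taken from the ranges in the abstract (rapid ≈ 0.5 d, slow ≈ 400 y), and only absorption to blood is modelled.

```python
import math

DAY = 1.0
YEAR = 365.25 * DAY

def absorbed_fraction(t, f_rapid=0.01, half_rapid=0.5 * DAY, half_slow=400.0 * YEAR):
    # Fraction of the deposited material absorbed to blood by time t (days):
    # a small rapidly absorbed component plus a very slowly absorbed remainder
    lam_r = math.log(2.0) / half_rapid
    lam_s = math.log(2.0) / half_slow
    return (f_rapid * (1.0 - math.exp(-lam_r * t))
            + (1.0 - f_rapid) * (1.0 - math.exp(-lam_s * t)))

# Within days only the rapid ~1% has gone to blood; after 18 years the slow
# component has added only about 3%, consistent with lung doses dominating
print(absorbed_fraction(10.0), absorbed_fraction(18.0 * YEAR))
```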

  8. Comparative analysis of SLB for OPR1000 by using MEDUSA and CESEC-III codes

    International Nuclear Information System (INIS)

    Park, Jong Cheol; Park, Chan Eok; Kim, Shin Whan

    2005-01-01

    MEDUSA is a system thermal hydraulics code developed by Korea Power Engineering Company (KOPEC) for Non-LOCA and LOCA analysis, using two-fluid, three-field governing equations for two-phase flow. Detailed descriptions of the MEDUSA code are given in Reference. A lot of effort is now being made to investigate the applicability of the MEDUSA code, especially to Non-LOCA analysis, by comparing the analysis results with those from the current licensing code, CESEC-III. The comparative simulations of Pressurizer Level Control System (PLCS) Malfunction and Feedwater Line Break (FLB), which were accomplished by C.E. Park and M.T. Oh, respectively, already showed that the MEDUSA code is applicable to the analysis of Non-LOCA events. In this paper, detailed thermal hydraulic analyses for Steam Line Break (SLB) without loss of off-site power were performed using the MEDUSA code. The calculation results were also compared with those of CESEC-III for the OPR1000, for the purpose of code verification.

  9. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    Science.gov (United States)

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining the principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm, SD 7 mm. Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining the principal component analysis and the maximum likelihood principle enables growth modelling also in historic height data. Copyright (c) 2009 Elsevier GmbH. All rights reserved.

  10. Clinicopathological analysis of 91 cases of uterine cervical cancer (including 38 cases of CIN III)

    International Nuclear Information System (INIS)

    Obata, Naoko; Kamiya, Norio; Goto, Setsuko; Takahashi, Satoru

    2000-01-01

    A total of 91 cases of uterine cervical cancer, consisting of 38 cases of carcinoma in situ (CIN III) and 53 cases of stage I-IV cervical cancer, were retrospectively and clinicopathologically analyzed. The standard treatment given to these patients consisted of hysterectomy or conization for CIN III; observation of cases of mild to moderate dysplasia; radical hysterectomy plus pelvic lymph node dissection for stage I and II cervical cancer; and radiotherapy for stage III and IV cervical cancer. Postoperative irradiation consisted of irradiation of the whole pelvis with 40-50 Gy. The patients who were not treated surgically underwent 40 Gy external irradiation of the whole pelvis, followed by an additional 20 Gy with shielding and internal irradiation with an RALS. When lymph node metastasis was present, the nodes were irradiated with 40-50 Gy. The mean age of the 38 patients with CIN III was 45.2 years old, and they were para 0-4. In 24 (63.2%) of them the cancer was detected by cytodiagnosis as part of screening. Radical hysterectomy, simple hysterectomy, and conization were performed in 25 patients, 7 patients, and 6 patients, respectively. No recurrences have been detected, and the survival rate is 100%. The mean age of the 53 patients with cervical cancer stage I-IV was 62.4 years old, and they were para 0-10. There were 25 patients with stage I disease, 15 patients with stage II disease, 6 patients with stage III, and 7 patients with stage IV, and their 5-year survival rate was 82.4%, 68.8%, 66.7%, and 42.9%, respectively. Radioenteritis and radiocystitis occurred as adverse radiation effects. Pathologic factors influencing lymph node metastasis were examined by a multivariate analysis based on the data from 25 patients with stage I and II who underwent hysterectomy. 
The results of the analysis indicated the importance of screening and the choice of appropriate surgical method/technique, as well as the need for further investigation to determine the effective

  11. Market Analysis and Consumer Impacts Source Document. Part III. Consumer Behavior and Attitudes Toward Fuel Efficient Vehicles

    Science.gov (United States)

    1980-12-01

    This source document on motor vehicle market analysis and consumer impacts consists of three parts. Part III consists of studies and reviews on: consumer awareness of fuel efficiency issues; consumer acceptance of fuel efficient vehicles; car size ch...

  12. BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis, Version III

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W. III.

    1981-06-01

    This report is a condensed documentation for VERSION III of the BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis. An experienced analyst should be able to use this system routinely for solving problems by referring to this document. Individual reports must be referenced for details. This report covers basic input instructions and describes recent extensions to the modules as well as to the interface data file specifications. Some application considerations are discussed and an elaborate sample problem is used as an instruction aid. Instructions for creating the system on IBM computers are also given

  13. Synthesis, analysis and radiolysis of the cobalt(III) 8-hydroxyquinolinate complex

    International Nuclear Information System (INIS)

    Mestnik, S.A.C.; Silva, C.P.G. da.

    1981-11-01

    The cobalt(III) 8-hydroxyquinolinate complex was synthesized from a solution of cobalt(II). The compound was analysed by IR absorption spectroscopy, elemental analysis, and by determination of the number of ligands. The radiolytic degradation was followed by spectrophotometry after submitting samples of the 10⁻³ M complex in ethanolic solution to different doses of gamma radiation from a ⁶⁰Co source. The change of the maximum absorbance of the complex with different doses of gamma radiation and its UV-VIS absorption spectra are presented. The complex in the solid state was also irradiated with 6.9 Mrad of gamma radiation but did not show degradation. (Author) [pt

  14. BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis, Version III

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W. III.

    1981-06-01

    This report is a condensed documentation for VERSION III of the BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis. An experienced analyst should be able to use this system routinely for solving problems by referring to this document. Individual reports must be referenced for details. This report covers basic input instructions and describes recent extensions to the modules as well as to the interface data file specifications. Some application considerations are discussed and an elaborate sample problem is used as an instruction aid. Instructions for creating the system on IBM computers are also given.

  15. T-scan III system diagnostic tool for digital occlusal analysis in orthodontics - a modern approach.

    Science.gov (United States)

    Trpevska, Vesna; Kovacevska, Gordana; Benedeti, Alberto; Jordanov, Bozidar

    2014-01-01

    This systematic literature review was performed to establish the mechanism, methodology, characteristics, clinical application and opportunities of the T-Scan III System as a diagnostic tool for digital occlusal analysis in different fields of dentistry, particularly in orthodontics. Searching of electronic databases using MEDLINE and PubMed, hand searching of relevant key journals, and screening of reference lists of included studies, with no language restriction, was performed. Publications providing statistically examined data were included in the systematic review. Twenty potentially relevant Randomized Controlled Trials (RCTs) were identified. Only ten met the inclusion criteria. The literature demonstrates that using digital occlusal analysis with the T-Scan III System in orthodontics has a significant advantage with regard to the capability of measuring occlusal parameters in static positions and during dynamics of the mandible. Within the scope of this systematic review, there is evidence to support that the T-Scan system is rapid and accurate in identifying the distribution of tooth contacts, and it shows great promise as a clinical diagnostic screening device for occlusion and for improving the occlusion after various dental treatments. Additional clinical studies are required to expand the indication field of this system. The importance of using digital occlusal T-Scan analysis in orthodontics deserves further investigation.

  16. Post-test analysis of ROSA-III experiment Run 702

    International Nuclear Information System (INIS)

    Koizumi, Yasuo; Kikuchi, Osamu; Soda, Kunihisa

    1980-01-01

    The purpose of the ROSA-III experiment with a scaled BWR test facility is to examine primary coolant thermal-hydraulic behavior and the performance of the ECCS during a postulated loss-of-coolant accident of a BWR. The results provide information for verification and improvement of reactor safety analysis codes. Run 702 assumed a 200% split break at the recirculation pump suction line under average core power without ECCS activation. Post-test analysis of the Run 702 experiment was made with the computer code RELAP4J. Agreement between the calculated system pressure and the experimental one was good. However, the calculated heater surface temperatures were higher than the measured ones. Also, the axial temperature distribution differed in tendency from the experimental one. These results indicated the need to improve the analytical model of void distribution in the core and the nodalization in the pressure vessel, in order to make the analysis more realistic. The need for characterization tests of ROSA-III test facility components, such as the jet pump and piping form-loss coefficients, was also indicated; likewise, flow rate measurements must be increased and refined. (author)

  17. Strain distribution and defect analysis in III-nitrides by dynamical AFM analysis

    International Nuclear Information System (INIS)

    Minj, Albert; Cavalcoli, Daniela; Cavallini, Anna; Gamarra, Piero; Di Forte Poisson, Marie-Antoinette

    2013-01-01

    Here, we report on significant material information provided by semi-contact phase images in a wide range of hard III-nitride surfaces. We show that the phase contrast, which is fundamentally related to the energy dissipation during tip–surface interaction, is sensitive to the crystalline nature of the material and thus could potentially be used to determine the crystalline quality of thin nitride layers. Besides, we found that structural defects, especially threading dislocations and cracks, act as selective sites where energy mainly dissipates. Consequently, defects with very small dimensions in nitrides can actually be imaged with phase-contrast imaging. (paper)

  18. The approach to analysis of significance of flaws in ASME section III and section XI

    International Nuclear Information System (INIS)

    Cowan, A.

    1979-01-01

    ASME III Appendix G and ASME XI Appendix A describe linear elastic fracture mechanics methods to assess the significance of defects in thick-walled pressure vessels for nuclear reactor systems. The assessment of fracture toughness, K_Ic, is based upon recommendations made by a Task Group of the USA Pressure Vessel Research Committee and is dependent upon correlations with drop weight and Charpy V-notch data to give a lower bound of fracture toughness, K_IR. The methods used in the ASME Appendices are outlined noting that, whereas ASME III Appendix G defines a procedure for obtaining allowable pressure vessel loadings for normal service in the presence of a defect, ASME XI Appendix A defines methods for assessing the significance of defects (found by volumetric inspection) under normal and emergency and faulted conditions. The methods of analysis are discussed with respect to material properties, flaw characterisation, stress analysis and recommended safety factors; a short discussion is given on the applicability of the data and methods to other materials and non-nuclear structures. (author)

  19. Components of soft tissue deformations in subjects with untreated angle's Class III malocclusions: thin-plate spline analysis.

    Science.gov (United States)

    Singh, G D; McNamara, J A; Lozanoff, S

    1998-01-01

    While the dynamics of maxillo-mandibular allometry associated with the treatment modalities available for the management of Class III malocclusions are currently under investigation, the developmental aberration of the soft tissues in untreated Class III malocclusions requires specification. In this study, lateral cephalographs of 124 prepubertal European-American children (71 with untreated Class III malocclusion; 53 with Class I occlusion) were traced, and 12 soft-tissue landmarks digitized. The resultant geometries were scaled to an equivalent size, and the mean Class III and Class I configurations were compared. Procrustes analysis established a statistically significant difference between the two configurations, and thin-plate spline (TPS) analysis indicated that both affine and non-affine transformations contribute towards the deformation (total spline) of the averaged Class III soft-tissue configuration. For the non-affine transformations, partial warp 8 had the highest magnitude, indicating large-scale deformations visualized as a combination of columellar retrusion and lower labial protrusion. In addition, partial warp 5 also had a high magnitude, demonstrating upper labial vertical compression with antero-inferior elongation of the lower labio-mental soft-tissue complex. Thus, children with Class III malocclusions demonstrate antero-posterior and vertical deformations of the maxillary soft-tissue complex in combination with antero-inferior mandibular soft-tissue elongation. This pattern of deformations may represent gene-environment interactions, resulting in Class III malocclusions with characteristic phenotypes that are amenable to orthodontic and dentofacial orthopedic manipulations.

  20. Posterior distributions for likelihood ratios in forensic science.

    Science.gov (United States)

    van den Hout, Ardo; Alberink, Ivo

    2016-09-01

    Evaluation of evidence in forensic science is discussed using posterior distributions for likelihood ratios. Instead of eliminating the uncertainty by integrating (Bayes factor) or by conditioning on parameter values, uncertainty in the likelihood ratio is retained by parameter uncertainty derived from posterior distributions. A posterior distribution for a likelihood ratio can be summarised by the median and credible intervals. Using the posterior mean of the distribution is not recommended. An analysis of forensic data for body height estimation is undertaken. The posterior likelihood approach has been criticised both theoretically and with respect to applicability. This paper addresses the latter and illustrates an interesting application area. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
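
The summary advocated above (median and credible interval of the likelihood-ratio posterior, rather than the posterior mean) can be sketched by Monte Carlo: sample the uncertain parameters from their posteriors, form the likelihood ratio for each draw, and read off quantiles. The Beta posteriors and counts below are purely illustrative assumptions, not the body-height data of the paper.

```python
import random
import statistics

random.seed(1)

# Beta posteriors for P(E | H1) and P(E | H2) built from hypothetical
# match/non-match counts; the counts are illustrative only
def sample_p1():
    return random.betavariate(45 + 1, 5 + 1)    # H1: 45 matches, 5 non-matches

def sample_p2():
    return random.betavariate(8 + 1, 92 + 1)    # H2: 8 matches, 92 non-matches

n = 20000
lr_samples = sorted(sample_p1() / sample_p2() for _ in range(n))
median_lr = statistics.median(lr_samples)
lo, hi = lr_samples[int(0.025 * n)], lr_samples[int(0.975 * n)]
# Report the median and a 95% credible interval, not the posterior mean
print(median_lr, lo, hi)
```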

  1. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    Science.gov (United States)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loève and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.

  2. Revision and extension to the analysis of the third spectrum of bromine: Br III

    Science.gov (United States)

    Jabeen, S.; Tauheed, A.

    2015-03-01

    The spectrum of doubly ionized bromine (Br III) has been investigated in the vacuum ultraviolet wavelength region. Br²⁺ is an As-like ion with the ground configuration 4s²4p³, thus a 3-electron system possessing a complex structure. The theoretical prediction was made using Cowan's quasi-relativistic Hartree-Fock code with superposition of configurations involving the 4s4p⁴, 4s²4p²(4d+5d+6d+5s+6s+7s), 4s4p³(5p+4f), 4p⁴(4d+5s), 4s²4p5s5p, 4s4p²(4d²+5s²), 4s4p²4f² configurations for the even parity matrix and the 4s²4p³, 4s²4p²(5p+6p+4f+5f) configurations for the odd parity matrix. Several previously reported levels of Br III have been revised, and new configurations have been added to the analysis. The spectrum used for this work was recorded on a 3-m normal incidence spectrograph in the wavelength region of 400-1326 Å using a triggered vacuum spark source. One hundred and two energy levels belonging to the 4s²4p³, 4s4p⁴, 4s²4p²(4d+5d+6d+5s+6s+7s) configurations have been established, eighty-six being new. Two hundred and seventy-eight lines have been identified in this spectrum. The accuracy of our wavelength measurements for sharp and unblended lines is ±0.006 Å. The ionization potential of Br III was found to be 281,250±100 cm⁻¹ (34.870±0.012 eV).

  3. Revision and extension to the analysis of the third spectrum of tellurium: Te III

    International Nuclear Information System (INIS)

    Tauheed, A.; Naz, A.

    2011-01-01

    The spectrum of the doubly ionized tellurium atom (Te III) has been investigated in the vacuum ultraviolet wavelength region. The ground configuration of Te III is 5s²5p² and the excited configurations are of the type 5s²5p nl. Core excitation leads to the 5s5p³ configuration. Cowan's multi-configuration interaction code was utilized to predict the ion structure. The observed spectrum of tellurium was recorded on a 3-m normal incidence vacuum spectrograph of the Antigonish Laboratory (Canada) in the wavelength region of 300-2000 Å by using a triggered spark light source for the excitation of the spectrum. The 5s²5p² - [5s²5p(5d+6d+7d+6s+7s+8s) + 5s5p³] transition array has been analyzed. Previously reported levels by Joshi et al. have been confirmed, while the older analysis by Crooker and Joshi has been revised and extended to include the 5s²5p(5d, 6d, 7d, 6s, 7s, 8s) and 5s5p³ configurations. Least-squares-fitted parametric calculations were used to interpret the final results. One hundred and fifty spectral lines have been identified to establish 60 energy levels. Our wavelength accuracy for unblended and sharp lines is better than ±0.005 Å. The ionization potential of Te III was found to be 224,550 ± 300 cm⁻¹ (27.841 ± 0.037 eV).

  4. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
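
The weighting strategy described above can be sketched generically: each pair of sites enters the composite log-likelihood with a taper weight that drops pairs beyond the taper range. In the sketch below a bivariate Gaussian log-density stands in for the max-stable bivariate density (an assumption for brevity; the paper's models would supply their own pairwise densities):

```python
import itertools
import math

def pair_weight(d, taper_range):
    # Indicator taper: keep pairs within the taper range, drop distant ones
    return 1.0 if d <= taper_range else 0.0

def pair_loglik(x, y, rho):
    # Stand-in bivariate Gaussian log-density with correlation rho;
    # a max-stable model would contribute its own bivariate density here
    q = (x * x - 2.0 * rho * x * y + y * y) / (1.0 - rho * rho)
    return -0.5 * q - 0.5 * math.log(1.0 - rho * rho) - math.log(2.0 * math.pi)

def tapered_composite_loglik(coords, values, rho_of_dist, taper_range):
    # Weighted sum of pairwise log-likelihoods over all site pairs
    total = 0.0
    for i, j in itertools.combinations(range(len(coords)), 2):
        d = math.dist(coords[i], coords[j])
        w = pair_weight(d, taper_range)
        if w > 0.0:
            total += w * pair_loglik(values[i], values[j], rho_of_dist(d))
    return total

coords = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
values = [0.3, 0.5, -0.2, 1.4]
rho = lambda d: math.exp(-d)  # illustrative correlation decay with distance
full = tapered_composite_loglik(coords, values, rho, taper_range=100.0)
tapered = tapered_composite_loglik(coords, values, rho, taper_range=2.0)
```

Dropping the distant pairs is the computational saving the taper buys; in practice the taper range is tuned by maximizing measures of the Godambe information, as the abstract describes.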

  5. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan; Genton, Marc G.

    2014-01-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.

  6. Safety analysis of RA reactor operation, I-III, Part III - Environmental effect of the maximum credible accident

    International Nuclear Information System (INIS)

    Raisic, N.

    1963-02-01

    A maximum credible accident at the RA reactor would involve the release of fission products into the environment, resulting from fuel element failure or meltdown due to loss of coolant. The analysis presented in this report assumes that the reactor was operating at nominal power at the moment of the maximum credible accident. The report includes calculations of fission product activity at the moment of the accident, the total activity released during the accident, and the concentration of radioactive material in the air in the reactor neighbourhood, together with an analysis of the accident's environmental effects.

  7. Revision and extension to the analysis of the third spectrum of bromine: Br III

    International Nuclear Information System (INIS)

    Jabeen, S.; Tauheed, A.

    2015-01-01

    The spectrum of doubly ionized bromine (Br III) has been investigated in the vacuum ultraviolet wavelength region. Br²⁺ is an As-like ion with the ground configuration 4s²4p³, thus a three-electron system possessing a complex structure. The theoretical prediction was made using Cowan's quasi-relativistic Hartree–Fock code with superposition of configurations, involving the 4s4p⁴, 4s²4p²(4d+5d+6d+5s+6s+7s), 4s4p³(5p+4f), 4p⁴(4d+5s), 4s²4p5s5p, 4s4p²(4d²+5s²) and 4s4p²4f² configurations for the even parity matrix and the 4s²4p³ and 4s²4p²(5p+6p+4f+5f) configurations for the odd parity matrix. Several previously reported levels of Br III have been revised, and new configurations have been added to the analysis. The spectrum used for this work was recorded on a 3-m normal incidence spectrograph in the wavelength region 400–1326 Å using a triggered vacuum spark source. One hundred and two energy levels belonging to the 4s²4p³, 4s4p⁴ and 4s²4p²(4d+5d+6d+5s+6s+7s) configurations have been established, eighty-six of them new. Two hundred and seventy-eight lines have been identified in this spectrum. The accuracy of our wavelength measurements for sharp and unblended lines is ±0.006 Å. The ionization potential of Br III was found to be 281,250±100 cm⁻¹ (34.870±0.012 eV). - Highlights: • The spectrum of Br was recorded on a 3-m grating spectrograph with a triggered spark source. • Most of the known energy levels have been revised and further new configurations have been added. • Superposition-of-configurations calculations with relativistic corrections were made for theoretical predictions. • Radiative weighted oscillator strengths (gf) and radiative transition probabilities (gA) were calculated. • The ionization potential of Br III was determined experimentally

  8. Cost-utility analysis of adjuvant chemotherapy in patients with stage III colon cancer in Thailand.

    Science.gov (United States)

    Lerdkiattikorn, Panattharin; Chaikledkaew, Usa; Lausoontornsiri, Wirote; Chindavijak, Somjin; Khuhaprema, Thirawud; Tantai, Narisa; Teerawattananon, Yot

    2015-01-01

    In Thailand, there has been no economic evaluation study of adjuvant chemotherapy for stage III colon cancer patients after resection. This study aims to evaluate the cost-utility of all chemotherapy regimens currently used in Thailand compared with the adjuvant 5-fluorouracil/leucovorin (5-FU/LV) plus capecitabine as the first-line therapy for metastatic disease in patients with stage III colon cancer after resection. A cost-utility analysis was performed to estimate the relevant lifetime costs and health outcomes of chemotherapy regimens based on a societal perspective using a Markov model. The results suggested that the adjuvant 5-FU/LV plus capecitabine as the first-line therapy for metastatic disease would be the most cost-effective chemotherapy. The adjuvant FOLFOX and FOLFIRI as the first-line treatment for metastatic disease would be cost-effective with an incremental cost-effectiveness ratio of 299,365 Thai baht per QALY gained based on a societal perspective if both prices of FOLFOX and FOLFIRI were decreased by 40%.

  9. FEMAXI-III. An axisymmetric finite element computer code for the analysis of fuel rod performance

    International Nuclear Information System (INIS)

    Ichikawa, M.; Nakajima, T.; Okubo, T.; Iwano, Y.; Ito, K.; Kashima, K.; Saito, H.

    1980-01-01

    For the analysis of local deformation of fuel rods, which is closely related to PCI failure in LWRs, FEMAXI-III has been developed as an improved version based on the essential models of the FEMAXI-II, MIPAC, and FEAST codes. The major features of FEMAXI-III are as follows: elasto-plasticity, creep, pellet cracking, relocation, densification, hot pressing, swelling, fission gas release, and their interrelated effects are considered. Contact conditions between pellet and cladding are treated exactly, with sliding or sticking determined by iteration. Special emphasis is placed on creep and pellet cracking. For the former, an implicit algorithm is applied to improve numerical stability. For the latter, the pellet is treated as a non-tension material, and the recovery of pellet stiffness under compression is related to the initial relocation. Quadratic isoparametric elements are used. The skyline method is applied to solve the linear stiffness equations, reducing the required core memory. The basic performance of the code has proven satisfactory. (author)

  10. From reads to genes to pathways: differential expression analysis of RNA-Seq experiments using Rsubread and the edgeR quasi-likelihood pipeline [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Yunshun Chen

    2016-06-01

    In recent years, RNA sequencing (RNA-seq) has become a very widely used technology for profiling gene expression. One of the most common aims of RNA-seq profiling is to identify genes or molecular pathways that are differentially expressed (DE) between two or more biological conditions. This article demonstrates a computational workflow for the detection of DE genes and pathways from RNA-seq data by providing a complete analysis of an RNA-seq experiment profiling epithelial cell subsets in the mouse mammary gland. The workflow uses R software packages from the open-source Bioconductor project and covers all steps of the analysis pipeline, including alignment of read sequences, data exploration, differential expression analysis, visualization and pathway analysis. Read alignment and count quantification are conducted using the Rsubread package, and the statistical analyses are performed using the edgeR package. The differential expression analysis uses the quasi-likelihood functionality of edgeR.

  11. Removal of Cr(III) ions from salt solution by nanofiltration: experimental and modelling analysis

    Directory of Open Access Journals (Sweden)

    Kowalik-Klimczak Anna

    2016-09-01

    The aim of this study was an experimental and modelling analysis of the nanofiltration process used for the removal of chromium(III) ions from a salt solution characterized by low pH. The experimental results were interpreted with the Donnan and Steric Partitioning Pore (DSP) model based on the extended Nernst-Planck equation. In this model, one of the main parameters describing the retention of ions by the membrane is the pore dielectric constant. In this work, it was identified for various process pressures and feed compositions. The obtained results showed satisfactory agreement between the experimental and modelling data, meaning that the DSP model may be helpful for monitoring nanofiltration processes applied to the treatment of chromium tannery wastewater.

  12. ZE3RA: the ZEPLIN-III Reduction and Analysis package

    International Nuclear Information System (INIS)

    Neves, F; Chepel, V; DeViveiros, L; Lindote, A; Lopes, M I; Akimov, D Yu; Belov, V A; Burenkov, A A; Kobyakin, A S; Kovalenko, A G; Araújo, H M; Currie, A; Horn, M; Lebedenko, V N; Barnes, E J; Ghag, C; Hollingsworth, A; Edwards, B; Kalmus, G E; Lüscher, R

    2011-01-01

    ZE3RA is the software package responsible for processing the raw data from the ZEPLIN-III dark matter experiment and its reduction into a set of parameters used in all subsequent analyses. The detector is a liquid xenon time projection chamber with scintillation and electroluminescence signals read out by an array of 31 photomultipliers. The dual range 62-channel data stream is optimised for the detection of scintillation pulses down to a single photoelectron and of ionisation signals as small as those produced by single electrons. We discuss in particular several strategies related to data filtering, pulse finding and pulse clustering which are tuned using calibration data to recover the best electron/nuclear recoil discrimination near the detection threshold, where most dark matter elastic scattering signatures are expected. The software was designed assuming only minimal knowledge of the physics underlying the detection principle, allowing an unbiased analysis of the experimental results and easy extension to other detectors with similar requirements.

  13. Integrated safety analysis of rolapitant with coadministered drugs from phase II/III trials

    DEFF Research Database (Denmark)

    Barbour, S; Smit, T.; Wang, X

    2017-01-01

    … cytochrome P450 (CYP) 3A4, but it does inhibit CYP2D6 and breast cancer resistance protein (BCRP). To analyze potential drug-drug interactions between rolapitant and concomitant medications, this integrated safety analysis of four double-blind, randomized phase II or III studies of rolapitant examined adverse events by use versus non-use of drug substrates of CYP2D6 or BCRP. Patients and methods: Patients were randomized to receive either 180 mg oral rolapitant or placebo approximately 1-2 hours before chemotherapy in combination with a 5-hydroxytryptamine type 3 RA and dexamethasone. Data for treatment-emergent adverse events (TEAEs) and treatment-emergent serious adverse events (TESAEs) during cycle 1 were pooled across the four studies and summarized in the overall population and by concomitant use/non-use of CYP2D6 or BCRP substrate drugs. Results: In the integrated safety population, 828 …

  14. An Analysis of Sawtooth Noise in the Timing SynPaQ III GPS Sensor

    Directory of Open Access Journals (Sweden)

    Yuriy S. SHMALIY

    2007-05-01

    This paper addresses a probabilistic analysis of sawtooth noise in the one pulse per second (1PPS) output of the timing SynPaQ III GPS sensor. We show that the sawtooth noise is uniformly distributed within bounds set by the period of the Local Time Clock of the sensor, and that the probability density function (pdf) of this noise is shaped by the 1 ns sampling interval used in the sensor to calculate the negative sawtooth. We also show that the pdf has a spike of 1 ns width at zero caused by roll-off. It is demonstrated that an unbiased finite impulse response filter is an excellent suppressor of such noise in the estimates of the time interval errors of local clocks.
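A minimal numerical illustration of the claims above (not the paper's actual filter): sawtooth noise modeled as uniform within plus or minus half the Local Time Clock period has variance T²/12, and even the simplest unbiased FIR filter, an equal-weight average whose gains sum to one, suppresses that variance roughly in proportion to the window length. The 10 ns clock period and the window size are assumptions made for this sketch.

```python
import random
import statistics

def sawtooth_noise(n, clock_period_ns=10.0, seed=1):
    """Simplified 1PPS sawtooth model: uniform within +/- clock_period/2."""
    rng = random.Random(seed)
    half = clock_period_ns / 2.0
    return [rng.uniform(-half, half) for _ in range(n)]

def unbiased_fir_average(samples, window):
    """Simplest unbiased FIR estimate: equal gains that sum to one."""
    out = []
    for k in range(window - 1, len(samples)):
        out.append(sum(samples[k - window + 1 : k + 1]) / window)
    return out

noise = sawtooth_noise(20000)
smoothed = unbiased_fir_average(noise, 64)
# uniform(-5, 5) ns noise has variance 25/3 ~ 8.33 ns^2; averaging over a
# 64-sample window cuts the per-output variance by roughly a factor of 64
var_raw = statistics.pvariance(noise)
var_smoothed = statistics.pvariance(smoothed)
```

The paper's unbiased FIR filter for clock-error estimation is polynomial rather than a plain average; the sketch only shows why unbiased FIR smoothing is effective against uniformly distributed sawtooth noise.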

  15. TREAT MK III Loop Thermoelastoplastic Stress Analysis for the L03 Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kennedy, James M.

    1981-03-01

    The STRAW code was used to analyze the static response of a TREAT MK III loop subjected to thermal and mechanical loadings arising from an accident situation, for the purpose of determining the deflections and stresses. This analysis provides safety support for the L03 reactivity accident study. The analysis was subdivided into two tasks: (1) an analysis of a flow blockage accident (Cases A and B), where all the energy is assumed deposited in the test leg, resulting in a temperature increase from 530°F to 1720°F, with a small internal pressure throughout the loop, and (2) an analysis of a second flow blockage accident (Cases C and D), where again all the energy is assumed to be deposited in the test leg, resulting in a temperature rise from 530°F to 1845°F, with a small internal pressure throughout the loop. The purpose of these two tasks was to determine whether loop failure can occur with the thermal differential across the pump and test legs, and whether the thermal differential will cause an undesirable amount of lateral loop deflection. A two-dimensional analysis of the TREAT MK III loop was performed. The analysis accounted for material nonlinearities, both as a function of temperature and stress, and geometric nonlinearities arising from large deflections. Straight beam elements with annular cross sections were used to model the loop. The analyses show that the maximum strains are less than 21% of the failure strains for all subcases of Cases A and B, and less than 53% of the failure strains for all subcases of Cases C and D. The failure strain is 27.9% for the material at 530°F, 38.1% at 1720°F and 17.8% at 1845°F. Large lateral deflections, as much as 8.6 inches, are observed when the loop is constrained only at its clamped support. However, by accounting for the constraint of the concrete biological shield, the maximum lateral deflection was reduced to less than 0.05 inches at the points of concern.

  16. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

    Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is evaluated using historical data covering the domestic economy and the foreign economy, which is represented by the countries of the Eurozone. Because the forecast accuracies of the models differ, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models; the equal-weight scheme is used as a simple baseline. The results show that the optimally combined densities are comparable to the best individual models.
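A predictive-likelihood weighting scheme like the one described can be sketched as follows; this is a generic illustration under simple assumptions, not the paper's exact implementation. Each model contributes its predictive log-likelihood over an evaluation window, and the pooled forecast density is the weighted mixture of the individual densities.

```python
import math

def predictive_likelihood_weights(log_pred_liks):
    """Model weights proportional to the exponentiated predictive log-likelihood."""
    m = max(log_pred_liks)                       # subtract the max for stability
    unnorm = [math.exp(l - m) for l in log_pred_liks]
    s = sum(unnorm)
    return [u / s for u in unnorm]

def combine_density_forecasts(weights, densities, x):
    """Pooled forecast density: a weighted mixture of model densities at x."""
    return sum(w * d(x) for w, d in zip(weights, densities))
```

Equal predictive log-likelihoods reduce this to the equal-weight scheme, and a one-unit log-likelihood advantage already gives a model roughly 73% of the total weight, which is why the schemes can differ noticeably in practice.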

  17. FEMAXI-III, a computer code for fuel rod performance analysis

    International Nuclear Information System (INIS)

    Ito, K.; Iwano, Y.; Ichikawa, M.; Okubo, T.

    1983-01-01

    This paper presents the method of fuel rod thermal-mechanical performance analysis used in the FEMAXI-III code. The code incorporates models describing thermal-mechanical processes such as pellet-cladding thermal expansion, pellet irradiation swelling, densification, relocation and fission gas release as they affect the pellet-cladding gap thermal conductance. The code performs the thermal behavior analysis of a full-length fuel rod within the framework of one-dimensional multi-zone modeling. The mechanical effects, including ridge deformation, are rigorously analyzed by applying the axisymmetric finite element method. The finite element geometrical model is confined to a half-pellet-height region under the assumption that pellet-pellet interaction is symmetrical. Eight-node quadratic isoparametric ring elements are adopted to obtain accurate finite element solutions. Newton-Raphson iteration with an implicit algorithm is applied to analyze non-linear material behaviors accurately and stably. The pellet-cladding interaction mechanism is treated exactly using the nodal continuity conditions. The code is applicable to the thermal-mechanical analysis of water reactor fuel rods experiencing variable power histories. (orig.)

  18. DFT calculations, spectroscopic, thermal analysis and biological activity of Sm(III) and Tb(III) complexes with 2-aminobenzoic and 2-amino-5-chloro-benzoic acids

    Science.gov (United States)

    Essawy, Amr A.; Afifi, Manal A.; Moustafa, H.; El-Medani, S. M.

    2014-10-01

    The complexes of Sm(III) and Tb(III) with 2-aminobenzoic acid (anthranilic acid, AA) and 2-amino-5-chlorobenzoic acid (5-chloroanthranilic acid, AACl) were synthesized and characterized based on elemental analysis, IR and mass spectroscopy. The data are in accordance with a 1:3 [metal]:[ligand] ratio. On the basis of the IR analysis, it was found that the metals were coordinated to bidentate anthranilic acid via the ionised oxygen of the carboxylate group and the nitrogen of the amino group, while in 5-chloroanthranilic acid the metals were coordinated to the bidentate carboxylate group without bonding to the amino group; accordingly, a chlorine-affected diversity in coordination and reactivity was emphasized. Thermal analyses (TGA) and the biological activity of the complexes were also investigated. Density Functional Theory (DFT) calculations at the B3LYP/6-311++G(d,p) level of theory have been carried out to investigate the equilibrium geometry of the ligand. The optimized geometry parameters of the complexes were evaluated using the SDDALL basis set. Moreover, the total energy, the energies of the HOMO and LUMO, and Mulliken atomic charges were calculated. In addition, the dipole moment and its orientation have been computed and discussed.

  19. The Laplace Likelihood Ratio Test for Heteroscedasticity

    Directory of Open Access Journals (Sweden)

    J. Martin van Zyl

    2011-01-01

    It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test assuming normality is very sensitive to any deviation from normality, especially when the observations come from a distribution with fat tails. Such a likelihood ratio test can also be used as a robust test for constant variance in residuals or in a time series if the data is partitioned into groups.
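A minimal sketch of such a test, assuming group-specific medians and a Laplace likelihood; the concrete partitioning and the chi-squared reference distribution below are the standard asymptotic setup, not details taken from the paper. The Laplace MLE of the scale in each group is the mean absolute deviation from the group median, and the statistic compares a pooled scale against group-specific scales.

```python
import math
import random
import statistics

def laplace_lrt_heteroscedasticity(groups):
    """Likelihood ratio statistic for equal Laplace scales across groups.

    Each group keeps its own median (location); under H0 all groups share
    one scale b, under H1 each group has its own b_g. The statistic is
    asymptotically chi-squared with (number of groups - 1) df.
    """
    abs_devs = []
    per_group = []
    for x in groups:
        med = statistics.median(x)
        devs = [abs(v - med) for v in x]
        abs_devs.extend(devs)
        per_group.append((len(x), sum(devs) / len(x)))  # (n_g, b_g)
    n = len(abs_devs)
    b0 = sum(abs_devs) / n                               # pooled scale under H0
    return 2.0 * (n * math.log(b0) - sum(ng * math.log(bg) for ng, bg in per_group))

rng = random.Random(0)
groups_equal = [[rng.gauss(0, 1) for _ in range(200)] for _ in range(2)]
groups_unequal = [[rng.gauss(0, 1) for _ in range(200)],
                  [rng.gauss(0, 4) for _ in range(200)]]
stat_equal = laplace_lrt_heteroscedasticity(groups_equal)
stat_unequal = laplace_lrt_heteroscedasticity(groups_unequal)
# with 2 groups (1 df), a statistic above ~3.84 rejects equal scales at the 5% level
```

Because the pooled scale is a weighted mean of the group scales and the logarithm is concave, the statistic is always non-negative and grows as the group scales diverge.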

  20. Development and Implementation of Efficiency-Improving Analysis Methods for the SAGE III on ISS Thermal Model

    Science.gov (United States)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; Scola, Salvatore; Tobin, Steven; McLeod, Shawn; Mannu, Sergio; Guglielmo, Corrado; Moeller, Timothy

    2013-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle in 2015. A detailed thermal model of the SAGE III payload has been developed in Thermal Desktop (TD). Several novel methods have been implemented to facilitate efficient payload-level thermal analysis, including: use of a design of experiments (DOE) methodology to determine the worst-case orbits for SAGE III while on ISS; use of TD assemblies to move the payload from the Dragon trunk to the Enhanced Operational Transfer Platform (EOTP) and then to its final home on the Expedite the Processing of Experiments to Space Station (ExPRESS) Logistics Carrier (ELC)-4; incorporation of older models in varying unit sets, with the ability to change units easily (including hardcoded logic blocks); case-based logic to facilitate activating heaters and active elements for varying scenarios within a single model; incorporation of several coordinate frames to map easily to structural models with differing geometries and locations; and streamlined results processing using an Excel-based text file plotter developed in-house at LaRC. This document presents an overview of the SAGE III thermal model and describes the development and implementation of these efficiency-improving analysis methods.

  1. Proteomic and properties analysis of botanical insecticide rhodojaponin III-induced response of the diamondback moth, Plutella xylostella (L.)

    Directory of Open Access Journals (Sweden)

    Xiaolin Dong

    BACKGROUND: Rhodojaponin III, a botanical insecticide, affects a wide variety of biological processes in insects, including reduction of feeding, suspension of development, and oviposition deterrence in adults, in a dose-dependent manner. However, the mode of these actions remains obscure. PRINCIPAL FINDINGS: In this study, a comparative proteomic approach was adopted to examine the effect of rhodojaponin III on Plutella xylostella (L.). Following 48 hours of treatment, newly emerged moths were collected and protein samples were prepared. The proteins were separated by 2-DE, and a total of 31 proteins identified by MALDI-TOF/TOF-MS/MS were significantly affected by rhodojaponin III compared to the control. These differentially expressed proteins act in nervous transduction, odorant degradation and metabolic pathways. Furthermore, gene expression patterns in treated and untreated moths were confirmed by qRT-PCR and western blot analysis. RNAi of the chemosensory protein (PxCSP) gene resulted in significantly increased oviposition on cabbage plants treated with rhodojaponin III. CONCLUSIONS: Analysis of these rhodojaponin III-induced proteins and gene properties is essential for a better understanding of the potential molecular mechanism of the response of P. xylostella moths to rhodojaponin III.

  2. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
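The general technique can be sketched as follows; this is a hypothetical single-parameter example (a scalar decay system observed with Gaussian measurement noise), not MXLKID's actual algorithm, which handles general nonlinear systems. Maximizing the likelihood is equivalent here to minimizing the weighted sum of squared residuals between the measurements and the model trajectory.

```python
import math
import random

def simulate(theta, x0, dt, steps, noise_sd, rng):
    """Noisy measurements of the scalar system x' = -theta * x."""
    return [x0 * math.exp(-theta * dt * k) + rng.gauss(0, noise_sd)
            for k in range(steps)]

def neg_log_likelihood(theta, y, x0, dt, noise_sd):
    """Gaussian measurement-noise log-likelihood (negated, up to a constant)."""
    nll = 0.0
    for k, yk in enumerate(y):
        pred = x0 * math.exp(-theta * dt * k)
        nll += 0.5 * ((yk - pred) / noise_sd) ** 2
    return nll

def identify(y, x0, dt, noise_sd, grid):
    """Maximum likelihood estimate by scanning a candidate-parameter grid."""
    return min(grid, key=lambda th: neg_log_likelihood(th, y, x0, dt, noise_sd))

rng = random.Random(42)
y = simulate(theta=0.5, x0=10.0, dt=0.1, steps=200, noise_sd=0.2, rng=rng)
grid = [0.30 + 0.005 * i for i in range(81)]   # candidate decay rates
theta_hat = identify(y, x0=10.0, dt=0.1, noise_sd=0.2, grid=grid)
```

A production identifier would replace the grid scan with a gradient-based maximizer and propagate the state through the full nonlinear dynamics, but the likelihood-maximization structure is the same.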

  3. Chemical analysis of simulated high level waste glasses to support stage III sulfate solubility modeling

    Energy Technology Data Exchange (ETDEWEB)

    Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-03-17

    The U.S. Department of Energy (DOE), Office of Environmental Management (EM) is sponsoring an international, collaborative project to develop a fundamental model for sulfate solubility in nuclear waste glass. The solubility of sulfate has a significant impact on the achievable waste loading for nuclear waste forms within the DOE complex. These wastes can contain relatively high concentrations of sulfate, which has low solubility in borosilicate glass. This is a significant issue for low-activity waste (LAW) glass and is projected to have a major impact on the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Sulfate solubility has also been a limiting factor for recent high level waste (HLW) sludge processed at the Savannah River Site (SRS) Defense Waste Processing Facility (DWPF). The low solubility of sulfate in glass, along with melter and off-gas corrosion constraints, dictates that the waste be blended with lower sulfate concentration waste sources or washed to remove sulfate prior to vitrification. The development of enhanced borosilicate glass compositions with improved sulfate solubility will allow for higher waste loadings and accelerate mission completion. The objective of the current scope being pursued by SHU is to mature the sulfate solubility model to the point where it can be used to guide glass composition development for DWPF and WTP, allowing for enhanced waste loadings and waste throughput at these facilities. A series of targeted glass compositions, identified as Stage III, was selected to resolve data gaps in the model. SHU fabricated these glasses and sent samples to SRNL for chemical composition analysis. SHU will use the resulting data to enhance the sulfate solubility model and resolve any deficiencies. In this report, SRNL provides chemical analyses for the Stage III simulated HLW glasses fabricated by SHU in support of the sulfate solubility model development.

  4. Essays on empirical likelihood in economics

    NARCIS (Netherlands)

    Gao, Z.

    2012-01-01

    This thesis intends to exploit the roots of empirical likelihood and its related methods in mathematical programming and computation. The roots will be connected and the connections will induce new solutions for the problems of estimation, computation, and generalization of empirical likelihood.

  5. Post-test analysis of ROSA-III experiment RUNs 705 and 706

    International Nuclear Information System (INIS)

    Koizumi, Yasuo; Soda, Kunihisa; Kikuchi, Osamu; Tasaka, Kanji; Shiba, Masayoshi

    1980-07-01

    The purpose of the ROSA-III experiments with a scaled BWR test facility is to examine primary coolant thermal-hydraulic behavior and the performance of the ECCS during a postulated loss-of-coolant accident in a BWR. The results provide information for the verification and improvement of reactor safety analysis codes. RUNs 705 and 706 assumed a 200% double-ended break at the recirculation pump suction. RUN 705 was an isothermal blowdown test without initial power and initial core flow. In RUN 706, for an average core power and no ECCS, the main steam line and feed water line were isolated immediately on the break. Post-test analysis of RUNs 705 and 706 was made with the computer code RELAP4J. The agreement in system pressure between calculation and experiment was satisfactory. However, the calculated heater rod surface temperatures were significantly higher than the experimental ones, and the calculated axial temperature profile differed in tendency from the experimental one. The calculated mixture level behavior in the core also differed from the liquid void distribution observed in the experiment. The rapid rise of fuel rod surface temperature was caused by the reduction of the heat transfer coefficient attributed to the increase in quality. The need was indicated to improve the analytical model of void distribution in the core, to perform a characteristic test of the recirculation line under reverse flow, and to examine the core inlet flow rate experimentally and analytically. (author)

  6. Stability analysis for the Big Dee upgrade of the Doublet III tokamak

    International Nuclear Information System (INIS)

    Helton, F.J.; Luxon, J.L.

    1987-01-01

    Ideal magnetohydrodynamic stability analysis has been carried out for configurations expected in the Big Dee tokamak, an upgrade of the Doublet III tokamak into a non-circular cross-section device which began operation early in 1986. The results of this analysis support theoretical predictions as follows. Since the maximum value of beta stable to ballooning and Mercier modes, which we denote β_c, increases with inverse aspect ratio, elongation and triangularity, the Big Dee is particularly suited to obtaining high values of β_c, and high-β_c Big Dee equilibria exist for large variations in all relevant plasma parameters. The beta limits for the Big Dee are consistent with established theory as summarized in present scaling laws. High beta Big Dee equilibria are continuously accessible when approached through changes in all relevant input parameters and are structurally stable with respect to variations of input plasma parameters. Big Dee beta limits have a smooth dependence on plasma parameters such as β_p and elongation. These calculations indicate that in the actual running of the device the Big Dee high beta equilibria should be smoothly accessible. Theory predicts that the limiting plasma parameters, such as beta, total plasma current and plasma pressure, which can be obtained within the operating limits of the Big Dee are reactor relevant. Thus the Big Dee should be able to use its favourable ideal MHD scaling and controlled plasma shaping to attain reactor-relevant parameters in a moderate-sized device. (author)

  7. Synthesis, characterization and single crystal X-ray analysis of chlorobis(N,N-dimethyldithiocarbamato-S,S′)antimony(III)

    Directory of Open Access Journals (Sweden)

    H.P.S. Chauhan

    2015-07-01

    The title compound chlorobis(N,N-dimethyldithiocarbamato-S,S′)antimony(III) has been prepared in distilled acetonitrile and characterized by physicochemical [melting point and molecular weight determination, elemental analysis (C, H, N, S & Sb)] and spectral [FT–IR, far IR, NMR (¹H & ¹³C)] studies. The crystal and molecular structure was further confirmed using single crystal X-ray diffraction analysis, which reveals a five-coordinate geometry for antimony(III) within a ClS4 donor set. The distortion in the co-planarity of ClSbS3 evidences the stereochemical influence exerted by the lone pair of electrons on antimony(III). Two centrosymmetrically related molecules held together via C–H···Cl secondary interactions result in molecular aggregation of the compound.

  8. Combining dosimetry and toxicity: analysis of two UK phase III clinical trials

    International Nuclear Information System (INIS)

    Gulliford, Sarah L

    2014-01-01

    There are many advantages to performing a clinical trial when implementing a novel radiotherapy technique. The clinical trials framework enables the safety and efficacy of the 'experimental arm' to be tested and ensures practical support, rigorous quality control and data monitoring for participating centres. In addition to the clinical and follow-up data collected from patients within the trial, it is also possible to collect 3-D dosimetric information from the corresponding radiotherapy treatment plans. Analysing the combination of dosimetric, clinical and follow-up data enhances the understanding of the relationship between the dose delivered to both the target and normal tissue structures and reported outcomes and toxicity. Aspects of the collection, collation and analysis of data from two UK multicentre Phase III radiotherapy trials are presented here. MRC-RT01 dose-escalation prostate radiotherapy trial ISRCTN47772397 was one of the first UK multi-centre radiotherapy trials to collect 3-D dosimetric data. A number of different analysis methodologies were implemented to investigate the relationship between the dose distribution to the rectum and specific rectal toxicities. More recently data was collected from the PARSPORT trial (Parotid Sparing IMRT vs conventional head and neck radiotherapy) ISRCTN48243537. In addition to the planned analysis, dosimetric analysis was employed to investigate an unexpected finding that acute fatigue was more prevalent in the IMRT arm of the trial. It can be challenging to collect 3-D dosimetric information from multicentre radiotherapy trials. However, analysing the relationship between dosimetric and toxicity data provides invaluable information which can influence the next generation of radiotherapy techniques.

  9. RA reactor safety analysis I-III, Part III - Environmental effect of the maximum credible accident; Analiza sigurnosti rada Reaktora RA I-III, III deo - Posledica maksimalno moguceg akcidenta na okolinu reaktora

    Energy Technology Data Exchange (ETDEWEB)

    Raisic, N [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1963-02-15

    The objective of the maximum credible accident analysis was to determine the integral radiation doses in the vicinity of the reactor and in the environment. In case of RA reactor the maximum credible accident, meaning release of the fission products, would be caused by fuel elements meltdown. This analysis includes the following calculation results: activity of the fission products, volatility of the fission products, concentration of radioactive materials in the air, analysis of the accident environmental effects.

  10. Direct spectrophotometric analysis of low level Pu (III) in Pu(IV) nitrate solution

    International Nuclear Information System (INIS)

    Mageswaran, P.; Suresh Kumar, K.; Kumar, T.; Gayen, J.K.; Shreekumar, B.; Dey, P.K.

    2010-01-01

    Among the various methods demonstrated for the conversion of plutonium nitrate to its oxide, the oxalate precipitation process, either as Pu(III) or Pu(IV) oxalate, has gained wide acceptance. Since uranous nitrate is the most successful partitioning agent used in the PUREX process for the separation of Pu from the bulk amount of U, Pu(III) oxalate precipitation of the purified nitrate solution will not give the required decontamination from U. Hence the Pu(IV) oxalate precipitation process is a better option to achieve the end user's specified PuO2 product. Prior to the precipitation process, ensuring the Pu(IV) oxidation state is essential. Hence monitoring the level of the Pu oxidation state, either Pu(III) or Pu(IV), in the feed solution plays a significant role in establishing complete conversion of Pu(III). The method in vogue to estimate the Pu(IV) content is extractive radiometry using thenoyltrifluoroacetone (TTA). As this method requires sample preparation with respect to acidity, a precise measurement of Pu(IV) without affecting the Pu(III) level in the feed sample is difficult. The present study is focused on the exploration of direct spectrophotometry, using an optic fiber probe with a path length of 40 mm, to monitor the low level of Pu(III) after removing the bulk Pu(IV), which interferes in the Pu(III) absorption spectrum, using a TTA-TBP synergistic mixture without changing the sample acidity

  11. A Spectral-line Analysis of the G8 III Standard ε VIR

    Energy Technology Data Exchange (ETDEWEB)

    Gray, David F., E-mail: dfgray@uwo.ca [Department of Physics and Astronomy University of Western Ontario, 1151 Richmond Street, London, Ontario N6A 3K7 (Canada)

    2017-08-10

    Eleven seasons of spectroscopic data comprising 107 exposures of the stable G8 III standard star ε Vir are analyzed for projected rotation rate and granulation parameters. A Fourier analysis of the line shapes yields v sin i = 3.06 ± 0.20 km s{sup −1} and a radial-tangential macroturbulence dispersion ζ {sub RT} = 5.16 ± 0.08 km s{sup −1}. The radial velocity over nine seasons is constant to 18 m s{sup −1}. The absolute radial velocity with granulation blueshifts (but not gravitational redshift) removed is −14120 ± 75 m s{sup −1}. Line-depth ratios show the temperature to be constant to 0.7 K over 11 years, although a small secular rise or cyclic variation ∼1 K cannot be ruled out. The third-signature plot shows that the star has granulation velocities 10% larger than the Sun's. Mapping the Fe i λ 6253 line bisector onto the third-signature plot indicates a normal-for-giants flux deficit area of 12.8%, indicating a ∼134 K temperature difference between granules and lanes. Deficit velocities of GK giants are seen to shift to higher values with higher luminosity, ∼0.75 km s{sup −1} over Δ M {sub V} ∼ 1.5, indicating larger velocity differences between granules and lanes for giants higher in the HR diagram.

  12. Extended analysis and the ionization potential of the Ni III spectrum

    International Nuclear Information System (INIS)

    Garcia-Riquelme, O.; Rico, F.R.

    1992-01-01

    The Ni III spectrum emitted from a sliding spark in He has been recorded in the UV 1300-1800 Å on the 10.7 m normal-incidence vacuum spectrograph (plate factor 0.77 Å/mm) at the NIST in Washington. The low-lying configurations 3d⁷4s and 3d⁷4p have been revised, new lines classified, and the transitions 4p-4d and 4p-5s analyzed. The analysis has been extended to the spectral region up to 9000 Å, and new terms from the electronic configurations 3d⁷5d, 6s, 5p, 4f and 5g have been identified. Theoretical Slater parametric calculations for these configurations and least-squares fits to the experimental levels have been performed. The ionization potential has been determined to be 283 800 ± 150 cm⁻¹, or 35.19 ± 0.02 eV. A table of new levels, the list of classified lines in the analyzed ranges 1300-1800 and 2200-9000 Å, and the (LSF) parameters for the calculated configurations are given. (orig.)

  13. The Fear of Pain Questionnaire-III and the Fear of Pain Questionnaire-Short Form: a confirmatory factor analysis

    DEFF Research Database (Denmark)

    Vambheim, Sara M.; Lyby, Peter Solvoll; Aslaksen, Per M.

    2017-01-01

    Aims and methods: The purpose of the study was to investigate the model fit, reliability and validity of the FPQ-III and the FPQ-SF in a Norwegian nonclinical sample, using confirmatory factor analysis (CFA). The second aim was to explore the model fit of the two scales in male and female subgroups separately...... the questionnaires, the model fit, validity and reliability were compared across sex using CFA. Results: The results revealed that both models' original factor structures had poor fit. However, the FPQ-SF had a better fit overall, compared to the FPQ-III. The model fit of the two models differed across sex...

  14. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and two of these are constrained and correlated.

  15. The impact of Basel III on money creation: A synthetic analysis

    OpenAIRE

    Xiong, Wanting; Wang, Yougui

    2017-01-01

    Recent evidence provokes broad rethinking of the role of banks in money creation. The authors argue that apart from the reserve requirement, prudential regulations also play important roles in constraining the money supply. Specifically, they study three Basel III regulations and theoretically analyze their standalone and collective impacts. The authors find that 1) the money multiplier under Basel III is not constant but a decreasing function of the monetary base; 2) the determinants of the ...

  16. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, the structures of which depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.

  17. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the arrival-time difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. The method has been validated in experiments, where it provided much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
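    The arrival-time-difference scheme in this record rests on locating the peak of the cross-correlation between the two sensor signals; the maximum likelihood window then adds a frequency-domain weighting on top. A minimal sketch of the unweighted peak-picking step, with the sampling rate, record length, and delay all invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    fs = 1000.0        # assumed sampling rate, Hz
    n = 4096           # samples per sensor record
    true_delay = 25    # assumed arrival-time difference, in samples

    # Hypothetical broadband leak noise; sensor 2 picks it up 25 samples earlier.
    noise = rng.standard_normal(n + true_delay)
    s1 = noise[:n]
    s2 = noise[true_delay:true_delay + n]

    # Cross-correlate and take the lag at the peak as the delay estimate.
    corr = np.correlate(s1, s2, mode="full")
    lags = np.arange(-(n - 1), n)
    est_delay = lags[np.argmax(corr)]   # recovers the 25-sample offset
    print(est_delay / fs)               # delay in seconds
    ```

    The paper's contribution is the weighting applied before this peak search; the sketch shows only the baseline cross-correlation estimator.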

  18. Safety analysis of RA reactor operation, I-III, Part III - Environmental effect of the maximum credible accident; Analiza sigurnosti rada reaktora RA - I-III, III deo - Posledica maksimalno moguceg akcidenta na okolinu reaktora

    Energy Technology Data Exchange (ETDEWEB)

    Raisic, N [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1963-02-15

    The maximum credible accident at the RA reactor would involve the release of fission products into the environment, resulting from fuel element failure or meltdown due to loss of coolant. The analysis presented in this report assumes that the reactor was operating at nominal power at the moment of the maximum credible accident. The report includes calculations of fission product activity at the moment of the accident, the total activity released during the accident, and the concentration of radioactive material in the air in the reactor's neighbourhood, together with an analysis of the accident's environmental effects.

  19. A three-dimensional soft tissue analysis of Class III malocclusion: a case-controlled cross-sectional study.

    Science.gov (United States)

    Johal, Ama; Chaggar, Amrit; Zou, Li Fong

    2018-03-01

    The present study used the optical surface laser scanning technique to compare the facial features of patients aged 8-18 years presenting with Class I and Class III incisor relationships in a case-control design. Subjects with a Class III incisor relationship, aged 8-18 years, were age- and gender-matched with Class I controls and underwent a 3-dimensional (3-D) optical surface scan of the facial soft tissues. Landmark analysis revealed that Class III subjects displayed greater mean dimensions compared to the control group, most notably between the ages of 8-10 and 17-18 years in both males and females, in respect of antero-posterior (P = 0.01) and vertical (P = 0.006) facial dimensions. Surface-based analysis revealed the greatest difference in the lower facial region, followed by the mid-face, whilst the upper face remained fairly consistent. Significant detectable differences were found in the surface facial features of developing Class III subjects.

  20. Thin-plate spline analysis of mandibular morphological changes induced by early class III treatment: a long-term evaluation.

    Science.gov (United States)

    Franchi, Lorenzo; Pavoni, Chiara; Cerroni, Silvia; Cozza, Paola

    2014-08-01

    To evaluate the long-term mandibular morphological changes induced by early treatment of class III malocclusion with rapid maxillary expansion (RME) and facial mask (FM). Twenty-five subjects [10 boys, 15 girls; mean age at T1 (start of treatment) 9.3±1.6 years] with class III disharmony were treated with RME and FM therapy followed by fixed appliances. The patients were re-evaluated at the end of growth (T2), about 8.5 years after the end of the treatment (mean age, 18.6±2.0 years). Sixteen subjects with untreated class III malocclusion comprised the control group. Mandibular shape changes were analysed on the lateral cephalograms of the subjects of both groups by means of thin-plate spline (TPS) analysis. Procrustes average mandibular configurations were subjected to TPS analysis by means of both cross-sectional between-group comparisons at T1 and at T2 and longitudinal within-group comparisons. Statistical analysis of shape differences was performed using a generalized Goodall F test. In the long term, the treated group exhibited a significant upward and forward direction of condylar growth. On the contrary, untreated class III subjects showed an upward and backward direction of condylar growth associated with a downward and forward deformation of the mandibular symphysis. Limitations are related to the small sample size of both treated and control groups and to the retrospective nature of the study. Early treatment of class III malocclusion with RME and FM is able to produce significant and favourable long-term mandibular shape changes characterized by an anterior morphogenetic rotation. © The Author 2013. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  1. A prognostic analysis of 895 cases of stage III colon cancer in different colon subsites.

    Science.gov (United States)

    Zhang, Yan; Ma, Junli; Zhang, Sai; Deng, Ganlu; Wu, Xiaoling; He, Jingxuan; Pei, Haiping; Shen, Hong; Zeng, Shan

    2015-09-01

    Stage III colon cancer is currently treated as a single entity with a unified therapeutic principle. The aim of this retrospective study is to explore the clinicopathological characteristics and outcomes of site-specific stage III colon cancers and the influence of tumor location on prognosis. Eight hundred ninety-five patients with stage III colon cancer treated with radical operation and subsequent adjuvant chemotherapy (5-fluorouracil/oxaliplatin) were divided into seven groups according to colon segment (cecum, ascending colon, hepatic flexure, transverse colon, splenic flexure, descending colon, and sigmoid colon). Expression of excision repair cross-complementing group 1 (ERCC1) and thymidylate synthase (TS) was examined by immunohistochemistry. We assessed whether differences exist in patient characteristics and clinical outcomes between the seven groups. There were significant differences in tumor differentiation and American Joint Committee on Cancer (AJCC) tumor-node-metastasis (TNM) stage across the seven subsites. Cox regression analyses identified tumor location as an independent prognostic factor for RFS and OS. Stage III colon cancer located proximally carried a poorer survival than that located distally. Different efficacies of FOLFOX adjuvant chemotherapy may be an important factor affecting survival of site-specific stage III colon cancers.

  2. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    Science.gov (United States)

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving the likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.
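    The ML-versus-REML distinction at the heart of this record shows up already in the simplest case, estimating a normal variance after fitting a mean: ML divides the residual sum of squares by n and is biased low because it ignores the degree of freedom spent on the mean, while REML divides by n − 1. A minimal sketch with simulated data (not the paper's multiple-method model):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    x = rng.normal(loc=10.0, scale=2.0, size=20)  # illustrative sample

    n = len(x)
    rss = np.sum((x - x.mean()) ** 2)  # residual sum of squares about the fitted mean

    var_ml = rss / n          # maximum likelihood: ignores the estimated mean
    var_reml = rss / (n - 1)  # restricted ML: corrects for the fitted mean

    print(var_ml, var_reml)   # the REML estimate is always the larger of the two
    ```

    In richer random effects models the same correction appears as a restricted likelihood built from error contrasts rather than a simple change of divisor.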

  3. Rationalization of paclitaxel insensitivity of yeast β-tubulin and human βIII-tubulin isotype using principal component analysis

    Directory of Open Access Journals (Sweden)

    Das Lalita

    2012-08-01

    Abstract Background The chemotherapeutic agent paclitaxel arrests cell division by binding to the hetero-dimeric protein tubulin. Subtle differences in tubulin sequences, across eukaryotes and among β-tubulin isotypes, can have a profound impact on paclitaxel-tubulin binding. To capture the experimentally observed paclitaxel-resistance of the human βIII tubulin isotype and yeast β-tubulin within a common theoretical framework, we have performed structural principal component analyses of β-tubulin sequences across eukaryotes. Results The paclitaxel-resistance of the human βIII tubulin isotype and yeast β-tubulin uniquely mapped onto the lowest two principal components, defining the paclitaxel-binding site residues of β-tubulin. The molecular mechanisms behind paclitaxel-resistance, mediated through key residues, were identified from the structural consequences of characteristic mutations that confer paclitaxel-resistance. Specifically, Ala277 in the βIII isotype was shown to be crucial for paclitaxel-resistance. Conclusions The present analysis captures the origin of two apparently unrelated events, the paclitaxel-insensitivity of yeast tubulin and of the human βIII tubulin isotype, through two common collective sequence vectors.

  4. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also

  5. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and
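    The empirical likelihood machinery that this record combines with estimating equations can be illustrated in its simplest form, Owen's profile empirical likelihood for a scalar mean; the data and tolerances below are invented for illustration:

    ```python
    import numpy as np

    def el_log_ratio(x, mu, tol=1e-10, max_iter=200):
        """-2 log empirical likelihood ratio for the mean (Owen's method, 1D case)."""
        z = np.asarray(x, dtype=float) - mu
        if z.min() >= 0.0 or z.max() <= 0.0:
            return np.inf           # mu lies outside the convex hull of the data
        lam = 0.0                   # Lagrange multiplier for the mean constraint
        for _ in range(max_iter):
            denom = 1.0 + lam * z
            g = np.sum(z / denom)   # profile score in lambda; strictly decreasing
            if abs(g) < tol:
                break
            step = g / -np.sum((z / denom) ** 2)   # Newton step
            while np.any(1.0 + (lam - step) * z <= 0.0):
                step *= 0.5         # damp to keep all implied weights positive
            lam -= step
        return 2.0 * np.sum(np.log(1.0 + lam * z))

    x = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3])
    print(el_log_ratio(x, x.mean()))  # 0 at the sample mean
    print(el_log_ratio(x, 1.3))       # positive away from it
    ```

    Away from the sample mean the statistic grows and is asymptotically chi-squared with one degree of freedom; with more estimating equations than parameters, as in the paper, the same profiling idea combines them optimally.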

  6. Analysis of the microturbine combustion chamber by using the CHEMKIN III computer code; Analise da camara de combustao de microturbinas empregando-se o codigo computacional CHEMKIN III

    Energy Technology Data Exchange (ETDEWEB)

    Madela, Vinicius Zacarias; Pauliny, Luis F. de A.; Veras, Carlos A. Gurgel [Brasilia Univ., DF (Brazil). Dept. de Engenharia Mecanica]. E-mail: gurgel@enm.unb.br; Costa, Fernando de S. [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Lab. Associado de Combustao e Propulsao]. E-mail: fernando@cptec.inpe.br

    2000-07-01

    This work presents the results obtained from the simulation of multi-fuel microturbine combustion chambers. In particular, the predictions for methane and Diesel burning are presented. The appropriate routines of the CHEMKIN III computer code were used.

  7. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisová, Katarina

    To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled...... is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results...... of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models....

  8. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...

  9. Load Flow and Short Circuit Analysis of the Class III Power System of HANARO

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H. K.; Jung, H. S

    2005-12-15

    The planning, design, and operation of an electric power system require engineering studies to assist in the evaluation of the system's performance, reliability, safety and economics. The Class III power of HANARO supplies power not only to HANARO but also to RIPF and IMEF. The starting current of most ac motors is five to ten times the normal full-load current. The loads of the Class III power are connected in consecutive order at intervals of 10 seconds to avoid excessive voltage drop. This technical report deals with the load flow study and motor starting study for the Class III power of HANARO using ETAP (Electrical Transient Analyzer Program) to verify the capacity of the diesel generator. Short-circuit studies are done to determine the magnitude of the prospective currents flowing throughout the power system at various time intervals after a fault occurs. Short-circuit studies can be performed at the planning stage in order to help finalize the system layout, determine voltage levels, and size cables, transformers, and conductors. From this study, we verify the short-circuit current capacity of the air circuit breaker (ACB) and automatic transfer switch (ATS) of the Class III power.

  10. Robust electrochemical analysis of As(III) integrating with interference tests: A case study in groundwater

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhong-Gang [Nanomaterials and Environmental Detection Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031 (China); Department of Chemistry, University of Science and Technology of China, Hefei 230026 (China); Chen, Xing; Liu, Jin-Huai [Nanomaterials and Environmental Detection Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031 (China); Huang, Xing-Jiu, E-mail: xingjiuhuang@iim.ac.cn [Nanomaterials and Environmental Detection Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031 (China); Department of Chemistry, University of Science and Technology of China, Hefei 230026 (China)

    2014-08-15

    Graphical abstract: - Highlights: • Robust determination of As(III) in Togtoh water samples has been demonstrated. • The results were comparable to those obtained by ICP–AES. • No obvious interference was observed after a series of interference tests. • Robust stability was obtained in long-term measurements. - Abstract: In the Togtoh region of Inner Mongolia, northern China, groundwater with high concentrations of As contamination (greater than 50 μg L{sup −1}) causes increasing concern. This work demonstrates an electrochemical protocol for robust (efficient and accurate) determination of As(III) in Togtoh water samples using an Au microwire electrode without the need for pretreatment or clean-up steps. Considering the complicated conditions of Togtoh water, the efficiency of the Au microwire electrode was systematically evaluated by a series of interference tests and stability and reproducibility measurements. No obvious interference with the determination of As(III) was observed. In particular, the influence of humic acid (HA) was intensively investigated. Electrode stability was also observed over long-term measurements (70 days) in Togtoh water solution and under different temperatures (0-35 °C). Excellent reproducibility (RSD: 1.28%) was observed from different batches of Au microwire electrodes. The results obtained at the Au microwire electrode were comparable to those obtained by inductively coupled plasma atomic emission spectroscopy (ICP–AES), indicating good accuracy. These evaluations (efficiency, robustness, and accuracy) demonstrated that the Au microwire electrode is able to determine As(III) in real environmental samples.

  11. Positional stability experiment and analysis of elongated plasmas in Doublet III

    International Nuclear Information System (INIS)

    Yokomizo, Hideaki

    1984-04-01

    Control systems of the plasma position and shape on Doublet III are explained and experimental results of vertical stability of elongated plasmas are reviewed. Observed results of the vertical instability are qualitatively compared with the predictions from the simplified model and quantitatively compared with the numerical calculations based on a more realistic model. Experiments are in reasonable agreement with the theoretical analyses. (author)

  12. Cost analysis of surgically treated pressure sores stage III and IV.

    NARCIS (Netherlands)

    Filius, A.; Damen, T.H.; Schuijer-Maaskant, K.P.; Polinder, S.; Hovius, S.E.R.; Walbeehm, E.T.

    2013-01-01

    Health-care costs associated with pressure sores are significant and their financial burden is likely to increase even further. The aim of this study was to analyse the direct medical costs of hospital care for surgical treatment of pressure sores stage III and IV. We performed a retrospective chart

  13. Localisation of deformations of the midfacial complex in subjects with class III malocclusions employing thin-plate spline analysis.

    Science.gov (United States)

    Singh, G D; McNamara, J A; Lozanoff, S

    1997-11-01

    This study determines deformations of the midface that contribute to a class III appearance, employing thin-plate spline analysis. A total of 135 lateral cephalographs of prepubertal children of European-American descent with either class III malocclusions or a class I molar occlusion were compared. The cephalographs were traced and checked, and 7 homologous landmarks of the midface were identified and digitised. The data sets were scaled to an equivalent size and subjected to Procrustes analysis. These statistical tests indicated significant differences between the class I and class III midfacial configurations. Thin-plate spline analysis indicated that both affine and nonaffine transformations contribute towards the total spline for the averaged midfacial configuration. For nonaffine transformations, partial warp 3 had the highest magnitude, indicating large-scale deformations of the midfacial configuration. These deformations affected the palatal landmarks and were associated predominantly with compression of the midfacial complex in the anteroposterior plane. Partial warp 4 produced some vertical compression of the posterior aspect of the midfacial complex, whereas partial warps 1 and 2 indicated localised shape changes of the maxillary alveolus region. Large spatial-scale deformations therefore affect the midfacial complex in an anteroposterior axis, in combination with vertical compression and localised distortions. These deformations may represent a developmental diminution of the palatal complex anteroposteriorly that, allied with vertical shortening of midfacial height posteriorly, results in class III malocclusions with a retrusive midfacial profile.
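    The Procrustes step mentioned in this record (scaling landmark configurations to equivalent size and rotating them onto one another before comparing shapes) can be sketched in a few lines. The landmark coordinates are invented, and the SVD solution below omits the reflection correction a full generalized Procrustes analysis would include:

    ```python
    import numpy as np

    def procrustes_align(X, Y):
        """Superimpose landmark configuration Y onto X: centre both, scale to
        unit centroid size, then rotate Y to X by orthogonal Procrustes (SVD)."""
        Xc = X - X.mean(axis=0)
        Yc = Y - Y.mean(axis=0)
        Xc = Xc / np.linalg.norm(Xc)
        Yc = Yc / np.linalg.norm(Yc)
        U, _, Vt = np.linalg.svd(Yc.T @ Xc)
        R = U @ Vt                   # rotation minimising ||Yc @ R - Xc||
        return Xc, Yc @ R

    # Invented 2-D "landmarks" and a rotated, scaled, shifted copy of them.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(7, 2))
    theta = 0.5
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    Y = 2.3 * X @ rot + np.array([4.0, -1.0])

    Xa, Ya = procrustes_align(X, Y)
    print(np.linalg.norm(Xa - Ya))   # ~0: the copy aligns exactly
    ```

    After this superimposition, the residual landmark differences are what the thin-plate spline decomposes into affine and nonaffine (partial warp) components.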

  14. Timing and Magnitude of Initial Change in Disease Activity Score 28 Predicts the Likelihood of Achieving Low Disease Activity at 1 Year in Rheumatoid Arthritis Patients Treated with Certolizumab Pegol: A Post-hoc Analysis of the RAPID 1 Trial

    NARCIS (Netherlands)

    van der Heijde, Désirée; Keystone, Edward C.; Curtis, Jeffrey R.; Landewé, Robert B.; Schiff, Michael H.; Khanna, Dinesh; Kvien, Tore K.; Ionescu, Lucian; Gervitz, Leon M.; Davies, Owen R.; Luijtens, Kristel; Furst, Daniel E.

    2012-01-01

    Objective. To determine the relationship between timing and magnitude of Disease Activity Score [DAS28(ESR)] nonresponse (DAS28 improvement thresholds not reached) during the first 12 weeks of treatment with certolizumab pegol (CZP) plus methotrexate, and the likelihood of achieving low disease

  15. Feasibility Analysis of a Seabed Filtration Intake System for the Shoaiba III Expansion Reverse Osmosis Plant

    KAUST Repository

    Rodríguez, Luis Raúl

    2012-06-01

    The ability to economically desalinate seawater in arid regions of the world has become a vital advancement to overcome problems of freshwater availability, quality, and reliability. In contrast with conventional open-ocean intakes, which represent a major capital and operational cost for desalination plants, subsurface intakes allow the extraction of high-quality feed water at minimum cost and reduced environmental impact. A seabed filter is a subsurface intake that consists of a submerged slow sand filter, with the benefits of removing organic matter and pathogens at low operational cost. A site investigation was carried out along the southern coast of the Red Sea in Saudi Arabia, from King Abdullah University of Science and Technology down to 370 kilometers south of Jeddah. A site adjacent to the Shoaiba desalination plant was selected to assess the viability of constructing a seabed filter. Grain sieve size analysis and porosity and hydraulic conductivity permeameter measurements were performed on the collected sediment samples. Based on these results, it was concluded that the characteristics at the Shoaiba site allow for the construction of a seabed filtration system. A seabed filter design is proposed for the 150,000 m3/d Shoaiba III expansion project, a large-scale reverse osmosis desalination plant. A filter design with a filtration rate of 7 m/d through an area of 6,000 m2 is proposed to meet the demand of one of the ten desalination trains operating at the plant. The filter would be located 90 meters offshore, where the hydraulic conductivity of the sediment is high and the mud percentage is minimal. The thin native marine sediment layer is insufficient to provide enough water filtration, and consequently the proposed solution involves excavating the limestone rock and filling it with different layers of non-native sand and gravel of increasing grain size. An initial assessment of the area around Shoaiba showed similar sedimentological conditions that could

  16. Maintaining symmetry of simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby...... improves precision substantially. Another source of error is that models testing away mixing dimensions must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. These simulation errors are ignored in the standard estimation procedures used today...
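
The symmetry argument can be illustrated with a toy simulated probability whose true value depends on the mixing standard deviation sigma only through its absolute value. Everything below is an illustrative sketch (the logistic integrand, draw counts and seed are assumptions, not the paper's model): with plain uniform draws the simulated value differs for +sigma and -sigma, whereas antithetic pairs (u, 1-u) make it exactly symmetric.

```python
import numpy as np
from statistics import NormalDist

inv_cdf = NormalDist().inv_cdf

def sim_prob(sigma, uniforms):
    """Simulated E_z[logistic(1 + sigma*z)], z standard normal; the true
    expectation is even in sigma because z and -z have the same distribution."""
    z = np.array([inv_cdf(u) for u in uniforms])
    return float(np.mean(1.0 / (1.0 + np.exp(-(1.0 + sigma * z)))))

rng = np.random.default_rng(42)
u = rng.random(401)                      # plain pseudo-random draws

# Plain draws: the simulated likelihood is NOT symmetric in finite samples.
gap_plain = abs(sim_prob(2.0, u) - sim_prob(-2.0, u))

# Antithetic draws: pairing u with 1-u pairs each z with -z, so the
# simulated value is (numerically) exactly even in sigma.
u_anti = np.concatenate([u, 1.0 - u])
gap_anti = abs(sim_prob(2.0, u_anti) - sim_prob(-2.0, u_anti))
```

The same pairing idea applies to the quasi-random (e.g. Halton) draws discussed in the paper; plain pseudo-random draws are used here only to keep the sketch short.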

  17. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, K.

    2010-01-01

    This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point...... process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified......-based maximum likelihood inference and the effect of specifying different reference Poisson models....

  18. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2010-01-01

    This paper discusses model-based inference in an autoregressive model for fractional processes which allows the process to be fractional of order d or d-b. Fractional differencing involves infinitely many past values and because we are interested in nonstationary processes we model the data X1......,...,X_{T} given the initial values X_{-n}, n=0,1,..., as is usually done. The initial values are not modeled but assumed to be bounded. This represents a considerable generalization relative to all previous work where it is assumed that initial values are zero. For the statistical analysis we assume...... the conditional Gaussian likelihood and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including...
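
The point that fractional differencing involves infinitely many past values can be made concrete with the expansion (1-L)^d = sum_k pi_k L^k. A minimal sketch using the standard coefficient recursion (illustrative, not code from the paper):

```python
import numpy as np

def frac_diff_weights(d, n_lags):
    """Coefficients pi_k of (1 - L)**d = sum_k pi_k L**k, via the recursion
    pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n_lags + 1):
        w.append(w[-1] * (k - 1 - d) / k)
    return np.array(w)

def frac_diff(x, d):
    """Apply (1 - L)**d to a series, truncating at the start of the sample
    (pre-sample initial values are effectively set to zero)."""
    w = frac_diff_weights(d, len(x) - 1)
    return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])

# Integer d reduces to the ordinary first difference [1, -1, 0, ...]:
print(frac_diff_weights(1.0, 4))

# For non-integer d the weights never vanish, only decay slowly,
# which is why the treatment of initial values matters.
w = frac_diff_weights(0.4, 200)
```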

  19. [Analysis of prognostic factors after radical resection in 628 patients with stage II or III colon cancer].

    Science.gov (United States)

    Qin, Qiong; Yang, Lin; Zhou, Ai-ping; Sun, Yong-kun; Song, Yan; DU, Feng; Wang, Jin-wan

    2013-03-01

    To analyze the clinicopathologic factors related to recurrence and metastasis of stage II or III colon cancer after radical resection. The clinical and pathological data of 628 patients with stage II or III colon cancer after radical resection from Jan. 2005 to Dec. 2008 in our hospital were retrospectively reviewed and analyzed. The overall recurrence and metastasis rate was 28.5% (179/628). The 5-year disease-free survival (DFS) rate was 70.3% and the 5-year overall survival (OS) rate was 78.5%. Univariate analysis showed that age, smoking intensity, depth of tumor invasion, lymph node metastasis, TNM stage, gross classification, histological differentiation, blood vessel tumor embolus, tumor gross pathology, multiple primary tumors, preoperative and postoperative serum concentrations of CEA and CA19-9, and the regimen of adjuvant chemotherapy were correlated with recurrence and metastasis of colon cancer after radical resection. Multivariate analysis showed that regional lymph node metastasis, TNM stage, the regimen of postoperative adjuvant chemotherapy, and preoperative serum concentrations of CEA and CA19-9 were independent factors affecting the prognosis of colon cancer patients. Regional lymph node metastasis, TNM stage, elevated preoperative serum concentrations of CEA and CA19-9, and postoperative adjuvant chemotherapy with a single fluorouracil-type drug are independent risk factors for recurrence and metastasis in patients with stage II-III colon cancer after radical resection.

  20. Multiple Improvements of Multiple Imputation Likelihood Ratio Tests

    OpenAIRE

    Chan, Kin Wai; Meng, Xiao-Li

    2017-01-01

    Multiple imputation (MI) inference handles missing data by first properly imputing the missing values $m$ times, and then combining the $m$ analysis results from applying a complete-data procedure to each of the completed datasets. However, the existing method for combining likelihood ratio tests has multiple defects: (i) the combined test statistic can be negative in practice when the reference null distribution is a standard $F$ distribution; (ii) it is not invariant to re-parametrization; ...

  1. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  2. Penggunaan Elaboration Likelihood Model dalam Menganalisis Penerimaan Teknologi Informasi

    OpenAIRE

    vitrian, vitrian2

    2010-01-01

    This article discusses some technology acceptance models in an organization. A thorough analysis of how technology is accepted helps managers plan the implementation of new technology and ensure that the new technology can enhance the organization's performance. The Elaboration Likelihood Model (ELM) is one model that sheds light on behavioral factors in the acceptance of information technology. The basic tenet of ELM states that human behavior in principle can be influenced through central r...

  3. Democracy, Autocracy and the Likelihood of International Conflict

    OpenAIRE

    Tangerås, Thomas

    2008-01-01

    This is a game-theoretic analysis of the link between regime type and international conflict. The democratic electorate can credibly punish the leader for bad conflict outcomes, whereas the autocratic selectorate cannot. For the fear of being thrown out of office, democratic leaders are (i) more selective about the wars they initiate and (ii) on average win more of the wars they start. Foreign policy behaviour is found to display strategic complementarities. The likelihood of interstate war, ...

  4. Heterologous gene expression and functional analysis of a type III polyketide synthase from Aspergillus niger NRRL 328

    Energy Technology Data Exchange (ETDEWEB)

    Kirimura, Kohtaro, E-mail: kkohtaro@waseda.jp; Watanabe, Shotaro; Kobayashi, Keiichi

    2016-05-13

    Type III polyketide synthases (PKSs) catalyze the formation of pyrone- and resorcinol-type aromatic polyketides. Genomic analysis of the filamentous fungus Aspergillus niger NRRL 328 revealed that this strain has a putative gene (chr-8-2: 2978617–2979847) encoding a type III PKS, although its functions were unknown. In this study, for functional analysis of this putative type III PKS, designated An-CsyA, cloning and heterologous expression of the An-CsyA gene (An-csyA) in Escherichia coli were performed. Recombinant His-tagged An-CsyA was successfully expressed in E. coli BL21 (DE3), purified by Ni{sup 2+}-affinity chromatography, and used for in vitro assays. Tests of the substrate specificity of His-tagged An-CsyA, with various acyl-CoAs as starter substrates and malonyl-CoA as the extender substrate, showed that His-tagged An-CsyA accepted fatty acyl-CoAs (C2-C14) and produced triketide pyrones (C2-C14), tetraketide pyrones (C2-C10), and pentaketide resorcinols (C10-C14). Furthermore, acetoacetyl-CoA, malonyl-CoA, isobutyryl-CoA, and benzoyl-CoA were also accepted as starter substrates, and both triketide and tetraketide pyrones were produced. It is noteworthy that His-tagged An-CsyA produced polyketides from malonyl-CoA as both starter and extender substrate and produced tetraketide pyrones from short-chain fatty acyl-CoAs as starter substrates. This is therefore the first report showing that the functional properties of An-CsyA differ from those of other fungal type III PKSs. -- Highlights: •Type III PKS from Aspergillus niger NRRL 328, An-CsyA, was cloned and characterized. •An-CsyA produced triketide pyrones, tetraketide pyrones and pentaketide resorcinols. •Functional properties of An-CsyA differ from those of other fungal type III PKSs.

  5. Heterologous gene expression and functional analysis of a type III polyketide synthase from Aspergillus niger NRRL 328

    International Nuclear Information System (INIS)

    Kirimura, Kohtaro; Watanabe, Shotaro; Kobayashi, Keiichi

    2016-01-01

    Type III polyketide synthases (PKSs) catalyze the formation of pyrone- and resorcinol-type aromatic polyketides. Genomic analysis of the filamentous fungus Aspergillus niger NRRL 328 revealed that this strain has a putative gene (chr-8-2: 2978617–2979847) encoding a type III PKS, although its functions were unknown. In this study, for functional analysis of this putative type III PKS, designated An-CsyA, cloning and heterologous expression of the An-CsyA gene (An-csyA) in Escherichia coli were performed. Recombinant His-tagged An-CsyA was successfully expressed in E. coli BL21 (DE3), purified by Ni²⁺-affinity chromatography, and used for in vitro assays. Tests of the substrate specificity of His-tagged An-CsyA, with various acyl-CoAs as starter substrates and malonyl-CoA as the extender substrate, showed that His-tagged An-CsyA accepted fatty acyl-CoAs (C2-C14) and produced triketide pyrones (C2-C14), tetraketide pyrones (C2-C10), and pentaketide resorcinols (C10-C14). Furthermore, acetoacetyl-CoA, malonyl-CoA, isobutyryl-CoA, and benzoyl-CoA were also accepted as starter substrates, and both triketide and tetraketide pyrones were produced. It is noteworthy that His-tagged An-CsyA produced polyketides from malonyl-CoA as both starter and extender substrate and produced tetraketide pyrones from short-chain fatty acyl-CoAs as starter substrates. This is therefore the first report showing that the functional properties of An-CsyA differ from those of other fungal type III PKSs. -- Highlights: •Type III PKS from Aspergillus niger NRRL 328, An-CsyA, was cloned and characterized. •An-CsyA produced triketide pyrones, tetraketide pyrones and pentaketide resorcinols. •Functional properties of An-CsyA differ from those of other fungal type III PKSs.

  6. Efficient Bit-to-Symbol Likelihood Mappings

    Science.gov (United States)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

    This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8- percent reduction in overall area relative to the prior design.
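
The record gives no implementation detail, but the two mappings it refers to can be sketched for a toy 4-point constellation. The bit labels and the log-sum-exp formulation below are illustrative assumptions, not the design described in the innovation.

```python
import numpy as np

# Toy 4-point constellation with assumed Gray-style bit labels.
BITS = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def bits_to_symbol_loglik(bit_loglik):
    """Bit-to-symbol mapping: the log-likelihood of a symbol is the sum of the
    log-likelihoods of its label bits. bit_loglik[i][b] = log p(obs | bit i = b)."""
    return np.array([sum(bit_loglik[i][b] for i, b in enumerate(BITS[s]))
                     for s in BITS])

def logsumexp(v):
    """Numerically stable log(sum(exp(v)))."""
    v = np.asarray(v, dtype=float)
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

def symbol_to_bit_llr(sym_loglik):
    """Symbol-to-bit mapping: LLR(bit i) = log sum_{s: b_i(s)=0} L(s)
    - log sum_{s: b_i(s)=1} L(s)."""
    return [logsumexp([sym_loglik[s] for s in BITS if BITS[s][i] == 0])
            - logsumexp([sym_loglik[s] for s in BITS if BITS[s][i] == 1])
            for i in range(2)]

# If symbol 2 (label bits 1,1) carries almost all the likelihood,
# both bit LLRs should favour 1, i.e. come out negative.
llr = symbol_to_bit_llr(np.log([0.01, 0.01, 0.97, 0.01]))
```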

  7. Likelihood-ratio-based biometric verification

    NARCIS (Netherlands)

    Bazen, A.M.; Veldhuis, Raymond N.J.

    2002-01-01

    This paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that for single-user verification the likelihood ratio is optimal.

  8. Likelihood Ratio-Based Biometric Verification

    NARCIS (Netherlands)

    Bazen, A.M.; Veldhuis, Raymond N.J.

    The paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that, for single-user verification, the likelihood ratio is optimal.
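
As a sketch of the likelihood-ratio similarity measure for fixed-length feature vectors, assume (purely for illustration; the paper does not specify these densities) that both the genuine-user and background distributions are independent Gaussians:

```python
import numpy as np

def log_likelihood_ratio(x, user_mean, user_var, bg_mean, bg_var):
    """log p(x | genuine user) - log p(x | background population), with each
    density modelled as an independent (diagonal-covariance) Gaussian."""
    def log_gauss(x, m, v):
        return -0.5 * np.sum(np.log(2.0 * np.pi * v) + (x - m) ** 2 / v)
    return log_gauss(x, user_mean, user_var) - log_gauss(x, bg_mean, bg_var)

# Hypothetical 2-dimensional feature model: a tight user cloud inside a
# broad background population.
user_mean, user_var = np.array([1.0, -0.5]), np.array([0.1, 0.1])
bg_mean, bg_var = np.zeros(2), np.ones(2)

genuine = np.array([0.95, -0.45])     # probe close to the user's template
impostor = np.array([-1.5, 2.0])      # probe typical of the background

llr_genuine = log_likelihood_ratio(genuine, user_mean, user_var, bg_mean, bg_var)
llr_impostor = log_likelihood_ratio(impostor, user_mean, user_var, bg_mean, bg_var)
# Verification decision: accept when the log-likelihood ratio exceeds a threshold (0 here).
```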

  9. Analysis of novel silicon and III-V solar cells by simulation and experiment; Analyse neuartiger Silizium- und III-V-Solarzellen mittels Simulation und Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Hermle, Martin

    2008-11-27

    This work presents various simulation studies of silicon and III-V solar cells. For standard silicon solar cells, one of the critical parameters for obtaining good performance is the rear-side recombination velocity. The optical and electrical differences between the different cell structures were determined, and the optical differences and the effective recombination velocity Sback of the different rear-side structures were extracted for 1 Ohm cm material. Besides standard silicon solar cells, back-junction silicon solar cells were investigated; in particular, the influence of the front surface field and of the electrical shading due to the rear side was examined. The last two chapters analyse III-V solar cells. For the simulation of III-V multi-junction solar cells, simulation of the tunnel diode is the basic prerequisite. In this work, numerical calibration of a GaAs tunnel diode was achieved using a non-local tunnel model. Using this model, it was possible to successfully simulate a III-V tandem solar cell. The last chapter deals with optimization of the III-V 3-junction cell for space applications; especially the influence of the GaAs middle cell was investigated. Through structural changes, the end-of-life efficiency was increased substantially.

  10. Detection and genome analysis of a lineage III peste des petits ruminants virus in Kenya in 2011

    International Nuclear Information System (INIS)

    Dundon, W.G.; Kihu, S.M.; Gitao, G.C.; Bebora, L.C.; John, N.M.; Ogugi, J.O.; Loitsch, A.; Diallo, A.

    2016-01-01

    Full text: In May 2011 in Turkana County, north-western Kenya, tissue samples were collected from goats suspected of having died of peste des petits ruminants (PPR), an acute viral disease of small ruminants. The samples were processed and tested by reverse transcriptase PCR for the presence of PPR viral RNA. The positive samples were sequenced and identified as belonging to peste des petits ruminants virus (PPRV) lineage III. Full-genome analysis of one of the positive samples revealed that the virus causing disease in Kenya in 2011 was 95.7% identical to the full genome of a virus isolated in Uganda in 2012 and that a segment of the viral fusion gene was 100% identical to that of a virus circulating in Tanzania in 2013. These data strongly indicate transboundary movement of lineage III viruses between Eastern African countries and have significant implications for surveillance and control of this important disease as it moves southwards in Africa. (author)

  11. Analysis of BWR/Mark III drywell failure during degraded core accidents

    International Nuclear Information System (INIS)

    Yang, J.W.

    1983-01-01

    The potential for a hydrogen detonation due to the accumulation of a large amount of hydrogen in the drywell region of a BWR Mark III containment is analyzed. Loss of integrity of the drywell wall causes a complete bypass of the suppression pool and leads to pressurization of the containment building. However, the predicted peak containment pressure does not exceed the estimates of containment failure pressure

  12. Cost analysis of surgically treated pressure sores stage III and IV.

    Science.gov (United States)

    Filius, A; Damen, T H C; Schuijer-Maaskant, K P; Polinder, S; Hovius, S E R; Walbeehm, E T

    2013-11-01

    Health-care costs associated with pressure sores are significant and their financial burden is likely to increase even further. The aim of this study was to analyse the direct medical costs of hospital care for surgical treatment of pressure sores stage III and IV. We performed a retrospective chart study of patients who were surgically treated for stage III and IV pressure sores between 2007 and 2010. Volumes of health-care use were obtained for all patients and direct medical costs were subsequently calculated. In addition, we evaluated the effect of location and number of pressure sores on total costs. A total of 52 cases were identified. Average direct medical costs in hospital were €20,957 for the surgical treatment of pressure sores stage III or IV; average direct medical costs for patients with one pressure sore on an extremity (group 1, n = 5) were €30,286, €10,113 for patients with one pressure sore on the trunk (group 2, n = 32) and €40,882 for patients with multiple pressure sores (group 3, n = 15). The additional costs for patients in group 1 and group 3 compared to group 2 were primarily due to longer hospitalisation. The average direct medical costs for surgical treatment of pressure sores stage III and IV were high. Large differences in costs were related to the location and number of pressure sores. Insight into the distribution of these costs allows identification of high-risk patients and enables the development of specific cost-reducing measures. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  13. Le Fort III Distraction With Internal vs External Distractors: A Cephalometric Analysis.

    Science.gov (United States)

    Robertson, Kevin J; Mendez, Bernardino M; Bruce, William J; McDonnell, Brendan D; Chiodo, Michael V; Patel, Parit A

    2018-05-01

    This study compares the change in midface position following Le Fort III advancement using either rigid external distraction (group 1) or internal distraction (group 2). We hypothesized that, with reference to right-facing cephalometry, internal distraction would result in increased clockwise rotation and inferior displacement of the midface. Le Fort III osteotomies and standardized distraction protocols were performed on 10 cadaveric specimens per group. Right-facing lateral cephalograms were traced and compared across time points to determine the change in position at points orbitale, anterior nasal spine (ANS), A-point, and angle ANB. Institutional. Twenty cadaveric head specimens. Standard subcranial Le Fort III osteotomies were performed from a coronal approach and adequately mobilized. The specified distraction mechanism was applied and advanced by 15 mm. Changes of position were calculated at various skeletal landmarks: orbitale, ANS, A-point, and ANB. Group 1 demonstrated relatively uniform x-axis advancement with minimal inferior repositioning at the A-point, ANS, and orbitale. Group 2 demonstrated marked variation in x-axis advancement among the 3 points, along with significant inferior repositioning and clockwise rotation of the midface (P < …). External distraction resulted in more uniform advancement of the midface, whereas internal distraction resulted in greater clockwise rotation and inferior displacement. External distraction appears to provide increased vector control of the midface, which is important in creating a customized distraction plan based on the patient's individual occlusal and skeletal needs.

  14. Early orthodontic treatment for Class III malocclusion: A systematic review and meta-analysis.

    Science.gov (United States)

    Woon, See Choong; Thiruvenkatachari, Badri

    2017-01-01

    Class III malocclusion affects between 5% and 15% of the population. The 2 most common dilemmas surrounding Class III treatment are the timing of treatment and the type of appliance. A number of appliances have been used to correct a Class III skeletal discrepancy, but there is little evidence available on their effectiveness in the long term. Similarly, early treatment of Class III malocclusion has been practiced with increasing interest; however, there has been no solid evidence on its long-term benefits. The aim of this systematic review was to evaluate the effectiveness of orthodontic/orthopedic methods used in the early treatment of Class III malocclusion in the short and long terms. Several sources were used to identify all relevant studies independently of language. The Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Embase (Ovid), and MEDLINE (Ovid) were searched to June 2016. The selection criteria included randomized controlled trials (RCTs) and prospective controlled clinical trials (CCTs) of children between the ages of 7 and 12 years on early treatment with any type of orthodontic/orthopedic appliance compared with another appliance to correct Class III malocclusion or with an untreated control group. The primary outcome measure was correction of reverse overjet, and the secondary outcomes included skeletal changes, soft tissue changes, quality of life, patient compliance, adverse effects, Peer Assessment Rating score, and treatment time. The search results were screened for inclusion, and the data were extracted by 2 independent authors. The data were analyzed using software (version 5.1, Review Manager; The Nordic Cochrane Centre, The Cochrane Collaboration; Copenhagen, Denmark). The mean differences with 95% confidence intervals were expressed for the continuous data. Random-effects models were used when clinical or statistical heterogeneity was high, and fixed-effects models when heterogeneity was low

  15. Superior outcome of women with stage I/II cutaneous melanoma: Pooled analysis of four European organisation for research and treatment of cancer phase III trials

    NARCIS (Netherlands)

    A. Joosse (Arjen); S. Collette (Sandra); S. Suciu (Stefan); T.E.C. Nijsten (Tamar); F.J. Lejeune (Ferdy); U.R. Kleeberg (Ulrich); J.W.W. Coebergh (Jan Willem); A.M.M. Eggermont (Alexander); E.G.E. de Vries (Elisabeth)

    2012-01-01

    textabstractPurpose: Several studies observed a female advantage in the prognosis of cutaneous melanoma, for which behavioral factors or an underlying biologic mechanism might be responsible. Using complete and reliable follow-up data from four phase III trials of the European Organisation for

  16. Physical constraints on the likelihood of life on exoplanets

    Science.gov (United States)

    Lingam, Manasvi; Loeb, Abraham

    2018-04-01

    One of the most fundamental questions in exoplanetology is to determine whether a given planet is habitable. We estimate the relative likelihood of a planet's propensity towards habitability by considering key physical characteristics such as the role of temperature on ecological and evolutionary processes, and atmospheric losses via hydrodynamic escape and stellar wind erosion. From our analysis, we demonstrate that Earth-sized exoplanets in the habitable zone around M-dwarfs seemingly display much lower prospects of being habitable relative to Earth, owing to the higher incident ultraviolet fluxes and closer distances to the host star. We illustrate our results by specifically computing the likelihood (of supporting life) for the recently discovered exoplanets, Proxima b and TRAPPIST-1e, which we find to be several orders of magnitude smaller than that of Earth.

  17. Analysis of the lambda 5696 Carbon III line in the O stars

    International Nuclear Information System (INIS)

    Cardona-Nunez, O.

    1978-01-01

    Lines of twice-ionized carbon, specifically lambda 5696 and lambda 8500, in the O stars were analyzed on the basis of a detailed solution of the coupled statistical-equilibrium and transfer equations for a multilevel, multiline, multi-ion ensemble. It is significant that these plane-parallel non-LTE statistical equilibrium calculations successfully reproduce the observed emission at lambda 5696 and absorption at lambda 8500. The 3p ¹P°-3d ¹D transition is found to come into emission at the observed temperatures for both main-sequence and low-gravity objects. The equivalent widths of the emission and absorption lines agree very well with those measured for O stars. In these stars the basic physical mechanism responsible for this phenomenon is the overpopulation of 3d by means of direct recombination and cascades from upper states (with dielectronic recombination taking part in the earliest types), with subsequent cascade to 3p. The 3p state is drained by the two-electron transitions coupling 3p to the 2p² (¹S, ¹D) states; emission in the 3s ¹S-3p ¹P° line is thus prevented. The mechanism of formation of the C III emission is different from that of N III because dielectronic recombination is not necessary in the former case. The fact that the C III emission line can be produced in a static, non-extended atmosphere in radiative equilibrium indicates that the presence of emission lines is not sufficient evidence for the existence of extended atmospheres

  18. Probability of success for phase III after exploratory biomarker analysis in phase II.

    Science.gov (United States)

    Götte, Heiko; Kirchner, Marietta; Sailer, Martin Oliver

    2017-05-01

    The probability of success, or average power, describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success of phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in the probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to obtain the necessary information about the risks and chances associated with such subgroup selection designs. Copyright © 2017 John Wiley & Sons, Ltd.
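
The weighting described in the first sentence can be sketched as a Monte Carlo integral: draw the true effect from a distribution centred on the phase II estimate and average the phase III power over the draws. All numbers, the normal weighting distribution and the one-sided z-test power formula below are illustrative assumptions, not the authors' simulation design.

```python
import numpy as np
from statistics import NormalDist

N = NormalDist()

def power_one_sided_z(theta, se, alpha=0.025):
    """Approximate power of a one-sided z-test for true effect theta."""
    return N.cdf(theta / se - N.inv_cdf(1.0 - alpha))

def probability_of_success(theta_hat, se_phase2, se_phase3, n_draws=20_000, seed=1):
    """Average the phase III power over a normal distribution of the true
    effect, centred on the phase II estimate with its standard error as spread."""
    rng = np.random.default_rng(seed)
    thetas = rng.normal(theta_hat, se_phase2, n_draws)
    return float(np.mean([power_one_sided_z(t, se_phase3) for t in thetas]))

pos = probability_of_success(theta_hat=0.3, se_phase2=0.15, se_phase3=0.1)
power_at_estimate = power_one_sided_z(0.3, 0.1)
# Averaging over the uncertain effect pulls the answer below the naive power
# evaluated at the point estimate (here, roughly 0.72 vs 0.85).
```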

  19. Genomic organization, sequence characterization and expression analysis of Tenebrio molitor apolipophorin-III in response to an intracellular pathogen, Listeria monocytogenes.

    Science.gov (United States)

    Noh, Ju Young; Patnaik, Bharat Bhusan; Tindwa, Hamisi; Seo, Gi Won; Kim, Dong Hyun; Patnaik, Hongray Howrelia; Jo, Yong Hun; Lee, Yong Seok; Lee, Bok Luel; Kim, Nam Jung; Han, Yeon Soo

    2014-01-25

    Apolipophorin III (apoLp-III) is a well-known hemolymph protein with functional roles in lipid transport and the insect immune response. We cloned a full-length cDNA encoding putative apoLp-III from larvae of the coleopteran beetle Tenebrio molitor (TmapoLp-III) by identifying clones corresponding to the partial sequence of TmapoLp-III, followed by full-length sequencing using a clone-by-clone primer-walking method. The complete cDNA consists of 890 nucleotides, including an ORF encoding 196 amino acid residues. Excluding a putative signal peptide of the first 20 amino acid residues, the 176-residue mature apoLp-III has a calculated molecular mass of 19,146 Da. Genomic sequence analysis with respect to its cDNA showed that TmapoLp-III is organized into four exons interrupted by three introns. Several immune-related transcription factor binding sites were discovered in the putative 5'-flanking region. BLAST and phylogenetic analyses reveal that TmapoLp-III has high sequence identity (88%) with Tribolium castaneum apoLp-III but shares little sequence homology (…). Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Analysis of the SPERT III E-core experiment using the EUREKA-2 code

    International Nuclear Information System (INIS)

    Harami, Taikan; Uemura, Mutsumi; Ohnishi, Nobuaki

    1986-09-01

    EUREKA-2, a coupled nuclear thermal-hydrodynamic kinetics code, was adapted for the testing of models and methods. Code evaluations were made against the reactivity-addition experiments of the SPERT III E-core, a slightly enriched oxide core. The code was tested for non-damaging power excursions over a wide range of initial operating conditions, including cold-startup, hot-startup, hot-standby and operating-power initial conditions. Comparisons showed good agreement, within the experimental errors, between calculated and experimental power, energy, reactivity and clad surface temperature. (author)

  1. Analysis of the MPEG-1 Layer III (MP3) Algorithm using MATLAB

    CERN Document Server

    Thiagarajan, Jayaraman

    2011-01-01

    The MPEG-1 Layer III (MP3) algorithm is one of the most successful audio formats for consumer audio storage and for the transfer and playback of music on digital audio players. The MP3 compression standard, along with the AAC (Advanced Audio Coding) algorithm, is associated with the most successful music players of the last decade. This book describes the fundamentals and the MATLAB implementation details of the MP3 algorithm. Several of the tedious processes in MP3 are supported by demonstrations using MATLAB software. The book presents the theoretical concepts and algorithms used in the MP3 stand

  2. Characterization of the TRIGA Mark III reactor for k0-neutron activation analysis

    International Nuclear Information System (INIS)

    Diaz R, O.; Herrera P, E.; Lopez R, M.C.

    1997-01-01

    The reactor site parameter (α), which describes the non-ideality of the epithermal neutron flux distribution, the thermal-to-epithermal neutron ratio (f), the irradiation channel neutron temperature (Tn) and the k0-factors for more than 20 isotopes were determined in the 3 typical irradiation positions of the TRIGA Mark III reactor of the National Nuclear Research Institute, Salazar, Mexico, using different experimental methods with conventional and non-conventional monitors. This characterization is used in the k0-method of NAA, recently introduced at the Institute. (author). 21 refs., 3 figs., 5 tabs

  3. A structural modification of the two dimensional fuel behaviour analysis code FEMAXI-III with high-speed vectorized operation

    International Nuclear Information System (INIS)

    Yanagisawa, Kazuaki; Ishiguro, Misako; Yamazaki, Takashi; Tokunaga, Yasuo.

    1985-02-01

    The two-dimensional fuel behaviour analysis code FEMAXI-III was developed by JAERI as an optimized scalar computer code, but the demand for more efficient code usage arising from recent trends such as high burn-up and load-follow operation calls for further modification of the code. The principal aim of the modification was to transform the already implemented scalar subroutines into vectorized forms so that the programme structure runs efficiently on high-speed vector computers. This structural modification has been completed successfully. Two benchmark tests performed to examine the effect of the modification lead to the following conclusions: (1) In the first benchmark test, three comparatively high-burn-up fuel rods irradiated under HBWR, BWR, and PWR conditions were prepared. In all cases, the net computing time consumed by the vectorized FEMAXI was approximately 50% less than that consumed by the original code. (2) In the second benchmark test, a total of 26 PWR fuel rods, irradiated to burn-ups of 13-30 MWd/kgU and subsequently power-ramped in the R2 reactor, Sweden, were prepared. In this case the code was used to construct an envelope of the PCI-failure threshold through 26 code runs. To reach the same conclusion, the vectorized FEMAXI-III consumed a net computing time of 18 min., while the original FEMAXI-III consumed 36 min. (3) The gains from this structural modification are attributed chiefly to the saving of net computing time in the mechanical calculation of the vectorized FEMAXI-III code. (author)

  4. Review of Elaboration Likelihood Model of persuasion

    OpenAIRE

    藤原, 武弘; 神山, 貴弥

    1989-01-01

    This article mainly introduces the Elaboration Likelihood Model (ELM) proposed by Petty & Cacioppo, a general theory of attitude change. ELM postulates two routes to persuasion: the central and the peripheral route. Attitude change via the central route is viewed as resulting from a diligent consideration of the issue-relevant information presented. On the other hand, attitude change via the peripheral route is viewed as resulting from peripheral cues in the persuasion context. Secondly we compare these tw...

  5. THE SPECTRUM AND TERM ANALYSIS OF Co III MEASURED USING FOURIER TRANSFORM AND GRATING SPECTROSCOPY

    Energy Technology Data Exchange (ETDEWEB)

    Smillie, D. G.; Pickering, J. C. [Blackett Laboratory, Imperial College London, London SW7 2AZ (United Kingdom); Nave, G. [National Institute of Standards and Technology, Gaithersburg, MD 20899-8422 (United States); Smith, P. L., E-mail: j.pickering@imperial.ac.uk [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2016-03-15

    The spectrum of Co iii has been recorded in the region 1562–2564 Å (64,000 cm⁻¹–39,000 cm⁻¹) by Fourier transform (FT) spectroscopy, and in the region 1317–2500 Å (164,000 cm⁻¹–40,000 cm⁻¹) using a 10.7 m grating spectrograph with phosphor image plate detectors. The spectrum was excited in a cobalt–neon Penning discharge lamp. We classified 514 Co iii lines measured using FT spectroscopy, the strongest having wavenumber uncertainties approaching 0.004 cm⁻¹ (approximately 0.2 mÅ at 2000 Å, or 1 part in 10⁷), and 240 lines measured with grating spectroscopy with uncertainties between 5 and 10 mÅ. The wavelength calibration of 790 lines of Raassen and Ortí Ortin and 87 lines from Shenstone has been revised and combined with our measurements to optimize the values of all but one of the 288 previously reported energy levels. Order of magnitude reductions in uncertainty for almost two-thirds of the 3d⁶4s and almost half of the 3d⁶4p revised energy levels are obtained. Ritz wavelengths have been calculated for an additional 100 forbidden lines. Eigenvector percentage compositions for the energy levels and predicted oscillator strengths have been calculated using the Cowan code.

  6. The Spectrum and Term Analysis of Co III Measured Using Fourier Transform and Grating Spectroscopy

    Science.gov (United States)

    Smillie, D. G.; Pickering, J. C.; Nave, G.; Smith, P. L.

    2016-03-01

    The spectrum of Co III has been recorded in the region 1562-2564 Å (64,000 cm⁻¹-39,000 cm⁻¹) by Fourier transform (FT) spectroscopy, and in the region 1317-2500 Å (164,000 cm⁻¹-40,000 cm⁻¹) using a 10.7 m grating spectrograph with phosphor image plate detectors. The spectrum was excited in a cobalt-neon Penning discharge lamp. We classified 514 Co III lines measured using FT spectroscopy, the strongest having wavenumber uncertainties approaching 0.004 cm⁻¹ (approximately 0.2 mÅ at 2000 Å, or 1 part in 10⁷), and 240 lines measured with grating spectroscopy with uncertainties between 5 and 10 mÅ. The wavelength calibration of 790 lines of Raassen & Ortí Ortin and 87 lines from Shenstone has been revised and combined with our measurements to optimize the values of all but one of the 288 previously reported energy levels. Order of magnitude reductions in uncertainty for almost two-thirds of the 3d⁶4s and almost half of the 3d⁶4p revised energy levels are obtained. Ritz wavelengths have been calculated for an additional 100 forbidden lines. Eigenvector percentage compositions for the energy levels and predicted oscillator strengths have been calculated using the Cowan code.

  7. Experimental analysis of the power curve sensitivity test series at ROSA-III

    International Nuclear Information System (INIS)

    Koizumi, Y.; Iriko, M.; Yonomoto, T.; Tasaka, K.

    1985-01-01

    The rig of safety assessment (ROSA)-III facility is a volumetrically scaled (1/424) boiling water reactor (BWR/6) system with an electrically heated core designed for integral LOCA and ECCS tests. Seven recirculation pump suction line break LOCA experiments were conducted at the ROSA-III facility in order to examine the effect of the initial stored heat of a fuel rod on the peak cladding temperature (PCT). The break size was changed from 200% to 5% in the test series and a failure of a high pressure core spray (HPCS) diesel generator was assumed. Three power curves which represented conservative, realistic and zero initial stored heat, respectively, were used. In a large break LOCA such as 200% or 50% breaks, the initial stored heat in a fuel rod has a large effect on the cladding surface temperature because core uncovery occurs before all the initial stored heat is released, whereas in a small break LOCA such as a 5% break little effect is observed because core uncovery occurs after the initial stored heat is released. The maximum PCTs for the conservative initial stored heat case was 925 K, obtained in the 50% break experiment, and that for the realistic initial stored heat case was 835 K, obtained in the 5% break experiment. (orig./HP)

  8. Analysis of the characteristics of patients with open tibial fractures of Gustilo and Anderson type III

    Directory of Open Access Journals (Sweden)

    Frederico Carlos Jaña Neto

    2016-04-01

    OBJECTIVE: To analyze the characteristics of patients with Gustilo-Anderson Type III open tibial fractures treated at a tertiary care hospital in São Paulo between January 2013 and August 2014. METHODS: This was a cross-sectional retrospective study. The following data were gathered from the electronic medical records: age; gender; diagnosis; trauma mechanism; comorbidities; associated fractures; Gustilo and Anderson, Tscherne and AO classifications; treatment (initial and definitive; presence of compartment syndrome; primary and secondary amputations; MESS (Mangled Extremity Severity Score index; mortality rate; and infection rate. RESULTS: 116 patients were included: 81% with fracture type IIIA, 12% IIIB and 7% IIIC; 85% males; mean age 32.3 years; and 57% victims of motorcycle accidents. Tibial shaft fractures were significantly more prevalent (67%. Eight patients were subjected to amputation: one primary case and seven secondary cases. Types IIIC (75% and IIIB (25% predominated among the patients subjected to secondary amputation. The MESS index was greater than 7 in 88% of the amputees and in 5% of the limb salvage group. CONCLUSION: The profile of patients with open tibial fracture of Gustilo and Anderson Type III mainly involved young male individuals who were victims of motorcycle accidents. The tibial shaft was the segment most affected. Only 7% of the patients underwent amputation. Given the current controversy in the literature about amputation or salvage of severely injured lower limbs, it becomes necessary to carry out prospective studies to support clinical decisions.

  9. A user input manual for single fuel rod behaviour analysis code FEMAXI-III

    International Nuclear Information System (INIS)

    Saito, Hiroaki; Yanagisawa, Kazuaki; Fujita, Misao.

    1983-03-01

    The principal objectives of safety-related research on light water reactor fuel rods under normal operating conditions are mainly (1) to assess fuel integrity under steady-state conditions and (2) to generate initial conditions for hypothetical accidents. These assessments must rely principally on a steady-state fuel behaviour code able to calculate the fuel conditions that may occur in various ways. To achieve these objectives, efforts have been made to develop an analytical computer code that calculates in-reactor fuel rod behaviour in a best-estimate manner. The code developed for predicting the long-term burnup response of a single fuel rod under light water reactor conditions is the third in a series of code versions: FEMAXI-III. The code calculates temperature, rod internal gas pressure, fission gas release and pellet-cladding-interaction-related rod deformation as functions of time-dependent fuel rod power and coolant boundary conditions. This document serves as a user input manual for the FEMAXI-III code, which was released to the public in 1982. A general description of the code input and output is included, together with typical examples of input data. A detailed description of the structures, analytical submodels and solution schemes of the code will be given in a separate document to be published. (author)

  10. Epithermal neutron flux characterization of the TRIGA Mark III reactor, Salazar, Mexico, for use in Instrumental Neutron Activation Analysis

    International Nuclear Information System (INIS)

    Diaz Rizo, O.; Herrera Peraza, E.

    1996-01-01

    The non-ideality of the epithermal neutron flux distribution at a reactor site, described by the shape parameter α, and the thermal-to-epithermal neutron flux ratio (f) were determined in the 3 typical irradiation positions of the TRIGA Mark III reactor of the National Nuclear Research Institute, Salazar, Mexico, using the Cd-ratio multi-monitor method and the bare bi-isotopic monitor method, respectively. This characterization is of use in the k₀-method of neutron activation analysis, recently introduced at the Institute

  11. Capture programs, analysis, data graphication for the study of the thermometry of the TRIGA Mark III reactor core

    International Nuclear Information System (INIS)

    Paredes G, L.C.

    1991-05-01

    This document explains the programs for capture, analysis and graphing of the data obtained during measurement of the temperatures of the instrumented fuel element of the TRIGA Mark III reactor and of the coolant near this fuel. The system uses a 'Data Translation' analog-to-digital conversion card and a signal conditioner for five temperature channels based on type K thermocouples, developed by the Simulation and Control of Nuclear Systems management department, which delivers a 0 to 10 V DC signal over a temperature interval of 0 to 1000 °C. (Author)

  12. Cancer care coordinators in stage III colon cancer: a cost-utility analysis.

    Science.gov (United States)

    Blakely, Tony; Collinson, Lucie; Kvizhinadze, Giorgi; Nair, Nisha; Foster, Rachel; Dennett, Elizabeth; Sarfati, Diana

    2015-08-05

    There is momentum internationally to improve coordination of complex care pathways. Robust evaluations of such interventions are scarce. This paper evaluates the cost-utility of cancer care coordinators for stage III colon cancer patients, who generally require surgery followed by chemotherapy. We compared a hospital-based nurse cancer care coordinator (CCC) with 'business-as-usual' (no dedicated coordination service) in stage III colon cancer patients in New Zealand. A discrete event microsimulation model was constructed to estimate quality-adjusted life-years (QALYs) and costs from a health system perspective. We used New Zealand data on colon cancer incidence, survival, and mortality as baseline input parameters for the model. We specified intervention input parameters using available literature and expert estimates. For example, that a CCC would improve the coverage of chemotherapy by 33% (ranging from 9 to 65%), reduce the time to surgery by 20% (3 to 48%), reduce the time to chemotherapy by 20% (3 to 48%), and reduce patient anxiety (reduction in disability weight of 33%, ranging from 0 to 55%). Much of the direct cost of a nurse CCC was balanced by savings in business-as-usual care coordination. Much of the health gain was through increased coverage of chemotherapy with a CCC (especially older patients), and reduced time to chemotherapy. Compared to 'business-as-usual', the cost per QALY of the CCC programme was $NZ 18,900 (≈ $US 15,600; 95% UI: $NZ 13,400 to 24,600). The cost-effectiveness of the CCC intervention varied by age, and was roughly comparable between ethnic groups. Such a nurse-led CCC intervention in New Zealand has acceptable cost-effectiveness for stage III colon cancer, meaning it probably merits funding. Each CCC programme will differ in its likely health gains and costs, making generalisation from this evaluation to other CCC interventions difficult. However, this evaluation suggests

  13. Building human resources capability in health care: a global analysis of best practice--Part III.

    Science.gov (United States)

    Zairi, M

    1998-01-01

    This is the last part of a series of three papers which discussed very comprehensively best practice applications in human resource management by drawing special inferences to the healthcare context. It emerged from parts I and II that high performing organisations plan and intend to build sustainable capability through a systematic consideration of the human element as the key asset and through a continuous process of training, developing, empowering and engaging people in all aspects of organisational excellence. Part III brings this debate to a close by demonstrating what brings about organisational excellence and proposes a road map for effective human resource development and management, based on world class standards. Healthcare human resource professionals can now rise to the challenge and plan ahead for building organisational capability and sustainable performance.

  14. Crystallization and preliminary crystallographic analysis of an octaketide-producing plant type III polyketide synthase

    Energy Technology Data Exchange (ETDEWEB)

    Morita, Hiroyuki [Mitsubishi Kagaku Institute of Life Sciences (MITILS), 11 Minamiooya, Machida, Tokyo 194-8511 (Japan); Kondo, Shin; Kato, Ryohei [Innovation Center Yokohama, Mitsubishi Chemical Corporation, 1000 Kamoshida, Aoba, Yokohama, Kanagawa 227-8502 (Japan); Wanibuchi, Kiyofumi; Noguchi, Hiroshi [School of Pharmaceutical Sciences, University of Shizuoka, Shizuoka 422-8526 (Japan); Sugio, Shigetoshi, E-mail: sugio.shigetoshi@mw.m-kagaku.co.jp [Innovation Center Yokohama, Mitsubishi Chemical Corporation, 1000 Kamoshida, Aoba, Yokohama, Kanagawa 227-8502 (Japan); Abe, Ikuro, E-mail: sugio.shigetoshi@mw.m-kagaku.co.jp [School of Pharmaceutical Sciences, University of Shizuoka, Shizuoka 422-8526 (Japan); PRESTO, Japan Science and Technology Agency, Kawaguchi, Saitama 332-0012 (Japan); Kohno, Toshiyuki, E-mail: sugio.shigetoshi@mw.m-kagaku.co.jp [Mitsubishi Kagaku Institute of Life Sciences (MITILS), 11 Minamiooya, Machida, Tokyo 194-8511 (Japan)

    2007-11-01

    Octaketide synthase from A. arborescens has been overexpressed in E. coli, purified and crystallized. Diffraction data have been collected to 2.6 Å. Octaketide synthase (OKS) from Aloe arborescens is a plant-specific type III polyketide synthase that produces SEK4 and SEK4b from eight molecules of malonyl-CoA. Recombinant OKS expressed in Escherichia coli was crystallized by the hanging-drop vapour-diffusion method. The crystals belonged to space group I422, with unit-cell parameters a = b = 110.2, c = 281.4 Å, α = β = γ = 90.0°. Diffraction data were collected to 2.6 Å resolution using synchrotron radiation at BL24XU of SPring-8.

  15. Application of an array processor to the analysis of magnetic data for the Doublet III tokamak

    International Nuclear Information System (INIS)

    Wang, T.S.; Saito, M.T.

    1980-08-01

    Discussed herein is a fast computational technique employing the Floating Point Systems AP-190L array processor to analyze magnetic data for the Doublet III tokamak, a fusion research device. Interpretation of the experimental data requires the repeated solution of a free-boundary nonlinear partial differential equation, which describes the magnetohydrodynamic (MHD) equilibrium of the plasma. For this particular application, we have found that the array processor is only 1.4 and 3.5 times slower than the CDC-7600 and CRAY computers, respectively. The overhead on the host DEC-10 computer was kept to a minimum by chaining the complete Poisson solver and free-boundary algorithm into one single-load module using the vector function chainer (VFC). A simple time-sharing scheme for using the MHD code is also discussed

  16. Statistical Analysis of Langmuir Waves Associated with Type III Radio Bursts: I. Wind Observations

    Directory of Open Access Journals (Sweden)

    Vidojević S.

    2011-12-01

    Interplanetary electron beams are unstable in the solar wind and they generate Langmuir waves at the local plasma frequency or its harmonic. Radio observations of the waves in the range 4-256 kHz, observed in 1994-2010 with the WAVES experiment onboard the WIND spacecraft, are statistically analyzed. A subset of 36 events with Langmuir waves and type III bursts occurring at the same time was selected. After removal of the background, the remaining power spectral density is modeled by the Pearson system of probability distributions (types I, IV and VI). The Stochastic Growth Theory (SGT) predicts a log-normal distribution for the power spectral density of the Langmuir waves. Our results indicate that SGT possibly requires further verification.
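
    The distribution-fitting step described above can be sketched as follows, on synthetic data standing in for the WAVES measurements. SciPy does not expose the full Pearson type I/IV/VI system, so a log-normal fit with a Kolmogorov-Smirnov check (the SGT prediction) is used here purely as an illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic stand-in for background-subtracted Langmuir-wave power spectral densities
psd = rng.lognormal(mean=-1.0, sigma=0.8, size=2000)

# fit a log-normal (loc fixed at 0, since PSD values are positive)
# and test the SGT-predicted distribution with a KS test
shape, loc, scale = stats.lognorm.fit(psd, floc=0)
ks = stats.kstest(psd, 'lognorm', args=(shape, loc, scale))
print(f"fitted sigma={shape:.2f}, median={scale:.2f}, KS p-value={ks.pvalue:.3f}")
```

    A high KS p-value would be consistent with (though not proof of) the log-normal prediction; the paper instead selects among Pearson family members.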

  17. Cost-utility analysis of chemotherapy regimens in elderly patients with stage III colon cancer.

    Science.gov (United States)

    Lairson, David R; Parikh, Rohan C; Cormier, Janice N; Chan, Wenyaw; Du, Xianglin L

    2014-10-01

    Chemotherapy prolongs survival for stage III colon cancer patients but community-level evidence on the effectiveness and cost effectiveness of treatment for elderly patients is limited. Comparisons were between patients receiving no chemotherapy, 5-fluorouracil (5-FU), and FOLFOX (5-FU + oxaliplatin). A retrospective cohort study was conducted using the Surveillance Epidemiology, and End Results (SEER)-Medicare linked database. Patients (≥65 years) with American Joint Committee on Cancer stage III colon cancer at diagnosis in 2004-2009 were identified. The 3-way propensity score matched sample included 3,534 patients. Effectiveness was measured in life-years and quality-adjusted life-years (QALYs). Medicare costs (2010 US dollars) were estimated from diagnosis until death or end of study. FOLFOX patients experienced 6.06 median life-years and 4.73 QALYs. Patients on 5-FU had 5.75 median life-years and 4.50 median QALYs, compared to 3.42 and 2.51, respectively, for the no chemotherapy patients. Average total healthcare costs ranged from US$85,422 for no chemotherapy to US$168,628 for FOLFOX. Incremental cost-effectiveness ratios (ICER) for 5-FU versus no chemotherapy were US$17,131 per life-year gained and US$20,058 per QALY gained. ICERs for FOLFOX versus 5-FU were US$139,646 per life-year gained and US$188,218 per QALY gained. Results appear to be sensitive to age, suggesting that FOLFOX performs better for patients 65-69 and 80+ years old while 5-FU appears most effective and cost effective for the age groups 70-74 and 75-79 years. FOLFOX appears more effective and cost effective than other strategies for colon cancer treatment of older patients. Results were sensitive to age, with ICERs exhibiting a U-shaped pattern.
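
    An incremental cost-effectiveness ratio of the kind reported above is simply the cost difference divided by the effect difference. A minimal sketch using the median costs and QALYs quoted in the abstract (illustrative only; the paper's ICERs are computed on the full matched cohort, so this will not reproduce its reported figures):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# values echoed from the abstract (medians / averages, not the paper's analysis inputs)
cost_folfox, qaly_folfox = 168_628, 4.73
cost_none,   qaly_none   =  85_422, 2.51

ratio = icer(cost_folfox, cost_none, qaly_folfox, qaly_none)
print(f"FOLFOX vs no chemotherapy: ${ratio:,.0f} per QALY gained")
```

    Note the abstract reports stepwise comparisons (5-FU vs none, FOLFOX vs 5-FU), which is the standard way to rank mutually exclusive strategies.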

  18. Optical analysis of a III-V-nanowire-array-on-Si dual junction solar cell.

    Science.gov (United States)

    Chen, Yang; Höhn, Oliver; Tucher, Nico; Pistol, Mats-Erik; Anttu, Nicklas

    2017-08-07

    A tandem solar cell consisting of a III-V nanowire subcell on top of a planar Si subcell is a promising candidate for next generation photovoltaics due to the potential for high efficiency. However, for success with such applications, the geometry of the system must be optimized for absorption of sunlight. Here, we consider this absorption through optics modeling. Similarly, as for a bulk dual-junction tandem system on a silicon bottom cell, a bandgap of approximately 1.7 eV is optimum for the nanowire top cell. First, we consider a simplified system of bare, uncoated III-V nanowires on the silicon substrate and optimize the absorption in the nanowires. We find that an optimum absorption in 2000 nm long nanowires is reached for a dense array of approximately 15 nanowires per square micrometer. However, when we coat such an array with a conformal indium tin oxide (ITO) top contact layer, a substantial absorption loss occurs in the ITO. This ITO could absorb 37% of the low energy photons intended for the silicon subcell. By moving to a design with a 50 nm thick, planarized ITO top layer, we can reduce this ITO absorption to 5%. However, such a planarized design introduces additional reflection losses. We show that these reflection losses can be reduced with a 100 nm thick SiO₂ anti-reflection coating on top of the ITO layer. When we at the same time include a Si₃N₄ layer with a thickness of 90 nm on the silicon surface between the nanowires, we can reduce the average reflection loss of the silicon cell from 17% to 4%. Finally, we show that different approximate models for the absorption in the silicon substrate can lead to a 15% variation in the estimated photocurrent density in the silicon subcell.
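
    The anti-reflection-coating thickness above is consistent with a simple quarter-wave estimate. A sketch under assumed refractive indices (SiO2 ≈ 1.46, ITO ≈ 1.9; both assumptions, and the paper's analysis uses rigorous optics modeling rather than these single-film formulas):

```python
import math

def quarter_wave_thickness(wavelength_nm, n_film):
    # ideal anti-reflection film thickness: one optical quarter wave
    return wavelength_nm / (4.0 * n_film)

def quarter_wave_reflectance(n0, n1, n2):
    # normal-incidence reflectance of a lossless quarter-wave film (index n1)
    # between ambient (n0) and substrate (n2)
    r = (n0 * n2 - n1**2) / (n0 * n2 + n1**2)
    return r**2

# assumed indices: air (1.0), SiO2 (~1.46), ITO (~1.9)
print(quarter_wave_thickness(550, 1.46))        # ~94 nm, close to the 100 nm design
print(quarter_wave_reflectance(1.0, 1.46, 1.9)) # residual reflectance, well under 1%
```

    Perfect cancellation would require n1 = sqrt(n0*n2) ≈ 1.38; SiO2 at 1.46 comes close, which is why the residual reflectance is small.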

  19. Statistical modelling of survival data with random effects h-likelihood approach

    CERN Document Server

    Ha, Il Do; Lee, Youngjo

    2017-01-01

    This book provides a groundbreaking introduction to the likelihood inference for correlated survival data via the hierarchical (or h-) likelihood in order to obtain the (marginal) likelihood and to address the computational difficulties in inferences and extensions. The approach presented in the book overcomes shortcomings in the traditional likelihood-based methods for clustered survival data such as intractable integration. The text includes technical materials such as derivations and proofs in each chapter, as well as recently developed software programs in R (“frailtyHL”), while the real-world data examples together with an R package, “frailtyHL” in CRAN, provide readers with useful hands-on tools. Reviewing new developments since the introduction of the h-likelihood to survival analysis (methods for interval estimation of the individual frailty and for variable selection of the fixed effects in the general class of frailty models) and guiding future directions, the book is of interest to research...

  20. Dimension-Independent Likelihood-Informed MCMC

    KAUST Repository

    Cui, Tiangang; Law, Kody; Marzouk, Youssef

    2015-01-01

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior [distributions], we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation are used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.
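
    A minimal example of a sampler that is well defined on function space is the preconditioned Crank-Nicolson (pCN) proposal, whose acceptance ratio involves only the likelihood. The sketch below shows plain pCN on a toy Gaussian problem, not the likelihood-informed variants the paper introduces.

```python
import numpy as np

def pcn_sampler(log_likelihood, dim, beta=0.2, n_steps=5000, seed=0):
    """Preconditioned Crank-Nicolson MCMC under a standard Gaussian prior.

    The proposal sqrt(1-beta^2)*x + beta*xi preserves the prior exactly,
    so the accept/reject step depends only on the likelihood -- the property
    that makes the scheme robust as the dimension grows."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    ll = log_likelihood(x)
    samples = np.empty((n_steps, dim))
    for i in range(n_steps):
        prop = np.sqrt(1.0 - beta**2) * x + beta * rng.standard_normal(dim)
        ll_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            x, ll = prop, ll_prop
        samples[i] = x
    return samples

# toy problem: Gaussian likelihood centred at 1 in each coordinate;
# with the N(0, I) prior, the posterior mean is 0.5 per coordinate
s = pcn_sampler(lambda x: -0.5 * np.sum((x - 1.0)**2), dim=10)
print(s[2000:].mean())
```

    The likelihood-informed samplers of the paper go further by adapting the proposal to directions where the posterior differs most from the prior.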

  1. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence
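
    A crude stand-in for such an estimator can be sketched by harmonic summation across channels: score each candidate fundamental by the periodogram energy at its harmonics, summed over all channels. Everything below is illustrative; the paper derives an exact maximum likelihood estimator rather than this approximation.

```python
import numpy as np

def multichannel_pitch(channels, fs, f0_grid, n_harm=5):
    """Crude multi-channel pitch estimate: for each candidate f0, sum the
    periodogram energy at its first n_harm harmonics over all channels."""
    n = channels.shape[1]
    spec = np.abs(np.fft.rfft(channels, axis=1))**2
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    scores = []
    for f0 in f0_grid:
        bins = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harm + 1)]
        scores.append(spec[:, bins].sum())
    return f0_grid[int(np.argmax(scores))]

# two channels sharing a 110 Hz fundamental, with different
# amplitudes, phases, and noise levels, as in the model above
fs = 8000
t = np.arange(4096) / fs
rng = np.random.default_rng(1)
ch1 = sum(np.sin(2*np.pi*110*k*t + k) for k in range(1, 4)) + 0.1*rng.standard_normal(t.size)
ch2 = sum(0.5*np.cos(2*np.pi*110*k*t) for k in range(1, 4)) + 0.3*rng.standard_normal(t.size)
x = np.vstack([ch1, ch2])
print(multichannel_pitch(x, fs, np.arange(80.0, 300.0, 1.0)))
```

    Summing scores across channels is what lets each channel contribute evidence despite its own gain, phase, and noise conditions.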

  2. Dimension-Independent Likelihood-Informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-01-07

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior [distributions], we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation are used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.

  3. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
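
    For context, the *small*-parsimony subproblem (scoring one fixed tree) is solvable exactly by Fitch's algorithm. The sketch below shows that scoring step only; it is not the paper's Steiner-tree approximation, which targets the NP-hard problem of searching over trees.

```python
def fitch_score(tree, leaf_states):
    """Fitch small-parsimony: minimum number of state changes needed on a
    fixed binary tree for one character. `tree` is a nested tuple whose
    leaves are strings keyed into `leaf_states`."""
    def rec(node):
        if isinstance(node, str):               # leaf: singleton state set, zero cost
            return {leaf_states[node]}, 0
        left, right = node
        s1, c1 = rec(left)
        s2, c2 = rec(right)
        inter = s1 & s2
        if inter:                               # children agree: no extra change
            return inter, c1 + c2
        return s1 | s2, c1 + c2 + 1             # children disagree: one change

    return rec(tree)[1]

# ((A,B),(C,D)) with states A=T, B=T, C=G, D=G -> one substitution suffices
tree = (("A", "B"), ("C", "D"))
print(fitch_score(tree, {"A": "T", "B": "T", "C": "G", "D": "G"}))  # 1
```

    Summing this score over all characters gives the parsimony length of the tree; the hard part, which the paper approximates, is minimizing that length over tree topologies and internal labelings.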

  4. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, T.; Laville, C. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay aux Roses (France); Dyrda, J. [Atomic Weapons Establishment AWE, Aldermaston, Reading, RG7 4PR (United Kingdom); Mennerdahl, D. [E Mennerdahl Systems EMS, Starvaegen 12, 18357 Taeby (Sweden); Golovko, Y.; Raskach, K.; Tsiboulia, A. [Inst. for Physics and Power Engineering IPPE, 1, Bondarenko sq., 249033 Obninsk (Russian Federation); Lee, G. S.; Woo, S. W. [Korea Inst. of Nuclear Safety KINS, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Bidaud, A.; Sabouri, P. [Laboratoire de Physique Subatomique et de Cosmologie LPSC, CNRS-IN2P3/UJF/INPG, Grenoble (France); Patel, A. [U.S. Nuclear Regulatory Commission (NRC), Washington, DC 20555-0001 (United States); Bledsoe, K.; Rearden, B. [Oak Ridge National Laboratory ORNL, M.S. 6170, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Gulliford, J.; Michel-Sendis, F. [OECD/NEA, 12, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2012-07-01

    The sensitivities of the k{sub eff} eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
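
    The sensitivity coefficients being benchmarked are relative derivatives of k_eff with respect to cross sections, S = (σ/k)(∂k/∂σ). A brute-force finite-difference sketch on a made-up k(σ) model (the benchmarked codes compute these with adjoint/perturbation methods, not finite differences):

```python
def sensitivity_coefficient(k_of_sigma, sigma, rel_step=0.01):
    """Relative sensitivity S = (sigma/k) * dk/dsigma via central differences."""
    h = rel_step * sigma
    k_plus = k_of_sigma(sigma + h)
    k_minus = k_of_sigma(sigma - h)
    k0 = k_of_sigma(sigma)
    return (sigma / k0) * (k_plus - k_minus) / (2.0 * h)

# fictitious toy model: k rises with a cross section, k = 1.2*s/(1+s);
# the analytic relative sensitivity is 1/(1+s)
k_model = lambda s: 1.2 * s / (1.0 + s)
print(sensitivity_coefficient(k_model, 0.5))  # ~0.667 = 1/(1+0.5)
```

    A relative formulation (percent change in k per percent change in σ) is what makes coefficients comparable across nuclides, reactions, and energy groups in similarity studies.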

  5. Human Retroviruses and AIDS. A compilation and analysis of nucleic acid and amino acid sequences: I--II; III--V

    Energy Technology Data Exchange (ETDEWEB)

    Myers, G.; Korber, B. [eds.] [Los Alamos National Lab., NM (United States); Wain-Hobson, S. [ed.] [Laboratory of Molecular Retrovirology, Pasteur Inst.; Smith, R.F. [ed.] [Baylor Coll. of Medicine, Houston, TX (United States). Dept. of Pharmacology; Pavlakis, G.N. [ed.] [National Cancer Inst., Frederick, MD (United States). Cancer Research Facility

    1993-12-31

    This compendium and the accompanying floppy diskettes are the result of an effort to compile and rapidly publish all relevant molecular data concerning the human immunodeficiency viruses (HIV) and related retroviruses. The scope of the compendium and database is best summarized by the five parts that it comprises: (I) HIV and SIV Nucleotide Sequences; (II) Amino Acid Sequences; (III) Analyses; (IV) Related Sequences; and (V) Database Communications. Information within all the parts is updated at least twice in each year, which accounts for the modes of binding and pagination in the compendium.

  6. Fat mass to fat-free mass ratio reference values from NHANES III using bioelectrical impedance analysis.

    Science.gov (United States)

    Xiao, J; Purcell, S A; Prado, C M; Gonzalez, M C

    2017-10-06

    Low fat-free mass (FFM) or high fat mass (FM) are abnormal body composition phenotypes associated with morbidity. These conditions in combination lead to worse health outcomes, and can be identified by a high FM/FFM ratio. Here, we developed sex, age, and body mass index (BMI) stratified, population-based FM/FFM reference values using bioelectrical impedance analysis (BIA) measurements. White, non-Hispanic individuals aged 18-90 years old with data for weight, stature and BIA resistance measures from the third National Health and Nutrition Examination Survey (NHANES) III were included. Previously validated and sex-specific BIA prediction equations were used to calculate FM and FFM. FM/FFM values were generated at 5th, 50th and 95th percentiles for each sex, age (18-39.9, 40-59.9, 60-69.9 and 70-90 years), and BMI category (underweight, normal weight, overweight, class I/II and class III obesity). A total of 6372 individuals who had estimated FM and FFM values were identified (3366 females, 3006 males). Median values of FM/FFM were 0.24 and 0.40 for young (≤39.9 years) males and females with normal BMI, and 0.34 for males and 0.59 for females who were overweight. For elderly individuals aged >70 years, median FM/FFM for males and females were respectively 0.28 and 0.45 for those with normal BMI, and 0.37 and 0.61 for those in the overweight category. These FM/FFM reference values provide information on body composition characteristics that account for age, sex and BMI, which can be useful to identify individuals at risk for body composition abnormalities. Copyright © 2017 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
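
    The stratified percentile construction can be sketched with pandas on synthetic records (the NHANES III data and the validated BIA prediction equations themselves are not reproduced here):

```python
import numpy as np
import pandas as pd

# synthetic stand-in for NHANES-style records with a computed FM/FFM ratio
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "sex": rng.choice(["M", "F"], 1000),
    "age_group": rng.choice(["18-39.9", "40-59.9", "60-69.9", "70-90"], 1000),
    "fm_ffm": rng.lognormal(mean=-1.0, sigma=0.35, size=1000),
})

# 5th / 50th / 95th percentile reference values per sex and age stratum
# (the paper additionally stratifies by BMI category)
ref = (df.groupby(["sex", "age_group"])["fm_ffm"]
         .quantile([0.05, 0.50, 0.95])
         .unstack())
print(ref.round(2))
```

    Each row of `ref` is one sex/age stratum; reading an individual's FM/FFM against the 5th and 95th percentile columns flags candidate body composition abnormalities.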

  7. Core heat transfer analysis during a BWR LOCA simulation experiment at ROSA-III

    International Nuclear Information System (INIS)

    Yonomoto, T.; Koizumi, Y.; Tasaka, K.

    1987-01-01

    The ROSA-III test facility is a 1/424-th volumetrically scaled BWR/6 simulator with an electrically heated core to study the thermal-hydraulic response during a postulated loss-of-coolant accident (LOCA). Heat transfer analyses for 5, 15, 50 and 200% break tests were conducted to understand the basic heat transfer behavior in the core under BWR LOCA conditions and to obtain a data base of post-critical heat flux (CHF) heat transfer coefficients and quench temperature. The results show that the convective heat transfer coefficient of dried-out rods at the core midplane during a steam cooling period is less than approximately 120 W/m²K. It is larger than existing data measured at lower pressures during a spray cooling period. Bottom-up quench temperatures are given by a simple equation: the sum of the saturation temperature and a constant of 262 K. The heat transfer model in the RELAP4/MOD6/U4/J3 code was then revised using the present results. The rod surface temperature behavior in the 200% break test was calculated better using the revised model, although the model is very simple. (orig.)

  8. The acute mania of King George III: A computational linguistic analysis.

    Directory of Open Access Journals (Sweden)

    Vassiliki Rentoumi

    Full Text Available We used a computational linguistic approach, exploiting machine learning techniques, to examine the letters written by King George III during mentally healthy and apparently mentally ill periods of his life. The aims of the study were, first, to establish the existence of alterations in the King's written language at the onset of his first manic episode, and second, to identify salient sources of variation contributing to the changes. Effects on language were sought in two control conditions: politically stressful vs. politically tranquil periods, and seasonal variation. We found clear differences in the letter corpus, across a range of different features, in association with the onset of mental derangement, which were driven by a combination of linguistic and information-theory features that appeared to be specific to the contrast between acute mania and mental stability. The paucity of existing data on changes in written language in the presence of acute mania suggests that lexical, syntactic and stylometric descriptions of written discourse produced by a cohort of patients with a diagnosis of acute mania will be necessary to support the diagnosis independently, to look for other periods of mental illness over the course of the King's life, and in other historically significant figures with similarly large archives of handwritten documents.

  9. X-ray diffraction analysis of cubic zincblende III-nitrides

    International Nuclear Information System (INIS)

    Frentrup, Martin; Lee, Lok Yi; Sahonta, Suman-Lata; Kappers, Menno J; Massabuau, Fabien; Gupta, Priti; Oliver, Rachel A; Humphreys, Colin J; Wallis, David J

    2017-01-01

    Solving the green gap problem is a key challenge for the development of future LED-based light systems. A promising approach to achieve higher LED efficiencies in the green spectral region is the growth of III-nitrides in the cubic zincblende phase. However, the metastability of zincblende GaN along with the crystal growth process often lead to a phase mixture with the wurtzite phase, high mosaicity, high densities of extended defects and point defects, and strain, which can all impair the performance of light emitting devices. X-ray diffraction (XRD) is the main characterization technique to analyze these device-relevant structural properties, as it is very cheap in comparison to other techniques and enables fast feedback times. In this review, we will describe and apply various XRD techniques to identify the phase purity in predominantly zincblende GaN thin films, to analyze their mosaicity, strain state, and wafer curvature. The different techniques will be illustrated on samples grown by metalorganic vapor phase epitaxy on pieces of 4″ SiC/Si wafers. We will discuss possible issues, which may arise during experimentation, and provide a critical view on the common theories. (topical review)

  10. Analysis of a selected sample of RR Lyrae stars in the LMC from OGLE-III

    International Nuclear Information System (INIS)

    Chen Bing-Qiu; Jiang Bi-Wei; Yang Ming

    2013-01-01

    A systematic study of RR Lyrae stars is performed using a selected sample of 655 objects in the Large Magellanic Cloud (LMC) with long-term observations and numerous measurements from the Optical Gravitational Lensing Experiment III project. The phase dispersion method and linear superposition of the harmonic oscillations are used to derive the pulsation frequency and properties of light variation. It is found that a dichotomy exists between Oosterhoff Type I and Oosterhoff Type II for RR Lyrae stars in the LMC. Due to our strict criteria for identifying a frequency, a lower limit for the incidence rate of Blazhko modulation in the LMC is estimated for various subclasses of RR Lyrae stars. For fundamental-mode RR Lyrae stars, the rate of 7.5% is smaller than the previous result. In the case of the first-overtone RR Lyrae variables, the rate of 9.1% is relatively high. In addition to the Blazhko variables, 15 objects are identified to pulsate in the fundamental/first-overtone double mode. Furthermore, four objects show a period ratio around 0.6, which makes them very likely to be rare pulsators in the fundamental/second-overtone double mode. (research papers)

  11. The occlusal imaging and analysis system by T-scan III in tinnitus patients.

    Science.gov (United States)

    Di Berardino, Federica; Filipponi, Eliana; Schiappadori, Massimo; Forti, Stella; Zanetti, Diego; Cesarani, Antonio

    2016-04-01

    Several studies have demonstrated that the prevalence of temporomandibular disorders (TMDs) in tinnitus patients ranges from 7% to 95%, and it is reported in the literature that idiopathic tinnitus patients should be referred to a dentist to determine whether or not the tinnitus is associated with a TMD. However, the possible pathophysiological relation between TMDs and tinnitus is not generally investigated in clinical practice. The patterns and forces of occlusal contacts have been studied by means of T-scan III in 47 tinnitus patients (23 suffering from idiopathic tinnitus and 24 affected by Ménière disease [MD]) and 13 healthy subjects. The center of force target was offset in the opposite direction in 15/23 idiopathic tinnitus and in 7/24 MD patients (p = 0.026). No significant variation was found in the occlusal force. Our data suggest that a diagnostic screening method for occlusal stability in the intercuspidal position might be clinically useful in idiopathic tinnitus patients. Copyright © 2016 Chang Gung University. Published by Elsevier B.V. All rights reserved.

  12. Comparisons of likelihood and machine learning methods of individual classification

    Science.gov (United States)

    Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.

    2002-01-01

    Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin (“assignment tests”). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative to likelihood estimators for empirical data sets, suggesting an ability to “learn” and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin, given the intricacies involved in the development and evaluation of artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of
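
A likelihood-based assignment test of the kind compared here can be sketched as follows, assuming known allele frequencies and independence of alleles across and within loci (Hardy-Weinberg); the populations, loci, and frequencies below are invented for illustration:

```python
import math

# Hypothetical allele frequencies at two loci for two candidate populations
# (one dict per locus, mapping allele -> frequency).
freqs = {
    "pop_A": [{"a1": 0.7, "a2": 0.3}, {"b1": 0.6, "b2": 0.4}],
    "pop_B": [{"a1": 0.2, "a2": 0.8}, {"b1": 0.1, "b2": 0.9}],
}

def log_likelihood(genotype, pop):
    """Sum of log allele frequencies across loci for a diploid genotype."""
    total = 0.0
    for locus, (allele1, allele2) in enumerate(genotype):
        p = freqs[pop][locus]
        total += math.log(p[allele1]) + math.log(p[allele2])
    return total

def assign(genotype):
    """Assign an individual to the population with the highest likelihood."""
    return max(freqs, key=lambda pop: log_likelihood(genotype, pop))

# An individual homozygous a1/a1 at locus 1 and heterozygous b1/b2 at locus 2.
origin = assign([("a1", "a1"), ("b1", "b2")])
```

The machine learning alternatives discussed in the abstract (neural networks, decision trees, k-nearest neighbors) replace the explicit frequency model with a classifier trained on genotypes of known origin.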

  13. Estimating likelihood of future crashes for crash-prone drivers

    OpenAIRE

    Subasish Das; Xiaoduan Sun; Fan Wang; Charles Leboeuf

    2015-01-01

    At-fault crash-prone drivers are usually considered as the high risk group for possible future incidents or crashes. In Louisiana, 34% of crashes are repeatedly committed by the at-fault crash-prone drivers who represent only 5% of the total licensed drivers in the state. This research has conducted an exploratory data analysis based on the driver faultiness and proneness. The objective of this study is to develop a crash prediction model to estimate the likelihood of future crashes for the a...

  14. Waste retrieval sluicing system vapor sampling and analysis plan for evaluation of organic emissions, process test phase III

    International Nuclear Information System (INIS)

    SASAKI, L.M.

    1999-01-01

    This sampling and analysis plan identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for vapor samples obtained to address vapor issues related to the sluicing of tank 241-C-106. Sampling will be performed in accordance with Waste Retrieval Sluicing System Emissions Collection Phase III (Jones 1999) and Process Test Plan Phase III, Waste Retrieval Sluicing System Emissions Collection (Powers 1999). Analytical requirements include those specified in Request for Ecology Concurrence on Draft Strategy/Path Forward to Address Concerns Regarding Organic Emissions from C-106 Sluicing Activities (Peterson 1998). The Waste Retrieval Sluicing System was installed to retrieve and transfer high-heat sludge from tank 241-C-106 to tank 241-AY-102, which is designed for high-heat waste storage. During initial sluicing of tank 241-C-106 in November 1998, operations were halted when unexpectedly high levels of volatile organic compounds, exceeding regulatory permit limits, were detected in the emissions. Several workers also reported smelling sharp odors and throat irritation. Vapor grab samples from the 296-C-006 ventilation system were taken as soon as possible after detection; the analyses indicated that volatile and semi-volatile organic compounds were present. In December 1998, a process test (phase I) was conducted in which the pumps in tanks 241-C-106 and 241-AY-102 were operated and vapor samples obtained to determine constituents that may be present during active sluicing of tank 241-C-106. The process test was suspended when a jumper leak was detected. On March 7, 1999, phase II of the process test was performed; the sluicing system was operated for approximately 7 hours and was ended using the controlled shutdown method when the allowable amount of solids was transferred to 241-AY-102. The phase II test was successful; however, further testing is required to obtain vapor samples at higher emission levels

  15. III SBC Guidelines on the Analysis and Issuance of Electrocardiographic Reports - Executive Summary

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Pastore

    Full Text Available Abstract The third version of the guidelines covers recently described topics, such as ion channel diseases, acute ischemic changes, the electrocardiogram in athletes, and analysis of ventricular repolarization. It sought to revise the criteria for overloads, conduction disorders, and analysis of data for internet transmission.

  16. Paroxysmal atrial fibrillation prediction based on HRV analysis and non-dominated sorting genetic algorithm III.

    Science.gov (United States)

    Boon, K H; Khalil-Hani, M; Malarvili, M B

    2018-01-01

    This paper presents a method that is able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals than existing methods, and achieves good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing and preventing the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm III for optimizing the baseline PAF prediction system, which consists of the stages of pre-processing, HRV feature extraction, and a support vector machine (SVM) model. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. Time-domain, frequency-domain, and non-linear HRV features are then extracted from the pre-processed data in the feature extraction stage. These features are used as input to the SVM for predicting the PAF event. The proposed optimization algorithm is used to optimize the parameters and settings of the various HRV feature extraction algorithms, select the best feature subsets, and tune the SVM parameters simultaneously for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most of the previous works. This accuracy rate is achieved even with the HRV signal length being reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, another significant result is that the sensitivity rate, which is considered more important than other performance metrics in this paper, can be improved at the trade-off of lower specificity. Copyright © 2017 Elsevier B.V. All rights reserved.
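
The non-dominated sorting at the heart of the NSGA-III family can be sketched in a few lines: extract the Pareto front (the first non-dominated rank) of candidate configurations scored on multiple objectives. The candidate (sensitivity, specificity) pairs below are invented for illustration, not results from the paper:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset, i.e. the first front of
    NSGA-style non-dominated sorting."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (sensitivity, specificity) scores of candidate
# HRV-feature-subset / SVM-parameter configurations.
candidates = [(0.90, 0.70), (0.85, 0.85), (0.70, 0.90), (0.80, 0.80), (0.60, 0.60)]
front = pareto_front(candidates)
```

The full algorithm repeatedly extracts fronts, then uses reference directions to keep the retained front spread across the objectives; the trade-off the authors describe (higher sensitivity at the cost of specificity) corresponds to picking a different point from this front.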

  17. Iron(III) and manganese(II) substituted hydroxyapatite nanoparticles: Characterization and cytotoxicity analysis

    International Nuclear Information System (INIS)

    Li Yan; Nam, C T; Ooi, C P

    2009-01-01

    Calcium hydroxyapatite (HA) is the main inorganic component of natural bone and can bond to bone directly in vivo. HA is therefore widely used as a coating material on bone implants due to its good osteoconductivity and osteoinductivity. Metal-ion-doped HA has been used as a catalyst or absorbent, since the ion exchange method introduces new properties into HA that are inherent to the metal ions. For example, Mn2+ ions have the potential to increase cell adhesion, while Fe3+ ions have magnetic properties. Here, Fe(III)-substituted hydroxyapatite (Fe-HA) and Mn(II)-substituted hydroxyapatite (Mn-HA) were produced by a wet chemical method coupled with an ion exchange mechanism. Compared with pure HA, the colour of the Fe-HA and Mn-HA nanoparticles changed from white to brown and pink, respectively, and the intensity of the colours increased with increasing substitution concentration. XRD patterns showed that all samples were single-phased HA, while the FTIR spectra revealed that all samples possessed the characteristic phosphate and hydroxyl adsorption bands of HA. However, undesired adsorption bands of carbonate substitution (B-type carbonated HA) and H2O were also detected, which was expected since a wet chemical method was used in the synthesis of these nanoparticles. FESEM images showed that all samples were elongated spheroids with a narrow size distribution of around 70 nm, regardless of metal ion substitution concentration. EDX spectra showed the presence of Fe and Mn, and ICP-AES results revealed that all metal-ion-substituted HA samples were non-stoichiometric (Ca/P atomic ratio deviating from 1.67). Fe-HA nanoparticles were paramagnetic, and their magnetic susceptibility increased with increasing Fe content. Based on the extraction assay for cytotoxicity, both Fe-HA and Mn-HA displayed no cytotoxicity to osteoblasts.

  18. Orthodontic camouflage versus orthognathic surgery for class III deformity: comparative cephalometric analysis.

    Science.gov (United States)

    Martinez, P; Bellot-Arcís, C; Llamas, J M; Cibrian, R; Gandia, J L; Paredes-Gallardo, V

    2017-04-01

    The objective of this study was to compare different cephalometric variables in adult patients with class III malocclusions before and after treatment, in order to determine which variables are indicative of orthodontic camouflage or orthognathic surgery. The cases of 156 adult patients were assessed: 77 treated with orthodontic camouflage and 79 treated with orthodontics and orthognathic surgery. The following cephalometric variables were measured on pre-treatment (T1) and post-treatment (T2) lateral cephalograms: sella-nasion-A-point (SNA), sella-nasion-B-point (SNB), and A-point-nasion-B-point (ANB) angles, Wits appraisal, facial axis angle, mandibular plane angle, upper and lower incisor inclination, and inter-incisal angle. There were statistically significant differences in cephalometric variables before and after treatment between the two groups. The percentage of normal pre-treatment measurements in the camouflage orthodontics group was 30.7%, which worsened slightly to 28.4% post-treatment. However, in the group receiving surgery, this was 24.5% pre-treatment, improving to 33.5% after surgery. SNA, SNB, Wits appraisal, lower incisor inclination, and inter-incisal angle showed differences between the two groups before and after treatment. Wits appraisal, lower incisor inclination, and inter-incisal angle were indicative of one or the other treatment. Upper and lower incisor decompensation in both groups did not reach ideal values, which impeded complete skeletal correction in 52% of surgical cases. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. Novel three dimensional position analysis of the mandibular foramen in patients with skeletal class III mandibular prognathism

    International Nuclear Information System (INIS)

    Kang, Sang Hoon; Kim, Yeon Ho; Won, Yu Jin; Kim, Moon Key

    2016-01-01

    To analyze the relative position of the mandibular foramina (MnFs) in patients diagnosed with skeletal class III malocclusion. Computed tomography (CT) images were collected from 85 patients. The vertical lengths of each anatomic point from the five horizontal planes passing through the MnF were measured at the coronoid process, sigmoid notch, condyle, and the gonion. The distance from the anterior ramus point to the posterior ramus point on the five horizontal planes was designated the anteroposterior horizontal distance of the ramus for each plane. The perpendicular distance from each anterior ramus point to each vertical plane through the MnF was designated the horizontal distance from the anterior ramus to the MnF. The horizontal and vertical positions were examined by regression analysis. Regression analysis showed that the heights of the coronoid process, sigmoid notch, and condyle for the five horizontal planes were significantly related to the height of the MnF, with the highest significance associated with the MnF-mandibular plane (coefficients of determination (R2): 0.424, 0.597, and 0.604, respectively). The horizontal anteroposterior length of the ramus and the distance from the anterior ramus point to the MnF were significant by regression analysis. The relative position of the MnF was significantly related to the vertical heights of the sigmoid notch, coronoid process, and condyle, as well as to the horizontal anteroposterior length of the ascending ramus. These findings should be clinically useful for patients with skeletal class III mandibular prognathism.

  20. Novel three dimensional position analysis of the mandibular foramen in patients with skeletal class III mandibular prognathism

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Sang Hoon; Kim, Yeon Ho; Won, Yu Jin; Kim, Moon Key [Dept. of Oral and Maxillofacial Surgery, National Health Insurance Service Ilsan Hospital, Goyang (Korea, Republic of)

    2016-06-15

    To analyze the relative position of the mandibular foramina (MnFs) in patients diagnosed with skeletal class III malocclusion. Computed tomography (CT) images were collected from 85 patients. The vertical lengths of each anatomic point from the five horizontal planes passing through the MnF were measured at the coronoid process, sigmoid notch, condyle, and the gonion. The distance from the anterior ramus point to the posterior ramus point on the five horizontal planes was designated the anteroposterior horizontal distance of the ramus for each plane. The perpendicular distance from each anterior ramus point to each vertical plane through the MnF was designated the horizontal distance from the anterior ramus to the MnF. The horizontal and vertical positions were examined by regression analysis. Regression analysis showed that the heights of the coronoid process, sigmoid notch, and condyle for the five horizontal planes were significantly related to the height of the MnF, with the highest significance associated with the MnF-mandibular plane (coefficients of determination (R2): 0.424, 0.597, and 0.604, respectively). The horizontal anteroposterior length of the ramus and the distance from the anterior ramus point to the MnF were significant by regression analysis. The relative position of the MnF was significantly related to the vertical heights of the sigmoid notch, coronoid process, and condyle, as well as to the horizontal anteroposterior length of the ascending ramus. These findings should be clinically useful for patients with skeletal class III mandibular prognathism.

  1. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-03-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we first use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Second, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference, and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
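
The "one number per parameter" compression step can be sketched for a toy Gaussian model with known covariance, where the data are reduced to a single score statistic; the model, template, and all numbers below are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: data d = theta * template + Gaussian noise with known covariance.
template = np.linspace(0.0, 1.0, 50)   # d mu / d theta for this linear model
cov = 0.1 * np.eye(50)                 # known noise covariance
theta_true = 2.0
data = theta_true * template + rng.multivariate_normal(np.zeros(50), cov)

# Score compression at a fiducial theta:
#   t = (d mu / d theta)^T C^{-1} (d - mu_fid),  one number per parameter.
theta_fid = 1.8
mu_fid = theta_fid * template
fisher = template @ np.linalg.solve(cov, template)
t = template @ np.linalg.solve(cov, data - mu_fid)

# For this linear-Gaussian toy model the summary recovers the MLE exactly.
theta_mle = theta_fid + t / fisher
```

In a likelihood-free analysis, the same compression would be applied to both the observed data and each forward simulation, and the density estimation step would then work in this one-number-per-parameter summary space.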

  2. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  3. Subtracting and Fitting Histograms using Profile Likelihood

    CERN Document Server

    D'Almeida, F M L

    2008-01-01

    It is known that many interesting signals expected at the LHC are of unknown shape and strongly contaminated by background events. These signals will be difficult to detect during the first years of LHC operation due to the initial low luminosity. In this work, we present a method of subtracting histograms based on the profile likelihood function, for the case where the background has previously been estimated from Monte Carlo events and one has low statistics. Estimators for the signal in each bin of the histogram difference are calculated, as well as limits for the signals at 68.3% Confidence Level, in a low-statistics case with an exponential background and a Gaussian signal. The method can also be used to fit histograms when the signal shape is known. Our results show good performance and avoid the problem of negative values when subtracting histograms.

  4. Computed tomographic analysis of temporal maxillary stability and pterygomaxillary generate formation following pediatric Le Fort III distraction advancement.

    Science.gov (United States)

    Hopper, Richard A; Sandercoe, Gavin; Woo, Albert; Watts, Robyn; Kelley, Patrick; Ettinger, Russell E; Saltzman, Babette

    2010-11-01

    Le Fort III distraction requires generation of bone in the pterygomaxillary region. The authors performed retrospective digital analysis on temporal fine-cut computed tomographic images to quantify both radiographic evidence of pterygomaxillary region bone formation and relative maxillary stability. Fifteen patients with syndromic midface hypoplasia were included in the study. The average age of the patients was 8.7 years; 11 had either Crouzon or Apert syndrome. The average displacement of the maxilla during distraction was 16.2 mm (range, 7 to 31 mm). Digital analysis was performed on fine-cut computed tomographic scans before surgery, at device removal, and at annual follow-up. Seven patients also had mid-consolidation computed tomographic scans. Relative maxillary stability and density of radiographic bone in the pterygomaxillary region were calculated between each scan. There was no evidence of clinically significant maxillary relapse, rotation, or growth between the end of consolidation and 1-year follow-up, other than a relatively small 2-mm subnasal maxillary vertical growth. There was an average radiographic ossification of 0.5 mm/mm advancement at the time of device removal, with a 25th percentile value of 0.3 mm/mm. The time during consolidation that each patient reached the 25th percentile of pterygomaxillary region bone density observed in this series of clinically stable advancements ranged from 1.3 to 9.8 weeks (average, 3.7 weeks). There was high variability in the amount of bone formed in the pterygomaxillary region associated with clinical stability of the advanced Le Fort III segment. These data suggest that a subsection of patients generate the minimal amount of pterygomaxillary region bone formation associated with advancement stability as early as 4 weeks into consolidation.

  5. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  6. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice, and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)]exp[-((1/lambda)+(1/theta))t] with t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions of those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
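
Since the steady state is A(infinity) = lambda/(lambda+theta), lambda and theta here play the role of the mean time to failure and mean time to repair, whose MLEs under the exponential model are the sample means. Plugging them into A(t) gives the estimator; a minimal sketch with illustrative failure-repair data:

```python
import math

def availability_mle(failure_times, repair_times, t):
    """MLE of instantaneous availability A(t) under exponential models.

    lambda and theta are the mean time-to-failure and mean time-to-repair,
    matching the parameterization in the abstract, so their MLEs are the
    sample means of the observed cycles."""
    lam = sum(failure_times) / len(failure_times)
    theta = sum(repair_times) / len(repair_times)
    steady = lam / (lam + theta)
    transient = (theta / (lam + theta)) * math.exp(-((1.0 / lam) + (1.0 / theta)) * t)
    return steady + transient

# Ten failure-repair cycles (hours, illustrative numbers only).
x = [90.0, 110.0, 95.0, 105.0, 100.0, 98.0, 102.0, 97.0, 103.0, 100.0]
y = [9.0, 11.0, 10.0, 10.5, 9.5, 10.0, 10.2, 9.8, 10.0, 10.0]
a_inf = availability_mle(x, y, t=1e9)   # effectively the steady-state value
```

With these samples the estimated means are 100 h to failure and 10 h to repair, so the steady-state availability estimate is 100/110 ≈ 0.909; at t = 0 the formula correctly returns availability 1.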

  7. Lexical Link Analysis Application: Improving Web Service to Acquisition Visibility Portal Phase III

    Science.gov (United States)

    2015-04-30

    Annual Acquisition Research Symposium, Thursday Sessions, Volume II. Lexical Link Analysis Application: Improving Web Service to Acquisition Visibility Portal, Phase III. ...processes. Lexical Link Analysis (LLA) can help, by applying automation to reveal and depict, to decisionmakers, the correlations, associations, and

  8. Direct methods of soil-structure interaction analysis for earthquake loadings (III)

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, J. B.; Lee, S. R.; Kim, J. M.; Park, K. R.; Choi, J. S.; Oh, S. B. [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1995-06-15

    In this study, direct methods for seismic analysis of soil-structure interaction systems have been studied. A computer program, 'KIESSI-QK', has been developed based on the finite element technique coupled with an infinite element formulation. A substructuring method isolating the displacement solution of the near-field soil region was adopted. The computer program developed was verified using a free-field site response problem. The post-correlation analysis for the forced vibration tests after backfill of the Hualien LSST project has been carried out. The seismic analyses for the Hualien and Lotung LSST structures have also been performed utilizing the developed computer program 'KIESSI-QK'.

  10. Effects of tibolone on fibrinogen and antithrombin III: A systematic review and meta-analysis of controlled trials.

    Science.gov (United States)

    Bała, Małgorzata; Sahebkar, Amirhossein; Ursoniu, Sorin; Serban, Maria-Corina; Undas, Anetta; Mikhailidis, Dimitri P; Lip, Gregory Y H; Rysz, Jacek; Banach, Maciej

    2017-10-01

    Tibolone is a synthetic steroid with estrogenic, androgenic and progestogenic activity, but the evidence regarding its effects on fibrinogen and antithrombin III (ATIII) has not been conclusive. We assessed the impact of tibolone on fibrinogen and ATIII through a systematic review and meta-analysis of available randomized controlled trials (RCTs). The search included PUBMED, Web of Science, Scopus, and Google Scholar (up to January 31st, 2016) to identify controlled clinical studies investigating the effects of oral tibolone treatment on fibrinogen and ATIII. Overall, the impact of tibolone on plasma fibrinogen concentrations was reported in 10 trials comprising 11 treatment arms. Meta-analysis did not suggest a significant reduction of fibrinogen levels following treatment with tibolone (WMD: -5.38%, 95% CI: -11.92, +1.16, p=0.107). This result was robust in the sensitivity analysis and was not influenced by omitting any single study from the meta-analysis. When the studies were categorized according to the duration of treatment, there was no significant effect in either duration subgroup. There was likewise no differential effect of tibolone on plasma ATIII concentrations across trial subgroups, and meta-regression did not suggest any significant association between the changes in plasma concentrations of fibrinogen (slope: +0.40; 95% CI: -0.39, +1.19; p=0.317) or ATIII (slope: -0.17; 95% CI: -0.54, +0.20; p=0.374) and the duration of treatment. In conclusion, meta-analysis did not suggest a significant reduction of fibrinogen and ATIII levels following treatment with tibolone. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. SLSF loop handling system. Volume III. AISC code evaluations and analysis of critical attachments

    International Nuclear Information System (INIS)

    Ahmed, H.; Cowie, A.; Malek, R.A.; Rafer, A.; Ma, D.; Tebo, F.

    1978-10-01

    SLSF loop handling system was analyzed for deadweight and postulated dynamic loading conditions using a linear elastic static equivalent method of stress analysis. Stress computations of Cradle and critical attachments per AISC Code guidelines are presented. HFEF is credited with in-depth review of initial phase of work

  12. BEMUSE Phase III Report - Uncertainty and Sensitivity Analysis of the LOFT L2-5 Test

    International Nuclear Information System (INIS)

    Bazin, P.; Crecy, A. de; Glaeser, H.; Skorek, T.; Joucla, J.; Probst, P.; Chung, B.; Oh, D.Y.; Kyncl, M.; Pernica, R.; Macek, J.; Meca, R.; Macian, R.; D'Auria, F.; Petruzzi, A.; Perez, M.; Reventos, F.; Fujioka, K.

    2007-02-01

    This report summarises the various contributions (ten participants) to phase 3 of BEMUSE: Uncertainty and Sensitivity Analyses of the LOFT L2-5 experiment, a Large-Break Loss-of-Coolant Accident (LB-LOCA). For this phase, precise step-by-step requirements were provided to the participants. Four main parts are defined: 1. List and uncertainties of the input uncertain parameters. 2. Uncertainty analysis results. 3. Sensitivity analysis results. 4. Improved methods, assessment of the methods (optional). The 5% and 95% percentiles have to be estimated for 6 output parameters, which are of two kinds: 1. Scalar output parameters (first Peak Cladding Temperature (PCT), second Peak Cladding Temperature, time of accumulator injection, time of complete quenching); 2. Time-trend output parameters (maximum cladding temperature, upper plenum pressure). The main lessons learnt from phase 3 of the BEMUSE programme are the following: - For uncertainty analysis, all the participants use a probabilistic method associated with the use of Wilks' formula, except for UNIPI with its CIAU method (Code with the capability of Internal Assessment of Uncertainty). Use of both methods has been successfully mastered. - Compared with the experiment, the results of uncertainty analysis are good on the whole. For example, for the cladding temperature-type output parameters (first PCT, second PCT, time of complete quenching, maximum cladding temperature), 8 participants out of 10 find upper and lower bounds which envelop the experimental data. - Sensitivity analysis has been successfully performed by all the participants using the probabilistic method. All the influence measures used take into account the range of variation of the input parameters. Synthesis tables of the most influential phenomena and parameters have been plotted and participants will be able to use them for the continuation of the BEMUSE programme.
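
Wilks' formula, on which most participants' uncertainty method rests, fixes the number of code runs N needed so that an order statistic of the outputs bounds a given quantile with given confidence; for the usual 95%/95% criterion it gives N = 59 at first order and N = 93 at second order. A small Python sketch of that sizing rule:

```python
from math import comb

def wilks_size(gamma=0.95, beta=0.95, order=1):
    """Smallest N such that the order-th largest of N code runs bounds
    the gamma quantile of the output with confidence beta (one-sided
    Wilks formula)."""
    n = order
    while True:
        # P(at least `order` of the n observations exceed the gamma quantile)
        conf = 1.0 - sum(comb(n, k) * (1 - gamma)**k * gamma**(n - k)
                         for k in range(order))
        if conf >= beta:
            return n
        n += 1
```

For order=1 the condition reduces to 1 - gamma**n >= beta, the familiar 59-run rule; taking the second-largest run (order=2) tightens the bound at the cost of 93 runs.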

  13. Crash test rating and likelihood of major thoracoabdominal injury in motor vehicle crashes: the new car assessment program side-impact crash test, 1998-2010.

    Science.gov (United States)

    Figler, Bradley D; Mack, Christopher D; Kaufman, Robert; Wessells, Hunter; Bulger, Eileen; Smith, Thomas G; Voelzke, Bryan

    2014-03-01

    The National Highway Traffic Safety Administration's New Car Assessment Program (NCAP) has implemented side-impact crash testing on all new vehicles since 1998 to assess the likelihood of major thoracoabdominal injuries during a side-impact crash. A higher crash test rating is intended to indicate a safer car, but the real-world applicability of these ratings is unknown. Our objective was to determine the relationship between a vehicle's NCAP side-impact crash test rating and the risk of major thoracoabdominal injury among the vehicle's occupants in real-world side-impact motor vehicle crashes. The National Automotive Sampling System Crashworthiness Data System contains detailed crash and injury data for a sample of major crashes in the United States. For model years 1998 to 2010 and crash years 1999 to 2010, 68,124 occupants were identified in the Crashworthiness Data System database. Because 47% of cases were missing crash severity (ΔV), multiple imputation was used to estimate the missing values. The primary predictor of interest was the occupant vehicle's NCAP side-impact crash test rating, and the outcome of interest was the presence of major (Abbreviated Injury Scale [AIS] score ≥ 3) thoracoabdominal injury. In multivariate analysis, a higher NCAP crash test rating was associated with a lower likelihood of major thoracoabdominal injury (odds ratio [OR], 0.8; 95% confidence interval [CI], 0.7-0.9). A higher NCAP side-impact crash test rating is thus associated with a lower likelihood of major thoracoabdominal trauma. Epidemiologic study, level III.

  14. Fall-Risk-Increasing Drugs: A Systematic Review and Meta-analysis: III. Others.

    Science.gov (United States)

    Seppala, Lotta J; van de Glind, Esther M M; Daams, Joost G; Ploegmakers, Kimberley J; de Vries, Max; Wermelink, Anne M A T; van der Velde, Nathalie

    2018-04-01

    The use of psychotropic medication and cardiovascular medication has been associated with an increased risk of falling. However, other frequently prescribed medication classes are still under debate as potential risk factors for falls in the older population. The aim of this systematic review and meta-analysis is to evaluate the associations between fall risk and nonpsychotropic and noncardiovascular medications. A systematic review and meta-analysis. A search was conducted in Medline, PsycINFO, and Embase. Key search concepts were "falls," "aged," "medication," and "causality." Studies were included that investigated nonpsychotropic and noncardiovascular medications as risk factors for falls in participants ≥60 years or participants with a mean age ≥70 years. A meta-analysis was performed using the generic inverse variance method, pooling unadjusted and adjusted odds ratio (OR) estimates separately. In a qualitative synthesis, 281 studies were included. The results of meta-analysis using adjusted data were as follows (a pooled OR [95% confidence interval]): analgesics, 1.42 (0.91-2.23); nonsteroidal anti-inflammatory drugs (NSAIDs), 1.09 (0.96-1.23); opioids, 1.60 (1.35-1.91); anti-Parkinson drugs, 1.54 (0.99-2.39); antiepileptics, 1.55 (1.25-1.92); and polypharmacy, 1.75 (1.27-2.41). Most of the meta-analyses resulted in substantial heterogeneity that did not disappear after stratification for population and setting in most cases. In a descriptive synthesis, consistent associations with falls were observed for long-term proton pump inhibitor use and opioid initiation. Laxatives showed inconsistent associations with falls (7/20 studies showing a positive association). Opioid and antiepileptic use and polypharmacy were significantly associated with increased risk of falling in the meta-analyses. Long-term use of proton pump inhibitors and opioid initiation might increase the fall risk. Future research is necessary because the causal role of some medication
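
The generic inverse variance method named above combines study-level log odds ratios weighted by the reciprocal of their variances. A minimal fixed-effect Python sketch (illustrative only; the review pools adjusted and unadjusted estimates separately, and the substantial heterogeneity it reports would normally call for a random-effects variant):

```python
import math

def pool_fixed_effect(log_ors, ses):
    """Fixed-effect generic inverse-variance pooling of study log odds
    ratios; returns the pooled OR and its 95% CI. Each study is weighted
    by 1/SE^2, so more precise studies dominate the pooled estimate."""
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    se_pooled = 1.0 / math.sqrt(sum(weights))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    # back-transform from the log scale to odds ratios
    return math.exp(pooled), (math.exp(lo), math.exp(hi))
```

With a single study the pooled OR simply reproduces that study's estimate, which is a quick sanity check on the weighting.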

  15. Evaluation of conservatisms and environmental effects in ASME Code, Section III, Class 1 fatigue analysis

    International Nuclear Information System (INIS)

    Deardorff, A.F.; Smith, J.K.

    1994-08-01

    This report documents the results of a study of the conservatisms in ASME Code Section III, Class 1 component fatigue evaluations and the effects of Light Water Reactor (LWR) water environments on fatigue margins. After review of numerous Class 1 stress reports, it is apparent that there is a substantial amount of conservatism present in many existing component fatigue evaluations. With little effort, existing evaluations could be modified to reduce the overall predicted fatigue usage. Areas of conservatism include design transients considerably more severe than those experienced during service, conservative grouping of transients, conservatisms that have been removed in later editions of Section III, bounding heat transfer and stress analysis, and use of the elastic-plastic penalty factor (K_e). Environmental effects were evaluated for two typical components that experience severe transient thermal cycling during service, based on both design transients and actual plant data. For all reasonable values of actual operating parameters, environmental effects reduced predicted margins, but fatigue usage was still bounded by the ASME Section III fatigue design curves. It was concluded that the potential increase in predicted fatigue usage due to environmental effects should be more than offset by decreases in predicted fatigue usage if re-analysis were conducted to reduce the conservatisms present in existing component fatigue evaluations.
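
The fatigue usage referred to throughout is the Code's cumulative usage factor: a Miner's-rule sum, over transient pairs, of expected cycles divided by allowable cycles from the design fatigue curve. Conservative transient definitions inflate the numerators (or depress the allowable cycles) and hence the total. A minimal sketch with made-up cycle counts:

```python
def cumulative_usage_factor(cycles):
    """Miner's-rule cumulative usage factor: sum of (expected cycles n_i)
    / (allowable cycles N_i from the design fatigue curve) over load
    pairs. The cycle counts below are invented for illustration."""
    return sum(n / n_allow for n, n_allow in cycles)

# Example: three transient groups
u = cumulative_usage_factor([(500, 10000), (200, 2000), (50, 5000)])
# u = 0.05 + 0.10 + 0.01 = 0.16, below the Code limit of 1.0
```

Reducing the conservatisms listed in the abstract (fewer or milder design transients, less severe grouping) lowers individual n_i/N_i terms and directly reduces the predicted usage factor.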

  16. Safety analysis of RA Reactor operation I-III; Analiza sigurnosti rada Reaktora RA I - III, IZ-213-0322-1963

    Energy Technology Data Exchange (ETDEWEB)

    Raisic, N [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1963-02-15

    This safety analysis report covers the following three parts: Technical and operational characteristics of the RA reactor; Accident analysis; and Environmental effects of the maximum possible accident.

  17. PEPSI deep spectra. III. Chemical analysis of the ancient planet-host star Kepler-444

    Science.gov (United States)

    Mack, C. E.; Strassmeier, K. G.; Ilyin, I.; Schuler, S. C.; Spada, F.; Barnes, S. A.

    2018-04-01

    Context. With the Large Binocular Telescope (LBT), we obtained a spectrum with PEPSI, its new optical high-resolution échelle spectrograph, of the K0V host Kepler-444, which is known to host five sub-Earth-sized rocky planets. The spectrum has a resolution of R ≈ 250 000, a continuous wavelength coverage from 4230 Å to 9120 Å, and an S/N between 150 and 550:1 (blue to red). Aims: We performed a detailed chemical analysis to determine the photospheric abundances of 18 chemical elements. These were used to place constraints on the bulk composition of the five rocky planets. Methods: Our spectral analysis employs the equivalent-width method for most of our spectral lines, but we used spectral synthesis to fit a small number of lines that required special care. In both cases, we derived our abundances using the MOOG spectral analysis package and Kurucz model atmospheres. Results: We find no correlation between elemental abundance and condensation temperature among the refractory elements (TC > 950 K). In addition, using our spectroscopic stellar parameters and isochrone fitting, we find an age of 10 ± 1.5 Gyr, which is consistent with the asteroseismic age of 11 ± 1 Gyr. Finally, from the photospheric abundances of Mg, Si, and Fe, we estimate that the typical Fe-core mass fraction for the rocky planets in the Kepler-444 system is approximately 24%. Conclusions: If our estimate of the Fe-core mass fraction is confirmed by more detailed modeling of the disk chemistry and simulations of planet formation and evolution in the Kepler-444 system, then this would suggest that rocky planets in more metal-poor and α-enhanced systems may tend to be less dense than their counterparts of comparable size in more metal-rich systems. Based on data acquired with PEPSI using the Large Binocular Telescope (LBT). The LBT is an international collaboration among institutions in the United States, Italy, and

  18. FLICA III. A digital computer program for thermal-hydraulic analysis of reactors and experimental loops

    International Nuclear Information System (INIS)

    Plas, Roger.

    1975-05-01

    This computer program describes the flow and heat transfer in steady and transient states in two-phase flows. It is the present stage in the evolution of the FLICA, FLICA II and FLICA II B codes, which have been used and developed at CEA for the thermal-hydraulic analysis of reactors and experimental loops with heating rod bundles. In the mathematical model, all the significant terms of the fundamental hydrodynamic equations are taken into account with the approximations of turbulent viscosity and conductivity. The two-phase flow is calculated by the homogeneous model with slip. In the flow direction an implicit resolution scheme is available, which makes it possible to study partial or total flow blockage, with upstream and downstream effects. A special model represents the helical wire effects in out-of-pile experimental rod bundles.

  19. Real time analysis with the upgraded LHCb trigger in Run III

    Science.gov (United States)

    Szumlak, Tomasz

    2017-10-01

    The current LHCb trigger system consists of a hardware level, which reduces the LHC bunch-crossing rate of 40 MHz to 1.1 MHz, a rate at which the entire detector is read out, and a second level, implemented in a farm of around 20k parallel-processing CPUs, in which the event rate is reduced to around 12.5 kHz. The LHCb experiment plans a major upgrade of the detector and DAQ system in the LHC long shutdown II (2018-2019). In this upgrade, a purely software-based trigger system is being developed, and it will have to process the full 30 MHz of bunch crossings with inelastic collisions. LHCb will also receive a factor of 5 increase in the instantaneous luminosity, which further contributes to the challenge of reconstructing and selecting events in real time with the CPU farm. We discuss the plans and progress towards achieving efficient reconstruction and selection with a 30 MHz throughput. Another challenge is to exploit the increased signal rate that results from removing the 1.1 MHz readout bottleneck, combined with the higher instantaneous luminosity. Many charm hadron signals can be recorded at up to 50 times higher rate. LHCb is implementing a new paradigm in the form of real-time data analysis, in which abundant signals are recorded in a reduced event format that can be fed directly to the physics analyses. These data do not need any further offline event reconstruction, which allows a larger fraction of the grid computing resources to be devoted to Monte Carlo productions. We discuss how this real-time analysis model is absolutely critical to the LHCb upgrade, and how it will evolve during Run-II.

  20. Planetary Candidates Observed by Kepler, III: Analysis of the First 16 Months of Data

    Energy Technology Data Exchange (ETDEWEB)

    Batalha, Natalie M.; /San Jose State U.; Rowe, Jason F.; /NASA, Ames; Bryson, Stephen T.; /NASA, Ames; Barclay, Thomas; /NASA, Ames; Burke, Christopher J.; /NASA, Ames; Caldwell, Douglas A.; /NASA, Ames; Christiansen, Jessie L.; /NASA, Ames; Mullally, Fergal; /NASA, Ames; Thompson, Susan E.; /NASA, Ames; Brown, Timothy M.; /Las Cumbres Observ.; Dupree, Andrea K.; /Harvard-Smithsonian Ctr. Astrophys. /UC, Santa Cruz

    2012-02-01

    New transiting planet candidates are identified in sixteen months (May 2009 - September 2010) of data from the Kepler spacecraft. Nearly five thousand periodic transit-like signals are vetted against astrophysical and instrumental false positives, yielding 1091 viable new planet candidates and bringing the total count to over 2,300. Improved vetting metrics are employed, contributing to higher catalog reliability. Most notable is the noise-weighted robust averaging of multiquarter photocenter offsets derived from difference image analysis, which identifies likely background eclipsing binaries. Twenty-two months of photometry are used for the purpose of characterizing each of the new candidates. Ephemerides (transit epoch, T_0, and orbital period, P) are tabulated as well as the products of light curve modeling: reduced radius (R_P/R_star), reduced semi-major axis (d/R_star), and impact parameter (b). The largest fractional increases are seen for the smallest planet candidates (197% for candidates smaller than 2 R_Earth compared to 52% for candidates larger than 2 R_Earth) and those at longer orbital periods (123% for candidates outside of 50-day orbits versus 85% for candidates inside of 50-day orbits). The gains are larger than expected from increasing the observing window from thirteen months (Quarter 1 - Quarter 5) to sixteen months (Quarter 1 - Quarter 6). This demonstrates the benefit of continued development of pipeline analysis software. The fraction of all host stars with multiple candidates has grown from 17% to 20%, and the paucity of short-period giant planets in multiple systems is still evident. The progression toward smaller planets at longer orbital periods with each new catalog release suggests that Earth-size planets in the Habitable Zone are forthcoming if, indeed, such planets are abundant.
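
The tabulated reduced radius R_P/R_star is tied to the observable transit depth by the simple geometric relation depth ≈ (R_P/R_star)^2. A minimal Python sketch of that conversion (ignoring limb darkening and dilution, which the actual pipeline models; the 109.2 Earth-radii-per-solar-radius factor is the standard value):

```python
import math

def reduced_radius(depth_ppm):
    """Approximate R_P/R_star from transit depth in parts per million,
    using depth = (R_P/R_star)^2 (no limb darkening or dilution)."""
    return math.sqrt(depth_ppm * 1e-6)

def planet_radius_earth(depth_ppm, rstar_solar):
    """Planet radius in Earth radii given the depth and the stellar
    radius in solar radii (1 R_Sun ~ 109.2 R_Earth)."""
    return reduced_radius(depth_ppm) * rstar_solar * 109.2
```

A 1% (10,000 ppm) transit of a Sun-like star thus corresponds to R_P/R_star = 0.1, roughly a Jupiter-sized candidate.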

  1. Evaluation of Mid-Size Male Hybrid III Models for use in Spaceflight Occupant Protection Analysis

    Science.gov (United States)

    Putnam, J.; Somers, J.; Wells, J.; Newby, N.; Currie-Gregg, N.; Lawrence, C.

    2016-01-01

    Introduction: In an effort to improve occupant safety during dynamic phases of spaceflight, the National Aeronautics and Space Administration (NASA) has worked to develop occupant protection standards for future crewed spacecraft. One key aspect of these standards is the identification of injury mechanisms through anthropometric test devices (ATDs). Within this analysis, both physical and computational ATD evaluations are required to reasonably encompass the vast range of loading conditions any spaceflight crew may encounter. In this study, the accuracy of publicly available mid-size male HIII ATD finite element (FE) models is evaluated within applicable loading conditions against extensive sled testing performed on their physical counterparts. Methods: A series of sled tests was performed at Wright-Patterson Air Force Base (WPAFB) employing variations of magnitude, duration, and impact direction to encompass the dynamic loading range expected for spaceflight. FE simulations were developed to the specifications of the test setup and driven using measured acceleration profiles. Both fast and detailed FE models of the mid-size male HIII were run to quantify differences in their accuracy and thus assess the applicability of each within this field. Results: Preliminary results identify the dependence of model accuracy on loading direction, magnitude, and rate. Additionally, the accuracy of individual response metrics is shown to vary across each model within the evaluated test conditions. Causes of model inaccuracy are identified based on the observed relationships. Discussion: Computational modeling provides an essential component of ATD injury metric evaluation used to ensure the safety of future spaceflight occupants. The assessment of current ATD models lays the groundwork for how these models can be used appropriately in the future. Identification of limitations and possible paths for improvement aids in the development of these effective analysis tools.

  2. Richard III

    DEFF Research Database (Denmark)

    Lauridsen, Palle Schantz

    2017-01-01

    Short analysis of Shakespeare's Richard III, focusing on how this villain is presented so that spectators (and readers) can, for much of the play, feel sympathy for him. With parallels to the Netflix series "House of Cards".

  3. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
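
The function-space baseline that DILI builds on is the preconditioned Crank-Nicolson (pCN) proposal, whose acceptance probability involves only the likelihood, which is what makes the sampler robust to discretization refinement; DILI then adapts the proposal with operator weights along likelihood-informed directions. A minimal pCN step in Python (an illustrative sketch assuming an identity-covariance Gaussian prior; function names are our own):

```python
import math
import random

def pcn_step(u, log_likelihood, beta=0.2):
    """One preconditioned Crank-Nicolson MCMC step targeting a posterior
    proportional to exp(log_likelihood(u)) times an N(0, I) prior.
    The proposal sqrt(1-beta^2)*u + beta*xi leaves the Gaussian prior
    invariant, so the accept/reject ratio involves only the likelihood;
    this is the source of the discretization invariance."""
    xi = [random.gauss(0.0, 1.0) for _ in u]
    v = [math.sqrt(1.0 - beta**2) * ui + beta * xi_i
         for ui, xi_i in zip(u, xi)]
    log_alpha = log_likelihood(v) - log_likelihood(u)
    if log_alpha >= 0 or random.random() < math.exp(log_alpha):
        return v  # accept the proposal
    return u      # reject: stay at the current state
```

With a flat likelihood the step is always accepted and the chain samples the prior exactly, regardless of the dimension of u; DILI replaces the scalar beta with operator weights informed by the likelihood's Hessian.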

  4. Reducing the likelihood of long tennis matches.

    Science.gov (United States)

    Barnett, Tristan; Brown, Alan; Pollard, Graham

    2006-01-01

    Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key points: The cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match. A final tiebreaker set reduces the length of matches, as currently used in the US Open. A new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.
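
The game-level building block of the generating-function analysis can be sketched with a simple recursion on the score: the probability that the server wins a standard (deuce) game, or a 50-40 game (modelled here, per one common description of the proposal, as server-to-4 versus receiver-to-3 with no deuce), follows from the point-win probability p:

```python
from functools import lru_cache

def game_win_prob(p, server_target=4, receiver_target=4, by_two=True):
    """P(server wins) for a game where the server needs `server_target`
    points and the receiver `receiver_target`; by_two=True adds the
    deuce (win-by-two) rule of a standard game."""
    @lru_cache(maxsize=None)
    def win(a, b):  # a = server points, b = receiver points
        if by_two and a >= server_target - 1 and b >= receiver_target - 1:
            # deuce region: server wins the game with probability
            # p^2 / (p^2 + q^2), summing the geometric series of deuces
            q = 1.0 - p
            return p * p / (p * p + q * q)
        if a == server_target:
            return 1.0
        if b == receiver_target:
            return 0.0
        return p * win(a + 1, b) + (1.0 - p) * win(a, b + 1)
    return win(0, 0)

standard = game_win_prob(0.6)                         # standard deuce game
fifty_forty = game_win_prob(0.6, 4, 3, by_two=False)  # 50-40 style game
```

Because the 50-40 game ends within at most six points, its point-count distribution has a bounded support, which is what shortens matches; the same recursion extended to sets and matches yields the length distributions studied in the paper.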

  5. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang; Law, Kody; Marzouk, Youssef M.

    2015-01-01

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  6. BEAN 2.0: an integrated web resource for the identification and functional analysis of type III secreted effectors.

    Science.gov (United States)

    Dong, Xiaobao; Lu, Xiaotian; Zhang, Ziding

    2015-01-01

    Gram-negative pathogenic bacteria inject type III secreted effectors (T3SEs) into host cells to sabotage their immune signaling networks. Because T3SEs constitute a meeting-point of pathogen virulence and host defense, they are of keen interest to the host-pathogen interaction research community. To accelerate the identification and functional understanding of T3SEs, we present BEAN 2.0 as an integrated web resource to predict, analyse and store T3SEs. BEAN 2.0 includes three major components. First, it provides an accurate T3SE predictor based on a hybrid approach. Using independent testing data, we show that BEAN 2.0 achieves a sensitivity of 86.05% and a specificity of 100%. Second, it integrates a set of online sequence analysis tools. Users can further perform functional analysis of putative T3SEs in a seamless way, such as subcellular location prediction, functional domain scan and disorder region annotation. Third, it compiles a database covering 1215 experimentally verified T3SEs and constructs two T3SE-related networks that can be used to explore the relationships among T3SEs. Taken together, by presenting a one-stop T3SE bioinformatics resource, we hope BEAN 2.0 can promote comprehensive understanding of the function and evolution of T3SEs. © The Author(s) 2015. Published by Oxford University Press.

  7. Tank vapor sampling and analysis data package for tank 241-C-106 waste retrieval sluicing system process test phase III

    Energy Technology Data Exchange (ETDEWEB)

    LOCKREM, L.L.

    1999-08-13

    This data package presents sampling data and analytical results from the March 28, 1999, vapor sampling of Hanford Site single-shell tank 241-C-106 during active sluicing. Samples were obtained from the 296-C-006 ventilation system stack and ambient air at several locations. Characterization Project Operations (CPO) was responsible for the collection of all SUMMA™ canister samples. The Special Analytical Support (SAS) vapor team was responsible for the collection of all triple sorbent trap (TST), sorbent tube train (STT), polyurethane foam (PUF), and particulate filter samples collected at the 296-C-006 stack. The SAS vapor team used the non-electrical vapor sampling (NEVS) system to collect samples of the air, gases, and vapors from the 296-C-006 stack. The SAS vapor team collected and analyzed these samples for Lockheed Martin Hanford Corporation (LMHC) and Tank Waste Remediation System (TWRS) in accordance with the sampling and analytical requirements specified in the Waste Retrieval Sluicing System Vapor Sampling and Analysis Plan (SAP) for Evaluation of Organic Emissions, Process Test Phase III, HNF-4212, Rev. 0-A, (LMHC, 1999). All samples were stored in a secured Radioactive Materials Area (RMA) until the samples were radiologically released and received by SAS for analysis. The Waste Sampling and Characterization Facility (WSCF) performed the radiological analyses. The samples were received on April 5, 1999.

  8. An analysis of the proposed MITR-III core to establish thermal-hydraulic limits at 10 MW. Final report

    International Nuclear Information System (INIS)

    Harling, O.K.; Lanning, D.D.; Bernard, J.A.; Meyer, J.E.; Henry, A.F.

    1997-01-01

    The 5 MW Massachusetts Institute of Technology Research Reactor (MITR-II) is expected to operate under a new license beginning in 1999. Among the options being considered is an upgrade in the heat removal system to allow operation at 10 MW. The purpose of this study is to predict the Limiting Safety System Settings and Safety Limits for the upgraded reactor (MITR-III). The MITR Multi-Channel Analysis Code was written to analyze the response of the MITR system to a series of anticipated transients in order to determine the Limiting Safety System Settings and Safety Limits under various operating conditions. The MIT Multi-Channel Analysis Code models the primary and secondary systems, with special emphasis placed on analyzing the thermal-hydraulic conditions in the core. The code models each MITR fuel element explicitly in order to predict the behavior of the system during flow instabilities. The results of the code are compared to experimental data from MITR-II and other sources. New definitions are suggested for the Limiting Safety System Settings and Safety Limits. MITR Limit Diagrams are included for three different heat removal system configurations. It is concluded that safe, year-round operation at 10 MW is possible, provided that the primary and secondary flow rates are both increased by approximately 40%.

  9. RA reactor safety analysis, Part II - Accident analysis; Analiza sigurnosti rada Reaktora RA I-III, Deo II - Analiza akcidenta

    Energy Technology Data Exchange (ETDEWEB)

    Raisic, N; Radanovic, Lj; Milovanovic, M; Afgan, N; Kulundzic, P [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1963-02-15

    This part of the RA reactor safety analysis includes analysis of possible accidents caused by failures of the reactor devices and errors during reactor operation. Two types of accidents are analyzed: accidents resulting from uncontrolled reactivity increase, and accidents caused by interruption of cooling.

  10. ASAP3 - New Data Taking and Analysis Infrastructure for PETRA III

    International Nuclear Information System (INIS)

    Strutz, M; Gasthuber, M; Aplin, S; Dietrich, S; Kuhn, M; Ensslin, U; Smirnov, G; Lewendel, B; Guelzow, V

    2015-01-01

    Data taking and analysis infrastructures in HEP (High Energy Physics) have evolved over many years into a well understood problem domain. In contrast to HEP, third-generation synchrotron light sources and existing and upcoming free electron lasers are confronted with an explosion in data rates, driven primarily by recent developments in 2D pixel array detectors. The next generation of detectors will produce data at rates upwards of 50 Gbytes per second. At synchrotrons, data were traditionally carried away by users on portable media after data taking. This clearly will not scale. We present first experiences with our new architecture and underlying services, based on results from the resumption of data taking in April 2015. Technology choices were evaluated over a period of twelve months. The work involved a close collaboration between central IT, beamline controls, and beamline support staff. In addition, a cooperation was established between DESY IT and IBM to bring in industrial research and development experience and skills. Our approach integrates HPC technologies for storage systems and protocols. In particular, our solution uses a single file-system instance with multiple-protocol access, while operating within a single namespace. (paper)

  11. Fish Pectoral Fin Hydrodynamics; Part III: Low Dimensional Models via POD Analysis

    Science.gov (United States)

    Bozkurttas, M.; Madden, P.

    2005-11-01

    The highly complex kinematics of the pectoral fin and the resulting hydrodynamics does not lend itself easily to analysis based on simple notions of pitching/heaving/paddling kinematics or lift/drag based propulsive mechanisms. A more inventive approach is needed to dissect the fin gait and gain insight into the hydrodynamic performance of the pectoral fin. The focus of the current work is on the hydrodynamics of the pectoral fin of a bluegill sunfish in steady forward motion. The 3D, time-dependent fin kinematics is obtained via a stereo-videographic technique. We employ proper orthogonal decomposition to extract the essential features of the fin gait and then use CFD to examine the hydrodynamics of simplified gaits synthesized from the POD modes. The POD spectrum shows that the first two, three and five POD modes capture 55%, 67%, and 80% of the motion, respectively. The first three modes in particular are highly distinct: Mode-1 is a ``cupping'' motion where the fin cups forward as it is abducted; Mode-2 is an ``expansion'' motion where the fin expands to present a larger area during adduction; and finally Mode-3 involves a ``spanwise flick'' of the dorsal edge of the fin. Numerical simulation of flow past fin gaits synthesized from these modes leads to insights into the mechanisms of thrust production; these are discussed in detail.
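    The mode-energy fractions quoted above come from a proper orthogonal decomposition of motion snapshots. A minimal POD sketch via the SVD (method of snapshots), using synthetic data in place of the stereo-videographic fin kinematics:

```python
import numpy as np

# Minimal POD sketch: each column is one time frame of flattened
# fin-surface coordinates (synthetic data stands in for the real marker set).
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((300, 40))          # 300 DOF x 40 frames
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

# Cumulative fraction of motion "energy" captured by the leading modes;
# energy[1] corresponds to the first two POD modes, analogous to the
# 55% / 67% / 80% figures quoted for two / three / five modes.
energy = np.cumsum(s**2) / np.sum(s**2)
print(energy[:5])
```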

  12. Stratospheric Aerosol and Gas Experiment, SAGE III on ISS, An Earth Science Mission on the International Space Station, Schedule Risk Analysis, A Project Perspective

    Science.gov (United States)

    Bonine, Lauren

    2015-01-01

    The presentation provides insight into the schedule risk analysis process used by the Stratospheric Aerosol and Gas Experiment III on the International Space Station Project. The presentation focuses on the schedule risk analysis process highlighting the methods for identification of risk inputs, the inclusion of generic risks identified outside the traditional continuous risk management process, and the development of tailored analysis products used to improve risk informed decision making.

  13. Maximum likelihood versus likelihood-free quantum system identification in the atom maser

    International Nuclear Information System (INIS)

    Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin

    2014-01-01

    We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle. (paper)
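    The ABC procedure described above can be sketched generically: draw a parameter from the prior, simulate data, and accept the draw when a summary statistic falls close to the observed one. A toy rejection-ABC example with an exponential waiting-time model standing in for the atom-maser dynamics; the summary statistic, prior, and tolerance are all illustrative assumptions:

```python
import random

# Toy ABC rejection sampler: estimate the rate of an exponential
# waiting-time process from its mean (the summary statistic).
random.seed(1)

def simulate(rate, n=200):
    return [random.expovariate(rate) for _ in range(n)]

observed = simulate(2.0)                            # "measurement data"
s_obs = sum(observed) / len(observed)               # summary statistic

accepted = []
while len(accepted) < 500:
    theta = random.uniform(0.5, 5.0)                # draw from a flat prior
    s_sim = sum(simulate(theta)) / 200
    if abs(s_sim - s_obs) < 0.05:                   # tolerance on the summary
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
print(posterior_mean)                               # near the true rate of 2
```

The quality of the resulting posterior approximation hinges on how much information the chosen statistic retains, which is exactly the point the abstract makes about waiting times versus raw atom counts.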

  14. Type III odontoid fractures: A subgroup analysis of complex, high-energy fractures treated with external immobilization

    Directory of Open Access Journals (Sweden)

    Thomas E Niemeier

    2018-01-01

    Conclusions: Complex Type III odontoid fractures are distinctly different from low-energy injuries. In the current study, 21% of patients were unsuccessfully treated nonoperatively with external immobilization and required surgery. For complex Type III fractures, we recommend initial conservative treatment, while maintaining close monitoring throughout patient recovery and fracture union.

  15. Representational difference analysis of Neisseria meningitidis identifies sequences that are specific for the hyper-virulent lineage III clone

    NARCIS (Netherlands)

    Bart, A.; Dankert, J.; van der Ende, A.

    2000-01-01

    Neisseria meningitidis may cause meningitis and septicemia. Since the early 1980s, an increased incidence of meningococcal disease has been caused by the lineage III clone in many countries in Europe and in New Zealand. We hypothesized that lineage III meningococci have specific DNA sequences,

  16. PARDISEKO III

    International Nuclear Information System (INIS)

    Jordan, H.; Sack, C.

    1975-05-01

    This report gives a detailed description of the latest version of the PARDISEKO code, PARDISEKO III, with particular emphasis on the numerical and programming methods employed. The physical model and its relation to nuclear safety as well as a description and the results of confirming experiments are treated in detail in the Karlsruhe Nuclear Research Centre report KFK-1989. (orig.) [de

  17. AMS-C14 analysis of graphite obtained with an Automated Graphitization Equipment (AGE III) from aerosol collected on quartz filters

    Energy Technology Data Exchange (ETDEWEB)

    Solís, C.; Chávez, E.; Ortiz, M.E.; Andrade, E. [Instituto de Física, Universidad Nacional Autónoma de México, 04510 México D.F. (Mexico); Ortíz, E. [Universidad Autónoma Metropolitana, Unidad Azcapotzalco, México D.F. (Mexico); Szidat, S. [Department of Chemistry and Biochemistry, University of Bern, Freiestrasse 3, CH-3012 Bern (Switzerland); Paul Scherrer Institut (PSI), CH-5232 Villigen (Switzerland); Wacker, L. [Laboratory of Ion Physics, ETH, Honggerberg, Zurich (Switzerland)

    2015-10-15

    AMS-{sup 14}C applications often require the analysis of small samples. Such is the case of atmospheric aerosols where frequently only a small amount of sample is available. The ion beam physics group at the ETH, Zurich, has designed an Automated Graphitization Equipment (AGE III) for routine graphite production for AMS analysis from organic samples of approximately 1 mg. In this study, we explore the potential use of the AGE III for graphitization of particulate carbon collected in quartz filters. In order to test the methodology, samples of reference materials and blanks with different sizes were prepared in the AGE III and the graphite was analyzed in a MICADAS AMS (ETH) system. The graphite samples prepared in the AGE III showed recovery yields higher than 80% and reproducible {sup 14}C values for masses ranging from 50 to 300 μg. Also, reproducible radiocarbon values were obtained for aerosol filters of small sizes that had been graphitized in the AGE III. As a study case, the tested methodology was applied to PM{sub 10} samples collected in two urban cities in Mexico in order to compare the source apportionment of biomass and fossil fuel combustion. The obtained {sup 14}C data showed that carbonaceous aerosols from Mexico City have much lower biogenic signature than the smaller city of Cuernavaca.

  18. Soft tissue thin-plate spline analysis of pre-pubertal Korean and European-Americans with untreated Angle's Class III malocclusions.

    Science.gov (United States)

    Singh, G D; McNamara, J A; Lozanoff, S

    1999-01-01

    The purpose of this study was to assess soft tissue facial matrices in subjects of diverse ethnic origins with underlying dentoskeletal malocclusions. Pre-treatment lateral cephalographs of 71 Korean and 70 European-American children aged between 5 and 11 years with Angle's Class III malocclusions were traced, and 12 homologous soft tissue landmarks were digitized. Comparing the mean Korean and European-American Class III soft tissue profiles, Procrustes analysis established a statistically significant difference, and thin-plate spline analysis indicated that both affine and non-affine transformations contribute towards the total spline (deformation) of the averaged Class III soft tissue configurations. For non-affine transformations, partial warp (PW) 8 had the highest magnitude, indicating large-scale deformations visualized predominantly as labio-mental protrusion. In addition, PW9, PW4, and PW5 also had high magnitudes, demonstrating labio-mental vertical compression and antero-posterior compression of the lower labio-mental soft tissues. Thus, Korean children with Class III malocclusions demonstrate antero-posterior and vertical deformations of the labio-mental soft tissue complex with respect to their European-American counterparts. Morphological heterogeneity of the soft tissue integument in subjects of diverse ethnic origin may obscure the underlying skeletal morphology, but the soft tissue integument appears to have minimal ontogenetic association with Class III malocclusions.

  19. Capture, analysis and data plotting programs for the study of the thermometry of the TRIGA Mark III reactor core; Programas de captura, analisis y graficado de datos para el estudio de la termometria del nucleo del reactor TRIGA Mark III

    Energy Technology Data Exchange (ETDEWEB)

    Paredes G, L.C

    1991-05-15

    This document describes the programs used to capture, analyse, and plot the data obtained during measurement of the temperatures of the instrumented fuel element of the TRIGA Mark III reactor and of the coolant near this fuel element. The system uses a 'Data Translation' analog-to-digital conversion card and a signal conditioner for five temperature sensors based on type K thermocouples, developed by the Nuclear Systems Simulation and Control Department, which delivers a 0 to 10 Vdc signal over a temperature interval of 0 to 1000 C. (Author)

  20. On-line Speciation of Cr(III) and Cr(VI) by Flow Injection Analysis With Spectrophotometric Detection and Chemometrics

    DEFF Research Database (Denmark)

    Diacu, Elena; Andersen, Jens Enevold Thaulov

    2003-01-01

    A flow injection system has been developed for the on-line speciation of Cr(III) and Cr(VI) by the diphenylcarbazide (DPC) method with H2O2 oxidation, followed by spectrophotometric detection at the 550 nm wavelength. The data thus obtained were subjected to a chemometric analysis (PLS), which showed…

  1. Serum and urine analysis of the aminoterminal procollagen peptide type III by radioimmunoassay with antibody Fab fragments.

    Science.gov (United States)

    Rohde, H; Langer, I; Krieg, T; Timpl, R

    1983-09-01

    A radioimmunoassay based on antibody Fab fragments was developed for the aminoterminal peptide Col 1-3 of bovine type III procollagen. This assay does not distinguish the intact aminopropeptide Col 1-3 from its globular fragment Col 1. Parallel inhibition profiles were observed with human serum and urine, allowing the simultaneous quantitative determination of intact and fragmented antigens in these samples. Most of the material has a size similar to that of fragment Col 1, indicating that the aminopropeptide is degraded under physiologic conditions. The concentration of aminopropeptide in normal sera was in the range 15-63 ng/ml. Daily excretion was found to be in the range 30-110 micrograms. More than 50% of patients with alcoholic hepatitis and liver cirrhosis showed elevated serum levels of aminopropeptide by the Fab assay. Elevated concentrations were detected more frequently with an antibody radioimmunoassay which measures mainly the intact form of the aminopropeptide. It is suggested that analysis of patients' material by both assays could improve their diagnostic application.

  2. A three-dimensional analysis of skeletal and dental characteristics in skeletal class III patients with facial asymmetry.

    Science.gov (United States)

    Yu, Jinfeng; Hu, Yun; Huang, Mingna; Chen, Jun; Ding, Xiaoqian; Zheng, Leilei

    2018-03-15

    To evaluate the skeletal and dental characteristics in skeletal class III patients with facial asymmetry and to analyse the relationships among various parts of the stomatognathic system to provide a theoretical basis for clinical practice. Cone-beam computed tomography data acquired from 56 patients with asymmetry were evaluated using Mimics 10.0 and 3-Matic software. Skeletal and dental measurements were performed to assess the three-dimensional differences between the two sides. Pearson correlation analysis was used to determine the correlations among measurements. Linear measurements, such as ramal height, mandible body length, ramal height above the sigmoid notch (RHASN), maxillary height, condylar height, buccal and total cancellous bone thickness, and measurements of condylar size, were significantly larger on the nondeviated side than on the deviated side. Orthodontic camouflage has limitations and potential risks; a combination of orthodontics and orthognathic surgery may be the advisable choice in patients with a menton deviation greater than 4 mm. An important association between vertical skeletal disharmony and dental compensation was also observed.

  3. Randomized phase III trial of regorafenib in metastatic colorectal cancer: analysis of the CORRECT Japanese and non-Japanese subpopulations.

    Science.gov (United States)

    Yoshino, Takayuki; Komatsu, Yoshito; Yamada, Yasuhide; Yamazaki, Kentaro; Tsuji, Akihito; Ura, Takashi; Grothey, Axel; Van Cutsem, Eric; Wagner, Andrea; Cihon, Frank; Hamada, Yoko; Ohtsu, Atsushi

    2015-06-01

    In the international, phase III, randomized, double-blind CORRECT trial, regorafenib significantly prolonged overall survival (OS) versus placebo in patients with metastatic colorectal cancer (mCRC) that had progressed on all standard therapies. This post hoc analysis evaluated the efficacy and safety of regorafenib in Japanese and non-Japanese subpopulations in the CORRECT trial. Patients were randomized 2 : 1 to regorafenib 160 mg once daily or placebo for weeks 1-3 of each 4-week cycle. The primary endpoint was OS. Outcomes were assessed using descriptive statistics. One hundred Japanese and 660 non-Japanese patients were randomized to regorafenib (n = 67 and n = 438) or placebo (n = 33 and n = 222). Regorafenib had a consistent OS benefit in the Japanese and non-Japanese subpopulations, with hazard ratios of 0.81 (95 % confidence interval [CI] 0.43-1.51) and 0.77 (95 % CI 0.62-0.94), respectively. Regorafenib-associated hand-foot skin reaction, hypertension, proteinuria, thrombocytopenia, and lipase elevations occurred more frequently in the Japanese subpopulation than in the non-Japanese subpopulation, but were generally manageable. Regorafenib appears to have comparable efficacy in Japanese and non-Japanese subpopulations, with a manageable adverse-event profile, suggesting that this agent could potentially become a standard of care in patients with mCRC.

  4. Frequency of breast cancer with hereditary risk features in Spain: Analysis from GEICAM "El Álamo III" retrospective study.

    Science.gov (United States)

    Márquez-Rodas, Iván; Pollán, Marina; Escudero, María José; Ruiz, Amparo; Martín, Miguel; Santaballa, Ana; Martínez Del Prado, Purificación; Batista, Norberto; Andrés, Raquel; Antón, Antonio; Llombart, Antonio; Fernandez Aramburu, Antonio; Adrover, Encarnación; González, Sonia; Seguí, Miguel Angel; Calvo, Lourdes; Lizón, José; Rodríguez Lescure, Álvaro; Ramón Y Cajal, Teresa; Llort, Gemma; Jara, Carlos; Carrasco, Eva; López-Tarruella, Sara

    2017-01-01

    To determine the frequency of breast cancer (BC) patients with hereditary risk features in a wide retrospective cohort of patients in Spain. A retrospective analysis was conducted of 10,638 BC patients diagnosed between 1998 and 2001 in the GEICAM registry "El Álamo III", dividing them into four groups according to modified ESMO and SEOM hereditary cancer risk criteria: Sporadic breast cancer group (R0); Individual risk group (IR); Familial risk group (FR); and Individual and familial risk group (IFR), with both individual and familial risk criteria. 7,641 patients were evaluable. Of them, 2,252 patients (29.5%) had at least one hereditary risk criterion, subclassified as: FR 1,105 (14.5%), IR 970 (12.7%), IFR 177 (2.3%). There was a higher frequency of newly diagnosed metastatic patients in the IR group (5.1% vs 3.2%, p = 0.02). In contrast, the R0 group had a lower proportion of large tumors (>T2) (43.8% vs 47.4%, p = 0.023), less nodal involvement (43.4% vs 48.1%, p = 0.004), and lower histological grades (20.9% G3 for R0 vs 29.8%) when compared to patients with any risk criteria. Almost three out of ten BC patients have at least one hereditary cancer risk feature that would warrant further genetic counseling. Patients with hereditary cancer risk seem to be diagnosed with worse prognostic factors.

  5. Transfer Entropy as a Log-Likelihood Ratio

    Science.gov (United States)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
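    In the Gaussian case, the equivalence with Granger causality means the transfer entropy can be estimated as half the log-ratio of residual variances between a restricted autoregression (past of the target only) and a full one (adding the past of the source). A sketch on a synthetic coupled AR(1) pair; the model coefficients are illustrative:

```python
import numpy as np

# Synthetic coupled AR(1) pair: x drives y with coefficient 0.3.
rng = np.random.default_rng(2)
n = 5000
x = np.zeros(n); y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t-1] + rng.standard_normal()
    y[t] = 0.4 * y[t-1] + 0.3 * x[t-1] + rng.standard_normal()

# Restricted model: y[t] on y[t-1]; full model adds x[t-1].
Yr = np.column_stack([np.ones(n-1), y[:-1]])
Yf = np.column_stack([np.ones(n-1), y[:-1], x[:-1]])
target = y[1:]
res_r = target - Yr @ np.linalg.lstsq(Yr, target, rcond=None)[0]
res_f = target - Yf @ np.linalg.lstsq(Yf, target, rcond=None)[0]

# Transfer entropy estimate in nats; 2*n*te_hat is asymptotically chi-squared
# under the null of zero transfer, per the result described above.
te_hat = 0.5 * np.log(res_r.var() / res_f.var())
print(te_hat)                                       # > 0: x helps predict y
```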

  6. Maximum likelihood estimation of phase-type distributions

    DEFF Research Database (Denmark)

    Esparza, Luz Judith R

    This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions for both univariate and multivariate cases. Methods like the EM algorithm and Markov chain Monte Carlo are applied for this purpose. Furthermore, this thesis provides explicit formulae for computing the Fisher information matrix for discrete and continuous phase-type distributions, which is needed to find confidence regions for their estimated parameters. Finally, a new general class of distributions, called bilateral matrix-exponential distributions, is defined. These distributions have the entire real line as domain and can be used, for instance, for modelling. In addition, this class of distributions…
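    A phase-type distribution is the absorption time of a finite Markov jump process, with density f(x) = pi · exp(Tx) · t, where pi is the initial distribution, T the sub-generator, and t the exit-rate vector. A sketch evaluating this matrix-exponential formula for an Erlang(2, lam) representation and checking it against the closed form; the tiny Taylor-series matrix exponential is only there to keep the example self-contained:

```python
import math
import numpy as np

def expm_taylor(A, terms=60):
    """Plain Taylor-series matrix exponential (fine for tiny, well-scaled A)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

lam = 1.5
pi = np.array([1.0, 0.0])                    # start in phase 1
T = np.array([[-lam, lam], [0.0, -lam]])     # sub-generator of the jump process
t_exit = -T @ np.ones(2)                     # exit-rate vector

x = 2.0
f_pt = pi @ expm_taylor(T * x) @ t_exit      # phase-type density at x
f_closed = lam**2 * x * math.exp(-lam * x)   # Erlang(2, lam) closed form
print(f_pt, f_closed)                        # the two values agree
```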

  7. Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2004-01-01

    …data under investigation. The flow-physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator (CMLE) has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion, whereas the CC and the MLE produce incorrect estimates beyond the search-range limit for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the number of incorrect velocity estimates is 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and to estimate low velocity levels was confirmed…

  8. Post-treatment resistance analysis of hepatitis C virus from phase II and III clinical trials of ledipasvir/sofosbuvir.

    Science.gov (United States)

    Wyles, David; Dvory-Sobol, Hadas; Svarovskaia, Evguenia S; Doehle, Brian P; Martin, Ross; Afdhal, Nezam H; Kowdley, Kris V; Lawitz, Eric; Brainard, Diana M; Miller, Michael D; Mo, Hongmei; Gane, Edward J

    2017-04-01

    Ledipasvir/sofosbuvir combination treatment in phase III clinical trials resulted in sustained viral suppression in 94-99% of patients. This study characterized drug resistance in treatment failures, which may help to inform retreatment options. We performed NS5A and NS5B deep sequencing of hepatitis C virus (HCV) from patients infected with genotype (GT) 1 who participated in ledipasvir/sofosbuvir phase II and III clinical trials. Fifty-one of 2144 (2.4%) treated patients (42 GT1a and 9 GT1b) met the criteria for resistance analysis due to virologic failure following the end of treatment. The majority of patients with virologic failure (38 of 51; 74.5%) had detectable ledipasvir-specific resistance-associated substitutions (RASs) at the time of virologic failure (1% deep sequencing cut-off). The percentages of patients with NS5A RASs at virologic failure were 37.5%, 66.7%, 94.7% and 100% in patients treated for 6, 8, 12 and 24 weeks, respectively. The common substitutions detected at failure were Q30R/H and/or Y93H/N in GT1a, and Y93H in GT1b. At failure, 35.3% (18/51) of virologic failure patients' viruses had two or more NS5A RASs, and the majority of patients harbored NS5A RASs conferring a 100-1000-fold (n=10) or >1000-fold (n=23) reduced susceptibility to ledipasvir. One patient in a phase II study with a known ledipasvir RAS at baseline (L31M) developed the S282T sofosbuvir (NS5B) RAS at failure. In GT1 HCV-infected patients treated with ledipasvir/sofosbuvir±ribavirin, virologic failure was rare. Ledipasvir resistance in NS5A was selected or enhanced in most patients with virologic failure, one of whom also developed resistance to sofosbuvir. Clinical studies have shown that combination treatment with ledipasvir/sofosbuvir efficiently cures most patients with genotype 1 hepatitis C infection. For the few patients failing treatment, we show that resistance to ledipasvir was observed in most patients, whereas resistance to sofosbuvir was less common. This has

  9. Fermilab III

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    The total ongoing plans for Fermilab are wrapped up in the Fermilab III scheme, centrepiece of which is the proposal for a new Main Injector. The Laboratory has been awarded a $200,000 Illinois grant which will be used to initiate environmental assessment and engineering design of the Main Injector, while a state review panel recommended that the project should also benefit from $2 million of funding

  11. Penalized Maximum Likelihood Estimation for univariate normal mixture distributions

    International Nuclear Information System (INIS)

    Ridolfi, A.; Idier, J.

    2001-01-01

    Due to singularities of the likelihood function, the maximum likelihood approach to estimating the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is resolved by penalizing the likelihood function; in the Bayesian framework, this amounts to incorporating an inverted gamma prior into the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test.
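    The penalized EM idea described above can be sketched in one dimension: an inverted-gamma prior on each component variance adds constant terms to the M-step variance update, keeping the estimates away from the degenerate zero-variance singularities. The hyperparameters and data here are illustrative, not the paper's:

```python
import numpy as np

# MAP-EM sketch for a two-component univariate normal mixture with an
# inverted-gamma(a, b) prior on each variance. The prior contributes
# +2b to the numerator and +2(a+1) to the denominator of the M-step.
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 1, 150)])
a, b = 2.0, 1.0                               # illustrative hyperparameters

w = np.array([0.5, 0.5]); mu = np.array([-1.0, 1.0]); var = np.array([1.0, 1.0])
for _ in range(100):
    # E-step: component responsibilities for each data point
    dens = w * np.exp(-(data[:, None] - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    nk = r.sum(axis=0)
    # Penalized M-step: the +2b term bounds each variance away from zero
    w = nk / nk.sum()
    mu = (r * data[:, None]).sum(axis=0) / nk
    var = ((r * (data[:, None] - mu)**2).sum(axis=0) + 2 * b) / (nk + 2 * (a + 1))

print(mu, var)                                # means near -2 and 2
```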

  12. CCF analysis of BWR reactor shutdown systems based on the operating experience at the TVO I/II in 1981-1993

    International Nuclear Information System (INIS)

    Mankamo, T.

    1996-04-01

    The work constitutes a part of the project conducted within the research program of the Swedish Nuclear Power Inspectorate SKI, aimed to develop the methods and data base for the Common Cause Failure (CCF) analysis of highly redundant reactor scram systems. The data analysis for the TVO I/II plant is focused on the hydraulic scram system, and control rods and drives. It covers operating experiences from 1981 through 1993. (9 refs., 9 figs., 7 tabs.)

  13. Detection and genome analysis of a Lineage III peste des petits ruminants virus in Kenya in 2011

    International Nuclear Information System (INIS)

    Dundon, W.G.; Kihu, S.; Gitao, G.C.; Bebora, L.C.; John, N.M.; Oyugi, J.O.; Loitsch, A.; Diallo, A.

    2015-01-01

    These data strongly indicate transboundary movement of Lineage III viruses between Eastern African countries and have significant implications for surveillance and control of this important disease as it moves southwards in Africa.

  14. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance-Structure Models to Block-Toeplitz Representing Single-Subject Multivariate Time-Series

    NARCIS (Netherlands)

    Molenaar, P.C.M.; Nesselroade, J.R.

    1998-01-01

    The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM),

  15. Calibration of two complex ecosystem models with different likelihood functions

    Science.gov (United States)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    …goodness metric on calibration. The different likelihoods are different functions of the RMSE (root mean squared error) weighted by measurement uncertainty: exponential, linear, quadratic, or linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters that have a strong effect on the output data. In the second calibration step, only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). Cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
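    The RMSE-based likelihood variants mentioned above might look as follows; the exact functional forms and uncertainty weighting used in the study are not given, so these are illustrative stand-ins:

```python
import math

# Illustrative uncertainty-weighted, RMSE-based likelihood variants.
def rmse(sim, obs):
    return math.sqrt(sum((s - o)**2 for s, o in zip(sim, obs)) / len(obs))

def likelihood(sim, obs, sigma, kind="exponential"):
    e = rmse(sim, obs) / sigma                  # weight by measurement uncertainty
    if kind == "exponential":
        return math.exp(-e)
    if kind == "linear":
        return max(0.0, 1.0 - e)
    if kind == "quadratic":
        return max(0.0, 1.0 - e**2)
    raise ValueError(kind)

obs = [1.0, 2.0, 3.0]
sim = [1.1, 1.9, 3.2]
for kind in ("exponential", "linear", "quadratic"):
    print(kind, likelihood(sim, obs, sigma=0.5, kind=kind))
```

In a calibration loop, the chosen variant scores each candidate parameter set; the abstract's point is that this choice of goodness metric changes which parameters look optimal and how wide their confidence intervals are.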

  16. Comparison of the diagnostic ability of Moorfield’s regression analysis and glaucoma probability score using Heidelberg retinal tomograph III in eyes with primary open angle glaucoma

    Science.gov (United States)

    Jindal, Shveta; Dada, Tanuj; Sreenivas, V; Gupta, Viney; Sihota, Ramanjit; Panda, Anita

    2010-01-01

    Purpose: To compare the diagnostic performance of the Heidelberg retinal tomograph (HRT) glaucoma probability score (GPS) with that of Moorfield’s regression analysis (MRA). Materials and Methods: The study included 50 eyes of normal subjects and 50 eyes of subjects with early-to-moderate primary open angle glaucoma. Images were obtained by using HRT version 3.0. Results: The agreement coefficient (weighted k) for the overall MRA and GPS classification was 0.216 (95% CI: 0.119 – 0.315). The sensitivity and specificity were evaluated using the most specific (borderline results included as test negatives) and least specific criteria (borderline results included as test positives). The MRA sensitivity and specificity were 30.61 and 98% (most specific) and 57.14 and 98% (least specific). The GPS sensitivity and specificity were 81.63 and 73.47% (most specific) and 95.92 and 34.69% (least specific). The MRA gave a higher positive likelihood ratio (28.57 vs. 3.08) and the GPS gave a higher negative likelihood ratio (0.25 vs. 0.44).The sensitivity increased with increasing disc size for both MRA and GPS. Conclusions: There was a poor agreement between the overall MRA and GPS classifications. GPS tended to have higher sensitivities, lower specificities, and lower likelihood ratios than the MRA. The disc size should be taken into consideration when interpreting the results of HRT, as both the GPS and MRA showed decreased sensitivity for smaller discs and the GPS showed decreased specificity for larger discs. PMID:20952832
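    The likelihood ratios quoted above follow directly from sensitivity and specificity: LR+ = sens / (1 - spec) and LR- = (1 - sens) / spec. A quick check against the reported MRA (least specific criteria) and GPS (most specific criteria) figures:

```python
# Diagnostic likelihood ratios from sensitivity and specificity.
def lr_pos(sens, spec):
    return sens / (1 - spec)

def lr_neg(sens, spec):
    return (1 - sens) / spec

print(round(lr_pos(0.5714, 0.98), 2))    # 28.57, the MRA positive LR
print(round(lr_neg(0.8163, 0.7347), 2))  # 0.25, the GPS negative LR
```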

  17. Comparison of the diagnostic ability of Moorfield's regression analysis and glaucoma probability score using Heidelberg retinal tomograph III in eyes with primary open angle glaucoma

    Directory of Open Access Journals (Sweden)

    Jindal Shveta

    2010-01-01

    Full Text Available Purpose: To compare the diagnostic performance of the Heidelberg retinal tomograph (HRT) glaucoma probability score (GPS) with that of Moorfield's regression analysis (MRA). Materials and Methods: The study included 50 eyes of normal subjects and 50 eyes of subjects with early-to-moderate primary open angle glaucoma. Images were obtained by using HRT version 3.0. Results: The agreement coefficient (weighted kappa) for the overall MRA and GPS classification was 0.216 (95% CI: 0.119–0.315). The sensitivity and specificity were evaluated using the most specific (borderline results included as test negatives) and least specific (borderline results included as test positives) criteria. The MRA sensitivity and specificity were 30.61 and 98% (most specific) and 57.14 and 98% (least specific). The GPS sensitivity and specificity were 81.63 and 73.47% (most specific) and 95.92 and 34.69% (least specific). The MRA gave a higher positive likelihood ratio (28.57 vs. 3.08) and the GPS gave a higher negative likelihood ratio (0.25 vs. 0.44). The sensitivity increased with increasing disc size for both MRA and GPS. Conclusions: There was poor agreement between the overall MRA and GPS classifications. GPS tended to have higher sensitivities, lower specificities, and lower likelihood ratios than the MRA. Disc size should be taken into consideration when interpreting HRT results, as both the GPS and MRA showed decreased sensitivity for smaller discs and the GPS showed decreased specificity for larger discs.

  18. An analysis of Cobit 5 as a framework for the implementation of it governance with reference to King III

    Directory of Open Access Journals (Sweden)

    Maseko, L.

    2016-02-01

    Full Text Available Owing to the complexity and general lack of understanding of information technology (“IT”), the management of IT is often treated as a separately managed value-providing asset. This has resulted in IT rarely receiving the necessary attention of the board, thus creating a disconnect between the board and IT. The King Code of Governance for South Africa 2009 (hereafter referred to as “King III”) provides principles and recommended practices for effective IT governance in order to create greater awareness at board level. King III, however, provides no detailed guidance on the practical implementation of these principles and practices. It is worth noting that numerous international guidelines recommended within King III can be adopted as frameworks to assist in the effective implementation of IT governance. COBIT 5 provides, as part of its governance process practices, related guidance activities linking it to the seven IT governance principles of King III, thus making it a practical framework for the implementation of King III recommendations. This study sought to establish the extent to which the governance processes, practices and activities of COBIT 5 map to the recommended IT governance practices highlighted in King III, in order to determine whether COBIT 5 can serve as the de facto framework for IT governance in terms of King III. The study found that although King III principles and practices may be interpreted as vague with regard to how to implement IT governance principles, COBIT 5 succeeds in bridging the gap between control requirements, technical issues, information systems and business risk, which consequently results in better facilitation of IT governance. The study also revealed that COBIT 5 contains additional activities to assist the board in more transparent reporting of IT performance and conformance management to stakeholders, as well as activities which enable the connection of resource management with human

  19. Isolation and expression analysis of four HD-ZIP III family genes targeted by microRNA166 in peach.

    Science.gov (United States)

    Zhang, C H; Zhang, B B; Ma, R J; Yu, M L; Guo, S L; Guo, L

    2015-10-30

    MicroRNA166 (miR166) is known to have highly conserved targets that encode proteins of the class III homeodomain-leucine zipper (HD-ZIP III) family in a broad range of plant species. To further understand the relationship between HD-ZIP III genes and miR166, four HD-ZIP III family genes (PpHB14, PpHB15, PpHB8, and PpREV) were isolated from peach (Prunus persica) tissue and characterized, and their spatio-temporal expression profiles were analyzed. Genes of the peach HD-ZIP III family were predicted to encode five conserved domains. The deduced amino acid sequences and tertiary structures of the four peach HD-ZIP III genes were highly conserved relative to the corresponding genes in Arabidopsis thaliana. The expression levels of the four targets displayed the opposite trend to that of miR166 throughout fruit development, with the exception of PpHB14 from 35 to 55 days after full bloom (DAFB). This finding indicates that miR166 may negatively regulate its four targets throughout fruit development. In leaf and phloem, the same expression trend was observed for the four targets and miR166 from 75 to 105 DAFB, whereas the opposite trend was observed from 35 to 55 DAFB. miR166 may therefore negatively regulate its four targets in some, but not all, developmental stages of a given tissue. The four genes showed exactly or broadly similar expression trends during the development of individual tissues, suggesting that peach HD-ZIP III family genes may have complementary or cooperative functions in various tissues.

  20. Dosimetric explanations of fatigue in head and neck radiotherapy: An analysis from the PARSPORT Phase III trial

    International Nuclear Information System (INIS)

    Gulliford, Sarah L.; Miah, Aisha B.; Brennan, Sinead; McQuaid, Dualta; Clark, Catharine H.; Partridge, Mike; Harrington, Kevin J.; Morden, James P.; Hall, Emma; Nutting, Christopher M.

    2012-01-01

    Background: An unexpected finding from the phase III parotid sparing radiotherapy trial, PARSPORT (ISRCTN48243537, CRUK/03/005), was a statistically significant increase in acute fatigue for those patients who were treated with intensity-modulated radiotherapy (IMRT) compared to standard conventional radiotherapy (CRT). One possible explanation was the difference in dose to central nervous system (CNS) structures due to differing beam portals. Using data from the trial, a dosimetric analysis of individual CNS structures was performed. Method: Dosimetric and toxicity data were available for 67 patients (27 CRT, 40 IMRT). Retrospective delineation of the posterior fossa, brainstem, cerebellum, pituitary gland, pineal gland, hypothalamus, hippocampus and basal ganglia was performed. Dosimetry was reviewed using summary statistics and dose–volume atlases. Results: A statistically significant increase in maximum and mean doses to each structure was observed for patients who received IMRT compared to those who received CRT. Both maximum and mean doses were significantly higher for the posterior fossa, brainstem and cerebellum for the 42 patients who reported acute fatigue of Grade 2 or higher (p ⩽ 0.01) compared to the 25 who did not. Dose–volume atlases of the same structures indicated that regions representing larger volumes and higher doses to each structure were consistent with a higher incidence of acute fatigue. There was no association between the dose distribution and acute fatigue for the other structures tested. Conclusions: The excess fatigue reported in the IMRT arm of the trial may, at least in part, be attributed to the dose distribution to the posterior fossa, cerebellum and brainstem. Future studies that modify dose delivery to these structures may allow us to test the hypothesis that radiation-induced fatigue is avoidable.

  1. Efficient Detection of Repeating Sites to Accelerate Phylogenetic Likelihood Calculations.

    Science.gov (United States)

    Kobert, K; Stamatakis, A; Flouri, T

    2017-03-01

    The phylogenetic likelihood function (PLF) is the major computational bottleneck in several applications of evolutionary biology such as phylogenetic inference, species delimitation, model selection, and divergence times estimation. Given the alignment, a tree and the evolutionary model parameters, the likelihood function computes the conditional likelihood vectors for every node of the tree. Vector entries for which all input data are identical result in redundant likelihood operations which, in turn, yield identical conditional values. Such operations can be omitted for improving run-time and, using appropriate data structures, reducing memory usage. We present a fast, novel method for identifying and omitting such redundant operations in phylogenetic likelihood calculations, and assess the performance improvement and memory savings attained by our method. Using empirical and simulated data sets, we show that a prototype implementation of our method yields up to 12-fold speedups and uses up to 78% less memory than one of the fastest and most highly tuned implementations of the PLF currently available. Our method is generic and can seamlessly be integrated into any phylogenetic likelihood implementation. [Algorithms; maximum likelihood; phylogenetic likelihood function; phylogenetics]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
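
    The core idea, computing each per-site likelihood only once per unique pattern and weighting it by its multiplicity, can be illustrated at the tip level. Note the published method is more general: it detects repeats at every inner node of the tree, while this sketch only collapses identical alignment columns (the classical special case):

```python
from collections import Counter

def compress_sites(alignment):
    """Collapse identical alignment columns (site patterns) so that the
    per-site likelihood can be computed once per unique pattern and then
    weighted by its multiplicity. `alignment` is a list of equal-length
    sequences, one per taxon."""
    columns = list(zip(*alignment))      # one tuple of states per site
    counts = Counter(columns)
    patterns = list(counts)
    weights = [counts[p] for p in patterns]
    return patterns, weights

aln = ["ACCA",   # taxon 1
       "ACCA",   # taxon 2
       "GTTG"]   # taxon 3
patterns, weights = compress_sites(aln)  # 2 unique patterns out of 4 sites
```

    In a real likelihood implementation, the total log-likelihood is then `sum(w * site_loglik(p) for p, w in zip(patterns, weights))`, so every avoided duplicate pattern saves a full conditional-likelihood computation.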

  2. Planck intermediate results: XVI. Profile likelihoods for cosmological parameters

    DEFF Research Database (Denmark)

    Bartlett, J.G.; Cardoso, J.-F.; Delabrouille, J.

    2014-01-01

    We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the ΛCDM best-fit solution with respect to previously-released results, and the construction of frequentist confidence intervals using profile likelihoods. The agr...

  3. Planck 2013 results. XV. CMB power spectra and likelihood

    DEFF Research Database (Denmark)

    Tauber, Jan; Bartlett, J.G.; Bucher, M.

    2014-01-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best...

  4. The modified signed likelihood statistic and saddlepoint approximations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1992-01-01

    SUMMARY: For a number of tests in exponential families we show that the use of a normal approximation to the modified signed likelihood ratio statistic r* is equivalent to the use of a saddlepoint approximation. This is also true in a large deviation region where the signed likelihood ratio statistic r is of order √n. © 1992 Biometrika Trust.

  5. The fine-tuning cost of the likelihood in SUSY models

    International Nuclear Information System (INIS)

    Ghilencea, D.M.; Ross, G.G.

    2013-01-01

    In SUSY models, the fine-tuning of the electroweak (EW) scale with respect to their parameters γ_i = {m_0, m_{1/2}, μ_0, A_0, B_0, …} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Δ of the usual likelihood L and the traditional fine-tuning measure Δ of the EW scale. A similar result is obtained for the likelihood integrated over the set {γ_i}, which can be written as a surface integral of the ratio L/Δ, with the surface in γ_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Δ or, equivalently, a small χ²_new = χ²_old + 2 ln Δ. This shows the fine-tuning cost to the likelihood (χ²_new) of the EW scale stability enforced by SUSY, which is ignored in data fits. A good χ²_new/d.o.f. ≈ 1 thus demands that SUSY models have a fine-tuning amount Δ ≪ exp(d.o.f./2), which provides a model-independent criterion for acceptable fine-tuning. If this criterion is not met, one can rule out SUSY models without a further χ²/d.o.f. analysis. Numerical methods to fit the data can easily be adapted to account for this effect.
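
    The corrected statistic and the resulting fine-tuning bound can be written down directly from the abstract's formulae; a minimal sketch:

```python
import math

def chi2_corrected(chi2_old, delta):
    """chi^2_new = chi^2_old + 2 ln(Delta): the usual chi^2 plus the
    fine-tuning penalty, in the notation of the abstract."""
    return chi2_old + 2.0 * math.log(delta)

def max_acceptable_delta(dof):
    """The model-independent bound Delta << exp(d.o.f./2): the Delta at
    which the penalty alone would already use up chi^2/d.o.f. = 1."""
    return math.exp(dof / 2.0)
```

    For example, with Δ = 1 (no tuning) the penalty vanishes, while each e-fold of Δ adds 2 units to χ²_new, which is why a model with Δ approaching exp(d.o.f./2) can be excluded before any detailed fit.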

  6. Kinematic analysis of mandibular motion before and after orthognathic surgery for skeletal Class III malocclusion: A pilot study.

    Science.gov (United States)

    Ugolini, Alessandro; Mapelli, Andrea; Segù, Marzia; Galante, Domenico; Sidequersky, Fernanda V; Sforza, Chiarella

    2017-03-01

    The aim of the study was to detect the changes in 3D mandibular motion after orthognathic surgery for skeletal Class III malocclusion. Using a 3D motion analyzer, free mandibular border movements were recorded in nine patients successfully treated for skeletal Class III malocclusion and in nine patients scheduled for orthognathic surgery. Data were compared using Mann-Whitney non-parametric U-test. The results showed no differences between the groups in the total amount of mouth opening, protrusion, and in lateral excursions, but the percentage of mandibular movement explained by condylar translation was significantly increased after surgery (20% vs. 23.6%). During opening, the post-surgery patients showed a more symmetrical mandibular interincisal point and condylar path than pre-surgery patients (p < 0.01). Patients treated with orthognathic surgery for skeletal Class III malocclusion recover a good and symmetric temporomandibular joint function.

  7. Isolation, analysis and properties of three bradykinin-potentiating peptides (BPP-II, BPP-III, and BPP-V) from Bothrops neuwiedi venom.

    Science.gov (United States)

    Ferreira, L A; Galle, A; Raida, M; Schrader, M; Lebrun, I; Habermehl, G

    1998-04-01

    In the course of systematic investigations on low-molecular-weight compounds from the venoms of Crotalidae and Viperidae, we have isolated and characterized at least three bradykinin-potentiating peptides (BPP-II, BPP-III, and BPP-V) from Bothrops neuwiedi venom by gel filtration on Sephadex G-25 M and Sephadex G-10, followed by HPLC. The peptides showed bradykinin-potentiating action on the isolated guinea-pig ileum (BPP-V being more active than BPP-II and BPP-III) and on rat arterial blood pressure, as well as relevant angiotensin-converting enzyme (ACE) competitive inhibitory activity. Kinetic studies showed K_i values of the order of 9.7 × 10^-3 μM for BPP-II, 7 × 10^-3 μM for BPP-III, and 3.3 × 10^-3 μM for BPP-V. The amino acid sequence of BPP-III was determined to be pGlu-Gly-Gly-Trp-Pro-Arg-Pro-Gly-Pro-Glu-Ile-Pro-Pro, and the amino acid compositions of BPP-II and BPP-V, determined by amino acid analysis, were 2Glu-2Gly-1Arg-4Pro-1Ile and 2Glu-2Gly-1Ser-3Pro-2Val-1Ile; the molecular weights were 1372, 1046, and 1078, respectively.

  8. A Walk on the Wild Side: The Impact of Music on Risk-Taking Likelihood

    Science.gov (United States)

    Enström, Rickard; Schmaltz, Rodney

    2017-01-01

    From a marketing perspective, there has been substantial interest in the role of risk perception in consumer behavior. Specific ‘problem music’ like rap and heavy metal has long been associated with delinquent behavior, including violence, drug use, and promiscuous sex. Although individuals’ risk preferences have been investigated across a range of decision-making situations, there has been little empirical work demonstrating the direct role music may have on the likelihood of engaging in risky activities. In the exploratory study reported here, we assessed the impact of listening to different styles of music while assessing risk-taking likelihood through a psychometric scale. Risk-taking likelihood was measured across ethical, financial, health and safety, recreational and social domains. Through the means of a canonical correlation analysis, the multivariate relationship between different music styles and individual risk-taking likelihood across the different domains is discussed. Our results indicate that listening to different types of music does influence risk-taking likelihood, though not in areas of health and safety. PMID:28539908
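
    A first canonical correlation can be computed with a few lines of linear algebra. The sketch below is a generic illustration only (the study's data and software are not described here), using whitening via Cholesky factors of each block's covariance followed by an SVD of the whitened cross-covariance:

```python
import numpy as np

def first_canonical_correlation(X, Y):
    """First canonical correlation between two sets of variables
    (rows = observations, columns = variables in each set)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Sxx = X.T @ X / n
    Syy = Y.T @ Y / n
    Sxy = X.T @ Y / n
    # whiten each block, then take the largest singular value of the
    # whitened cross-covariance matrix
    Kx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Ky = np.linalg.inv(np.linalg.cholesky(Syy))
    return np.linalg.svd(Kx @ Sxy @ Ky.T, compute_uv=False)[0]
```

    In a study like the one above, X would hold music-style exposure variables and Y the domain-specific risk-taking scores; the canonical correlations summarize how strongly the two sets co-vary as wholes.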

  9. A Walk on the Wild Side: The Impact of Music on Risk-Taking Likelihood.

    Science.gov (United States)

    Enström, Rickard; Schmaltz, Rodney

    2017-01-01

    From a marketing perspective, there has been substantial interest in the role of risk perception in consumer behavior. Specific 'problem music' like rap and heavy metal has long been associated with delinquent behavior, including violence, drug use, and promiscuous sex. Although individuals' risk preferences have been investigated across a range of decision-making situations, there has been little empirical work demonstrating the direct role music may have on the likelihood of engaging in risky activities. In the exploratory study reported here, we assessed the impact of listening to different styles of music while assessing risk-taking likelihood through a psychometric scale. Risk-taking likelihood was measured across ethical, financial, health and safety, recreational and social domains. Through the means of a canonical correlation analysis, the multivariate relationship between different music styles and individual risk-taking likelihood across the different domains is discussed. Our results indicate that listening to different types of music does influence risk-taking likelihood, though not in areas of health and safety.

  10. Approximate likelihood approaches for detecting the influence of primordial gravitational waves in cosmic microwave background polarization

    Science.gov (United States)

    Pan, Zhen; Anderes, Ethan; Knox, Lloyd

    2018-05-01

    One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel space, all order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all order lensing and pixel space anomalies. Its tractability relies on a crucial factorization of the pixel space covariance matrix of the polarization observations, which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the same computational cost as a single likelihood evaluation.

  11. A Walk on the Wild Side: The Impact of Music on Risk-Taking Likelihood

    Directory of Open Access Journals (Sweden)

    Rickard Enström

    2017-05-01

    Full Text Available From a marketing perspective, there has been substantial interest in the role of risk perception in consumer behavior. Specific ‘problem music’ like rap and heavy metal has long been associated with delinquent behavior, including violence, drug use, and promiscuous sex. Although individuals’ risk preferences have been investigated across a range of decision-making situations, there has been little empirical work demonstrating the direct role music may have on the likelihood of engaging in risky activities. In the exploratory study reported here, we assessed the impact of listening to different styles of music while assessing risk-taking likelihood through a psychometric scale. Risk-taking likelihood was measured across ethical, financial, health and safety, recreational and social domains. Through the means of a canonical correlation analysis, the multivariate relationship between different music styles and individual risk-taking likelihood across the different domains is discussed. Our results indicate that listening to different types of music does influence risk-taking likelihood, though not in areas of health and safety.

  12. Design of Simplified Maximum-Likelihood Receivers for Multiuser CPM Systems

    Directory of Open Access Journals (Sweden)

    Li Bing

    2014-01-01

    Full Text Available A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  13. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    Science.gov (United States)

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.
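
    Under white Gaussian noise, maximum-likelihood detection in a low-dimensional signal space reduces to a nearest-candidate search over the hypothesized signal points. The toy sketch below illustrates only this reduction, not the paper's mismatched-filter front end; the candidate coordinates are invented for illustration:

```python
import numpy as np

def ml_detect(r, candidates):
    """Maximum-likelihood detection under additive white Gaussian noise:
    pick the candidate signal vector closest (in Euclidean distance) to
    the received vector r. Returns the index of the ML hypothesis."""
    distances = [np.linalg.norm(r - np.asarray(c)) for c in candidates]
    return int(np.argmin(distances))

# joint symbol hypotheses for two users, projected onto a 2-D signal
# space (toy numbers)
cands = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]
idx = ml_detect(np.array([0.9, -1.2]), cands)   # nearest hypothesis: index 1
```

    The complexity saving the paper targets comes from shrinking the dimension of this search space: with K users and M-ary CPM the full hypothesis set grows exponentially in K, so projecting onto a low-dimensional space keeps the nearest-candidate search tractable.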

  14. Feasibility analysis of As(III) removal in a continuous flow fixed bed system by modified calcined bauxite (MCB)

    International Nuclear Information System (INIS)

    Bhakat, P.B.; Gupta, A.K.; Ayoob, S.

    2007-01-01

    This study examines the feasibility of As(III) removal from the aqueous environment by an adsorbent, modified calcined bauxite (MCB), in a continuous flow fixed bed system. MCB exhibited excellent adsorption capacity of 520.2 mg/L (0.39 mg/g) with an adsorption rate constant of 0.7658 L/mg h for an influent As(III) concentration of 1 mg/L. In a 2 cm diameter continuous flow fixed MCB bed, a depth of only 1.765 cm was found necessary to produce an effluent As(III) concentration of 0.01 mg/L from an influent of 1 mg/L at a flow rate of 8 mL/min. Also, bed heights of 10, 20, and 30 cm could treat 427.85, 473.88 and 489.17 bed volumes of water, respectively, to breakthrough. A reduction in adsorption capacity of MCB was observed with increasing flow rate. The theoretical service times evaluated from the bed depth service time (BDST) approach for different flow rates and influent As(III) concentrations showed good correlation with the corresponding experimental values. The theoretical breakthrough curve developed from constantly mixed batch reactor (CMBR) isotherm data also correlated well with the experimental breakthrough curve
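
    The BDST relation is linear in bed depth, t = N0·Z/(C0·v) − ln(C0/Cb − 1)/(k·C0), and its zero crossing gives the critical (minimum) bed depth. Plugging in the abstract's values, with the linear velocity derived under the assumption of a 2 cm diameter column at 8 mL/min (v ≈ 152.8 cm/h), reproduces the quoted ~1.765 cm critical depth:

```python
import math

def bdst_service_time(Z, N0, C0, Cb, v, k):
    """Bed depth service time (BDST) model: service time t (h) as a
    linear function of bed depth Z (cm).
    N0: adsorption capacity (mg per L of bed), C0: influent conc. (mg/L),
    Cb: breakthrough conc. (mg/L), v: linear velocity (cm/h),
    k: adsorption rate constant (L/mg h)."""
    return (N0 * Z) / (C0 * v) - math.log(C0 / Cb - 1.0) / (k * C0)

def critical_depth(N0, C0, Cb, v, k):
    """Minimum (critical) bed depth Z0, where the BDST line gives t = 0."""
    return v * math.log(C0 / Cb - 1.0) / (k * N0)

# 8 mL/min through a 2 cm diameter column: v = 8*60 / (pi * 1^2) cm/h
v = 8 * 60 / math.pi
z0 = critical_depth(N0=520.2, C0=1.0, Cb=0.01, v=v, k=0.7658)  # ~1.76 cm
```

    That the computed Z0 lands within a few hundredths of a centimeter of the reported 1.765 cm is a useful internal-consistency check on the abstract's capacity and rate-constant values.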

  15. A Functional Analysis of Circadian Pacemakers in Nocturnal Rodents. III. Heavy Water and Constant Light : Homeostasis of Frequency?

    NARCIS (Netherlands)

    Daan, Serge; Pittendrigh, Colin S.

    1976-01-01

    1. In a preceding paper differences in the lability of the freerunning circadian period (τ) in constant darkness (DD) were described among four species of rodents. This lability (i) is strongly correlated with the responses of τ to (ii) D2O-administration and to (iii) constant light (LL) of various

  16. Orthogonal Higher Order Structure and Confirmatory Factor Analysis of the French Wechsler Adult Intelligence Scale (WAIS-III)

    Science.gov (United States)

    Golay, Philippe; Lecerf, Thierry

    2011-01-01

    According to the most widely accepted Cattell-Horn-Carroll (CHC) model of intelligence measurement, each subtest score of the Wechsler Intelligence Scale for Adults (3rd ed.; WAIS-III) should reflect both 1st- and 2nd-order factors (i.e., 4 or 5 broad abilities and 1 general factor). To disentangle the contribution of each factor, we applied a…

  17. Heterogeneity in the Likelihood of Market Advisory Service Use by U.S. Crop Producers

    NARCIS (Netherlands)

    Pennings, J.M.E.; Irwin, S.; Good, D.; Isengildina, O.

    2005-01-01

    Abstract Analysis of a unique data set of 1,400 U.S. crop producers using a mixture-modeling framework shows that the likelihood of Marketing Advisory Services (MAS) use is driven by, among other factors, the perceived performance of MAS in terms of return and risk reduction, the match between the MAS and

  18. Comparison of standard maximum likelihood classification and polytomous logistic regression used in remote sensing

    Science.gov (United States)

    John Hogland; Nedret Billor; Nathaniel Anderson

    2013-01-01

    Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...
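
    A minimal Gaussian maximum-likelihood classifier of the kind these remote sensing packages implement can be sketched as follows. The class statistics and the two "band" features are invented for illustration; equal priors and per-class covariance matrices are assumed:

```python
import numpy as np

def ml_classify(x, class_stats):
    """Gaussian maximum-likelihood classification: assign pixel vector x
    to the class with the highest Gaussian log-likelihood (equal priors,
    so the constant term can be dropped)."""
    best_label, best_ll = None, -np.inf
    for label, (mu, cov) in class_stats.items():
        d = x - mu
        _, logdet = np.linalg.slogdet(cov)
        ll = -0.5 * (logdet + d @ np.linalg.solve(cov, d))
        if ll > best_ll:
            best_label, best_ll = label, ll
    return best_label

# per-class mean and covariance estimated from training pixels
# (two spectral bands; toy numbers)
stats = {"water":  (np.array([0.1, 0.2]), np.eye(2) * 0.01),
         "forest": (np.array([0.6, 0.4]), np.eye(2) * 0.02)}
label = ml_classify(np.array([0.15, 0.25]), stats)   # classified as "water"
```

    Polytomous logistic regression replaces these per-class Gaussian density assumptions with a direct model of the class probabilities, which is what makes it less restrictive.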

  19. Cox regression with missing covariate data using a modified partial likelihood method

    DEFF Research Database (Denmark)

    Martinussen, Torben; Holst, Klaus K.; Scheike, Thomas H.

    2016-01-01

    Missing covariate values is a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM-algorithm. It exploits that the observed hazard function is multiplicative in the baseline hazard...

  20. Source and Message Factors in Persuasion: A Reply to Stiff's Critique of the Elaboration Likelihood Model.

    Science.gov (United States)

    Petty, Richard E.; And Others

    1987-01-01

    Answers James Stiff's criticism of the Elaboration Likelihood Model (ELM) of persuasion. Corrects certain misperceptions of the ELM and criticizes Stiff's meta-analysis that compares ELM predictions with those derived from Kahneman's elastic capacity model. Argues that Stiff's presentation of the ELM and the conclusions he draws based on the data…

  1. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
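
    The Copas-like selection model itself is beyond a short sketch, but the E-step/M-step mechanics the paper relies on can be illustrated with a much simpler latent-variable problem: a two-component normal mixture with unit variances. This is a generic EM illustration, not the paper's model:

```python
import math

def em_two_normals(data, iters=200):
    """EM for a two-component normal mixture with unit variances:
    a minimal illustration of the E-step/M-step mechanics."""
    mu1, mu2, w = min(data), max(data), 0.5
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        resp = []
        for x in data:
            p1 = w * math.exp(-0.5 * (x - mu1) ** 2)
            p2 = (1.0 - w) * math.exp(-0.5 * (x - mu2) ** 2)
            resp.append(p1 / (p1 + p2))
        # M-step: re-estimate mixing weight and means from responsibilities
        w = sum(resp) / len(data)
        mu1 = sum(r * x for r, x in zip(resp, data)) / sum(resp)
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - sum(resp))
    return mu1, mu2, w
```

    In the Copas-like setting the latent variable is the unobserved selection propensity of each study rather than a mixture label, but the alternation is the same: compute expectations of the latent quantities given current parameters, then maximize the expected full log-likelihood.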

  2. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

    Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding is proposed to provide significant capacity gains over the traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  3. Uncertainty about the true source. A note on the likelihood ratio at the activity level.

    Science.gov (United States)

    Taroni, Franco; Biedermann, Alex; Bozza, Silvia; Comte, Jennifer; Garbolino, Paolo

    2012-07-10

    This paper focuses on likelihood ratio based evaluations of fibre evidence in cases in which there is uncertainty about whether or not the reference item available for analysis - that is, an item typically taken from the suspect or seized at his home - is the item actually worn at the time of the offence. A likelihood ratio approach is proposed that, for situations in which certain categorical assumptions can be made about additionally introduced parameters, converges to formulae described in the existing literature. The properties of the proposed likelihood ratio approach are analysed through sensitivity analyses and discussed with respect to possible argumentative implications that arise in practice. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
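
    One simple way to express this kind of source uncertainty is as a probability-weighted mixture of conditional likelihood ratios, valid when the probability of the findings under the defence proposition does not depend on which item was actually worn. This is a simplified sketch of the idea, not the paper's full formulae, and all numbers are hypothetical:

```python
def lr_uncertain_source(lr_if_worn, lr_if_not_worn, p_worn):
    """Likelihood ratio when it is uncertain whether the reference item
    was the one actually worn at the time of the offence. Assumes the
    denominator P(E | Hd) is the same under both possibilities, so the
    overall LR is the probability-weighted average of the two
    conditional LRs."""
    return p_worn * lr_if_worn + (1.0 - p_worn) * lr_if_not_worn

# if the reference garment was certainly the one worn, the fibre findings
# give LR = 500; if not, they are taken as neutral (LR = 1); the garment
# is judged 80% likely to be the one worn (hypothetical numbers)
lr = lr_uncertain_source(lr_if_worn=500.0, lr_if_not_worn=1.0, p_worn=0.8)
```

    The mixture shows how the uncertainty about the source dilutes the evidential value: here the LR drops from 500 to about 400, and as p_worn falls toward zero the findings become uninformative.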

  4. The therapeutic effect and possible harm of puerarin for treatment of stage III diabetic nephropathy: a meta-analysis.

    Science.gov (United States)

    Wang, Bin; Chen, Shibo; Yan, Xiufeng; Li, Mingdi; Li, Daqi; Lv, Pin; Ti, Guixiang

    2015-01-01

    Diabetic nephropathy (DN) is the main cause of end-stage kidney disease in developed countries. Current therapy can slow the rate of progression of DN, but eventually end-stage renal failure will occur in a proportion of patients. Identification of new strategies and additional complementary and alternative therapies for treating DN are important. The research team wanted to assess the beneficial and harmful effects of using puerarin plus angiotensin converting enzyme inhibitor (ACEI) compared with using only ACEI for treatment of individuals with stage III DN. The research team performed a meta-analysis of randomized, controlled trials (RCTs) by searching the following electronic databases: (1) the Cochrane Database of Systematic Reviews, (2) the Cochrane Central Register of Controlled Trials (CENTRAL), (3) PubMed, (4) EMBASE (Elsevier), (5) the Allied and Complementary Medicine Database (AMED), (6) the Chinese Biomedicine Database (CBM), (7) the China National Knowledge Infrastructure (CNKI), and (8) the Chinese Biomedical Journals (VIP), with no language restrictions, as well as databases of clinical trials. Measured outcomes included (1) urinary protein measured as urinary albumin excretion rate (UAER) (μg/min) and 24-h urine protein (24-h UP) (mg/24 h); (2) renal function measured as blood urea nitrogen (BUN) (mmol/L) and serum creatinine (SCr) (μmol/L); (3) α1-microglobulin (α1-MG) (mg/24 h) and endothelin-1 (ET-1) (ng/24 h); (4) end points (EPs); and (5) adverse events (AEs). Ten RCTs involving 669 participants were included. All trials were conducted and published in China. Treatment of DN with puerarin plus ACEI significantly decreased the UAER (P < .0001; MD = -23.43; 95% CI, -33.95 to -12.91) and had no effect on 24-h UP (P = .09; MD = -56.76; 95% CI, -122.65 to 9.12), BUN (P = .17; MD = -0.51; 95% CI, -1.24 to 0.21), or SCr (P = .26; MD = -4.43; 95% CI, -12.05 to 3.20). One trial reported abdominal discomfort and nausea (2 cases) in the treatment

  5. Elasto-dynamic analysis of a gear pump-Part III: Experimental validation procedure and model extension to helical gears

    Science.gov (United States)

    Mucchi, E.; Dalpiaz, G.

    2015-01-01

This work concerns external gear pumps for automotive applications, which operate at high speed and low pressure. In previous works of the authors (Parts I and II, [1,2]), a non-linear lumped-parameter kineto-elastodynamic model for the prediction of the dynamic behaviour of external gear pumps was presented. It takes into account the most important phenomena involved in the operation of this kind of machine. The two main sources of noise and vibration are considered: pressure pulsation and gear meshing. The model has been used to foresee the influence of working conditions and design modifications on vibration generation. The model's experimental validation is a difficult task. Thus, Part III proposes a novel methodology for the validation, carried out by the comparison of simulations and experimental results concerning forces and moments: it deals with the external and inertial components acting on the gears, estimated by the model, and the reactions and inertial components on the pump casing and the test plate, obtained by measurements. The validation is carried out by comparing the level of the time synchronous average in the time domain and the waterfall maps in the frequency domain, with particular attention to identifying system resonances. The validation results are globally satisfactory, but discrepancies are still present. Moreover, the assessed model has been suitably modified for application to a new virtual pump prototype with helical gears in order to foresee gear accelerations and dynamic forces. Part IV is focused on improvements in the modelling and analysis of the phenomena bound to the pressure evolution around the gears in order to achieve results closer to the measured values. As a matter of fact, the simulation results have shown that a variable meshing stiffness makes a notable contribution to the dynamic behaviour of the pump, but it is not as important as the pressure phenomena. As a consequence, the original model was modified with the

  6. Likelihood based testing for no fractional cointegration

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

… The standard cointegration analysis only considers the assumption that deviations from equilibrium can be integrated of order zero, which is very restrictive in many cases and may imply an important loss of power in the fractional case. We consider the alternative hypotheses with equilibrium deviations that can be mean reverting with order of integration possibly greater than zero. Moreover, the degree of fractional cointegration is not assumed to be known, and the asymptotic null distribution of both tests is found when considering an interval of possible values. The power of the proposed tests under …

  7. Likelihood ratio model for classification of forensic evidence

    Energy Technology Data Exchange (ETDEWEB)

    Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland); Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)

    2009-05-29

One of the problems in the analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and can be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which included information about both between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX and refractive index values determined before (RI_b) and after (RI_a) the annealing process, in the form of dRI = log10|RI_a - RI_b|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this
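The LR evaluation described in this record can be sketched for a single feature such as dRI. In this minimal illustration the two reference populations, their distribution parameters, and the evidence value are invented stand-ins (real casework models are trained on measured glass databases), and only between-object variability is modelled via a kernel density estimate:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical reference populations of dRI = log10|RI_a - RI_b| values.
windows = rng.normal(-4.0, 0.3, 200)     # H1: fragment originates from a window
containers = rng.normal(-3.2, 0.4, 200)  # H2: fragment originates from a container

p_h1 = gaussian_kde(windows)     # kernel density model of between-object variability under H1
p_h2 = gaussian_kde(containers)  # and under H2

def likelihood_ratio(e):
    """LR = p(E|H1)/p(E|H2); LR > 1 supports H1, LR < 1 supports H2."""
    return float(p_h1(e)[0] / p_h2(e)[0])

lr = likelihood_ratio(-3.9)  # evidence value lying near the window population
```

With these invented populations the evidence value -3.9 yields LR > 1, i.e. support for the window hypothesis; the paper's full model additionally layers a multivariate normal within-object distribution on top of this between-object density.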

  8. Likelihood ratio model for classification of forensic evidence

    International Nuclear Information System (INIS)

    Zadora, G.; Neocleous, T.

    2009-01-01

One of the problems in the analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and can be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which included information about both between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX and refractive index values determined before (RI_b) and after (RI_a) the annealing process, in the form of dRI = log10|RI_a - RI_b|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this model outperformed two other

  9. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.; Qian, L.; Carroll, R. J.

    2010-01-01

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks

  10. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative effect between the rubber price and the stock market price for Malaysia, Thailand, the Philippines and Indonesia.
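The two-component normal mixture fit named above is classically computed with the EM algorithm. A minimal sketch follows; the data are simulated stand-ins for the paper's price series, and all variable names are ours:

```python
import numpy as np

# Simulated stand-in for the nonlinear economic data in the paper.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.5, 200)])

def norm_pdf(v, m, s):
    return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Initial guesses: equal weights, means at the data extremes, unit scales.
w = np.array([0.5, 0.5])
mu = np.array([x.min(), x.max()])
sd = np.array([1.0, 1.0])

for _ in range(200):                                   # EM iterations
    dens = w * norm_pdf(x[:, None], mu, sd)            # (n, 2) weighted component densities
    r = dens / dens.sum(axis=1, keepdims=True)         # E-step: responsibilities
    n_k = r.sum(axis=0)
    w = n_k / len(x)                                   # M-step: mixing weights
    mu = (r * x[:, None]).sum(axis=0) / n_k            # component means
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)  # component scales
```

Each EM iteration increases the mixture log-likelihood, so for well-separated components the estimates converge to the maximum likelihood solution (here, means near 0 and 5).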

  11. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  12. Postoperative opioid sparing with injectable hydroxypropyl-β-cyclodextrin-diclofenac: pooled analysis of data from two Phase III clinical trials

    Directory of Open Access Journals (Sweden)

    Gan TJ

    2016-12-01

Tong J Gan,1 Neil Singla,2 Stephen E Daniels,3 Douglas A Hamilton,4,5 Peter G Lacouture,6,7 Christian RD Reyes,8 Daniel B Carr4,9 1Department of Anesthesiology, Stony Brook University, NY, 2Lotus Clinical Research, LLC, Pasadena, CA, 3Premier Research, Austin, TX, 4Javelin Pharmaceuticals, Inc., Cambridge, MA, 5New Biology Ventures, LLC, San Mateo, CA, 6Magidom Discovery, LLC, St Augustine, FL, 7Department of Emergency Medicine, Brown University School of Medicine, Providence, RI, 8Hospira Inc., Lake Forest, IL, 9Department of Anesthesiology, Tufts Medical Center, Boston, MA, USA Purpose: Use of nonopioid analgesics (including nonsteroidal anti-inflammatory drugs) for postoperative pain management can reduce opioid consumption and potentially prevent opioid-related adverse events. This study examined the postoperative opioid-sparing effect of repeated-dose injectable diclofenac formulated with hydroxypropyl-β-cyclodextrin (HPβCD-diclofenac). Patients and methods: Pooled data from two double-blind, randomized, placebo- and active comparator-controlled Phase III trials were analyzed. Patients received HPβCD-diclofenac, placebo, or ketorolac by intravenous injection every 6 hours for up to 5 days following abdominal/pelvic or orthopedic surgery. Rescue opioid use was evaluated from the time of first study drug administration to up to 120 hours following the first dose in the overall study population and in subgroups defined by baseline pain severity, age, and HPβCD-diclofenac dose. Results: Overall, 608 patients received ≥1 dose of study medication and were included in the analysis. While 93.2% of patients receiving placebo required opioids, the proportion of patients requiring opioids was significantly lower for patients receiving HPβCD-diclofenac (18.75, 37.5, or 50 mg) or ketorolac (P<0.005 for all comparisons). Mean cumulative opioid dose and number of doses were significantly lower among patients receiving HPβCD-diclofenac versus placebo

  13. Preliminary results from the phase III of the IAEA CRP: optimizing of reactor pressure vessel surveillance programmes and their analysis

    Energy Technology Data Exchange (ETDEWEB)

    Brumovsky, M; Gillemot, F; Kryukov, A; Levit, V

    1994-12-31

This paper gives preliminary results and some conclusions from Phase III of the IAEA Coordinated Research Programme on "Optimizing the Reactor Pressure Vessel Surveillance Programmes and their Analyses", carried out during the last seven years in 15 member states. The first analysis concerned: comparison of results for the initial, un-irradiated material condition; and comparison of transition temperature shifts (from notch toughness testing) with respect to the content of residual (P, Cu) and alloying (Ni) elements, type of material (base and weld metal), irradiation temperature (288 and 265 C), and type of fluence dependence. Special effort has been devoted to the analysis of the behaviour of a chosen reference steel (JRQ). 6 figs., 4 tabs.

  14. Assessment of the excitation temperatures and Mg II:I line ratios of the direct current (DC) arc source for the analysis of radioactive materials

    International Nuclear Information System (INIS)

    Manard, B.T.; Matonic, John; Montoya, Dennis; Jump, Robert; Castro, Alonso; Ning Xu

    2017-01-01

The direct current (DC) arc plasma has been assessed with an emphasis on excitation temperature (T_exc) and ionization/excitation efficiency by monitoring magnesium ionic:atomic ratios (Mg II:I). The primary goal is to improve the analytical performance of the DC arc instrumentation such that more sensitive and reproducible measurements can be achieved when analyzing trace impurities in nuclear materials. Due to the variety of sample types requiring DC arc analysis, an understanding of the plasma's characteristics will significantly benefit the experimental design when moving forward with LANL's capabilities for trace metal analysis of plutonium metals. (author)

  15. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.; Ma, Y.; Sang, H.

    2011-01-01

We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝ^d at p ≤ d+1 sites, d ≥ 1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p = 2 to p = 3 sites in ℝ^2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  16. Incorporating Nuisance Parameters in Likelihoods for Multisource Spectra

    CERN Document Server

    Conway, J.S.

    2011-01-01

    We describe here the general mathematical approach to constructing likelihoods for fitting observed spectra in one or more dimensions with multiple sources, including the effects of systematic uncertainties represented as nuisance parameters, when the likelihood is to be maximized with respect to these parameters. We consider three types of nuisance parameters: simple multiplicative factors, source spectra "morphing" parameters, and parameters representing statistical uncertainties in the predicted source spectra.
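The first of the three nuisance-parameter types named above, a simple multiplicative factor, can be sketched with a toy profile likelihood: at each signal strength the likelihood is maximized over the nuisance before the signal fit. The templates, counts, and the 10% constraint below are invented for illustration and are not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Invented one-dimensional spectrum: expected counts are s*sig + b*bkg,
# with b a multiplicative nuisance parameter constrained to 1.0 +/- 0.1.
sig = np.array([1.0, 4.0, 9.0, 4.0, 1.0])       # signal template
bkg = np.array([10.0, 10.0, 10.0, 10.0, 10.0])  # background template
obs = np.array([12.0, 13.0, 20.0, 15.0, 11.0])  # observed counts

def nll(s, b):
    """Negative log-likelihood: Poisson terms plus Gaussian constraint on b."""
    lam = s * sig + b * bkg
    return np.sum(lam - obs * np.log(lam)) + 0.5 * ((b - 1.0) / 0.1) ** 2

def profile_nll(s):
    """Profile out the nuisance: minimize over b at fixed signal strength s."""
    return minimize_scalar(lambda b: nll(s, b), bounds=(0.5, 1.5), method="bounded").fun

fit = minimize_scalar(profile_nll, bounds=(0.0, 3.0), method="bounded")
s_hat = fit.x  # profiled maximum-likelihood signal strength
```

The Gaussian penalty term is what "represents the systematic uncertainty as a nuisance parameter": without it, b would absorb any normalization difference between prediction and data.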

  17. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.

    2011-05-24

We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝ^d at p ≤ d+1 sites, d ≥ 1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p = 2 to p = 3 sites in ℝ^2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  18. Spectroscopic analysis of the interaction between tetra-(p-sulfoazophenyl-4-aminosulfonyl)-substituted aluminum(III) phthalocyanines and serum albumins

    Directory of Open Access Journals (Sweden)

    Liqin Zheng

    2017-03-01

The binding interaction between tetra-(p-sulfoazophenyl-4-aminosulfonyl)-substituted aluminum(III) phthalocyanine (AlPc) and two serum albumins (bovine serum albumin (BSA) and human serum albumin (HSA)) has been investigated. AlPc could quench the intrinsic fluorescence of BSA and HSA through a static quenching process. The primary and secondary binding sites of AlPc on BSA were domains I and III of BSA. The primary binding site of AlPc on HSA was domain I, and the secondary binding sites of AlPc on HSA were found at domains I and II. Our results suggest that AlPc readily interacts with BSA and HSA, implying that the amphiphilic substituents of AlPc may contribute to its transportation in the blood.

  19. Crystallization and preliminary crystallographic analysis of an acridone-producing novel multifunctional type III polyketide synthase from Huperzia serrata

    Energy Technology Data Exchange (ETDEWEB)

    Morita, Hiroyuki [Mitsubishi Kagaku Institute of Life Sciences (MITILS), 11 Minamiooya, Machida, Tokyo 194-8511 (Japan); Kondo, Shin; Kato, Ryohei [Innovation Center Yokohama, Mitsubishi Chemical Corporation, 1000 Kamoshida, Aoba, Yokohama, Kanagawa 227-8502 (Japan); Wanibuchi, Kiyofumi; Noguchi, Hiroshi [School of Pharmaceutical Sciences, University of Shizuoka and the COE21 Program, Shizuoka 422-8526 (Japan); Sugio, Shigetoshi [Innovation Center Yokohama, Mitsubishi Chemical Corporation, 1000 Kamoshida, Aoba, Yokohama, Kanagawa 227-8502 (Japan); Abe, Ikuro [School of Pharmaceutical Sciences, University of Shizuoka and the COE21 Program, Shizuoka 422-8526 (Japan); PRESTO, Japan Science and Technology Agency, Kawaguchi, Saitama 332-0012 (Japan); Kohno, Toshiyuki [Mitsubishi Kagaku Institute of Life Sciences (MITILS), 11 Minamiooya, Machida, Tokyo 194-8511 (Japan)

    2007-07-01

An acridone-producing novel type III polyketide synthase from H. serrata has been overexpressed in E. coli, purified and crystallized. Diffraction data have been collected to 2.0 Å. Polyketide synthase 1 (PKS1) from Huperzia serrata is a plant-specific type III polyketide synthase that shows an unusually versatile catalytic potential, producing various aromatic tetraketides, including chalcones, benzophenones, phloroglucinols and acridones. Recombinant H. serrata PKS1 expressed in Escherichia coli was crystallized using the hanging-drop vapour-diffusion method. The crystals belonged to space group I222 or I2₁2₁2₁, with unit-cell parameters a = 73.3, b = 85.0, c = 137.7 Å, α = β = γ = 90.0°. Diffraction data were collected to 2.0 Å resolution using synchrotron radiation at BL24XU of SPring-8.

  20. Thin-plate spline analysis of treatment effects of rapid maxillary expansion and face mask therapy in early Class III malocclusions.

    Science.gov (United States)

    Baccetti, T; Franchi, L; McNamara, J A

    1999-06-01

    An effective morphometric method (thin-plate spline analysis) was applied to evaluate shape changes in the craniofacial configuration of a sample of 23 children with Class III malocclusions in the early mixed dentition treated with rapid maxillary expansion and face mask therapy, and compared with a sample of 17 children with untreated Class III malocclusions. Significant treatment-induced changes involved both the maxilla and the mandible. Major deformations consisted of forward displacement of the maxillary complex from the pterygoid region and of anterior morphogenetic rotation of the mandible, due to a significant upward and forward direction of growth of the mandibular condyle. Significant differences in size changes due to reduced increments in mandibular dimensions were associated with significant shape changes in the treated group.

  1. Analysis of Reasons for fluctuation in seal oil system on generator and countermeasures in Qinshan phase III project

    International Nuclear Information System (INIS)

    Jin Xiaodong

    2012-01-01

The reasons for fluctuations in the hydrogen/seal-oil differential pressure on the generator in the Qinshan Phase III project were analyzed, providing a basis for modifying the operating method. The approach was to determine the causes and effects of seal oil flow changes and the relationship between those flow changes and changes in the hydrogen/oil differential pressure. Based on the analysis, operating adjustment tests were carried out to verify the feasibility of the adjustment programmes.

  2. Thermal stress analysis and the effect of temperature dependence of material properties on Doublet III limiter design

    International Nuclear Information System (INIS)

    McKelvey, T.E.; Koniges, A.E.; Marcus, F.; Sabado, M.; Smith, R.

    1979-10-01

    Temperature and thermal stress parametric design curves are presented for two materials selected for Doublet III primary limiter applications. INC X-750 is a candidate for the medium Z limiter design and ATJ graphite for the low Z design. The dependence of significant material properties on temperature is shown and the impact of this behavior on the decision to actively or passively cool the limiter is discussed

  3. Dissociating response conflict and error likelihood in anterior cingulate cortex.

    Science.gov (United States)

    Yeung, Nick; Nieuwenhuis, Sander

    2009-11-18

    Neuroimaging studies consistently report activity in anterior cingulate cortex (ACC) in conditions of high cognitive demand, leading to the view that ACC plays a crucial role in the control of cognitive processes. According to one prominent theory, the sensitivity of ACC to task difficulty reflects its role in monitoring for the occurrence of competition, or "conflict," between responses to signal the need for increased cognitive control. However, a contrasting theory proposes that ACC is the recipient rather than source of monitoring signals, and that ACC activity observed in relation to task demand reflects the role of this region in learning about the likelihood of errors. Response conflict and error likelihood are typically confounded, making the theories difficult to distinguish empirically. The present research therefore used detailed computational simulations to derive contrasting predictions regarding ACC activity and error rate as a function of response speed. The simulations demonstrated a clear dissociation between conflict and error likelihood: fast response trials are associated with low conflict but high error likelihood, whereas slow response trials show the opposite pattern. Using the N2 component as an index of ACC activity, an EEG study demonstrated that when conflict and error likelihood are dissociated in this way, ACC activity tracks conflict and is negatively correlated with error likelihood. These findings support the conflict-monitoring theory and suggest that, in speeded decision tasks, ACC activity reflects current task demands rather than the retrospective coding of past performance.

  4. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    Frome, E.L; DuFrain, R.J.

    1983-10-01

In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
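The Poisson maximum likelihood fit described in this record can be sketched for the simplest acute-exposure case, where the yield per cell reduces to a quadratic in dose. The dose points, cell counts and dicentric counts below are invented, and the plain polynomial λ(d) = c0 + c1·d + c2·d² stands in for the full Kellerer-Rossi dose-time-response model:

```python
import numpy as np
from scipy.optimize import minimize

# Invented acute-exposure data: dicentrics observed in scored cells per dose.
dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])            # Gy
cells = np.array([5000.0, 4000.0, 3000.0, 1500.0, 800.0, 500.0])
dics = np.array([5.0, 164.0, 333.0, 512.0, 553.0, 581.0])  # observed dicentrics

def nll(p):
    """Poisson negative log-likelihood for the quadratic yield curve."""
    c0, c1, c2 = p
    lam = cells * (c0 + c1 * dose + c2 * dose ** 2)  # expected dicentric counts
    if np.any(lam <= 0.0):
        return np.inf                                # keep the yield positive
    return float(np.sum(lam - dics * np.log(lam)))

fit = minimize(nll, x0=[0.001, 0.02, 0.05], method="Nelder-Mead")
c0, c1, c2 = fit.x  # fitted background, linear and quadratic coefficients
```

In practice such fits are run as Poisson regressions (e.g. a GLM with identity link) so that standard errors, deviance tests and regression diagnostics come out of the same machinery, as the abstract notes.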

  5. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    Frome, E.L.; DuFrain, R.J.

    1986-01-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure

  6. Maximum likelihood estimation for cytogenetic dose-response curves

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L; DuFrain, R.J.

    1983-10-01

In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ(γd + g(t, τ)d²), where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.

  7. Hydration effects on the barrier function of stratum corneum lipids: Raman analysis of ceramides 2, III and 5.

    Science.gov (United States)

    Tfayli, Ali; Jamal, Dima; Vyumvuhore, Raoul; Manfait, Michel; Baillet-Guffroy, Arlette

    2013-11-07

The stratum corneum is the outermost layer of the skin; its barrier function is highly dependent on the composition, structure and organization of lipids in its extracellular matrix. Ceramides, free fatty acids and cholesterol represent the major lipid classes present in this matrix. They play an important role in maintaining the normal hydration levels required for normal physiological function. Despite advances in the understanding of the structure, composition and function of the stratum corneum (SC), the concern of "dry skin" remains important in dermatology and skin-care research. Most studies focus on the quantification of water in the skin using different techniques, including Raman spectroscopy, while studies that investigate the effect of hydration on the quality of the barrier function of the skin are limited. Raman spectroscopy provides structural, conformational and organizational information that could help elucidate the effect of hydration on the barrier function of the skin. In order to assess the effect of relative humidity on the lipid barrier function, we used Raman spectroscopy to follow the evolution of the conformation and organization of three synthetic ceramides (CER) differing from each other by the nature of their polar heads (sphingosine, phytosphingosine and α-hydroxy sphingosine): CER 2, III and 5, respectively. CER III and 5 showed a more compact and ordered organization with stronger polar interactions at intermediate relative humidity values, while CER 2 showed tendencies opposite to those observed with CER III and 5.

  8. Complexes of 4-chlorophenoxyacetates of Nd(III), Gd(III) and Ho(III)

    International Nuclear Information System (INIS)

    Ferenc, W.; Bernat, M; Gluchowska, H.W.; Sarzynski, J.

    2010-01-01

The complexes of 4-chlorophenoxyacetates of Nd(III), Gd(III) and Ho(III) have been synthesized as polycrystalline hydrated solids and characterized by elemental analysis, spectroscopy, magnetic studies, and X-ray diffraction and thermogravimetric measurements. The analysed complexes have the following colours: violet for the Nd(III), white for the Gd(III) and cream for the Ho(III) compound. The carboxylate groups bind as bidentate chelating (Ho) or bridging (Nd, Gd) ligands. On heating to 1173 K in air the complexes decompose in several steps: at first they dehydrate in one step to form anhydrous salts, which then decompose to the oxides of the respective metals. The gaseous products of their thermal decomposition in nitrogen were also determined. The magnetic susceptibilities were measured over the temperature range 76-303 K and the magnetic moments were calculated. The results show that the 4-chlorophenoxyacetates of Nd(III), Gd(III) and Ho(III) are high-spin complexes with weak ligand fields. The solubility in water at 293 K of the analysed 4-chlorophenoxyacetates is of the order of 10⁻⁴ mol/dm³. (author)

  9. Calculus III essentials

    CERN Document Server

    REA, Editors of

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Calculus III includes vector analysis, real valued functions, partial differentiation, multiple integrations, vector fields, and infinite series.

  10. Analysis, by the Relap5 code, of boron dilution phenomena in a Small Break LOCA transient performed in the PKL III E2.2 test

    International Nuclear Information System (INIS)

    Rizzo, G.; Vella, G.

    2007-01-01

The present work investigates the E2.2 thermal-hydraulic transient of the PKL III facility, a scaled reproduction of a typical German PWR operated by FRAMATOME-ANP in Erlangen, Germany, within the framework of an international cooperation (OECD/SETH project). The main purpose of the project is to study boron dilution events in Pressurized Water Reactors and to contribute to the assessment of thermal-hydraulic system codes such as Relap5. The experimental test PKL III E2.2 investigates the behavior of a typical PWR after a Small Break Loss Of Coolant Accident (SB-LOCA) in a cold leg and an immediate injection of borated water in two cold legs. The main purpose of this work is to simulate the PKL III test facility, and particularly its experimental transient, with the Relap5 system code. The adopted nodalization, already available at the Department of Nuclear Engineering (DIN), has been reviewed and applied with an accurate analysis of the experimental test parameters. The main result is the good agreement of the calculated data with the experimental measurements for a number of important variables. (author)

  11. Can we eliminate neoadjuvant chemoradiotherapy in favor of neoadjuvant multiagent chemotherapy for select stage II/III rectal adenocarcinomas: Analysis of the National Cancer Data base.

    Science.gov (United States)

    Cassidy, Richard J; Liu, Yuan; Patel, Kirtesh; Zhong, Jim; Steuer, Conor E; Kooby, David A; Russell, Maria C; Gillespie, Theresa W; Landry, Jerome C

    2017-03-01

Stage II and III rectal cancers have been effectively treated with neoadjuvant chemoradiotherapy (NCRT) followed by definitive resection. Advancements in surgical technique and systemic therapy have prompted investigation of neoadjuvant multiagent chemotherapy (NMAC) regimens with the elimination of radiation (RT). The objective of the current study was to investigate factors that predict for the use of NCRT versus NMAC and compare outcomes using the National Cancer Data Base (NCDB) for select stage II and III rectal cancers. In the NCDB, 21,707 patients from 2004 through 2012 with clinical T2N1 (cT2N1), cT3N0, or cT3N1 rectal cancers were identified who had received NCRT or NMAC followed by low anterior resection. Kaplan-Meier analyses, log-rank tests, and Cox proportional hazards regression analyses were conducted along with propensity score matching analysis to reduce treatment selection bias. The 5-year actuarial overall survival (OS) rate was 75% for patients who received NCRT versus 67.2% for those who received NMAC. Elimination of neoadjuvant RT for select patients with stage II and III rectal adenocarcinoma was associated with worse OS and should not be recommended outside of a clinical trial. Cancer 2017;123:783-93. © 2016 American Cancer Society.

  12. Harvesting and wood transport planning with the SNAP III program (Scheduling and Network Analysis Program) in a pine plantation in Southeast Brazil

    Directory of Open Access Journals (Sweden)

    Lopes Eduardo da Silva

    2003-01-01

Full Text Available The objective of this study was to verify the potential of SNAP III (Scheduling and Network Analysis Program) as a support tool for harvesting and wood transport planning in Brazil. Harvesting subsystem definition and the establishment of a compatible route were assessed. Initially, machine operational and production costs were determined in seven subsystems for the study area, and quality indexes and the construction and maintenance costs of forest roads were obtained and used as SNAP III input data. The results showed that three categories of forest road occur in the study area: main, secondary and tertiary, which, based on quality index, allowed medium vehicle speeds of about 41, 30 and 24 km/h and construction costs of about US$ 5,084.30, US$ 2,275.28 and US$ 1,650.00/km, respectively. The SNAP III program was found to have high potential as a support tool for harvesting and wood transport planning. The program was capable of efficiently defining the harvesting subsystem on a technical and economical basis, the best wood transport route, and the forest road to be used in each period of the planning horizon.

  13. Hot prominence detected in the core of a coronal mass ejection. II. Analysis of the C III line detected by SOHO/UVCS

    Science.gov (United States)

    Jejčič, S.; Susino, R.; Heinzel, P.; Dzifčáková, E.; Bemporad, A.; Anzer, U.

    2017-11-01

Context. We study the physics of erupting prominences in the core of coronal mass ejections (CMEs) and present a continuation of a previous analysis. Aims: We determine the kinetic temperature and microturbulent velocity of an erupting prominence embedded in the core of a CME that occurred on August 2, 2000, using observations from the Ultraviolet Coronagraph Spectrometer (UVCS) on board the Solar and Heliospheric Observatory (SOHO), taken simultaneously in the hydrogen Lα and C III lines. We develop the non-LTE (departures from the local thermodynamic equilibrium - LTE) spectral diagnostics based on Lα and Lβ measured integrated intensities to derive other physical quantities of the hot erupting prominence. Based on this, we synthesize the C III line intensity to compare it with observations. Methods: Our method is based on non-LTE modeling of eruptive prominences. We used a general non-LTE radiative-transfer code only for optically thin prominence points because optically thick points do not allow the direct determination of the kinetic temperature and microturbulence from the line profiles. The input parameters of the code were the kinetic temperature and microturbulent velocity derived from the Lα and C III line widths, as well as the integrated intensity of the Lα and Lβ lines. The code runs in three loops to compute the radial flow velocity, electron density, and effective thickness as the best fit to the Lα and Lβ integrated intensities within the accuracy defined by the absolute radiometric calibration of UVCS data. Results: We analyzed 39 observational points along the whole erupting prominence because for these points we found a solution for the kinetic temperature and microturbulent velocity. For these points we ran the non-LTE code to determine best-fit models. All models with τ0(Lα) ≤ 0.3 and τ0(C III) ≤ 0.3 were analyzed further, for which we computed the integrated intensity of the C III line using a two-level atom. The best agreement between

  14. Smoking increases the likelihood of Helicobacter pylori treatment failure.

    Science.gov (United States)

    Itskoviz, David; Boltin, Doron; Leibovitzh, Haim; Tsadok Perets, Tsachi; Comaneshter, Doron; Cohen, Arnon; Niv, Yaron; Levi, Zohar

    2017-07-01

Data regarding the impact of smoking on the success of Helicobacter pylori (H. pylori) eradication are conflicting, partially due to the fact that sociodemographic status is associated with both smoking and H. pylori treatment success. We aimed to assess the effect of smoking on H. pylori eradication rates after controlling for sociodemographic confounders. Included were subjects aged 15 years or older, with a first-time positive 13C-urea breath test (13C-UBT) between 2007 and 2014, who underwent a second 13C-UBT after receiving clarithromycin-based triple therapy. Data regarding age, gender, socioeconomic status (SES), smoking (current smokers or "never smoked"), and drug use were extracted from the Clalit health maintenance organization database. Out of 120,914 subjects with a positive first-time 13C-UBT, 50,836 (42.0%) underwent a second 13C-UBT. After excluding former smokers, 48,130 subjects remained who were eligible for analysis. The mean age was 44.3 ± 18.2 years, 69.2% were females, 87.8% were Jewish and 12.2% Arabs, and 25.5% were current smokers. The overall eradication failure rate was 33.3%: 34.8% in current smokers and 32.8% in subjects who never smoked. In a multivariate analysis, eradication failure was positively associated with current smoking (odds ratio [OR] 1.15, 95% CI 1.10-1.20). Smoking was found to significantly increase the likelihood of unsuccessful first-line treatment for H. pylori infection. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  15. Analysis of the structure of poly-3-hydroxybutyrate ultrathin fibers modified with iron (III) complex with tetraphenylporphyrin

    Science.gov (United States)

    Olkhov, A. A.; Karpova, S. G.; Lobanov, A. V.; Tyubaeva, P. M.; Artemov, N. S.; Iordansky, A. L.

    2017-12-01

In the treatment of many infectious diseases and cancer, transdermal systems based on solid polymer matrices or gels containing functional substances with antiseptic (antibacterial) properties are often used. One of the most promising types of matrices with antiseptic properties are nano- and microfiber nonwoven cloths obtained by electrospinning of the biopolymer poly(3-hydroxybutyrate). The present work investigates the effect of the iron(III) complex with tetraphenylporphyrin on the geometry, crystalline order, and molecular dynamics in the intercrystalline (amorphous) phase of ultrathin PHB fibers.

  16. Endovascular Therapy is Effective and Safe for Patients with Severe Ischemic Stroke: Pooled Analysis of IMS III and MR CLEAN Data

    Science.gov (United States)

    Broderick, Joseph P.; Berkhemer, Olvert A.; Palesch, Yuko Y.; Dippel, Diederik W.J.; Foster, Lydia D.; Roos, Yvo B.W.E.M.; van der Lugt, Aad; Tomsick, Thomas A.; Majoie, Charles B.L.M.; van Zwam, Wim H.; Demchuk, Andrew M.; van Oostenbrugge, Robert J.; Khatri, Pooja; Lingsma, Hester F.; Hill, Michael D.; Roozenbeek, Bob; Jauch, Edward C.; Jovin, Tudor G.; Yan, Bernard; von Kummer, Rüdiger; Molina, Carlos A.; Goyal, Mayank; Schonewille, Wouter J.; Mazighi, Mikael; Engelter, Stefan T.; Anderson, Craig S.; Spilker, Judith; Carrozzella, Janice; Ryckborst, Karla J.; Janis, L. Scott; Simpson, Kit

    2015-01-01

    Background and Purpose We assessed the effect of endovascular treatment in acute ischemic stroke patients with severe neurological deficit (NIHSS ≥20) following a pre-specified analysis plan. Methods The pooled analysis of the IMS III and MR CLEAN trial included participants with an NIHSS ≥20 prior to intravenous (IV) t-PA treatment (IMS III) or randomization (MR CLEAN) who were treated with IV t-PA ≤ 3 hours of stroke onset. Our hypothesis was that participants with severe stroke randomized to endovascular therapy following IV t-PA would have improved 90-day outcome (distribution of modified Rankin scale [mRS] scores), as compared to those who received IV t-PA alone. Results Among 342 participants in the pooled analysis (194 from IMS III, 148 from MR CLEAN), an ordinal logistic regression model showed that the endovascular group had superior 90-day outcome compared to the IV t-PA group (adjusted odds ratio [aOR] 1.78; 95% confidence interval [CI] 1.20-2.66). In the logistic regression model of the dichotomous outcome (mRS 0-2, or ‘functional independence’), the endovascular group had superior outcomes (aOR 1.97; 95% CI 1.09-3.56). Functional independence (mRS ≤2) at 90 days was 25% in the endovascular group as compared to 14% in the IV t-PA group. Conclusions Endovascular therapy following IV t-PA within 3 hours of symptom onset improves functional outcome at 90 days after severe ischemic stroke. PMID:26486865

  17. Clonal structure of Trypanosoma cruzi Colombian strain (biodeme Type III): biological, isoenzymic and histopathological analysis of seven isolated clones

    Directory of Open Access Journals (Sweden)

    Camandaroba Edson Luiz Paes

    2001-01-01

Full Text Available The clonal structure of the Colombian strain of Trypanosoma cruzi, biodeme Type III and zymodeme 1, was analyzed in order to characterize its populations and to establish its homogeneity or heterogeneity. Seven isolated clones presented the basic characteristics of biodeme Type III, with the same patterns of parasitemic curves, tissue tropism to skeletal muscle and myocardium, and high pathogenicity with extensive necrotic-inflammatory lesions from the 20th to the 30th day of infection. The parental strain and its clones C1, C3, C4 and C6 showed the highest levels of parasitemia at 20 to 30 days of infection, with high mortality rates up to 30 days (79 to 100%); clones C2, C5 and C7 presented lower levels of parasitemia, with low mortality rates (7.6 to 23%). Isoenzymic patterns characteristic of zymodeme 1 (Z1) were similar for the parental strain and its seven clones. Results point to a phenotypic homogeneity of the clones isolated from the Colombian strain and suggest the predominance of a principal clone, responsible for the biological behavior of the parental strain and clones.

  18. Crystallization and preliminary X-ray crystallographic analysis of the ArsM arsenic(III) S-adenosylmethionine methyltransferase

    International Nuclear Information System (INIS)

    Marapakala, Kavitha; Ajees, A. Abdul; Qin, Jie; Sankaran, Banumathi; Rosen, Barry P.

    2010-01-01

    A common biotransformation of arsenic is methylation to monomethylated, dimethylated and trimethylated species, which is catalyzed by the ArsM (or AS3MT) arsenic(III) S-adenosylmethionine methyltransferase. ArsM from the acidothermophilic alga Cyanidioschyzon sp. 5508 was expressed, purified and crystallized by the hanging-drop vapor-diffusion method and diffraction data were collected to 1.76 Å resolution. Arsenic is the most ubiquitous environmental toxin and carcinogen and consequently ranks first on the Environmental Protection Agency’s Superfund Priority List of Hazardous Substances. It is introduced primarily from geochemical sources and is acted on biologically, creating an arsenic biogeocycle. A common biotransformation is methylation to monomethylated, dimethylated and trimethylated species. Methylation is catalyzed by the ArsM (or AS3MT) arsenic(III) S-adenosylmethionine methyltransferase, an enzyme (EC 2.1.1.137) that is found in members of every kingdom from bacteria to humans. ArsM from the thermophilic alga Cyanidioschyzon sp. 5508 was expressed, purified and crystallized. Crystals were obtained by the hanging-drop vapor-diffusion method. The crystals belonged to the monoclinic space group C2, with unit-cell parameters a = 84.85, b = 46.89, c = 100.35 Å, β = 114.25° and one molecule in the asymmetric unit. Diffraction data were collected at the Advanced Light Source and were processed to a resolution of 1.76 Å

  19. FEMAXI-III: a computer code for the analysis of thermal and mechanical behavior of fuel rods

    International Nuclear Information System (INIS)

    Nakajima, Tetsuo; Ichikawa, Michio; Iwano, Yoshihiko; Ito, Kenichi; Saito, Hiroaki; Kashima, Koichi; Kinoshita, Motoyasu; Okubo, Tadatsune.

    1985-12-01

FEMAXI-III is a computer code to predict the thermal and mechanical behavior of a light water fuel rod during its irradiation life. It can analyze the integral behavior of a whole fuel rod throughout its life, as well as the localized behavior of a small part of the fuel rod. The localized mechanical behavior, such as cladding ridge deformation, is analyzed by the two-dimensional axisymmetric finite element method. FEMAXI-III calculates, in particular, the temperature distribution, the radial deformation, the fission gas release, and the inner gas pressure as a function of irradiation time and axial position, and the stresses and strains in the fuel and cladding at a small part of the fuel rod as a function of irradiation time. For this purpose, elasto-plasticity, creep, thermal expansion, fuel cracking and crack healing, relocation, densification, swelling, hot pressing, heat generation distribution, fission gas release, and fuel-cladding mechanical interaction are modelled and their interconnected effects are considered in the code. Efforts have been made to improve the accuracy and stability of the finite element solution and to minimize the computer memory and running time. This report describes the outline of the code and the basic models involved, and also includes the application of the code and its input manual. (author)

  20. ldr: An R Software Package for Likelihood-Based Sufficient Dimension Reduction

    Directory of Open Access Journals (Sweden)

    Kofi Placid Adragni

    2014-11-01

Full Text Available In regression settings, a sufficient dimension reduction (SDR) method seeks the core information in a p-vector predictor that completely captures its relationship with a response. The reduced predictor may reside in a lower dimension d < p, improving ability to visualize data and predict future observations, and mitigating dimensionality issues when carrying out further analysis. We introduce ldr, a new R software package that implements three recently proposed likelihood-based methods for SDR: covariance reduction, likelihood acquired directions, and principal fitted components. All three methods reduce the dimensionality of the data by projection into lower dimensional subspaces. The package also implements a variable screening method built upon principal fitted components which makes use of flexible basis functions to capture the dependencies between the predictors and the response. Examples are given to demonstrate likelihood-based SDR analyses using ldr, including estimation of the dimension of reduction subspaces and selection of basis functions. The ldr package provides a framework that we hope to grow into a comprehensive library of likelihood-based SDR methodologies.
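
    The simplest of the three reductions, principal fitted components, can be sketched in a few lines. The following is a minimal illustration in Python rather than R, assuming the isotropic-error case and using entirely hypothetical data and names: the centered predictors are regressed on a flexible basis in the response, and the top eigenvectors of the covariance of the fitted values span the estimated reduction.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p, d = 500, 6, 1

    # Simulate: only the first predictor direction carries information about y.
    y = rng.normal(size=n)
    X = np.outer(y, np.r_[1.0, np.zeros(p - 1)]) + 0.3 * rng.normal(size=(n, p))

    # Flexible polynomial basis f(y), centered, as in a PFC fit.
    F = np.column_stack([y, y**2, y**3])
    F = F - F.mean(axis=0)
    Xc = X - X.mean(axis=0)

    # OLS fitted values of X on f(y); in the isotropic-error case the PFC
    # subspace is spanned by the top eigenvectors of their covariance.
    B = np.linalg.lstsq(F, Xc, rcond=None)[0]
    Xhat = F @ B
    evals, evecs = np.linalg.eigh(Xhat.T @ Xhat / n)
    Gamma = evecs[:, -d:]            # top-d eigenvectors
    reduced = Xc @ Gamma             # estimated sufficient reduction

    # The leading direction should align with e1, the true informative direction.
    alignment = abs(Gamma[:, 0] @ np.r_[1.0, np.zeros(p - 1)])
    ```

    On this toy data the recovered direction is nearly collinear with the true one (alignment close to 1), which is the behavior the package's estimators deliver under their respective likelihood models.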

  1. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    Science.gov (United States)

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application in real experimental data is often hindered by slow computation of likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement on computational speed and is applicable to arbitrarily large number of mutants. In addition, it still retains good accuracy on point estimation. Published by Elsevier Ltd.
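
    To see why likelihood evaluation is the computational bottleneck, the conventional approach can be sketched. The snippet below is a hedged illustration of the recursive mutant-count distribution (the Ma-Sandri-Sarkar recursion for the Luria-Delbrück model) with a simple grid search for the ML estimate of the expected number of mutations per culture; the counts are hypothetical and this is the classical estimator the authors improve upon, not their MLE-BD.

    ```python
    import math

    def ld_pmf(m, nmax):
        """Luria-Delbruck mutant-count probabilities p_0..p_nmax via the
        Ma-Sandri-Sarkar recursion; cost grows quadratically with nmax."""
        p = [math.exp(-m)]
        for n in range(1, nmax + 1):
            s = sum(p[i] / ((n - i) * (n - i + 1)) for i in range(n))
            p.append(m / n * s)
        return p

    def log_lik(m, counts):
        p = ld_pmf(m, max(counts))
        return sum(math.log(p[c]) for c in counts)

    # Hypothetical mutant counts from 15 parallel cultures.
    counts = [0, 1, 0, 3, 0, 0, 2, 0, 5, 1, 0, 0, 1, 0, 2]

    # Simple grid search for the ML estimate of m.
    grid = [0.01 * k for k in range(1, 301)]
    m_hat = max(grid, key=lambda m: log_lik(m, counts))
    ```

    With large mutant counts the recursion above becomes expensive, which is exactly the slowdown that motivates a faster estimator such as the authors' birth-death formulation.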

  2. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    Science.gov (United States)

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
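
    The profile-likelihood construction is easy to demonstrate outside of item response theory. The sketch below, with assumed toy data, profiles out a nuisance parameter (the variance of a normal sample) and collects the mean values whose profile log-likelihood lies within half the chi-square critical value of the maximum; it is a generic illustration of the technique, not the IRT implementation discussed in the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(loc=2.0, scale=1.5, size=40)
    n = len(x)

    def profile_loglik(mu):
        """Log-likelihood for a normal mean with the variance profiled out
        (sigma^2 replaced by its conditional MLE), up to an additive constant."""
        s2 = np.mean((x - mu) ** 2)
        return -0.5 * n * np.log(s2)

    mu_hat = x.mean()
    crit = 3.841 / 2                   # chi-square(1) 95% quantile, halved

    # Scan a grid and keep points whose profile log-likelihood is within crit
    # of the maximum; their extremes form the 95% PL confidence interval.
    grid = np.linspace(mu_hat - 3, mu_hat + 3, 2001)
    inside = [m for m in grid if profile_loglik(mu_hat) - profile_loglik(m) <= crit]
    ci = (min(inside), max(inside))
    ```

    Unlike a Wald interval, this construction needs no standard-error estimate and is invariant to reparameterization, which is what makes it attractive for transformed parameters.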

  3. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.

  4. Likelihood ratio sequential sampling models of recognition memory.

    Science.gov (United States)

    Osth, Adam F; Dennis, Simon; Heathcote, Andrew

    2017-02-01

The mirror effect - a phenomenon whereby a manipulation produces opposite effects on hit and false alarm rates - is a benchmark regularity of recognition memory. A likelihood ratio decision process, basing recognition on the relative likelihood that a stimulus is a target or a lure, naturally predicts the mirror effect, and so has been widely adopted in quantitative models of recognition memory. Glanzer, Hilford, and Maloney (2009) demonstrated that likelihood ratio models, assuming Gaussian memory strength, are also capable of explaining regularities observed in receiver-operating characteristics (ROCs), such as greater target than lure variance. Despite its central place in theorising about recognition memory, however, this class of models has not been tested using response time (RT) distributions. In this article, we develop a linear approximation to the likelihood ratio transformation, which we show predicts the same regularities as the exact transformation. This development enabled us to develop a tractable model of recognition-memory RT based on the diffusion decision model (DDM), with inputs (drift rates) provided by an approximate likelihood ratio transformation. We compared this "LR-DDM" to a standard DDM where all targets and lures receive their own drift rate parameters. Both were implemented as hierarchical Bayesian models and applied to four datasets. Model selection taking into account parsimony favored the LR-DDM, which requires fewer parameters than the standard DDM but still fits the data well. These results support log-likelihood based models as providing an elegant explanation of the regularities of recognition memory, not only in terms of choices made but also in terms of the times it takes to make them. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Estimating likelihood of future crashes for crash-prone drivers

    Directory of Open Access Journals (Sweden)

    Subasish Das

    2015-06-01

Full Text Available At-fault crash-prone drivers are usually considered the high-risk group for possible future incidents or crashes. In Louisiana, 34% of crashes are repeatedly committed by at-fault crash-prone drivers, who represent only 5% of the total licensed drivers in the state. This research conducted an exploratory data analysis based on driver faultiness and proneness. The objective of this study is to develop a crash prediction model to estimate the likelihood of future crashes for at-fault drivers. The logistic regression method is used, employing eight years of traffic crash data (2004–2011) in Louisiana. Crash predictors such as the driver's crash involvement, crash and road characteristics, human factors, collision type, and environmental factors are considered in the model. The at-fault and not-at-fault status of the crashes is used as the response variable. The developed model has identified a few important variables, and is used to correctly classify at-fault crashes up to 62.40% with a specificity of 77.25%. This model can identify as many as 62.40% of the crash incidences of at-fault drivers in the upcoming year. Traffic agencies can use the model for monitoring the performance of at-fault crash-prone drivers and making roadway improvements meant to reduce crash proneness. From the findings, it is recommended that crash-prone drivers be targeted for special safety programs regularly through education and regulations.
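
    The modeling step described above amounts to a binary logistic regression. The following minimal sketch fits such a model by gradient ascent on the Bernoulli log-likelihood; the predictors, effect sizes, and data are entirely synthetic stand-ins, not the Louisiana crash data or the study's actual covariates.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 2000

    # Hypothetical predictors: prior at-fault crash count, night-time flag,
    # wet-road flag (all invented for illustration).
    X = np.column_stack([
        rng.poisson(1.0, n),
        rng.integers(0, 2, n),
        rng.integers(0, 2, n),
    ]).astype(float)
    true_w = np.array([0.8, 0.5, 0.3])            # assumed effect sizes
    p_true = 1 / (1 + np.exp(-(-1.0 + X @ true_w)))
    y = (rng.random(n) < p_true).astype(float)    # 1 = at-fault crash next year

    # Fit by plain gradient ascent on the Bernoulli log-likelihood.
    Xb = np.column_stack([np.ones(n), X])         # prepend an intercept column
    w = np.zeros(4)
    for _ in range(2000):
        p = 1 / (1 + np.exp(-(Xb @ w)))
        w += 0.1 * Xb.T @ (y - p) / n

    pred = 1 / (1 + np.exp(-(Xb @ w))) > 0.5
    accuracy = float((pred == (y == 1)).mean())
    ```

    In practice one would report sensitivity and specificity separately, as the study does, since the two error rates matter differently when flagging drivers for safety programs.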

  6. Obstetric History and Likelihood of Preterm Birth of Twins.

    Science.gov (United States)

    Easter, Sarah Rae; Little, Sarah E; Robinson, Julian N; Mendez-Figueroa, Hector; Chauhan, Suneet P

    2018-01-05

The objective of this study was to investigate the relationship between preterm birth in a prior pregnancy and preterm birth in a twin pregnancy. We performed a secondary analysis of a randomized controlled trial evaluating 17-α-hydroxyprogesterone caproate in twins. Women were classified as nulliparous, multiparous with a prior term birth, or multiparous with a prior preterm birth. We used logistic regression to examine the odds of spontaneous preterm birth of twins before 35 weeks according to past obstetric history. Of the 653 women analyzed, 294 were nulliparas, 310 had a prior term birth, and 49 had a prior preterm birth. Prior preterm birth increased the likelihood of spontaneous delivery before 35 weeks (adjusted odds ratio [aOR]: 2.44, 95% confidence interval [CI]: 1.28-4.66), whereas prior term delivery decreased these odds (aOR: 0.55, 95% CI: 0.38-0.78) in the current twin pregnancy compared with the nulliparous reference group. This translated into lower odds of composite neonatal morbidity (aOR: 0.38, 95% CI: 0.27-0.53) for women with a prior term delivery. For women carrying twins, a history of preterm birth increases the odds of spontaneous preterm birth, whereas a prior term birth decreases the odds of spontaneous preterm birth and neonatal morbidity for the current twin pregnancy. These results offer risk stratification and reassurance for clinicians.

  7. A Deep Chandra ACIS Study of NGC 4151. III. The Line Emission and Spectral Analysis of the Ionization Cone

    Science.gov (United States)

    Wang, Junfeng; Fabbiano, Giuseppina; Elvis, Martin; Risaliti, Guido; Karovska, Margarita; Zezas, Andreas; Mundell, Carole G.; Dumas, Gaelle; Schinnerer, Eva

    2011-11-01

This paper is the third in a series in which we present deep Chandra ACIS-S imaging spectroscopy of the Seyfert 1 galaxy NGC 4151, devoted to study its complex circumnuclear X-ray emission. Emission features in the soft X-ray spectrum of the bright extended emission (L_0.3-2 keV ~ 10^40 erg s^-1) at r > 130 pc (2'') are consistent with blended brighter O VII, O VIII, and Ne IX lines seen in the Chandra HETGS and XMM-Newton RGS spectra below 2 keV. We construct emission line images of these features and find good morphological correlations with the narrow-line region clouds mapped in [O III] λ5007. Self-consistent photoionization models provide good descriptions of the spectra of the large-scale emission, as well as resolved structures, supporting the dominant role of nuclear photoionization, although displacement of optical and X-ray features implies a more complex medium. Collisionally ionized emission is estimated to be ≲12% of the extended emission. Presence of both low- and high-ionization spectral components and extended emission in the X-ray image perpendicular to the bicone indicates leakage of nuclear ionization, likely filtered through warm absorbers, instead of being blocked by a continuous obscuring torus. The ratios of [O III]/soft X-ray flux are approximately constant (~15) for the 1.5 kpc radius spanned by these measurements, indicating similar relative contributions from the low- and high-ionization gas phases at different radial distances from the nucleus. If the [O III] and X-ray emission arise from a single photoionized medium, this further implies an outflow with a wind-like density profile. Using spatially resolved X-ray features, we estimate that the mass outflow rate in NGC 4151 is ~2 M_⊙ yr^-1 at 130 pc and the kinematic power of the ionized outflow is 1.7 × 10^41 erg s^-1, approximately 0.3% of the bolometric luminosity of the active nucleus in NGC 4151.

  8. Unbinned likelihood maximisation framework for neutrino clustering in Python

    Energy Technology Data Exchange (ETDEWEB)

    Coenders, Stefan [Technische Universitaet Muenchen, Boltzmannstr. 2, 85748 Garching (Germany)

    2016-07-01

Albeit having detected an astrophysical neutrino flux with IceCube, sources of astrophysical neutrinos remain hidden up to now. A detection of a neutrino point source is a smoking gun for hadronic processes and acceleration of cosmic rays. The search for neutrino sources has many degrees of freedom, for example steady versus transient, point-like versus extended sources, et cetera. Here, we introduce a Python framework designed for unbinned likelihood maximisations as used in searches for neutrino point sources by IceCube. By implementing source scenarios in a modular way, likelihood searches of various kinds can be implemented in a user-friendly way, without sacrificing speed and memory management.
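
    The core of such a search is a signal-fraction likelihood of the form L(ns) = Π_i [(ns/N) S_i + (1 − ns/N) B_i], maximised over the number of signal events ns. The following is a minimal one-dimensional toy sketch of that idea, with an assumed Gaussian point-spread term and uniform background; it is not the framework's API, and all numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    sigma = 0.1                       # assumed per-event angular uncertainty
    n_sig, n_bkg = 50, 950

    # Toy 1-D sky: signal events cluster at 0, background is uniform on [-5, 5].
    events = np.r_[rng.normal(0.0, sigma, n_sig), rng.uniform(-5, 5, n_bkg)]
    N = len(events)

    # Per-event signal and background probability densities.
    S = np.exp(-events**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    B = np.full(N, 1 / 10)

    def log_lik(ns):
        """Unbinned log-likelihood assuming ns signal events out of N total."""
        f = ns / N
        return np.sum(np.log(f * S + (1 - f) * B))

    # Maximise over ns by brute-force scan and form the usual test statistic.
    ns_grid = np.arange(0, 201)
    ns_hat = int(ns_grid[np.argmax([log_lik(ns) for ns in ns_grid])])
    TS = 2 * (log_lik(ns_hat) - log_lik(0))   # against the background-only hypothesis
    ```

    Because no event is ever binned, the clustering information of every event enters the likelihood at full resolution, which is the advantage the framework exploits.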

  9. Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis

    DEFF Research Database (Denmark)

    Jansson, Michael; Nielsen, Morten Ørregaard

Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations.
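
    The statistic at issue is easy to write down in the baseline case. The sketch below forms the likelihood ratio for H0: ρ = 1 in a Gaussian AR(1) with known unit error variance, on simulated data; it is a toy illustration of the LR statistic's construction only, not the paper's nearly efficient test or its asymptotic analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    T = 500

    # Simulate a stationary AR(1) with rho = 0.8, so the unit root is false.
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.8 * y[t - 1] + rng.normal()

    y0, y1 = y[:-1], y[1:]
    rho_hat = float((y0 @ y1) / (y0 @ y0))     # unrestricted MLE of rho

    def loglik(rho):
        # Gaussian AR(1) conditional log-likelihood with unit error variance,
        # up to an additive constant.
        return -0.5 * np.sum((y1 - rho * y0) ** 2)

    # Likelihood ratio statistic against the unit root rho = 1.
    LR = float(2 * (loglik(rho_hat) - loglik(1.0)))
    ```

    Under the unit root null the statistic has a nonstandard limiting distribution, which is why the paper's contribution is to show this LR construction nonetheless attains the Gaussian power envelope.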

  10. A note on estimating errors from the likelihood function

    International Nuclear Information System (INIS)

    Barlow, Roger

    2005-01-01

The points at which the log likelihood falls by 1/2 from its maximum value are often used to give the 'errors' on a result, i.e. the 68% central confidence interval. The validity of this is examined for two simple cases: a lifetime measurement and a Poisson measurement. Results are compared with the exact Neyman construction and with the simple Bartlett approximation. It is shown that the accuracy of the log likelihood method is poor, and the Bartlett construction explains why it is flawed
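
    The rule under examination is easy to reproduce for the Poisson case. The sketch below, with an assumed observed count, finds the interval of rate values whose log-likelihood lies within 1/2 of the maximum; this is the standard construction whose coverage accuracy the note questions.

    ```python
    import numpy as np

    n_obs = 5                         # assumed observed Poisson count

    def loglik(mu):
        # Poisson log-likelihood in the rate mu, up to an additive constant.
        return n_obs * np.log(mu) - mu

    mu_hat = float(n_obs)             # ML estimate of the rate
    grid = np.linspace(0.05, 20, 4000)

    # Keep rates whose log-likelihood is within 1/2 of the maximum; the
    # extremes give the conventional '68%' interval.
    inside = grid[loglik(mu_hat) - loglik(grid) <= 0.5]
    lo, hi = float(inside.min()), float(inside.max())
    ```

    The resulting interval is asymmetric about the estimate (roughly [3.1, 7.6] for a count of 5), and the note's point is that its actual coverage can differ noticeably from 68%, as comparison with the exact Neyman construction shows.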

  11. Nearly Efficient Likelihood Ratio Tests for Seasonal Unit Roots

    DEFF Research Database (Denmark)

    Jansson, Michael; Nielsen, Morten Ørregaard

In an important generalization of zero frequency autoregressive unit root tests, Hylleberg, Engle, Granger, and Yoo (1990) developed regression-based tests for unit roots at the seasonal frequencies in quarterly time series. We develop likelihood ratio tests for seasonal unit roots and show that these tests are "nearly efficient" in the sense of Elliott, Rothenberg, and Stock (1996), i.e. that their local asymptotic power functions are indistinguishable from the Gaussian power envelope. Currently available nearly efficient testing procedures for seasonal unit roots are regression-based and require the choice of a GLS detrending parameter, which our likelihood ratio tests do not.

  12. LDR: A Package for Likelihood-Based Sufficient Dimension Reduction

    Directory of Open Access Journals (Sweden)

    R. Dennis Cook

    2011-03-01

Full Text Available We introduce a new MATLAB software package that implements several recently proposed likelihood-based methods for sufficient dimension reduction. Current capabilities include estimation of reduced subspaces with a fixed dimension d, as well as estimation of d by use of likelihood-ratio testing, permutation testing and information criteria. The methods are suitable for preprocessing data for both regression and classification. Implementations of related estimators are also available. Although the software is more oriented to command-line operation, a graphical user interface is also provided for prototype computations.

  13. Likelihood ratio decisions in memory: three implied regularities.

    Science.gov (United States)

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
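A likelihood ratio decision rule of the kind analyzed here can be sketched for the familiar unequal-variance Gaussian signal detection model; the parameters below are illustrative and not taken from the paper:

```python
import math

def log_likelihood_ratio(x, mu_old=1.0, sd_old=1.25, mu_new=0.0, sd_new=1.0):
    """log f(x | old) - log f(x | new) for Gaussian strength distributions.
    Parameter values are illustrative assumptions."""
    def logpdf(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))
    return logpdf(x, mu_old, sd_old) - logpdf(x, mu_new, sd_new)

def decide(x):
    # Respond "old" when the evidence favours the old-item distribution.
    return "old" if log_likelihood_ratio(x) > 0 else "new"
```

Note that with unequal variances the log likelihood ratio is a quadratic, not a linear, function of strength, which is what makes the likelihood-ratio axis behave differently from a raw-strength axis.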

  14. Near infrared spectroscopy combined with multivariate analysis for monitoring the ethanol precipitation process of fraction I + II + III supernatant in human albumin separation

    Science.gov (United States)

    Li, Can; Wang, Fei; Zang, Lixuan; Zang, Hengchang; Alcalà, Manel; Nie, Lei; Wang, Mingyu; Li, Lian

    2017-03-01

    Nowadays, as a powerful process analytical tool, near infrared spectroscopy (NIRS) has been widely applied in process monitoring. In the present work, NIRS combined with multivariate analysis was used to monitor the ethanol precipitation process of fraction I + II + III (FI + II + III) supernatant in human albumin (HA) separation, to achieve qualitative and quantitative monitoring at the same time and assure the product's quality. First, a qualitative model was established by using principal component analysis (PCA) with samples from 6 of 8 normal batches, and evaluated with the remaining 2 normal batches and 3 abnormal batches. The results showed that the first principal component (PC1) score chart could be successfully used for fault detection and diagnosis. Then, two quantitative models were built with 6 of 8 normal batches to determine the content of the total protein (TP) and HA separately by using a partial least squares regression (PLS-R) strategy, and the models were validated with the 2 remaining normal batches. The determination coefficient of validation (Rp2), root mean square error of cross validation (RMSECV), root mean square error of prediction (RMSEP) and ratio of performance deviation (RPD) were 0.975, 0.501 g/L, 0.465 g/L and 5.57 for TP, and 0.969, 0.530 g/L, 0.341 g/L and 5.47 for HA, respectively. The results showed that the established models could give a rapid and accurate measurement of the content of TP and HA, and indicated that NIRS is an effective tool that can be successfully used for simultaneous qualitative and quantitative monitoring of the ethanol precipitation process of FI + II + III supernatant. This research has significant reference value for assuring the quality and improving the recovery ratio of HA at industrial scale by using NIRS.
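The RMSEP and RPD figures quoted above follow their standard definitions (root mean square prediction error, and the standard deviation of the reference values divided by RMSEP). A minimal sketch with made-up data, not the paper's batches:

```python
import math

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def rpd(y_true, y_pred):
    """Ratio of performance to deviation: SD of reference values over RMSEP."""
    mean = sum(y_true) / len(y_true)
    sd = math.sqrt(sum((t - mean) ** 2 for t in y_true) / (len(y_true) - 1))
    return sd / rmsep(y_true, y_pred)

# Illustrative reference vs predicted concentrations (g/L).
ref = [10.0, 12.0, 14.0, 16.0]
pred = [10.5, 11.5, 14.5, 15.5]
```

An RPD well above 3, as reported here (5.57 and 5.47), is conventionally taken to indicate a model adequate for quantitative use.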

  15. Preoperative chemoradiotherapy versus postoperative chemoradiotherapy for stage II–III resectable rectal cancer: a meta-analysis of randomized controlled trials

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jin Ho [Gyeongsang National University School of Medicine, Jinju (Korea, Republic of); Jeong, Jae Uk [Chonnam National University School of Medicine, Gwangju (Korea, Republic of); Lee, Jong Hoon; Kim, Sung Hwan [The Catholic University of Korea, Suwon (Korea, Republic of); Cho, Hyeon Min [The Catholic University of Korea, Suwon (Korea, Republic of); Um, Jun Won [University Ansan Hospital, Ansan (Korea, Republic of); Jang, Hong Seok [The Catholic University of Korea, Seoul (Korea, Republic of)

    2017-09-15

    Whether preoperative chemoradiotherapy (CRT) is better than postoperative CRT in oncologic outcome and toxicity remains contentious in prospective randomized clinical trials. We systematically analyzed and compared the treatment results, toxicity, and sphincter preservation rates between preoperative CRT and postoperative CRT in stage II–III rectal cancer. We searched Medline, Embase, and the Cochrane Library from 1990 to 2014 for relevant trials. Only phase III randomized studies performing CRT and curative surgery were selected and their data were extracted. Meta-analysis was used to pool oncologic outcome and toxicity data across studies. Three randomized phase III trials were finally identified. The meta-analysis showed a significantly lower 5-year locoregional recurrence rate in the preoperative-CRT group than in the postoperative-CRT group (hazard ratio, 0.59; 95% confidence interval, 0.41–0.84; p = 0.004). The 5-year distant recurrence rate (p = 0.55), relapse-free survival (p = 0.14), and overall survival (p = 0.22) showed no significant difference between the two groups. Acute toxicity was significantly lower in the preoperative-CRT group than in the postoperative-CRT group (p < 0.001). However, there was no significant difference between the two groups in perioperative and chronic complications (p = 0.53). The sphincter-saving rate was not significantly different between the two groups (p = 0.24). The conversion rate from abdominoperineal resection to low anterior resection in low rectal cancer was significantly higher in the preoperative-CRT group than in the postoperative-CRT group (p < 0.001). Compared with postoperative CRT, preoperative CRT improves only locoregional control, not distant control or survival, with similar chronic toxicity and sphincter preservation rates in rectal cancer patients.
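Pooled hazard ratios of the kind reported above are conventionally obtained by inverse-variance weighting on the log scale, with each study's standard error recovered from its reported 95% CI. A minimal fixed-effect sketch (illustrative of the method only, not the authors' exact software or model choice):

```python
import math

def pool_log_hazard_ratios(hrs, cis):
    """Inverse-variance (fixed-effect) pooling of hazard ratios.
    `cis` are (lower, upper) 95% confidence limits; the SE is recovered
    from the CI width on the log scale: (ln(hi) - ln(lo)) / (2 * 1.96)."""
    logs, weights = [], []
    for hr, (lo, hi) in zip(hrs, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        logs.append(math.log(hr))
        weights.append(1.0 / se ** 2)
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se_pooled = 1.0 / math.sqrt(sum(weights))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se_pooled),
                              math.exp(pooled + 1.96 * se_pooled))

# Illustrative, made-up study-level HRs and CIs.
hr, ci = pool_log_hazard_ratios([0.59, 0.72], [(0.41, 0.84), (0.50, 1.04)])
```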

  16. Clinical Signatures of Mucinous and Poorly Differentiated Subtypes of Colorectal Adenocarcinomas by a Propensity Score Analysis of an Independent Patient Database from Three Phase III Trials.

    Science.gov (United States)

    Kanda, Mitsuro; Oba, Koji; Aoyama, Toru; Kashiwabara, Kosuke; Mayanagi, Shuhei; Maeda, Hiromichi; Honda, Michitaka; Hamada, Chikuma; Sadahiro, Sotaro; Sakamoto, Junichi; Saji, Shigetoyo; Yoshikawa, Takaki

    2018-04-01

    Although colorectal cancer comprises several histological subtypes, the influences of histological subtypes on disease progression and treatment responses remain controversial. We sought to evaluate the prognostic relevance of the mucinous and poorly differentiated histological subtypes of colorectal cancer by propensity score weighting analysis of prospectively collected data from multi-institute phase III trials. Independent patient data analysis of a pooled database from 3 phase III trials was performed. An integrated database of 3 multicenter prospective clinical trials (the Japanese Foundation for Multidisciplinary Treatment of Cancer 7, 15, and 33) was the source of study data. Surgery alone or postoperative adjuvant chemotherapy was offered to patients with resectable colorectal cancer. To balance essential variables more strictly for the comparison analyses, propensity score weighting was conducted with the use of a multinomial logistic regression model. We evaluated the clinical signatures of the mucinous and poorly differentiated subtypes with regard to postoperative survival, recurrence, and chemosensitivity. Of 5489 patients, 136 (2.5%) and 155 (2.8%) were pathologically diagnosed with poorly differentiated and mucinous subtypes, respectively. The poorly differentiated subtype was associated with a poorer prognosis than the "others" group (HR, 1.69; 95% CI, 1.00-2.87; p = 0.051), particularly in the patient subgroup receiving adjuvant chemotherapy (HR, 2.16). Although the mucinous subtype had only a marginal prognostic impact among patients with stage I to III colorectal cancer (HR, 1.33; 95% CI, 0.90-1.96), it was found to be an independent prognostic factor in the subpopulation of patients with stage II disease, being associated with a higher prevalence of peritoneal recurrence. The treatment regimens of postoperative chemotherapy are now somewhat outdated. Both mucinous and poorly differentiated subtypes have distinct clinical characteristics. Patients with the mucinous subtype

  17. Analysis, by RELAP5 code, of boron dilution phenomena in a mid-loop operation transient, performed in PKL III F2.1 RUN 1 test

    International Nuclear Information System (INIS)

    Mascari, F.; Vella, G.; Del Nevo, A.; D'Auria, F.

    2007-01-01

    The present paper deals with the post test analysis and accuracy quantification of the test PKL III F2.1 RUN 1 by RELAP5/Mod3.3 code performed in the framework of the international OECD/SETH PKL III Project. The PKL III is a full-height integral test facility (ITF) that models the entire primary system and most of the secondary system (except for turbine and condenser) of pressurized water reactor of KWU design of the 1300-MW (electric) class on a scale of 1:145. Detailed design was based to the largest possible extent on the specific data of Philippsburg nuclear power plant, unit 2. As for the test facilities of this size, the scaling concept aims to simulate overall thermal hydraulic behavior of the full-scale power plant [1]. The main purpose of the project is to investigate PWR safety issues related to boron dilution and in particular this experiment investigates (a) the boron dilution issue during mid-loop operation and shutdown conditions, and (b) assessing primary circuit accident management operations to prevent boron dilution as a consequence of loss of heat removal [2]. In this work the authors deal with a systematic procedure (developed at the university of Pisa) for code assessment and uncertainty qualification and its application to RELAP5 system code. It is used to evaluate the capability of RELAP5 to reproduce the thermal hydraulics of an inadvertent boron dilution event in a PWR. The quantitative analysis has been performed adopting the Fast Fourier Transform Based Method (FFTBM), which has the capability to quantify the errors in code predictions as compared to the measured experimental signal. (author)

  18. Near infrared spectroscopy combined with multivariate analysis for monitoring the ethanol precipitation process of fraction I+II+III supernatant in human albumin separation.

    Science.gov (United States)

    Li, Can; Wang, Fei; Zang, Lixuan; Zang, Hengchang; Alcalà, Manel; Nie, Lei; Wang, Mingyu; Li, Lian

    2017-03-15

    Nowadays, as a powerful process analytical tool, near infrared spectroscopy (NIRS) has been widely applied in process monitoring. In the present work, NIRS combined with multivariate analysis was used to monitor the ethanol precipitation process of fraction I+II+III (FI+II+III) supernatant in human albumin (HA) separation, to achieve qualitative and quantitative monitoring at the same time and assure the product's quality. First, a qualitative model was established by using principal component analysis (PCA) with samples from 6 of 8 normal batches, and evaluated with the remaining 2 normal batches and 3 abnormal batches. The results showed that the first principal component (PC1) score chart could be successfully used for fault detection and diagnosis. Then, two quantitative models were built with 6 of 8 normal batches to determine the content of the total protein (TP) and HA separately by using a partial least squares regression (PLS-R) strategy, and the models were validated with the 2 remaining normal batches. The determination coefficient of validation (Rp2), root mean square error of cross validation (RMSECV), root mean square error of prediction (RMSEP) and ratio of performance deviation (RPD) were 0.975, 0.501 g/L, 0.465 g/L and 5.57 for TP, and 0.969, 0.530 g/L, 0.341 g/L and 5.47 for HA, respectively. The results showed that the established models could give a rapid and accurate measurement of the content of TP and HA, and indicated that NIRS is an effective tool that can be successfully used for simultaneous qualitative and quantitative monitoring of the ethanol precipitation process of FI+II+III supernatant. This research has significant reference value for assuring the quality and improving the recovery ratio of HA at industrial scale by using NIRS. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Analysis of polyethoxylated surfactants in microemulsion-oil-water systems III. Fractionation and partitioning of polyethoxylated alcohol surfactants

    International Nuclear Information System (INIS)

    Marquez, N.; Bravo, B.; Ysambertt, F.; Chavez, G.; Subero, N.; Salager, J.L.

    2002-01-01

    The oligomer distribution of polyethoxylated alcohol and polyethoxylated nonylphenol surfactants is studied by normal- and reverse-phase high performance liquid chromatography (HPLC). A RP8 column is able to efficiently separate these surfactants according to their alkyl chain (lipophilic) group, while silica and amino columns separate them according to their polyether chain length (hydrophilic group). Polyethoxylated alcohol and polyethoxylated nonylphenol oligomers selectively partition between the microemulsion, oil, and water phases of a Winsor III system. Partitioning of these oligomers was analyzed by HPLC with RI detection. The logarithm of the partition coefficient between water and oil increases linearly with the number of ethylene oxide groups per oligomer molecule. For the same degree of ethoxylation, the partition coefficient of a polyethoxylated tridecanol is found to be higher than that of the corresponding nonylphenol species. On the other hand, a polyethoxylated nonylphenol exhibits higher solubilization than the matching polyethoxylated alcohol.

  20. Crystallization and preliminary crystallographic analysis of a novel plant type III polyketide synthase that produces pentaketide chromone

    Energy Technology Data Exchange (ETDEWEB)

    Morita, Hiroyuki [Mitsubishi Kagaku Institute of Life Sciences (MITILS), 11 Minamiooya, Machida, Tokyo 194-8511 (Japan); Kondo, Shin [ZOEGENE Corporation, 1000 Kamoshida, Aoba, Yokohama, Kanagawa 227-8502 (Japan); Abe, Tsuyoshi; Noguchi, Hiroshi [School of Pharmaceutical Sciences and the COE21 Program, University of Shizuoka, Shizuoka 422-8526 (Japan); Sugio, Shigetoshi, E-mail: ssugio@rc.m-kagaku.co.jp [ZOEGENE Corporation, 1000 Kamoshida, Aoba, Yokohama, Kanagawa 227-8502 (Japan); Abe, Ikuro, E-mail: ssugio@rc.m-kagaku.co.jp [School of Pharmaceutical Sciences and the COE21 Program, University of Shizuoka, Shizuoka 422-8526 (Japan); PRESTO, Japan Science and Technology Agency, Kawaguchi, Saitama 332-0012 (Japan); Kohno, Toshiyuki, E-mail: ssugio@rc.m-kagaku.co.jp [Mitsubishi Kagaku Institute of Life Sciences (MITILS), 11 Minamiooya, Machida, Tokyo 194-8511 (Japan)

    2006-09-01

    Pentaketide chromone synthase from A. arborescens has been overexpressed in E. coli, purified and crystallized. Diffraction data have been collected to 1.6 Å. Pentaketide chromone synthase (PCS) from Aloe arborescens is a novel plant-specific type III polyketide synthase that catalyzes the formation of 5,7-dihydroxy-2-methylchromone from five molecules of malonyl-CoA. Recombinant PCS expressed in Escherichia coli was crystallized by the hanging-drop vapour-diffusion method. The crystals belonged to space group P2{sub 1}, with unit-cell parameters a = 73.2, b = 88.4, c = 70.0 Å, α = γ = 90.0, β = 95.6°. Diffraction data were collected to 1.6 Å resolution using synchrotron radiation at BL24XU of SPring-8.

  1. Co(III)EDTA as extra-cellular marker in μPIXE-analysis of rat cardiomyocytes

    International Nuclear Information System (INIS)

    Quaedackers, J.A.; Queens, R.M.G.J.; Mutsaers, P.H.A.; Voigt, M.J.A. de; Vusse, G.J. van der

    1998-01-01

    In previous studies no clear difference was found between the intra- and extra-cellular compartments in nuclear microprobe elemental distribution maps of freeze-dried cryosections of heart tissue. Probably due to artefacts during the preparation of these samples, the intra-cellular and extra-cellular contents of elements become mixed. In this article a method using NaCo(III)EDTA as an extra-cellular marker was applied to deconvolute the total ion content into extra- and intra-cellular contributions. This method was applied to both normoxic heart tissue and low-flow ischemic heart tissue. Intra-cellular ion concentrations calculated from the corrected ion contents of the normoxic tissue agree well with literature values. Moreover, a clear elevation of the intra-cellular sodium and chlorine concentrations was found in low-flow ischemic tissue. (orig.)

  2. Analysis of the Si(111) surface prepared in chemical vapor ambient for subsequent III-V heteroepitaxy

    International Nuclear Information System (INIS)

    Zhao, W.; Steidl, M.; Paszuk, A.; Brückner, S.; Dobrich, A.; Supplie, O.; Kleinschmidt, P.; Hannappel, T.

    2017-01-01

    Highlights: • We investigate the Si(111) surface prepared in CVD ambient at 1000 °C in 950 mbar H_2. • UHV-based XPS, LEED, STM and FTIR as well as ambient AFM are applied. • After processing the Si(111) surface is free of contamination and atomically flat. • The surface exhibits a (1 × 1) reconstruction and monohydride termination. • Wet-chemical pretreatment and homoepitaxy are required for a regular step structure. - Abstract: For well-defined heteroepitaxial growth of III-V epilayers on Si(111) substrates the atomic structure of the silicon surface is an essential element. Here, we study the preparation of the Si(111) surface in H_2-based chemical vapor ambient as well as its atomic structure after contamination-free transfer to ultrahigh vacuum (UHV). Applying complementary UHV-based techniques, we derive a complete picture of the atomic surface structure and its chemical composition. X-ray photoelectron spectroscopy measurements after high-temperature annealing confirm a Si surface free of any traces of oxygen or other impurities. The annealing in H_2 ambient leads to a monohydride surface termination, as verified by Fourier-transform infrared spectroscopy. Scanning tunneling microscopy confirms a well ordered, atomically smooth surface, which is (1 × 1) reconstructed, in agreement with low energy electron diffraction patterns. Atomic force microscopy reveals a significant influence of homoepitaxy and wet-chemical pretreatment on the surface morphology. Our findings show that wet-chemical pretreatment followed by high-temperature annealing leads to contamination-free, atomically flat Si(111) surfaces, which are ideally suited for subsequent III-V heteroepitaxy.

  3. Analysis of the Si(111) surface prepared in chemical vapor ambient for subsequent III-V heteroepitaxy

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, W.; Steidl, M.; Paszuk, A. [Technische Universität Ilmenau, Institut für Physik, 98693 Ilmenau (Germany); Brückner, S. [Technische Universität Ilmenau, Institut für Physik, 98693 Ilmenau (Germany); Helmholtz-Zentrum Berlin, Institut für Solare Brennstoffe, 14109 Berlin (Germany); Dobrich, A. [Technische Universität Ilmenau, Institut für Physik, 98693 Ilmenau (Germany); Supplie, O. [Technische Universität Ilmenau, Institut für Physik, 98693 Ilmenau (Germany); Helmholtz-Zentrum Berlin, Institut für Solare Brennstoffe, 14109 Berlin (Germany); Kleinschmidt, P. [Technische Universität Ilmenau, Institut für Physik, 98693 Ilmenau (Germany); Hannappel, T., E-mail: thomas.hannappel@tu-ilmenau.de [Technische Universität Ilmenau, Institut für Physik, 98693 Ilmenau (Germany); Helmholtz-Zentrum Berlin, Institut für Solare Brennstoffe, 14109 Berlin (Germany)

    2017-01-15

    Highlights: • We investigate the Si(111) surface prepared in CVD ambient at 1000 °C in 950 mbar H{sub 2}. • UHV-based XPS, LEED, STM and FTIR as well as ambient AFM are applied. • After processing the Si(111) surface is free of contamination and atomically flat. • The surface exhibits a (1 × 1) reconstruction and monohydride termination. • Wet-chemical pretreatment and homoepitaxy are required for a regular step structure. - Abstract: For well-defined heteroepitaxial growth of III-V epilayers on Si(111) substrates the atomic structure of the silicon surface is an essential element. Here, we study the preparation of the Si(111) surface in H{sub 2}-based chemical vapor ambient as well as its atomic structure after contamination-free transfer to ultrahigh vacuum (UHV). Applying complementary UHV-based techniques, we derive a complete picture of the atomic surface structure and its chemical composition. X-ray photoelectron spectroscopy measurements after high-temperature annealing confirm a Si surface free of any traces of oxygen or other impurities. The annealing in H{sub 2} ambient leads to a monohydride surface termination, as verified by Fourier-transform infrared spectroscopy. Scanning tunneling microscopy confirms a well ordered, atomically smooth surface, which is (1 × 1) reconstructed, in agreement with low energy electron diffraction patterns. Atomic force microscopy reveals a significant influence of homoepitaxy and wet-chemical pretreatment on the surface morphology. Our findings show that wet-chemical pretreatment followed by high-temperature annealing leads to contamination-free, atomically flat Si(111) surfaces, which are ideally suited for subsequent III-V heteroepitaxy.

  4. Near-atomic resolution analysis of BipD, a component of the type III secretion system of Burkholderia pseudomallei

    International Nuclear Information System (INIS)

    Pal, M.; Erskine, P. T.; Gill, R. S.; Wood, S. P.; Cooper, J. B.

    2010-01-01

    The type III secretion system needle-tip protein BipD has been crystallized in a form that diffracts X-rays to 1.5 Å resolution and the structure has been refined to an R factor of 16.1% and an R free of 19.8% at this resolution. The putative antiparallel dimer interface that was observed in earlier structures is conserved. Burkholderia pseudomallei, the causative agent of melioidosis, possesses a type III protein secretion apparatus that is similar to those found in Salmonella and Shigella. A major function of these secretion systems is to inject virulence-associated proteins into target cells of the host organism. The bipD gene of B. pseudomallei encodes a secreted virulence factor that is similar in sequence and is most likely to be functionally analogous to IpaD from Shigella and SipD from Salmonella. Proteins in this family are thought to act as extracellular chaperones at the tip of the secretion needle to help the hydrophobic translocator proteins enter the target cell membrane, where they form a pore and may also link the translocon pore with the secretion needle. BipD has been crystallized in a monoclinic crystal form that diffracted X-rays to 1.5 Å resolution and the structure was refined to an R factor of 16.1% and an R free of 19.8% at this resolution. The putative dimer interface that was observed in previous crystal structures was retained and a larger surface area was buried in the new crystal form

  5. Image properties of list mode likelihood reconstruction for a rectangular positron emission mammography with DOI measurements

    International Nuclear Information System (INIS)

    Qi, Jinyi; Klein, Gregory J.; Huesman, Ronald H.

    2000-01-01

    A positron emission mammography scanner is under development at our Laboratory. The tomograph has a rectangular geometry consisting of four banks of detector modules. For each detector, the system can measure the depth of interaction information inside the crystal. The rectangular geometry leads to irregular radial and angular sampling and spatially variant sensitivity that are different from conventional PET systems. Therefore, it is of importance to study the image properties of the reconstructions. We adapted the theoretical analysis that we had developed for conventional PET systems to the list mode likelihood reconstruction for this tomograph. The local impulse response and covariance of the reconstruction can be easily computed using FFT. These theoretical results are also used with computer observer models to compute the signal-to-noise ratio for lesion detection. The analysis reveals the spatially variant resolution and noise properties of the list mode likelihood reconstruction. The theoretical predictions are in good agreement with Monte Carlo results

  6. Expression, purification, crystallization and preliminary crystallographic analysis of MxiH, a subunit of the Shigella flexneri type III secretion system needle

    International Nuclear Information System (INIS)

    Deane, Janet E.; Cordes, Frank S.; Roversi, Pietro; Johnson, Steven; Kenjale, Roma; Picking, William D.; Picking, Wendy L.; Lea, Susan M.; Blocker, Ariel

    2006-01-01

    A monodisperse truncation mutant of MxiH, the subunit of the S. flexneri type III secretion system needle, has been crystallized. SeMet derivatives and a uranyl derivative have undergone preliminary crystallographic analysis. A monodisperse truncation mutant of MxiH, the subunit of the needle from the Shigella flexneri type III secretion system (TTSS), has been overexpressed and purified. Crystals were grown of native and selenomethionine-labelled MxiH CΔ5 and diffraction data were collected to 1.9 Å resolution. The crystals belong to space group C2, with unit-cell parameters a = 183.4, b = 28.1, c = 27.8 Å, β = 96.5°. An anomalous difference Patterson map calculated with the data from the SeMet-labelled crystals revealed a single peak on the Harker section v = 0. Inspection of a uranyl derivative also revealed one peak in the isomorphous difference Patterson map on the Harker section v = 0. Analysis of the self-rotation function indicates the presence of a twofold non-crystallographic symmetry axis approximately along a. The calculated Matthews coefficient is 1.9 Å³ Da⁻¹ for two molecules per asymmetric unit, corresponding to a solvent content of 33%.

  7. Expression, purification, crystallization and preliminary crystallographic analysis of MxiH, a subunit of the Shigella flexneri type III secretion system needle

    Energy Technology Data Exchange (ETDEWEB)

    Deane, Janet E.; Cordes, Frank S.; Roversi, Pietro [Laboratory of Molecular Biophysics, Department of Biochemistry, University of Oxford (United Kingdom); Johnson, Steven [Laboratory of Molecular Biophysics, Department of Biochemistry, University of Oxford (United Kingdom); Sir William Dunn School of Pathology, University of Oxford (United Kingdom); Kenjale, Roma; Picking, William D.; Picking, Wendy L. [Department of Molecular Biosciences, University of Kansas (United States); Lea, Susan M., E-mail: susan.lea@biop.ox.ac.uk [Laboratory of Molecular Biophysics, Department of Biochemistry, University of Oxford (United Kingdom); Sir William Dunn School of Pathology, University of Oxford (United Kingdom); Blocker, Ariel [Sir William Dunn School of Pathology, University of Oxford (United Kingdom); Laboratory of Molecular Biophysics, Department of Biochemistry, University of Oxford (United Kingdom)

    2006-03-01

    A monodisperse truncation mutant of MxiH, the subunit of the S. flexneri type III secretion system needle, has been crystallized. SeMet derivatives and a uranyl derivative have undergone preliminary crystallographic analysis. A monodisperse truncation mutant of MxiH, the subunit of the needle from the Shigella flexneri type III secretion system (TTSS), has been overexpressed and purified. Crystals were grown of native and selenomethionine-labelled MxiH{sub CΔ5} and diffraction data were collected to 1.9 Å resolution. The crystals belong to space group C2, with unit-cell parameters a = 183.4, b = 28.1, c = 27.8 Å, β = 96.5°. An anomalous difference Patterson map calculated with the data from the SeMet-labelled crystals revealed a single peak on the Harker section v = 0. Inspection of a uranyl derivative also revealed one peak in the isomorphous difference Patterson map on the Harker section v = 0. Analysis of the self-rotation function indicates the presence of a twofold non-crystallographic symmetry axis approximately along a. The calculated Matthews coefficient is 1.9 Å{sup 3} Da{sup −1} for two molecules per asymmetric unit, corresponding to a solvent content of 33%.

  8. Predicting likelihood of seeking help through the employee assistance program among salaried and union hourly employees.

    Science.gov (United States)

    Delaney, W; Grube, J W; Ames, G M

    1998-03-01

    This research investigated belief, social support and background predictors of employee likelihood to use an Employee Assistance Program (EAP) for a drinking problem. An anonymous cross-sectional survey was administered in the home. Bivariate analyses and simultaneous equations path analysis were used to explore a model of EAP use. Survey and ethnographic research were conducted in a unionized heavy machinery manufacturing plant in the central states of the United States. A random sample of 852 hourly and salaried employees was selected. In addition to background variables, measures included: likelihood of going to an EAP for a drinking problem, belief the EAP can help, social support for the EAP from co-workers/others, belief that EAP use will harm employment, and supervisor encourages the EAP for potential drinking problems. Belief in EAP efficacy directly increased the likelihood of going to an EAP. Greater perceived social support and supervisor encouragement increased the likelihood of going to an EAP both directly and indirectly through perceived EAP efficacy. Black and union hourly employees were more likely to say they would use an EAP. Males and those who reported drinking during working hours were less likely to say they would use an EAP for a drinking problem. EAP beliefs and social support have significant effects on likelihood to go to an EAP for a drinking problem. EAPs may wish to focus their efforts on creating an environment where there is social support from coworkers and encouragement from supervisors for using EAP services. Union networks and team members have an important role to play in addition to conventional supervisor intervention.

  9. Association between class III obesity (BMI of 40-59 kg/m2) and mortality: a pooled analysis of 20 prospective studies.

    Directory of Open Access Journals (Sweden)

    Cari M Kitahara

    2014-07-01

    Full Text Available The prevalence of class III obesity (body mass index [BMI] ≥40 kg/m2) has increased dramatically in several countries and currently affects 6% of adults in the US, with uncertain impact on the risks of illness and death. Using data from a large pooled study, we evaluated the risk of death, overall and due to a wide range of causes, and years of life expectancy lost associated with class III obesity. In a pooled analysis of 20 prospective studies from the United States, Sweden, and Australia, we estimated sex- and age-adjusted total and cause-specific mortality rates (deaths per 100,000 persons per year) and multivariable-adjusted hazard ratios for adults, aged 19-83 y at baseline, classified as obese class III (BMI 40.0-59.9 kg/m2) compared with those classified as normal weight (BMI 18.5-24.9 kg/m2). Participants reporting ever smoking cigarettes or a history of chronic disease (heart disease, cancer, stroke, or emphysema) on baseline questionnaires were excluded. Among 9,564 class III obesity participants, mortality rates were 856.0 in men and 663.0 in women during the study period (1976-2009). Among 304,011 normal-weight participants, rates were 346.7 and 280.5 in men and women, respectively. Deaths from heart disease contributed largely to the excess rates in the class III obesity group (rate differences = 238.9 and 132.8 in men and women, respectively), followed by deaths from cancer (rate differences = 36.7 and 62.3 in men and women, respectively) and diabetes (rate differences = 51.2 and 29.2 in men and women, respectively). Within the class III obesity range, multivariable-adjusted hazard ratios for total deaths and deaths due to heart disease, cancer, diabetes, nephritis/nephrotic syndrome/nephrosis, chronic lower respiratory disease, and influenza/pneumonia increased with increasing BMI. Compared with normal-weight BMI, a BMI of 40-44.9, 45-49.9, 50-54.9, and 55-59.9 kg/m2 was associated with an estimated 6.5 (95% CI: 5.7-7.3, 8

  10. Chronic use of PAH-specific therapy in World Health Organization Group III Pulmonary Hypertension: a systematic review and meta-analysis.

    Science.gov (United States)

    Prins, Kurt W; Duval, Sue; Markowitz, Jeremy; Pritzker, Marc; Thenappan, Thenappan

    2017-03-01

    Pulmonary hypertension (PH) complicating chronic obstructive pulmonary disease (COPD-PH) and interstitial lung disease (ILD-PH) (World Health Organization [WHO] Group III PH) increases medical costs and reduces survival. Despite limited data, many clinicians are using pulmonary arterial hypertension (PAH)-specific therapy to treat WHO Group III PH patients. To further investigate the utility of PAH-specific therapy in WHO Group III PH, we performed a systematic review and meta-analysis. Relevant studies from January 2000 through May 2016 were identified in the MEDLINE, EMBASE, and COCHRANE electronic databases and www.clinicaltrials.gov. Change in six-minute walk distance (6MWD) was estimated using random effects meta-analysis techniques. Five randomized controlled trials (RCTs) in COPD-PH (128 placebo or standard treatment and 129 PAH-medication treated patients), two RCTs in ILD-PH (23 placebo and 46 treated patients), and four single-arm clinical trials (50 patients) in ILD-PH were identified. Treatment in both COPD-PH and ILD-PH did not worsen hypoxemia. Symptomatic burden was not consistently reduced, but there were trends toward reduced pulmonary artery pressures and pulmonary vascular resistance with PAH-specific therapy. Compared with placebo, 6MWD was not significantly improved with PAH-specific therapy in the five COPD-PH RCTs (42.7 m; 95% confidence interval [CI], -1.0 to 86.3). In the four single-arm studies in ILD-PH patients, there was a significant improvement in 6MWD after PAH-specific treatment (46.2 m; 95% CI, 27.9 to 64.4), but in the two ILD-PH RCTs there was no improvement (21.6 m; 95% CI, -17.8 to 61.0) in exercise capacity compared with placebo. Given the small numbers of patients evaluated and the inconsistent benefits, the utility of PAH-specific therapy in WHO Group III PH remains unproven. An appropriately powered clinical trial is needed to definitively determine the efficacy of this widely implemented treatment.
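    The pooled 6MWD changes quoted in this record come from a random-effects meta-analysis; a minimal DerSimonian-Laird sketch (with made-up study effects and variances, not the trial data, and at least two studies assumed) is:

    ```python
    import math

    def dersimonian_laird(effects, variances):
        """Random-effects pooled estimate and 95% CI (DerSimonian-Laird).
        Assumes at least two studies; `effects` are study mean differences,
        `variances` their within-study variances."""
        w = [1.0 / v for v in variances]
        fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
        q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)                  # between-study variance
        w_star = [1.0 / (v + tau2) for v in variances]                 # re-weight with tau^2 added
        pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
        se = math.sqrt(1.0 / sum(w_star))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

    # Placeholder 6MWD mean differences (m) and variances from three hypothetical trials:
    pooled, ci = dersimonian_laird([30.0, 55.0, 10.0], [120.0, 200.0, 90.0])
    print(pooled, ci)
    ```

    When the studies disagree more than their within-study variances predict, tau² grows and the confidence interval widens, which is how small, inconsistent trials like those above end up with intervals crossing zero.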

  11. Understanding the properties of diagnostic tests - Part 2: Likelihood ratios.

    Science.gov (United States)

    Ranganathan, Priya; Aggarwal, Rakesh

    2018-01-01

    Diagnostic tests are used to identify subjects with and without disease. In a previous article in this series, we examined some attributes of diagnostic tests - sensitivity, specificity, and predictive values. In this second article, we look at likelihood ratios, which are useful for the interpretation of diagnostic test results in everyday clinical practice.
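    The arithmetic behind these ratios is simple enough to sketch; the sensitivity, specificity, and pre-test probability below are made-up values for illustration, not figures from the article:

    ```python
    def likelihood_ratios(sensitivity, specificity):
        """Positive and negative likelihood ratios of a binary diagnostic test."""
        lr_pos = sensitivity / (1.0 - specificity)  # factor by which a positive result raises disease odds
        lr_neg = (1.0 - sensitivity) / specificity  # factor by which a negative result lowers them
        return lr_pos, lr_neg

    def post_test_probability(pre_test_prob, lr):
        """Apply a likelihood ratio to a pre-test probability via odds."""
        pre_odds = pre_test_prob / (1.0 - pre_test_prob)
        post_odds = pre_odds * lr
        return post_odds / (1.0 + post_odds)

    # Hypothetical test with 90% sensitivity and 80% specificity:
    lr_pos, lr_neg = likelihood_ratios(0.90, 0.80)   # 4.5 and 0.125
    # A positive result moves a 20% pre-test probability to about 53%:
    print(post_test_probability(0.20, lr_pos))
    ```

    The odds form makes the clinical interpretation direct: an LR+ above 10 or an LR- below 0.1 shifts the probability substantially, while values near 1 leave the pre-test estimate essentially unchanged.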

  12. Comparison of likelihood testing procedures for parallel systems with covariances

    International Nuclear Information System (INIS)

    Ayman Baklizi; Isa Daud; Noor Akma Ibrahim

    1998-01-01

    In this paper we investigate and compare the behavior of the likelihood ratio, Rao's, and Wald's statistics for testing hypotheses on the parameters of the simple linear regression model based on parallel systems with covariances. These statistics are asymptotically equivalent (Barndorff-Nielsen and Cox, 1994); however, their relative performances in finite samples are generally unknown. A Monte Carlo experiment is conducted to simulate the sizes and powers of these statistics for complete samples and in the presence of time censoring. Comparisons of the statistics are made according to the attainment of the assumed size of the test and their powers at various points in the parameter space. The results show that the likelihood ratio statistic appears to have the best performance in terms of attaining the assumed size of the test. Power comparisons show that the Rao statistic has some advantage over the Wald statistic in almost all of the space of alternatives, while the likelihood ratio statistic occupies either the first or the last position in terms of power. Overall, the likelihood ratio statistic appears to be more appropriate for the model under study, especially for small sample sizes
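    The distinction among the three statistics is easiest to see in a simpler setting than the parallel-systems model of the paper; as a hedged sketch, here they are for testing a binomial proportion, where all three are asymptotically chi-square with one degree of freedom:

    ```python
    import math

    def binomial_test_statistics(k, n, p0):
        """Likelihood ratio, Wald, and Rao score statistics for H0: p = p0,
        given k successes in n binomial trials (0 < k < n, 0 < p0 < 1)."""
        phat = k / n
        def loglik(p):
            return k * math.log(p) + (n - k) * math.log(1.0 - p)
        lr    = 2.0 * (loglik(phat) - loglik(p0))           # compares fitted vs. null log-likelihood
        wald  = (phat - p0) ** 2 / (phat * (1 - phat) / n)  # information evaluated at the MLE
        score = (phat - p0) ** 2 / (p0 * (1 - p0) / n)      # information evaluated at the null
        return lr, wald, score

    # 60 successes in 100 trials against H0: p = 0.5 gives three nearby values:
    print(binomial_test_statistics(60, 100, 0.5))  # approximately (4.03, 4.17, 4.00)
    ```

    The three agree asymptotically but differ in finite samples precisely because the Wald statistic standardizes at the MLE while the score statistic standardizes at the null, which is the behavior the Monte Carlo comparison above probes.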

  13. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  14. Planck 2013 results. XV. CMB power spectra and likelihood

    CERN Document Server

    Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Hurier, G.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Le Jeune, M.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P.B.; Lindholm, V.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, 
M.; Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Orieux, F.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; Wehus, I.K.; White, M.; White, S.D.M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-01-01

    We present the Planck likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations. We use this likelihood to derive the Planck CMB power spectrum over three decades in l, covering 2 <= l <= 2500. For l >= 50, we employ a correlated Gaussian likelihood approximation based on angular cross-spectra derived from the 100, 143 and 217 GHz channels. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on cosmological parameters. We find good internal agreement among the high-l cross-spectra with residuals of a few uK^2 at l <= 1000. We compare our results with foreground-cleaned CMB maps, and with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. The best-fit LCDM cosmology is in excellent agreement with preliminary Planck polarisation spectra. The standard LCDM cosmology is well constrained b...

  15. Robust Gaussian Process Regression with a Student-t Likelihood

    NARCIS (Netherlands)

    Jylänki, P.P.; Vanhatalo, J.; Vehtari, A.

    2011-01-01

    This paper considers the robust and efficient implementation of Gaussian process regression with a Student-t observation model, which has a non-log-concave likelihood. The challenge with the Student-t model is the analytically intractable inference which is why several approximative methods have

  16. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  17. A simplification of the likelihood ratio test statistic for testing ...

    African Journals Online (AJOL)

    The traditional likelihood ratio test statistic for testing hypothesis about goodness of fit of multinomial probabilities in one, two and multi – dimensional contingency table was simplified. Advantageously, using the simplified version of the statistic to test the null hypothesis is easier and faster because calculating the expected ...

  18. Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik

    2017-01-01

    The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...

  19. LIKELIHOOD ESTIMATION OF PARAMETERS USING SIMULTANEOUSLY MONITORED PROCESSES

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2004-01-01

    The topic is maximum likelihood inference from several simultaneously monitored response processes of a structure to obtain knowledge about the parameters of other not monitored but important response processes when the structure is subject to some Gaussian load field in space and time. The considered example is a ship sailing with a given speed through a Gaussian wave field.

  20. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Schweder, Tore

    2006-01-01

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...

  1. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge; Schweder, Tore

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...

  2. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  3. Reconceptualizing Social Influence in Counseling: The Elaboration Likelihood Model.

    Science.gov (United States)

    McNeill, Brian W.; Stoltenberg, Cal D.

    1989-01-01

    Presents Elaboration Likelihood Model (ELM) of persuasion (a reconceptualization of the social influence process) as alternative model of attitude change. Contends ELM unifies conflicting social psychology results and can potentially account for inconsistent research findings in counseling psychology. Provides guidelines on integrating…

  4. Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.

    Science.gov (United States)

    Heesacker, Martin

    1986-01-01

    Results of the application of the Elaboration Likelihood Model (ELM) to a counseling context revealed that more favorable attitudes toward counseling occurred as subjects' ego involvement increased and as intervention quality improved. Counselor credibility affected the degree to which subjects' attitudes reflected argument quality differences.…

  5. Cases in which ancestral maximum likelihood will be confusingly misleading.

    Science.gov (United States)

    Handelman, Tomer; Chor, Benny

    2017-05-07

    Ancestral maximum likelihood (AML) is a phylogenetic tree reconstruction criterion that "lies between" maximum parsimony (MP) and maximum likelihood (ML). ML has long been known to be statistically consistent. On the other hand, Felsenstein (1978) showed that MP is statistically inconsistent, and even positively misleading: there are cases where the parsimony criterion, applied to data generated according to one tree topology, will be optimized on a different tree topology. The question of whether AML is statistically consistent or not has been open for a long time. Mossel et al. (2009) have shown that AML can "shrink" short tree edges, resulting in a star tree with no internal resolution, which yields a better AML score than the original (resolved) model. This result implies that AML is statistically inconsistent, but not that it is positively misleading, because the star tree is compatible with any other topology. We show that AML is confusingly misleading: for some simple, four-taxon (resolved) tree, the ancestral likelihood optimization criterion is maximized on an incorrect (resolved) tree topology, as well as on a star tree (both with specific edge lengths), while the tree with the original, correct topology has strictly lower ancestral likelihood. Interestingly, the two short edges in the incorrect, resolved tree topology are of length zero and are not adjacent, so this resolved tree is in fact a simple path. While for MP the underlying phenomenon can be described as long edge attraction, it turns out that here we have long edge repulsion. Copyright © 2017. Published by Elsevier Ltd.

  6. Multilevel maximum likelihood estimation with application to covariance matrices

    Czech Academy of Sciences Publication Activity Database

    Turčičová, Marie; Mandel, J.; Eben, Kryštof

    Published online: 23 January (2018) ISSN 0361-0926 R&D Projects: GA ČR GA13-34856S Institutional support: RVO:67985807 Keywords: Fisher information * High dimension * Hierarchical maximum likelihood * Nested parameter spaces * Spectral diagonal covariance model * Sparse inverse covariance model Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.311, year: 2016

  7. Pendeteksian Outlier pada Regresi Nonlinier dengan Metode statistik Likelihood Displacement

    Directory of Open Access Journals (Sweden)

    Siti Tabi'atul Hasanah

    2012-11-01

    Full Text Available An outlier is an observation that differs greatly (is extreme) from the other observations, or data that do not follow the general pattern of the model. Outliers sometimes provide information that cannot be obtained from the other data, which is why they should not simply be eliminated; an outlier can also be an influential observation. There are many methods that can be used to detect outliers. Previous studies addressed outlier detection in linear regression; here, detection is developed for nonlinear regression, specifically multiplicative nonlinear regression. Detection uses the statistical method of likelihood displacement (LD), which detects outliers by removing the suspected outlier data and refitting. The parameters are estimated by the maximum likelihood method, giving the maximum likelihood estimates. Using the LD method, the observations suspected of being outliers are identified. The accuracy of the LD method in detecting outliers is then shown by comparing the MSE of LD with the MSE from the ordinary regression. The test statistic used is Λ; when the null hypothesis is rejected, the observation is judged to be an outlier.
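    A minimal sketch of the likelihood-displacement idea, using a simple linear Gaussian regression rather than the multiplicative nonlinear model of the paper: refit with each case deleted, and measure how far the full-data log-likelihood drops when the case-deleted estimates are plugged back in.

    ```python
    import math

    def likelihood_displacement(x, y):
        """LD_i = 2 * [l(theta_hat) - l(theta_hat_(i))] for simple Gaussian regression,
        where theta_hat_(i) is refit with case i deleted but scored on the full data."""
        def fit(xs, ys):  # OLS slope/intercept = Gaussian MLE of the mean structure
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            b = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / \
                sum((xi - mx) ** 2 for xi in xs)
            return my - b * mx, b

        def loglik(a, b, s2):  # Gaussian log-likelihood over the FULL data
            return sum(-0.5 * math.log(2 * math.pi * s2)
                       - (yi - a - b * xi) ** 2 / (2 * s2) for xi, yi in zip(x, y))

        a, b = fit(x, y)
        n = len(x)
        s2 = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y)) / n  # MLE of sigma^2
        l_full = loglik(a, b, s2)
        lds = []
        for i in range(n):
            xs, ys = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
            ai, bi = fit(xs, ys)
            s2i = sum((yj - ai - bi * xj) ** 2 for xj, yj in zip(xs, ys)) / (n - 1)
            lds.append(2.0 * (l_full - loglik(ai, bi, s2i)))
        return lds

    x = [1, 2, 3, 4, 5, 6, 7]
    y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9, 30.0]  # last point is a planted outlier
    print(likelihood_displacement(x, y))         # LD is by far largest for the outlier
    ```

    Deleting the outlier lets the remaining points fit almost perfectly with a tiny error variance, which makes the outlier's full-data residual extremely improbable and its displacement enormous relative to the others.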

  8. Gaussian copula as a likelihood function for environmental models

    Science.gov (United States)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, defined as the probability of the observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods currently in use employ Gaussian processes as a likelihood function because of their favourable analytical properties. The Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an
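    As a hedged sketch of the general idea (not the authors' implementation), a bivariate Gaussian copula over rank-transformed model errors can act as a pseudo-likelihood for lag-1 error dependence; `statistics.NormalDist` supplies the normal quantile function:

    ```python
    import math
    from statistics import NormalDist

    def gaussian_copula_log_density(u, rho):
        """Log density of a bivariate Gaussian copula with correlation rho at u = (u1, u2)."""
        z1, z2 = (NormalDist().inv_cdf(ui) for ui in u)
        det = 1.0 - rho ** 2
        return -0.5 * math.log(det) \
            - (rho ** 2 * (z1 ** 2 + z2 ** 2) - 2.0 * rho * z1 * z2) / (2.0 * det)

    def copula_pseudo_loglik(errors, rho):
        """Pseudo log-likelihood of lag-1 dependence in model errors:
        empirical-CDF (rank) margins plus a Gaussian copula on consecutive pairs.
        Assumes distinct error values and rho in (-1, 1)."""
        n = len(errors)
        rank = {e: (r + 1) / (n + 1) for r, e in enumerate(sorted(errors))}
        u = [rank[e] for e in errors]  # pseudo-observations in (0, 1)
        return sum(gaussian_copula_log_density((u[i], u[i + 1]), rho)
                   for i in range(n - 1))
    ```

    Because the margins come from ranks, no Box-Cox-style transformation parameters are needed: positively autocorrelated error series favor rho > 0 under this pseudo-likelihood, and rho = 0 reduces every term to zero (independence).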

  9. The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: cosmological analysis of the DR12 galaxy sample

    Science.gov (United States)

    Alam, Shadab; Ata, Metin; Bailey, Stephen; Beutler, Florian; Bizyaev, Dmitry; Blazek, Jonathan A.; Bolton, Adam S.; Brownstein, Joel R.; Burden, Angela; Chuang, Chia-Hsun; Comparat, Johan; Cuesta, Antonio J.; Dawson, Kyle S.; Eisenstein, Daniel J.; Escoffier, Stephanie; Gil-Marín, Héctor; Grieb, Jan Niklas; Hand, Nick; Ho, Shirley; Kinemuchi, Karen; Kirkby, David; Kitaura, Francisco; Malanushenko, Elena; Malanushenko, Viktor; Maraston, Claudia; McBride, Cameron K.; Nichol, Robert C.; Olmstead, Matthew D.; Oravetz, Daniel; Padmanabhan, Nikhil; Palanque-Delabrouille, Nathalie; Pan, Kaike; Pellejero-Ibanez, Marcos; Percival, Will J.; Petitjean, Patrick; Prada, Francisco; Price-Whelan, Adrian M.; Reid, Beth A.; Rodríguez-Torres, Sergio A.; Roe, Natalie A.; Ross, Ashley J.; Ross, Nicholas P.; Rossi, Graziano; Rubiño-Martín, Jose Alberto; Saito, Shun; Salazar-Albornoz, Salvador; Samushia, Lado; Sánchez, Ariel G.; Satpathy, Siddharth; Schlegel, David J.; Schneider, Donald P.; Scóccola, Claudia G.; Seo, Hee-Jong; Sheldon, Erin S.; Simmons, Audrey; Slosar, Anže; Strauss, Michael A.; Swanson, Molly E. C.; Thomas, Daniel; Tinker, Jeremy L.; Tojeiro, Rita; Magaña, Mariana Vargas; Vazquez, Jose Alberto; Verde, Licia; Wake, David A.; Wang, Yuting; Weinberg, David H.; White, Martin; Wood-Vasey, W. Michael; Yèche, Christophe; Zehavi, Idit; Zhai, Zhongxu; Zhao, Gong-Bo

    2017-09-01

    We present cosmological results from the final galaxy clustering data set of the Baryon Oscillation Spectroscopic Survey, part of the Sloan Digital Sky Survey III. Our combined galaxy sample comprises 1.2 million massive galaxies over an effective area of 9329 deg2 and volume of 18.7 Gpc3, divided into three partially overlapping redshift slices centred at effective redshifts 0.38, 0.51 and 0.61. We measure the angular diameter distance DM and Hubble parameter H from the baryon acoustic oscillation (BAO) method, in combination with a cosmic microwave background prior on the sound horizon scale, after applying reconstruction to reduce non-linear effects on the BAO feature. Using the anisotropic clustering of the pre-reconstruction density field, we measure the product DMH from the Alcock-Paczynski (AP) effect and the growth of structure, quantified by fσ8(z), from redshift-space distortions (RSD). We combine individual measurements presented in seven companion papers into a set of consensus values and likelihoods, obtaining constraints that are tighter and more robust than those from any one method; in particular, the AP measurement from sub-BAO scales sharpens constraints from post-reconstruction BAOs by breaking degeneracy between DM and H. Combined with Planck 2016 cosmic microwave background measurements, our distance scale measurements simultaneously imply curvature ΩK = 0.0003 ± 0.0026 and a dark energy equation-of-state parameter w = -1.01 ± 0.06, in strong affirmation of the spatially flat cold dark matter (CDM) model with a cosmological constant (ΛCDM). Our RSD measurements of fσ8, at 6 per cent precision, are similarly consistent with this model. When combined with supernova Ia data, we find H0 = 67.3 ± 1.0 km s-1 Mpc-1 even for our most general dark energy model, in tension with some direct measurements. Adding extra relativistic species as a degree of freedom loosens the constraint only slightly, to H0 = 67.8 ± 1.2 km s-1 Mpc-1. Assuming flat

  10. Exclusion probabilities and likelihood ratios with applications to mixtures.

    Science.gov (United States)

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework, including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
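    As a hedged illustration of the no-dropout setting the paper starts from: under Hardy-Weinberg equilibrium a random person is not excluded at a locus only if both of their alleles appear among those observed in the mixture, so the per-locus RMNE is the squared total frequency of the observed alleles, multiplied across independent loci. The allele frequencies below are made up.

    ```python
    def rmne(locus_allele_freqs):
        """Random Man Not Excluded probability under a no-dropout model:
        each entry lists the population frequencies of the alleles observed
        in the mixture at one locus; loci are treated as independent."""
        prob = 1.0
        for freqs in locus_allele_freqs:
            prob *= sum(freqs) ** 2  # both alleles of a random person fall in the observed set
        return prob

    # Two hypothetical loci, each with observed alleles of total frequency 0.3:
    print(rmne([[0.1, 0.2], [0.3]]))  # approximately 0.0081
    ```

    The LR, by contrast, conditions on a specific hypothesis pair and genotype model, which is why the two statistics only coincide in special cases such as the one averaged relationship proved in the paper.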

  11. Comorbidity and Karnofsky performance score are independent prognostic factors in stage III non-small-cell lung cancer: an institutional analysis of patients treated on four RTOG studies

    International Nuclear Information System (INIS)

    Firat, Selim; Byhardt, Roger W.; Gore, Elizabeth

    2002-01-01

    Purpose: To determine the prognostic role of comorbidity in Stage III non-small cell lung cancer (NSCLC) treated definitively with radiotherapy alone. Methods and Materials: A total of 112 patients with clinical Stage III NSCLC (American Joint Commission on Cancer 1997) enrolled in four Radiation Therapy Oncology Group studies (83-11, 84-03, 84-07, and 88-08 nonchemotherapy arms) at a single institution were analyzed retrospectively for overall survival (OS) and comorbidity. Of the 112 patients, 105 (94%) completed their assigned radiotherapy. The median assigned dose was 50.4 Gy to the lymphatics (range 45-50.4 Gy) and 70.2 Gy to the primary tumor (range 60-79.2 Gy). Comorbidity was rated retrospectively using the Cumulative Illness Rating Scale for Geriatrics (CIRS-G) and Charlson scales. Karnofsky performance scores (KPSs) and weight loss were prospectively recorded. Because only 8 patients had a KPS of 70). Results: The median survival was 10.39 months (range 7.87-12.91). The 2-, 3-, and 5-year OS rate was 20.5%, 12.5%, and 7.1%, respectively. On univariate analysis, clinical stage (IIIA vs. IIIB) was found to be a statistically significant factor influencing OS (p=0.026), and the histologic features, grade, tumor size as measured on CT scans, age, tobacco use, weight loss ≥5%, and total dose delivered to the primary tumor were not. A KPS of ≤70 (p=0.001), the presence of a CIRS-G score of 4 (extremely severe; p=0.0002), and a severity index of >2 (p 2 were independently associated with inferior OS; clinical tumor stage was not found to be an independent prognostic factor. Conclusion: KPS and comorbidity are important independent prognostic factors in Stage III NSCLC. Comorbidity should be included in protocols studying advanced stage NSCLC and used for stratification

  12. Quantifying the Establishment Likelihood of Invasive Alien Species Introductions Through Ports with Application to Honeybees in Australia.

    Science.gov (United States)

    Heersink, Daniel K; Caley, Peter; Paini, Dean R; Barry, Simon C

    2016-05-01

    The cost of an uncontrolled incursion of invasive alien species (IAS) arising from undetected entry through ports can be substantial, and knowledge of port-specific risks is needed to help allocate limited surveillance resources. Quantifying the establishment likelihood of such an incursion requires quantifying the ability of a species to enter, establish, and spread. Estimation of the approach rate of IAS into ports provides a measure of likelihood of entry. Data on the approach rate of IAS are typically sparse, and the combinations of risk factors relating to country of origin and port of arrival diverse. This presents challenges to making formal statistical inference on establishment likelihood. Here we demonstrate how these challenges can be overcome with judicious use of mixed-effects models when estimating the incursion likelihood into Australia of the European (Apis mellifera) and Asian (A. cerana) honeybees, along with the invasive parasites of biosecurity concern they host (e.g., Varroa destructor). Our results demonstrate how skewed the establishment likelihood is, with one-tenth of the ports accounting for 80% or more of the likelihood for both species. These results have been utilized by biosecurity agencies in the allocation of resources to the surveillance of maritime ports. © 2015 Society for Risk Analysis.

  13. Gaussian likelihood inference on data from trans-Gaussian random fields with Matérn covariance function

    KAUST Repository

    Yan, Yuan

    2017-07-13

    Gaussian likelihood inference has been studied and used extensively in both statistical theory and applications due to its simplicity. However, in practice, the assumption of Gaussianity is rarely met in the analysis of spatial data. In this paper, we study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters of the Matérn covariance model. Using Monte Carlo simulations, we generate spatial data from a Tukey g-and-h random field, a flexible trans-Gaussian random field, with the Matérn covariance function, where g controls skewness and h controls tail heaviness. We use maximum likelihood based on the multivariate Gaussian distribution to estimate the parameters of the Matérn covariance function. We illustrate the effects of non-Gaussianity of the data on the estimated covariance function by means of functional boxplots. Thanks to our tailored simulation design, a comparison of the maximum likelihood estimator under both the increasing and fixed domain asymptotics for spatial data is performed. We find that the maximum likelihood estimator based on the Gaussian likelihood is overall satisfactory and preferable to the non-distribution-based weighted least squares estimator for data from the Tukey g-and-h random field. We also present results for Gaussian kriging based on Matérn covariance estimates with data from the Tukey g-and-h random field and observe an overall satisfactory performance.

  14. Gaussian likelihood inference on data from trans-Gaussian random fields with Matérn covariance function

    KAUST Repository

    Yan, Yuan; Genton, Marc G.

    2017-01-01

    Gaussian likelihood inference has been studied and used extensively in both statistical theory and applications due to its simplicity. However, in practice, the assumption of Gaussianity is rarely met in the analysis of spatial data. In this paper, we study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters of the Matérn covariance model. Using Monte Carlo simulations, we generate spatial data from a Tukey g-and-h random field, a flexible trans-Gaussian random field, with the Matérn covariance function, where g controls skewness and h controls tail heaviness. We use maximum likelihood based on the multivariate Gaussian distribution to estimate the parameters of the Matérn covariance function. We illustrate the effects of non-Gaussianity of the data on the estimated covariance function by means of functional boxplots. Thanks to our tailored simulation design, a comparison of the maximum likelihood estimator under both the increasing and fixed domain asymptotics for spatial data is performed. We find that the maximum likelihood estimator based on the Gaussian likelihood is overall satisfactory and preferable to the non-distribution-based weighted least squares estimator for data from the Tukey g-and-h random field. We also present results for Gaussian kriging based on Matérn covariance estimates with data from the Tukey g-and-h random field and observe an overall satisfactory performance.
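    The Tukey g-and-h construction in this record is a pointwise transform of Gaussian draws; a sketch of the standard form (independent draws here, rather than the Matérn-correlated field of the paper) is:

    ```python
    import math
    import random

    def tukey_g_and_h(z, g, h):
        """Map a standard normal value z to a Tukey g-and-h value:
        g controls skewness, h controls tail heaviness; g = h = 0 is the identity."""
        skew = z if g == 0.0 else (math.exp(g * z) - 1.0) / g
        return skew * math.exp(h * z * z / 2.0)

    # A skewed, heavy-tailed sample obtained by transforming standard normal draws:
    random.seed(1)
    sample = [tukey_g_and_h(random.gauss(0.0, 1.0), g=0.5, h=0.1) for _ in range(5)]
    print(sample)
    ```

    To reproduce the paper's setup one would instead transform a Gaussian random field with Matérn covariance, so the g-and-h map distorts the margins while the spatial dependence comes from the underlying field.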

  15. Survival and human papillomavirus in oropharynx cancer in TAX 324: a subset analysis from an international phase III trial.

    Science.gov (United States)

    Posner, M R; Lorch, J H; Goloubeva, O; Tan, M; Schumaker, L M; Sarlis, N J; Haddad, R I; Cullen, K J

    2011-05-01

    The association between human papillomavirus (HPV) and overall survival (OS) in oropharynx cancer (OPC) was retrospectively examined in TAX 324, a phase III trial of sequential therapy for locally advanced head and neck cancer. Accrual for TAX 324 was completed in 2003 and data updated through 2008. Pretherapy tumor biopsies were studied by PCR for human papillomavirus type 16 and linked to OS, progression-free survival (PFS) and demographics. Of 264 patients with OPC, 111 (42%) had evaluable biopsies; 56 (50%) were HPV+ and 55 (50%) were HPV-. HPV+ patients were significantly younger (54 versus 58 years, P = 0.02), had T1/T2 primary cancers (49% versus 20%, P = 0.001), and had a performance status of zero (77% versus 49%, P = 0.003). OS and PFS were better for HPV+ patients (OS, hazard ratio = 0.20, P < 0.0001). Local-regional failure was less in HPV+ patients (13% versus 42%, P = 0.0006); at 5 years, 82% of HPV+ patients were alive compared with 35% of HPV- patients (P < 0.0001). HPV+ OPC has a different biology compared with HPV- OPC; 5-year OS, PFS, and local-regional control are unprecedented. These results support the possibility of selectively reducing therapy and long-term morbidity in HPV+ OPC while preserving survival and approaching HPV- disease with more aggressive treatment.

  16. Sₙ analysis of the TRX metal lattices with ENDF/B version III data

    International Nuclear Information System (INIS)

    Wheeler, F.J.

    1975-01-01

    Two critical assemblies, designated as thermal-reactor benchmarks TRX-1 and TRX-2 for ENDF/B data testing, were analyzed using the one-dimensional Sₙ-theory code SCAMP. The two assemblies were simple lattices of aluminum-clad, uranium-metal fuel rods in triangular arrays with D₂O as moderator and reflector. The fuel was low-enriched (1.3 percent ²³⁵U), 0.387 inch in diameter, and had an active height of 48 inches. The volume ratio of water to uranium was 2.35 for the TRX-1 lattice and 4.02 for TRX-2. Full-core Sₙ calculations based on Version III data were performed for these assemblies, and the results obtained were compared with the measured values of the multiplication factors, the ratio of epithermal-to-thermal neutron capture in ²³⁸U, the ratio of epithermal-to-thermal fission in ²³⁵U, the ratio of ²³⁸U fission to ²³⁵U fission, and the ratio of capture in ²³⁸U to fission in ²³⁵U. Reaction rates were obtained from a central region of the full-core problems. Multigroup cross sections for the reactor calculation were obtained from Sₙ cell calculations with resonance self-shielding calculated using the RABBLE treatment. The results of the analyses are generally consistent with results obtained by other investigators

  17. A DEEP CHANDRA ACIS STUDY OF NGC 4151. III. THE LINE EMISSION AND SPECTRAL ANALYSIS OF THE IONIZATION CONE

    International Nuclear Information System (INIS)

    Wang, Junfeng; Fabbiano, Giuseppina; Elvis, Martin; Risaliti, Guido; Karovska, Margarita; Zezas, Andreas; Mundell, Carole G.; Dumas, Gaelle; Schinnerer, Eva

    2011-01-01

    This paper is the third in a series in which we present deep Chandra ACIS-S imaging spectroscopy of the Seyfert 1 galaxy NGC 4151, devoted to the study of its complex circumnuclear X-ray emission. Emission features in the soft X-ray spectrum of the bright extended emission (L(0.3–2 keV) ∼ 10⁴⁰ erg s⁻¹) at r > 130 pc (2'') are consistent with blended brighter O VII, O VIII, and Ne IX lines seen in the Chandra HETGS and XMM-Newton RGS spectra below 2 keV. We construct emission line images of these features and find good morphological correlations with the narrow-line region clouds mapped in [O III] λ5007. Self-consistent photoionization models provide good descriptions of the spectra of the large-scale emission, as well as resolved structures, supporting the dominant role of nuclear photoionization, although displacement of optical and X-ray features implies a more complex medium. Collisionally ionized emission is estimated to be ∼ M☉ yr⁻¹ at 130 pc, and the kinematic power of the ionized outflow is 1.7 × 10⁴¹ erg s⁻¹, approximately 0.3% of the bolometric luminosity of the active nucleus in NGC 4151.

  18. Complexes of lanthanum(III), cerium(III), samarium(III) and dysprosium(III) with substituted piperidines

    Energy Technology Data Exchange (ETDEWEB)

    Manhas, B S; Trikha, A K; Singh, H; Chander, M

    1983-11-01

    Complexes of the general formulae M₂Cl₆(L)₃·C₂H₅OH and M₂(NO₃)₆(L)₂·CH₃OH have been synthesised by the reactions of chlorides and nitrates of La(III), Ce(III), Sm(III) and Dy(III) with 2-methylpiperidine, 3-methylpiperidine and 4-methylpiperidine. These complexes have been characterised on the basis of their elemental analysis and IR and electronic reflectance spectra. IR spectral data indicate the presence of coordinated ethanol and methanol molecules and bidentate nitrate groups. Coordination numbers of the metal ions vary from 5 to 8. 19 refs.

  19. Direct detection of dark matter with the EDELWEISS-III experiment: signals induced by charge trapping, data analysis and characterization of cryogenic detector sensitivity to low-mass WIMPs

    International Nuclear Information System (INIS)

    Arnaud, Quentin

    2015-01-01

    The EDELWEISS-III experiment is dedicated to direct dark matter searches aiming at detecting WIMPs. These massive particles should account for more than 80% of the mass of the Universe and be detectable through their elastic scattering on nuclei constituting the absorber of a detector. As the expected WIMP event rate is extremely low ( 20 GeV). Finally, a study dedicated to the optimization of solid cryogenic detectors for low-mass WIMP searches is presented. This study is performed on simulated data using a statistical test based on a profiled likelihood ratio that allows for statistical background subtraction and spectral shape discrimination. This study, combined with results from Run308, has led the EDELWEISS experiment to favor low-mass WIMP searches.

  20. The Effect of early physiotherapy on the recovery of mandibular function after orthognathic surgery for Class III correction: part I--jaw-motion analysis.

    Science.gov (United States)

    Teng, Terry Te-Yi; Ko, Ellen Wen-Ching; Huang, Chiung Shing; Chen, Yu-Ray

    2015-01-01

    The aim of this prospective study was to compare the mandibular range of motion in Class III patients with and without early physiotherapy after orthognathic surgery (OGS). This study consisted of 63 Class III patients who underwent 2-jaw OGS. The experimental group comprised 31 patients who received early systematic physical rehabilitation. The control group consisted of 32 patients who did not have physical rehabilitation. Twelve variables of 3-dimensional (3D) jaw-motion analysis (JMA) were recorded before surgery (T1) and 6 weeks (T2) and 6 months (T3) after surgery. A 2-sample t test was conducted to compare the JMA results between the two groups at different time points. At T2, the JMA data were measured to be 77.5%-145.7% of presurgical values in the experimental group, and 60.3%-90.6% in the control group. At T3, the measurements were 112.2%-179.2% of presurgical values in the experimental group, and 77.6%-157.2% in the control group. The patients in the experimental group exhibited more favorable recovery than did those in the control group, from T1 to T2 and T1 to T3. However, after termination of physiotherapy, no significant difference in the extent of recovery was observed between groups up to 6 months after OGS. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  1. Effective representation of amide III, II, I, and A modes on local vibrational modes: Analysis of ab initio quantum calculation results.

    Science.gov (United States)

    Hahn, Seungsoo

    2016-10-28

    The Hamiltonian matrix for the first excited vibrational states of a protein can be effectively represented by local vibrational modes constituting amide III, II, I, and A modes to simulate various vibrational spectra. Methods for obtaining the Hamiltonian matrix from ab initio quantum calculation results are discussed, where the methods consist of three steps: selection of local vibrational mode coordinates, calculation of a reduced Hessian matrix, and extraction of the Hamiltonian matrix from the Hessian matrix. We introduce several methods for each step. The methods were assessed based on the density functional theory calculation results of 24 oligopeptides with four different peptide lengths and six different secondary structures. The completeness of a Hamiltonian matrix represented in the reduced local mode space is improved by adopting a specific atom group for each amide mode and reducing the effect of ignored local modes. The calculation results are also compared to previous models using C=O stretching vibration and transition dipole couplings. We found that local electric transition dipole moments of the amide modes are mainly bound on the local peptide planes. Their direction and magnitude are well conserved except for amide A modes, which show large variation. In contrast to amide I modes, the vibrational coupling constants of amide III, II, and A modes obtained by analysis of a dipeptide are not transferable to oligopeptides with the same secondary conformation because coupling constants are affected by the surrounding atomic environment.

  2. Fatigue crack growth in mixed mode I+II+III non-proportional loading conditions in a 316 stainless steel: experimental analysis and modelling of the effects of crack tip plasticity

    International Nuclear Information System (INIS)

    Fremy, F.

    2012-01-01

    This thesis deals with fatigue crack growth in non-proportional variable amplitude mixed mode I + II + III loading conditions and analyses the effects of internal stresses stemming from the confinement of the plastic zone in small scale yielding conditions. The tests showed that there are antagonistic long-distance and short-distance effects of the loading history on fatigue crack growth. The shape of the loading path, and not only the maximum and minimum values in this path, is crucial and, by comparison, the effects of contact and friction are of lesser importance. Internal stresses play a major role on the fatigue crack growth rate and on the crack path. An approach was developed to analyze the elastic-plastic behavior of a representative section of the crack front using finite element analysis (FEA). A model reduction technique is used to extract the relevant information from the FE results. To do so, the velocity field is partitioned into mode I, II, III elastic and plastic components, each component being characterized by an intensity factor and a fixed spatial distribution. The calculations were used to select seven loading paths in I + II and I + II + III mixed mode conditions, which all have the same amplitudes for each mode, the same maximum, minimum and average values. These paths are supposed to be equivalent in the sense of common failure criteria, but differ significantly when the elastic-plastic behavior of the material is accounted for. The results of finite element simulations and of simulations using a simplified model proposed in this thesis are both in agreement with experimental results. The approach was also used to discuss the role of mode III loading steps. Since the material behavior is nonlinear, the nominal loading direction does not coincide with the plastic flow direction. Adding a mode III loading step in a mode I+II fatigue cycle may, in some cases, significantly modify the behaviour of the crack (crack growth rate, crack path and plastic flow). (author)

  3. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    International Nuclear Information System (INIS)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.; Choi, Yun-Young; Kim, Juhan; Kim, Sungsoo S.; Speare, Robert; Brownstein, Joel R.; Brinkmann, J.

    2014-01-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M☉. We study the topology at two smoothing lengths: R_G = 21 h⁻¹ Mpc and R_G = 34 h⁻¹ Mpc. The genus topology studied at the R_G = 21 h⁻¹ Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.
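The random-phase comparison above rests on the analytic genus curve of a Gaussian random field, g(ν) ∝ (1 − ν²)exp(−ν²/2), against which an observed genus curve is tested. A small illustrative sketch (amplitude and threshold grid are arbitrary choices, not values from the paper):

```python
import numpy as np

def gaussian_genus_curve(nu, amplitude=1.0):
    """Genus per unit volume of a 3D Gaussian random field at density
    threshold nu (in units of sigma): g(nu) = A * (1 - nu**2) * exp(-nu**2/2)."""
    nu = np.asarray(nu, dtype=float)
    return amplitude * (1.0 - nu**2) * np.exp(-nu**2 / 2.0)

nu = np.linspace(-3.0, 3.0, 121)
g = gaussian_genus_curve(nu)
# Positive genus near nu = 0 indicates sponge-like topology at the median
# density; negative genus beyond |nu| = 1 indicates isolated clusters/voids.
```

Shifts and asymmetries of a measured genus curve relative to this symmetric template are what the fitting formula mentioned in the abstract is designed to capture.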

  4. Crystallization and preliminary X-ray crystallographic analysis of enoyl-ACP reductase III (FabL) from Bacillus subtilis

    International Nuclear Information System (INIS)

    Kim, Kook-Han; Park, Joon Kyu; Ha, Byung Hak; Moon, Jin Ho; Kim, Eunice EunKyeong

    2007-01-01

    Enoyl-ACP reductase III (FabL) from B. subtilis has been overexpressed, purified and crystallized. The crystal belongs to space group P622, with unit-cell parameters a = b = 139.56, c = 62.75 Å, α = β = 90, γ = 120°, and data were collected to 2.5 Å resolution using synchrotron radiation. Enoyl-[acyl-carrier protein] reductase (enoyl-ACP reductase; ENR) is a key enzyme in type II fatty-acid synthase that catalyzes the last step in each elongation cycle. It has been considered as an antibiotic target since it is an essential enzyme in bacteria. However, recent studies indicate that some pathogens have more than one ENR. Bacillus subtilis is reported to have two ENRs, namely BsFabI and BsFabL. While BsFabI is similar to other FabIs, BsFabL shows very little sequence similarity and is NADPH-dependent instead of NADH-dependent as in the case of FabI. In order to understand these differences on a structural basis, BsFabL has been cloned, expressed and crystallized. The crystal belongs to space group P622, with unit-cell parameters a = b = 139.56, c = 62.75 Å, α = β = 90, γ = 120° and one molecule of FabL in the asymmetric unit. Data were collected using synchrotron radiation (beamline 4A at the Pohang Light Source, Korea). The crystal diffracted to 2.5 Å resolution.

  5. Nasal changes after orthognathic surgery for patients with prognathism and Class III malocclusion: analysis using three-dimensional photogrammetry.

    Science.gov (United States)

    Worasakwutiphong, Saran; Chuang, Ya-Fang; Chang, Hsin-Wen; Lin, Hsiu-Hsia; Lin, Pei-Ju; Lo, Lun-Jou

    2015-02-01

    Orthognathic surgery alters the position of the maxilla and mandible, and consequently changes the nasal shape. The nasal change remains a concern to Asian patients. The aim of this study was to measure the nasal changes using a novel three-dimensional photographic imaging method. A total of 38 patients with Class III malocclusion and prognathism were enrolled. All patients underwent two-jaw surgery with the standard technique. A nasal alar cinching suture was included at the end of the procedure. Facial landmarks and nasal morphology were defined and measured from pre- and postoperative three-dimensional photographic images. Intra-rater errors on landmark identification were controlled. Patients' reports of perceptual nasal changes were recorded. The average width of the alar base and subalare remained similar after surgery. Alar width was increased by 0.74 mm. Nasal height and length remained the same. Nasolabial angle increased significantly. The area of nostril show revealed a significant increase and was correlated with a decrease of columella inclination. Nasal tip projection decreased significantly, by 1.99 mm. Preoperative nasal morphology was different between patients with and without cleft lip/palate, but most nasal changes were concordant. In the self-perception, 37% of patients reported improved nasal appearance, 58% reported no change, and 5% were not satisfied with the nasal changes. After the surgery, characteristic nasal changes occurred with an increase of nasolabial angle and nostril show, but a preserved nasal width. The majority of patients did not perceive adverse nasal changes. Copyright © 2014. Published by Elsevier B.V.

  6. Mutation Analysis of 16 Mucolipidosis II and III Alpha/Beta Chinese Children Revealed Genotype-Phenotype Correlations.

    Directory of Open Access Journals (Sweden)

    Shuang Liu

    Mucolipidosis II and III alpha/beta are autosomal recessive diseases caused by mutations in the GNPTAB gene which encodes the α and β subunits of the N-acetylglucosamine-1-phosphotransferase. Clinically, mucolipidosis II (MLII is characterized by severe developmental delay, coarse facial features, skeletal deformities, and other systemic involvement. In contrast, MLIII alpha/beta is a much milder disorder, the symptoms of which include progressive joint stiffness, short stature, and scoliosis. To study the relationship between the genotypes and phenotypes of the MLII and MLIII alpha/beta patients, we analyzed the GNPTAB gene in 16 Chinese MLII and MLIII alpha/beta patients. We collected and analyzed the patients' available clinical data and all showed clinical features typical of MLII or MLIII alpha/beta. Moreover, the activity of several lysosomal enzymes was measured in the plasma and finally the GNPTAB gene was sequenced. We detected 30 mutant alleles out of 32 alleles in our patients. These include 10 new mutations (c.99delC, c.118-1G>A, c.523_524delAAinsG, c.1212C>G, c.2213C>A, c.2345C>T, c.2356C>T, c.2455G>T, c.2821dupA, and c.3136-2A>G) and 5 previously reported mutations (c.1071G>A, c.1090C>T, c.2715+1G>A, c.2550_2554delGAAA, and c.3613C>T. The most frequent mutation was the splicing mutation c.2715+1G>A, which accounted for 28% of the mutations. The majority of the mutations reported in the Chinese patients (57% were located on exon 13 or in its intronic flanking regions.

  7. [Construction and prokaryotic expression of recombinant gene EGFRvIII HBcAg and immunogenicity analysis of the fusion protein].

    Science.gov (United States)

    Duan, Xiao-yi; Wang, Jian-sheng; Guo, You-min; Han, Jun-li; Wang, Quan-ying; Yang, Guang-xiao

    2007-01-01

    To construct recombinant prokaryotic expression plasmid pET28a(+)/c-PEP-3-c and evaluate the immunogenicity of the fusion protein. cDNA fragment encoding PEP-3 was obtained from pGEM-T Easy/PEP-3 and inserted into recombinant plasmid pGEMEX/HBcAg. Then it was subcloned in prokaryotic expression vector and transformed into E.coli BL21(DE3). The fusion protein was expressed by inducing IPTG and purified by Ni²⁺-NTA affinity chromatography. BALB/c mice were immunized with fusion protein and the antibody titre was determined by indirect ELISA. The recombinant gene was confirmed to be correct by restriction enzyme digestion and DNA sequencing. After prokaryotic expression, fusion protein existed in sediment and accounted for 56% of all bacterial lysate. The purified product accounted for 92% of all protein and its concentration was 8 g/L. The antibody titre in blood serum reached 1:16,000 after the fourth immunization and reached 1:2.56×10⁵ after the sixth immunization. The titre of anti-PEP-3 antibody reached 1:1.28×10⁵ and the titre of anti-HBcAg antibody was less than 1:4×10³. Fusion gene PEP-3-HBcAg is highly expressed in E.coli BL21. The expressed fusion protein can induce neutralizing antibody with high titer and specificity, which lays a foundation for the study of genetically engineering vaccine for malignant tumors with the high expression of EGFRvIII.

  8. Resource Use and Costs of Dengue: Analysis of Data from Phase III Efficacy Studies of a Tetravalent Dengue Vaccine.

    Science.gov (United States)

    El Fezzazi, Hanna; Branchu, Marie; Carrasquilla, Gabriel; Pitisuttithum, Punnee; Perroud, Ana Paula; Frago, Carina; Coudeville, Laurent

    2017-12-01

    A tetravalent dengue vaccine (CYD-TDV) has recently been approved in 12 countries in southeast Asia and Latin America for individuals aged 9-45 years or 9-60 years (age indication approvals vary by country) living in endemic areas. Data on utilization of medical and nonmedical resources as well as time lost from school and work were collected during the active phase of two phase III efficacy studies performed in 10 countries in the Asia-Pacific region and Latin America (NCT01373281; NCT01374516). We compared dengue-related resource utilization and costs among vaccinated and nonvaccinated participants. Country-specific unit costs were derived from available literature. There were 901 virologically confirmed dengue episodes among participants aged ≥ 9 years ( N = 25,826): corresponding to 373 episodes in the CYD-TDV group ( N = 17,230) and 528 episodes in the control group ( N = 8,596). Fewer episodes in the CYD-TDV group resulted in hospitalization than in the control group (7.0% versus 13.3%; P = 0.002), but both had a similar average length of stay of 4 days. Overall, a two-thirds reduction in resource consumption and missed school/work days was observed in the CYD-TDV group relative to the control group. The estimated direct and indirect cost (2014 I$) associated with dengue episodes per participant in the CYD-TDV group was 73% lower than in the control group (I$6.72 versus I$25.08); representing a saving of I$18.36 (95% confidence interval [CI]: 17.05-19.78) per participant with vaccination. This is the first study providing information on dengue costs among vaccinated individuals and direct confirmation that vaccination has the potential to reduce dengue illness costs.

  9. The evaluation of failure stress and released amount of fission product gas of power ramped rod by fuel behaviour analysis code 'FEMAXI-III'

    International Nuclear Information System (INIS)

    Yanagisawa, Kazuaki; Fujita, Misao

    1984-01-01

    Pellet-cladding interaction (PCI)-related in-pile failure of Zircaloy-sheathed fuel rods is generally considered to be caused by the combination of pellet-cladding mechanical interaction (PCMI) with fuel-cladding chemical interaction (FCCI). Understanding the basic mechanism of PCI-related fuel failure therefore requires the actual cladding hoop stress arising from the mechanical interaction and the released amount of fission product (FP) gas, the aggressive environmental agent, arising from the chemical interaction. This paper describes the results of a code analysis relating fuel failure to cladding hoop stress and to the amount of FP gas released under power-ramping conditions. Data from Halden (HBWR) and from Studsvik (R2) are used for the code analysis. The fuel behaviour analysis code FEMAXI-III is used as the analytical tool. The study revealed the following: (1) PCI-related fuel failure depends on the cladding hoop stress and the released amount of FP gas at power ramping. (2) Preliminary calculated threshold values of hoop stress and of released FP gas for PCI failure are, respectively, 330 MPa and 10% under the Halden condition, 190 MPa and 5% under the Inter-Ramp (BWR) condition, and 270 MPa and 14% under the Over-Ramp (PWR) condition. The calculated hoop stresses are in roughly the same range as those obtained from ex-reactor PCI-simulation tests reported in the published literature. (3) The FEMAXI-III code is verified mechanically using in-pile deformation data (diametral strain) from power-ramping tests undertaken by JAERI, and thermally using punctured FP gas data from post-irradiation examination of non-defected power-ramped fuel rods. The calculations show good agreement with both the mechanical and the thermal experimental data, supporting the validity of the code evaluation. (J.P.N.)

  10. Cardiac Toxicity After Radiotherapy for Stage III Non–Small-Cell Lung Cancer: Pooled Analysis of Dose-Escalation Trials Delivering 70 to 90 Gy

    Science.gov (United States)

    Eblan, Michael J.; Deal, Allison M.; Lipner, Matthew; Zagar, Timothy M.; Wang, Yue; Mavroidis, Panayiotis; Lee, Carrie B.; Jensen, Brian C.; Rosenman, Julian G.; Socinski, Mark A.; Stinchcombe, Thomas E.; Marks, Lawrence B.

    2017-01-01

    Purpose The significance of radiotherapy (RT) –associated cardiac injury for stage III non–small-cell lung cancer (NSCLC) is unclear, but higher heart doses were associated with worse overall survival in the Radiation Therapy Oncology Group (RTOG) 0617 study. We assessed the impact of heart dose in patients treated at our institution on several prospective dose-escalation trials. Patients and Methods From 1996 to 2009, 127 patients with stage III NSCLC (Eastern Cooperative Oncology Group performance status, 0 to 1) received dose-escalated RT to 70 to 90 Gy (median, 74 Gy) in six trials. RT plans and cardiac doses were reviewed. Records were reviewed for the primary end point: symptomatic cardiac events (symptomatic pericardial effusion, acute coronary syndrome, pericarditis, significant arrhythmia, and heart failure). Cardiac risk was assessed by noting baseline coronary artery disease and calculating the WHO/International Society of Hypertension score. Competing risks analysis was used. Results In all, 112 patients were analyzed. Median follow-up for surviving patients was 8.8 years. Twenty-six patients (23%) had one or more events at a median of 26 months to first event (effusion [n = 7], myocardial infarction [n = 5], unstable angina [n = 3], pericarditis [n = 2], arrhythmia [n = 12], and heart failure [n = 1]). Heart doses (e.g., heart mean dose: hazard ratio, 1.03/Gy; P = .002), coronary artery disease (P < .001), and WHO/International Society of Hypertension score (P = .04) were associated with events on univariable analysis. Heart doses remained significant on multivariable analysis that accounted for baseline risk. Two-year competing risk–adjusted event rates for patients with heart mean dose < 10 Gy, 10 to 20 Gy, or ≥ 20 Gy were 4%, 7%, and 21%, respectively. Heart doses were not associated with overall survival. Conclusion Cardiac events were relatively common after high-dose thoracic RT and were independently associated with both heart dose and

  11. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient, with low, constant-in-time variance, and consequently are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
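The centering idea can be illustrated on a toy static example: the score-function (likelihood ratio) estimator of a parameter sensitivity, with the sample mean of the observable subtracted as a baseline. This one-dimensional Gaussian sketch is illustrative only; the paper's estimators target path-space likelihood ratios of stochastic dynamics.

```python
import numpy as np

def lr_sensitivity(theta, n=200_000, seed=1):
    """Estimate d/dtheta E[f(X)] for X ~ N(theta, 1), f(x) = x**2, via the
    likelihood ratio identity E[f(X) * d/dtheta log p(X; theta)], where the
    score is d/dtheta log p(x; theta) = x - theta."""
    rng = np.random.default_rng(seed)
    x = theta + rng.standard_normal(n)
    f, score = x**2, x - theta
    plain = np.mean(f * score)
    # Centered variant: subtracting the sample mean of f leaves the
    # expectation unchanged (since E[score] = 0) but reduces the variance.
    centered = np.mean((f - f.mean()) * score)
    return plain, centered

plain, centered = lr_sensitivity(theta=1.5)
# Exact sensitivity: d/dtheta (theta**2 + 1) = 2 * theta = 3.0
```

Both estimators are consistent for the same quantity; the centered one is the lower-variance choice, which is the property the abstract highlights in the dynamic setting.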

  12. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.

    Science.gov (United States)

    Karabatsos, George

    2018-06-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings that are implied by these axioms, together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set, and through the analysis of simulated data.
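The synthetic-likelihood device used above replaces an intractable likelihood with a Gaussian fitted to simulated summary statistics. A generic sketch under a toy Gaussian model (the `simulate` function and its summaries are placeholders, not the article's axiom-testing setup):

```python
import numpy as np

def synthetic_loglik(theta, s_obs, simulate, n_sims=200, rng=None):
    """Synthetic log-likelihood: simulate summary statistics under theta,
    fit a Gaussian to them, and evaluate the observed summaries s_obs
    under that fitted Gaussian."""
    rng = rng if rng is not None else np.random.default_rng()
    S = np.array([simulate(theta, rng) for _ in range(n_sims)])
    mu = S.mean(axis=0)
    cov = np.cov(S, rowvar=False) + 1e-9 * np.eye(S.shape[1])  # jitter
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet
                   + len(s_obs) * np.log(2 * np.pi))

# Toy model: data are N(theta, 1); summaries are (sample mean, sample variance).
def simulate(theta, rng, n=100):
    x = theta + rng.standard_normal(n)
    return np.array([x.mean(), x.var()])

rng = np.random.default_rng(2)
s_obs = simulate(1.0, rng)  # pseudo-observed data generated at theta = 1
lls = [synthetic_loglik(t, s_obs, simulate, rng=rng) for t in (0.0, 1.0, 2.0)]
# The synthetic likelihood should peak near the data-generating theta = 1.
```

In the article this plug-in Gaussian likelihood feeds an importance-sampling algorithm for approximate Bayesian inference; the sketch only shows the likelihood construction itself.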

  13. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang; Tong, Tiejun; Genton, Marc G.

    2017-01-01

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
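As a hedged illustration of the "summation of log-transformed squared t-statistics" structure (the exact statistic and its standardization in the paper may differ), a one-sample test under a diagonal-covariance working model can be sketched as follows: the LRT factorizes over coordinates, giving −2 log Λ = Σⱼ n·log(1 + tⱼ²/(n − 1)).

```python
import numpy as np

def diag_lrt_stat(X):
    """One-sample diagonal likelihood-ratio statistic for H0: mu = 0.
    Under a diagonal-covariance working model the LRT factorizes over
    coordinates: -2 log Lambda = sum_j n * log(1 + t_j**2 / (n - 1)),
    with t_j the usual one-sample t-statistic of column j."""
    n, p = X.shape
    t = X.mean(axis=0) / (X.std(axis=0, ddof=1) / np.sqrt(n))
    return float(np.sum(n * np.log1p(t**2 / (n - 1))))

rng = np.random.default_rng(3)
null_stat = diag_lrt_stat(rng.standard_normal((50, 500)))        # mu = 0
alt_stat = diag_lrt_stat(rng.standard_normal((50, 500)) + 0.3)   # shifted mean
# A shifted mean inflates every coordinate's t-statistic, so the
# log-transformed sum grows well beyond its null level.
```

Note the contrast drawn in the abstract: this is a sum of log-transformed squared t-statistics, not a direct sum of squared t-statistics as in diagonal Hotelling-type tests.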

  14. A Comprehensive Analysis in Terms of Molecule-Intrinsic, Quasi-Atomic Orbitals. III. The Covalent Bonding Structure of Urea.

    Science.gov (United States)

    West, Aaron C; Schmidt, Michael W; Gordon, Mark S; Ruedenberg, Klaus

    2015-10-15

    The analysis of molecular electron density matrices in terms of quasi-atomic orbitals, which was developed in previous investigations, is quantitatively exemplified by a detailed application to the urea molecule. The analysis is found to identify strong and weak covalent bonding interactions as well as intramolecular charge transfers. It yields a qualitative as well as quantitative ab initio description of the bonding structure of this molecule, which raises questions regarding some traditional rationalizations.

  15. A composite likelihood approach for spatially correlated survival data

    Science.gov (United States)

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
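    The pairwise construction can be sketched for the uncensored case. The FGM copula has density c(u, v) = 1 + θ(1 - 2u)(1 - 2v), and each pair of observations contributes its marginal log densities plus this copula term, with θ modeled through the pair's distance. The exponential margins and the exponential-decay link θ_ij = α·exp(-β·d_ij) below are illustrative choices, not the paper's specification:

```python
import numpy as np

def fgm_pair_loglik(u, v, theta):
    """Log density of the FGM copula: c(u, v) = 1 + theta*(1-2u)*(1-2v)."""
    return np.log(1.0 + theta * (1 - 2 * u) * (1 - 2 * v))

def composite_loglik(times, coords, rate, alpha, beta):
    """Pairwise composite log-likelihood for exponential survival margins
    linked by an FGM copula whose dependence parameter decays with
    geographic distance: theta_ij = alpha * exp(-beta * d_ij).
    Requires |alpha| <= 1 so every theta_ij is a valid FGM parameter.
    """
    times = np.asarray(times, dtype=float)
    coords = np.asarray(coords, dtype=float)
    n = len(times)
    u = 1.0 - np.exp(-rate * times)          # exponential CDF values
    marg = np.log(rate) - rate * times       # exponential log densities
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(coords[i] - coords[j])
            theta = alpha * np.exp(-beta * d)
            total += marg[i] + marg[j] + fgm_pair_loglik(u[i], u[j], theta)
    return total
```

    With α = 0 the copula term vanishes and the composite log-likelihood reduces to each marginal log density counted once per pair it belongs to, i.e. (n-1) times the sum of marginal terms.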

  17. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    Directory of Open Access Journals (Sweden)

    S. Sridevi

    2013-02-01

    Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots that obscure the true pixel intensities. In fetal ultrasound images, edges and local fine details are essential for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter must therefore suppress speckle noise proficiently while simultaneously preserving these features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical measures as tuning parameters and by using differently shaped quadrilateral kernels to estimate the noise-free pixel from its neighborhood. The performance of the Median, Kuwahara, Frost, Homogeneous mask, and Rayleigh maximum likelihood filters is compared with that of the proposed filter in terms of PSNR and image profile. The proposed filter surpasses the conventional filters.
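    The baseline being generalized can be sketched directly: for Rayleigh-distributed samples, the maximum likelihood estimate of the scale parameter from N neighborhood values is sqrt(Σx²/(2N)), and the filter replaces each pixel with that estimate. A plain square kernel is used below for simplicity; the paper's contribution, the quadrilateral kernel shapes and statistical tuning, is not reproduced here:

```python
import numpy as np

def rayleigh_ml_filter(img, k=3):
    """Despeckle by replacing each pixel with the Rayleigh maximum
    likelihood scale estimate from its k x k neighborhood:
    sigma_hat = sqrt(sum(x^2) / (2 * N)).  Square-kernel sketch only.
    """
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            win = padded[i:i + k, j:j + k]
            out[i, j] = np.sqrt(np.sum(win**2) / (2 * win.size))
    return out
```

    Note the deliberate bias of the estimator on noise-free input: a constant image of value c maps to c/sqrt(2), since the filter assumes the observed values are Rayleigh samples rather than clean intensities.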

  18. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    Energy Technology Data Exchange (ETDEWEB)

    Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C. [Space Sciences Laboratory, University of California, Berkeley (United States); Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y. [Institute of Astronomy, National Tsing Hua University, Taiwan (China); Jean, P.; Ballmoos, P. von [IRAP Toulouse (France); Lin, C.-H. [Institute of Physics, Academia Sinica, Taiwan (China); Amman, M. [Lawrence Berkeley National Laboratory (United States)

    2017-10-20

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
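    The contrast between the two approaches can be sketched for an ideal polarimeter. Instead of histogramming azimuthal scattering angles and fitting a sinusoid, the unbinned method evaluates the log-likelihood of every event under the ideal modulation p(η) = (1 + μP·cos(2(η - φ0))) / 2π and maximizes over the polarization fraction P and angle φ0. The modulation factor μ, the coarse grid search, and all names below are illustrative; a real instrument response, as used in the paper, replaces the ideal form:

```python
import numpy as np

def unbinned_polarization_fit(eta, mu=0.5):
    """Unbinned maximum likelihood fit of polarization fraction P and
    angle phi0 from azimuthal Compton scattering angles eta, assuming
    an ideal modulation p(eta) = (1 + mu*P*cos(2*(eta - phi0))) / (2*pi)
    with known modulation factor mu.  A coarse grid search stands in
    for a proper optimizer (sketch only).
    """
    eta = np.asarray(eta, dtype=float)
    best_ll, best_P, best_phi = -np.inf, 0.0, 0.0
    for P in np.linspace(0.0, 1.0, 51):
        for phi0 in np.linspace(0.0, np.pi, 91):
            ll = np.sum(np.log(
                (1 + mu * P * np.cos(2 * (eta - phi0))) / (2 * np.pi)))
            if ll > best_ll:
                best_ll, best_P, best_phi = ll, P, phi0
    return best_P, best_phi
```

    Because every event enters the likelihood with its exact angle, no information is lost to binning, which is the source of the sensitivity gain the abstract quantifies.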

  19. Deformation of log-likelihood loss function for multiclass boosting.

    Science.gov (United States)

    Kanamori, Takafumi

    2010-09-01

    The purpose of this paper is to study loss functions in multiclass classification. In classification problems, the decision function is estimated by minimizing an empirical loss function, and then, the output label is predicted by using the estimated decision function. We propose a class of loss functions which is obtained by a deformation of the log-likelihood loss function. There are four main reasons why we focus on the deformed log-likelihood loss function: (1) this is a class of loss functions which has not been deeply investigated so far, (2) in terms of computation, a boosting algorithm with a pseudo-loss is available to minimize the proposed loss function, (3) the proposed loss functions provide a clear correspondence between the decision functions and conditional probabilities of output labels, (4) the proposed loss functions satisfy the statistical consistency of the classification error rate which is a desirable property in classification problems. Based on (3), we show that the deformed log-likelihood loss provides a model of mislabeling which is useful as a statistical model of medical diagnostics. We also propose a robust loss function against outliers in multiclass classification based on our approach. The robust loss function is a natural extension of the existing robust loss function for binary classification. A model of mislabeling and a robust loss function are useful to cope with noisy data. Some numerical studies are presented to show the robustness of the proposed loss function. A mathematical characterization of the deformed log-likelihood loss function is also presented. Copyright 2010 Elsevier Ltd. All rights reserved.
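    The starting point of the deformation can be sketched concretely. The ordinary multiclass log-likelihood loss is the negative log of the softmax probability of the true class; one well-known family of deformations replaces log with the q-logarithm log_q(x) = (x^(1-q) - 1)/(1 - q), which recovers log as q → 1. Whether this particular family coincides with the paper's class of deformations is not claimed; the sketch only illustrates the idea of deforming the log-likelihood loss:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def log_q(x, q):
    """q-logarithm: (x^(1-q) - 1) / (1 - q); recovers log(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

def deformed_nll(z, y, q=1.0):
    """Deformed negative log-likelihood loss for score matrix z of shape
    (n, K) and integer labels y: -log_q of the softmax probability of
    the true class, averaged over the n examples.  q = 1 gives the
    ordinary log-likelihood (cross-entropy) loss.
    """
    p = softmax(z)[np.arange(len(y)), y]
    return -log_q(p, q).mean()
```

    Point (3) of the abstract is visible here: the softmax link ties each decision function directly to a conditional probability of the output labels, which the deformation then reshapes.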

  20. Bayesian interpretation of Generalized empirical likelihood by maximum entropy

    OpenAIRE

    Rochet , Paul

    2011-01-01

    We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...