WorldWideScience

Sample records for previous theoretical estimates

  1. Theoretical and Experimental Estimations of Volumetric Inductive Phase Shift in Breast Cancer Tissue

    Science.gov (United States)

    González, C. A.; Lozano, L. M.; Uscanga, M. C.; Silva, J. G.; Polo, S. M.

    2013-04-01

Impedance measurements based on magnetic induction have been proposed for breast cancer detection in several studies. This study evaluates, theoretically and experimentally, the use of a non-invasive technique based on magnetic induction for detection of patho-physiological conditions in breast cancer tissue associated with its volumetric electrical conductivity changes through inductive phase shift measurements. An induction coils-breast 3D pixel model was designed and tested. The model involves two circular coils coaxially centered and a human breast volume centrally placed with respect to the coils. A time-harmonic numerical simulation study addressed the effects of frequency-dependent electrical properties of tumoral tissue on the volumetric inductive phase shift of the breast model, measured with the circular coils as inductor and sensor elements. Experimentally, five female volunteer patients with infiltrating ductal carcinoma, previously diagnosed by the radiology and oncology departments of the Specialty Clinic for Women of the Mexican Army, were measured with an experimental inductive spectrometer and an ergonomic inductor-sensor coil designed to estimate the volumetric inductive phase shift in human breast tissue. Theoretical and experimental inductive phase shift estimations were developed at four frequencies: 0.01, 0.1, 1 and 10 MHz. The theoretical estimations were qualitatively in agreement with the experimental findings. Important increments in volumetric inductive phase shift measurements were evident at 0.01 MHz in both theoretical and experimental observations. The results suggest that the tested technique has the potential to detect pathological conditions in breast tissue associated with cancer by non-invasive monitoring. Further complementary studies are warranted to confirm the observations.

  2. Potential benefits of remote sensing: Theoretical framework and empirical estimate

    Science.gov (United States)

    Eisgruber, L. M.

    1972-01-01

A theoretical framework is outlined for estimating social returns from research and application of remote sensing. The approximate dollar magnitude of a particular application of remote sensing is given, namely estimates of corn, soybean, and wheat production. Finally, some comments are made on the limitations of this procedure and on the implications of the results.

  3. Theoretical estimates of maximum fields in superconducting resonant radio frequency cavities: stability theory, disorder, and laminates

    Science.gov (United States)

    Liarte, Danilo B.; Posen, Sam; Transtrum, Mark K.; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P.

    2017-03-01

Theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces are of key relevance to current and future accelerating cavities, especially those made of new higher-Tc materials such as Nb3Sn, NbN, and MgB2. Indeed, beyond the so-called superheating field H_sh, flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We present intuitive arguments and simple estimates for H_sh, and combine them with our previous rigorous calculations, which we summarize. We briefly discuss experimental measurements of the superheating field and compare them to our estimates. We explore the effects of materials anisotropy and the danger of disorder in nucleating vortex entry. Will we need to control surface orientation in the layered compound MgB2? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. Flux entering a laminate can lead to so-called pancake vortices; we consider the physics of the dislocation motion and potential re-annihilation or stabilization of these vortices after their entry.

  4. Theoretical and experimental estimates of the Peierls stress

    CSIR Research Space (South Africa)

    Nabarro, FRN

    1997-03-01

Full Text Available - ...considered in its original derivation. It is argued that the conditions of each type of experiment determine whether the P-N or the H formula is appropriate. §2. THEORETICAL. Peierls's original estimate was based on a simple cubic lattice with elastic isotropy and Poisson's ratio ν. The result was σ ≈ 2μ exp[-4π/(1 - ν)]. (1) This value is so small that a detailed discussion of its accuracy would be pointless. Nabarro (1947) corrected an algebraic error in Peierls's calculation...
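As a quick illustration of how small this estimate is, the simple-cubic Peierls formula can be evaluated numerically. This is a sketch assuming the standard form σ ≈ 2μ exp[-4π/(1 - ν)]; the exact prefactor in the original derivation may differ:

```python
import math

def peierls_stress_ratio(nu: float) -> float:
    """Peierls's simple-cubic estimate of the normalized Peierls stress,
    sigma/mu ~ 2 * exp(-4*pi / (1 - nu)), for Poisson's ratio nu."""
    return 2.0 * math.exp(-4.0 * math.pi / (1.0 - nu))

# For nu = 0.3 (typical of metals) the ratio is of order 1e-8, i.e. far
# below measured lattice friction stresses, which is why the excerpt calls
# a detailed discussion of its accuracy pointless.
ratio = peierls_stress_ratio(0.3)
```

Note how sensitive the estimate is to Poisson's ratio: the exponent -4π/(1 - ν) changes by several units between ν = 0.2 and ν = 0.4.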

  5. Information theoretic quantification of diagnostic uncertainty.

    Science.gov (United States)

    Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T

    2012-01-01

Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes' theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
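The Bayes'-rule step and the information-theoretic quantification of uncertainty described in this abstract can be sketched in a few lines. This is a minimal illustration; the sensitivity, specificity, and pre-test probability values are hypothetical:

```python
import math

def post_test_probability(pre: float, sens: float, spec: float,
                          positive: bool) -> float:
    """Bayes' rule for a binary diagnostic test: update the pre-test
    probability given a positive or negative result."""
    if positive:
        num = sens * pre
        den = num + (1.0 - spec) * (1.0 - pre)
    else:
        num = (1.0 - sens) * pre
        den = num + spec * (1.0 - pre)
    return num / den

def binary_entropy(p: float) -> float:
    """Diagnostic uncertainty in bits: 0 = certainty, 1 = maximal."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

pre = 0.20                                   # hypothetical pre-test probability
post = post_test_probability(pre, sens=0.90, spec=0.80, positive=True)
uncertainty_before = binary_entropy(pre)     # bits
uncertainty_after = binary_entropy(post)     # bits
```

Comparing the two entropies shows the point the essay makes: a test result can move the probability substantially yet leave the clinician closer to maximal uncertainty (probability near 0.5) than before.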

  6. Theoretical estimates of spherical and chromatic aberration in photoemission electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Fitzgerald, J.P.S., E-mail: fit@pdx.edu; Word, R.C.; Könenkamp, R.

    2016-01-15

    We present theoretical estimates of the mean coefficients of spherical and chromatic aberration for low energy photoemission electron microscopy (PEEM). Using simple analytic models, we find that the aberration coefficients depend primarily on the difference between the photon energy and the photoemission threshold, as expected. However, the shape of the photoelectron spectral distribution impacts the coefficients by up to 30%. These estimates should allow more precise correction of aberration in PEEM in experimental situations where the aberration coefficients and precise electron energy distribution cannot be readily measured. - Highlights: • Spherical and chromatic aberration coefficients of the accelerating field in PEEM. • Compact, analytic expressions for coefficients depending on two emission parameters. • Effect of an aperture stop on the distribution is also considered.

  7. Theoretical estimation and validation of radiation field in alkaline hydrolysis plant

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Sanjay; Krishnamohanan, T.; Gopalakrishnan, R.K., E-mail: singhs@barc.gov.in [Radiation Safety Systems Division, Bhabha Atomic Research Centre, Mumbai (India); Anand, S. [Health Physics Division, Bhabha Atomic Research Centre, Mumbai (India); Pancholi, K. C. [Waste Management Division, Bhabha Atomic Research Centre, Mumbai (India)

    2014-07-01

Spent organic solvent (30% TBP + 70% n-Dodecane) from the reprocessing facility is treated at ETP in the Alkaline Hydrolysis Plant (AHP) and Organic Waste Incineration (ORWIN) Facility. In AHP-ORWIN, there are three horizontal cylindrical tanks of 2.0 m³ operating capacity used for waste storage and transfer: the Aqueous Waste Tank (AWT), the Waste Receiving Tank (WRT) and the Dodecane Waste Tank (DWT). These tanks are housed in a shielded room in this facility. The Monte Carlo N-Particle (MCNP) radiation transport code was used to estimate ambient radiation field levels when the storage tanks hold volumes of desired specific activity levels. In this paper the theoretically estimated values of the radiation field are compared with the actual measured dose.

  8. Multifractal rainfall extremes: Theoretical analysis and practical estimation

    International Nuclear Information System (INIS)

    Langousis, Andreas; Veneziano, Daniele; Furcolo, Pierluigi; Lepore, Chiara

    2009-01-01

    We study the extremes generated by a multifractal model of temporal rainfall and propose a practical method to estimate the Intensity-Duration-Frequency (IDF) curves. The model assumes that rainfall is a sequence of independent and identically distributed multiplicative cascades of the beta-lognormal type, with common duration D. When properly fitted to data, this simple model was found to produce accurate IDF results [Langousis A, Veneziano D. Intensity-duration-frequency curves from scaling representations of rainfall. Water Resour Res 2007;43. (doi:10.1029/2006WR005245)]. Previous studies also showed that the IDF values from multifractal representations of rainfall scale with duration d and return period T under either d → 0 or T → ∞, with different scaling exponents in the two cases. We determine the regions of the (d, T)-plane in which each asymptotic scaling behavior applies in good approximation, find expressions for the IDF values in the scaling and non-scaling regimes, and quantify the bias when estimating the asymptotic power-law tail of rainfall intensity from finite-duration records, as was often done in the past. Numerically calculated exact IDF curves are compared to several analytic approximations. The approximations are found to be accurate and are used to propose a practical IDF estimation procedure.
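The kind of IDF relationship discussed in this abstract can be illustrated with a generic separable power law in duration and return period. This is purely illustrative: the coefficients below are hypothetical, not the paper's fitted values, and real multifractal IDF curves deviate from a single power law outside the asymptotic regimes:

```python
def idf_intensity(d_hours: float, T_years: float,
                  c: float = 20.0, m: float = 0.25, n: float = 0.75) -> float:
    """Illustrative power-law IDF curve i(d, T) = c * T**m / d**n:
    intensity grows with return period T and decays with duration d."""
    return c * T_years ** m / d_hours ** n

# Example: design intensities for a 1-hour storm at 10- and 100-year
# return periods (hypothetical units, e.g. mm/h).
i10 = idf_intensity(1.0, 10.0)
i100 = idf_intensity(1.0, 100.0)
```

The paper's contribution is precisely to map out where such power-law behavior in d and T holds in good approximation and where it breaks down.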

  9. Robust Fault Estimation Design for Discrete-Time Nonlinear Systems via A Modified Fuzzy Fault Estimation Observer.

    Science.gov (United States)

    Xie, Xiang-Peng; Yue, Dong; Park, Ju H

    2018-02-01

The paper provides relaxed designs of fault estimation observers for nonlinear dynamical plants in the Takagi-Sugeno form. Compared with previous theoretical achievements, a modified version of the fuzzy fault estimation observer is implemented with the aid of the so-called maximum-priority-based switching law. Given each activated switching status, an appropriate group of designed matrices can be provided so as to exploit certain key properties of the considered plants by means of introducing a set of matrix-valued variables. Because more abundant information about the considered plants can be updated in due course and effectively exploited at each time instant, the obtained result is less conservative than previous theoretical achievements, and thus the main defect of those existing methods can be overcome to some extent in practice. Finally, comparative simulation studies on the classical nonlinear truck-trailer model are given to certify the benefits of the theoretical result obtained in our study. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  10. A reduced theoretical model for estimating condensation effects in combustion-heated hypersonic tunnel

    Science.gov (United States)

    Lin, L.; Luo, X.; Qin, F.; Yang, J.

    2018-03-01

    As one of the combustion products of hydrocarbon fuels in a combustion-heated wind tunnel, water vapor may condense during the rapid expansion process, which will lead to a complex two-phase flow inside the wind tunnel and even change the design flow conditions at the nozzle exit. The coupling of the phase transition and the compressible flow makes the estimation of the condensation effects in such wind tunnels very difficult and time-consuming. In this work, a reduced theoretical model is developed to approximately compute the nozzle-exit conditions of a flow including real-gas and homogeneous condensation effects. Specifically, the conservation equations of the axisymmetric flow are first approximated in the quasi-one-dimensional way. Then, the complex process is split into two steps, i.e., a real-gas nozzle flow but excluding condensation, resulting in supersaturated nozzle-exit conditions, and a discontinuous jump at the end of the nozzle from the supersaturated state to a saturated state. Compared with two-dimensional numerical simulations implemented with a detailed condensation model, the reduced model predicts the flow parameters with good accuracy except for some deviations caused by the two-dimensional effect. Therefore, this reduced theoretical model can provide a fast, simple but also accurate estimation of the condensation effect in combustion-heated hypersonic tunnels.

  11. Prediction of RNA secondary structure using generalized centroid estimators.

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Sato, Kengo; Mituyama, Toutai; Asai, Kiyoshi

    2009-02-15

Recent studies have shown that methods for predicting secondary structures of RNAs on the basis of posterior decoding of the base-pairing probabilities have an advantage in prediction accuracy over the conventionally utilized minimum free energy methods. However, there is room for improvement in the objective functions presented in previous studies, which are maximized in the posterior decoding with respect to the accuracy measures for secondary structures. We propose novel estimators which improve the accuracy of secondary structure prediction of RNAs. The proposed estimators maximize an objective function which is the weighted sum of the expected numbers of true positives and true negatives of the base pairs. The proposed estimators are also improved versions of the ones used in previous works, namely CONTRAfold for secondary structure prediction from a single RNA sequence and McCaskill-MEA for common secondary structure prediction from multiple alignments of RNA sequences. We clarify the relations between the proposed estimators and the estimators presented in previous works, and theoretically show that the previous estimators include additional unnecessary terms in the evaluation measures with respect to the accuracy. Furthermore, computational experiments confirm the theoretical analysis by indicating improvement in the empirical accuracy. The proposed estimators represent extensions of the centroid estimators proposed in Ding et al. and Carvalho and Lawrence, and are applicable to a wide variety of problems in bioinformatics. Supporting information and the CentroidFold software are available online at: http://www.ncrna.org/software/centroidfold/.
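The decision rule behind such centroid estimators has a simple form: with weight γ on true positives, a base pair is predicted only if its posterior probability exceeds 1/(γ + 1). A minimal sketch follows, with a hypothetical probability matrix; greedy conflict resolution stands in for the dynamic programming actually used in CentroidFold:

```python
import numpy as np

def gamma_centroid_pairs(P: np.ndarray, gamma: float = 2.0):
    """Gamma-centroid decoding sketch: keep base pair (i, j) when its
    posterior probability P[i, j] exceeds 1/(gamma + 1), then resolve
    conflicts greedily by descending probability. P is a symmetric
    matrix of base-pairing probabilities."""
    n = P.shape[0]
    threshold = 1.0 / (gamma + 1.0)
    candidates = [(P[i, j], i, j)
                  for i in range(n) for j in range(i + 1, n)
                  if P[i, j] > threshold]
    candidates.sort(reverse=True)          # most probable pairs first
    paired, structure = set(), []
    for p, i, j in candidates:
        if i not in paired and j not in paired:   # each base pairs at most once
            structure.append((i, j))
            paired.update((i, j))
    return structure
```

Raising γ lowers the threshold and favors sensitivity (more true positives); lowering γ favors specificity, which is exactly the trade-off the proposed objective function weights.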

  12. A theoretical signal processing framework for linear diffusion MRI: Implications for parameter estimation and experiment design.

    Science.gov (United States)

    Varadarajan, Divya; Haldar, Justin P

    2017-11-01

    The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes. Copyright © 2017 Elsevier Inc. All rights reserved.
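The Fourier relationship at the heart of this framework can be demonstrated in one dimension: for Gaussian diffusion, the EAP recovered by inverse Fourier transform of the q-space signal matches the known Gaussian propagator. This is a toy sketch with arbitrary units; real q-space sampling is three-dimensional and noisy:

```python
import numpy as np

# Toy 1-D check of the E(q) <-> EAP Fourier duality for Gaussian diffusion.
sigma = 1.0                    # displacement standard deviation (arbitrary units)
n, dq = 256, 0.02              # number of q-space samples and their spacing
q = (np.arange(n) - n // 2) * dq
E = np.exp(-2.0 * (np.pi * q * sigma) ** 2)   # Gaussian diffusion signal, E(0) = 1

# Inverse DFT (with shift bookkeeping) approximates the continuous inverse
# Fourier transform; the factor n * dq converts the DFT sum into an
# integral over q.
P = np.real(np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(E)))) * n * dq
r = np.fft.fftshift(np.fft.fftfreq(n, d=dq))  # displacement axis
P_true = np.exp(-r ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
```

Because the sampling here comfortably covers the signal's support, the recovered propagator agrees with the analytic one to machine precision; the paper's framework quantifies what happens to accuracy, resolution, and noise when q-space sampling is far more limited.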

  13. Theoretical and experimental estimation of the lead equivalent for some materials used in finishing of diagnostic x-ray rooms in Syria

    International Nuclear Information System (INIS)

    Shwekani, R.; Suman, H.; Takeyeddin, M.; Suleiman, J.

    2003-11-01

This work aimed at estimating the lead equivalent values of finishing materials frequently used in Syria, namely ceramic and marble. In the past, many studies estimated the lead equivalent values of different types of bricks, which are widely used in Syria; this work can therefore be considered a follow-up enabling estimation of the structural shielding of diagnostic X-ray rooms and accurate shielding calculations that reduce unnecessary added shields. The work was done in two ways: theoretically, using the MCNP computer code, and experimentally, in the secondary standard laboratory. The theoretical work focused on generalizing the scope of the results to cover real variations in the structure of the finishing materials and in the X-ray machines; accordingly, the different sources of error were quantified using sensitivity analysis. The experimental measurements were performed to verify that their results fell within the error range predicted by the theoretical study. The obtained results showed a strong correlation between theoretical and experimental data. (author)

  14. A Theoretical Model for Estimation of Yield Strength of Fiber Metal Laminate

    Science.gov (United States)

    Bhat, Sunil; Nagesh, Suresh; Umesh, C. K.; Narayanan, S.

    2017-08-01

    The paper presents a theoretical model for estimation of yield strength of fiber metal laminate. Principles of elasticity and formulation of residual stress are employed to determine the stress state in metal layer of the laminate that is found to be higher than the stress applied over the laminate resulting in reduced yield strength of the laminate in comparison with that of the metal layer. The model is tested over 4A-3/2 Glare laminate comprising three thin aerospace 2014-T6 aluminum alloy layers alternately bonded adhesively with two prepregs, each prepreg built up of three uni-directional glass fiber layers laid in longitudinal and transverse directions. Laminates with prepregs of E-Glass and S-Glass fibers are investigated separately under uni-axial tension. Yield strengths of both the Glare variants are found to be less than that of aluminum alloy with use of S-Glass fiber resulting in higher laminate yield strength than with the use of E-Glass fiber. Results from finite element analysis and tensile tests conducted over the laminates substantiate the theoretical model.

  15. Multi-channel PSD Estimators for Speech Dereverberation

    DEFF Research Database (Denmark)

    Kuklasinski, Adam; Doclo, Simon; Gerkmann, Timo

    2015-01-01

densities (PSDs). We first derive closed-form expressions for the mean square error (MSE) of both PSD estimators and then show that one estimator - previously used for speech dereverberation by the authors - always yields a better MSE. Only in the case of a two-microphone array or for special spatial distributions of the interference do both estimators yield the same MSE. The theoretically derived MSE values are in good agreement with numerical simulation results and with instrumental speech quality measures in a realistic speech dereverberation task for binaural hearing aids.

  16. A Note on the Effect of Data Clustering on the Multiple-Imputation Variance Estimator: A Theoretical Addendum to the Lewis et al. article in JOS 2014

    Directory of Open Access Journals (Sweden)

    He Yulei

    2016-03-01

Full Text Available Multiple imputation is a popular approach to handling missing data. Although it was originally motivated by survey nonresponse problems, it has been readily applied to other data settings. However, its general behavior still remains unclear when applied to survey data with complex sample designs, including clustering. Recently, Lewis et al. (2014) compared single- and multiple-imputation analyses for certain incomplete variables in the 2008 National Ambulatory Medical Care Survey, which has a nationally representative, multistage, and clustered sampling design. Their study results suggested that the increase of the variance estimate due to multiple imputation compared with single imputation largely disappears for estimates with large design effects. We complement their empirical research by providing some theoretical reasoning. We consider data sampled from an equally weighted, single-stage cluster design and characterize the process using a balanced, one-way normal random-effects model. Assuming that the missingness is completely at random, we derive analytic expressions for the within- and between-multiple-imputation variance estimators for the mean estimator, and thus conveniently reveal the impact of design effects on these variance estimators. We propose approximations for the fraction of missing information in clustered samples, extending previous results for simple random samples. We discuss some generalizations of this research and its practical implications for data release by statistical agencies.
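The within- and between-imputation variance decomposition analyzed in this article follows Rubin's combining rules, which are easy to state in code. This is a generic sketch, not the article's cluster-specific derivation; the fraction-of-missing-information formula is the common large-sample approximation:

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Rubin's rules: combine m completed-data point estimates and their
    variances into a pooled estimate, the total variance
    T = W + (1 + 1/m) * B, and the (approximate) fraction of missing
    information (1 + 1/m) * B / T."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()
    W = u.mean()              # within-imputation variance
    B = q.var(ddof=1)         # between-imputation variance
    T = W + (1.0 + 1.0 / m) * B
    fmi = (1.0 + 1.0 / m) * B / T
    return qbar, T, fmi
```

The article's point maps directly onto this decomposition: with large design effects the within-imputation variance W dominates, so the (1 + 1/m)B penalty of multiple imputation becomes relatively small.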

  17. Theoretical estimation of Photons flow rate Production in quark gluon interaction at high energies

    Science.gov (United States)

    Al-Agealy, Hadi J. M.; Hamza Hussein, Hyder; Mustafa Hussein, Saba

    2018-05-01

Photons emitted from high-energy collisions in a quark-gluon system are studied theoretically on the basis of color quantum theory. A simple model for photon emission in a quark-gluon system is investigated. In this model, we use quantum considerations to describe the quark system. The photon current rate is estimated for two systems at different fugacity coefficients. We discuss the behavior of the photon rate and the quark-gluon system properties at different photon energies using a Boltzmann model. The dependence of the photon rate on the anisotropy coefficient, the strong coupling constant, the photon energy, the color number, the fugacity parameter, the thermal energy, and the critical energy of the system is also discussed.

  18. Towards a better estimation of water power potential in Sudan

    International Nuclear Information System (INIS)

    Osman, Khalid Abd ELFattah M. and others

    1999-01-01

This paper presents previous and recent studies estimating the hydropower potential of Sudan from Nilotic and non-Nilotic sources. The theoretical availability of the hydropower potential is elaborated, and the technical feasibility of the potential is highlighted. Reasons for the differences between theoretical and feasible potential are discussed. A procedure for ranking the available potential is concisely presented. Furthermore, the paper presents and discusses the available hydropower potential for international interconnections.

  19. Conceptual aspects: analyses law, ethical, human, technical, social factors of development ICT, e-learning and intercultural development in different countries setting out the previous new theoretical model and preliminary findings

    NARCIS (Netherlands)

    Kommers, Petrus A.M.; Smyrnova-Trybulska, Eugenia; Morze, Natalia; Issa, Tomayess; Issa, Theodora

    2015-01-01

This paper, prepared by an international team of authors, focuses on the conceptual aspects: it analyses legal, ethical, human, technical, and social factors of ICT development, e-learning and intercultural development in different countries, setting out the previous and new theoretical model and preliminary findings.

  20. A theoretical model for estimating the vacancies produced in graphene by irradiation

    International Nuclear Information System (INIS)

    Codorniu Pujals, Daniel; Aguilera Corrales, Yuri

    2011-01-01

The award of the Nobel Prize in Physics 2010 to the scientists who isolated graphene is clear evidence of the great interest this system has raised among physicists. This quasi-two-dimensional material, whose electrons behave as massless Dirac particles, presents sui generis properties that seem very promising for diverse practical applications. At the same time, the system poses new theoretical challenges for scientists of very different branches, from materials science to relativistic quantum mechanics. A topic of great current interest in graphene research is the search for ways to control the number and distribution of defects in its crystal lattice, in order to achieve certain physical properties. One such way is irradiation with different kinds of particles. However, irradiation processes in two-dimensional systems have been insufficiently studied: the classic models of the interaction of radiation with solids are based on three-dimensional structures, so they must be modified before being applied to graphene. In the present work we discuss, from the theoretical point of view, the features of the processes that occur in the two-dimensional structure of monolayer graphene under irradiation with different kinds of particles. In that context, mathematical expressions that allow estimation of the concentration of vacancies created during these processes are presented. We also discuss the possible use of the information obtained from the model to design structures of topological defects with certain elastic deformation fields, as well as their influence on the electronic properties. (Author)

  1. Impact of a financial risk-sharing scheme on budget-impact estimations: a game-theoretic approach.

    Science.gov (United States)

    Gavious, Arieh; Greenberg, Dan; Hammerman, Ariel; Segev, Ella

    2014-06-01

As part of the process of updating the National List of Health Services in Israel, health plans (the 'payers') and manufacturers each provide estimates of the expected number of patients that will utilize a new drug. Currently, payers face major financial consequences when actual utilization is higher than the allocated budget. We suggest a risk-sharing model between the two stakeholders: if the actual number of patients exceeds the manufacturer's prediction, the manufacturer will reimburse the payers by a rebate rate of α from the deficit. In case of under-utilization, payers will refund the government at a rate of γ from the surplus budget. Our study objective was to identify the optimal early estimations of both 'players' prior to and after implementation of the risk-sharing scheme. Using a game-theoretic approach, in which both players' statements are considered simultaneously, we examined the impact of risk sharing, within a given range of rebate proportions, on players' early budget estimations. When the manufacturer's rebate α is increased above 50%, manufacturers will announce a larger number, and health plans a lower number, of patients than they would without risk sharing, thus substantially decreasing the gap between their estimates. Increasing γ changes players' estimates only slightly. In reaction to applying a substantial risk-sharing rebate α on the manufacturer, both players are expected to adjust their budget estimates toward an optimal equilibrium. Increasing α is a better vehicle for reaching the desired equilibrium than increasing γ, as the manufacturer's rebate α substantially influences both players, whereas γ has little effect on the players' behavior.
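The rebate/refund accounting in the proposed scheme can be sketched as follows. This is illustrative only: the function and variable names, the unit cost, and the α, γ values are hypothetical and not the paper's notation:

```python
def payer_net_cost(actual, manufacturer_estimate, payer_budget_estimate,
                   cost_per_patient=1.0, alpha=0.5, gamma=0.2):
    """Hypothetical accounting of the risk-sharing scheme. If actual
    utilization exceeds the manufacturer's prediction, the manufacturer
    rebates a share alpha of the deficit; if it falls short of the
    allocated budget, the payer refunds a share gamma of the surplus."""
    cost = actual * cost_per_patient
    rebate = alpha * cost_per_patient * max(0, actual - manufacturer_estimate)
    refund = gamma * cost_per_patient * max(0, payer_budget_estimate - actual)
    return cost - rebate + refund

# Over-utilization: 120 patients against a manufacturer estimate of 100.
over = payer_net_cost(120, 100, 110)
# Under-utilization: 90 patients against a payer budget estimate of 110.
under = payer_net_cost(90, 100, 110)
```

Raising alpha caps the payer's exposure to over-utilization while making over-prediction costly for the manufacturer, which is what pushes the two announcements toward each other in the game-theoretic analysis.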

  2. Satellite telemetry reveals higher fishing mortality rates than previously estimated, suggesting overfishing of an apex marine predator.

    Science.gov (United States)

    Byrne, Michael E; Cortés, Enric; Vaudo, Jeremy J; Harvey, Guy C McN; Sampson, Mark; Wetherbee, Bradley M; Shivji, Mahmood

    2017-08-16

Overfishing is a primary cause of population declines for many shark species of conservation concern. However, means of obtaining information on fishery interactions and mortality, necessary for the development of successful conservation strategies, are often fisheries-dependent and of questionable quality for many species of commercially exploited pelagic sharks. We used satellite telemetry as a fisheries-independent tool to document fisheries interactions, and quantify fishing mortality of the highly migratory shortfin mako shark (Isurus oxyrinchus) in the western North Atlantic Ocean. Forty satellite-tagged shortfin mako sharks tracked over 3 years entered the Exclusive Economic Zones of 19 countries and were harvested in fisheries of five countries, with 30% of tagged sharks harvested. Our tagging-derived estimates of instantaneous fishing mortality rates (F = 0.19-0.56) were 10-fold higher than previous estimates from fisheries-dependent data (approx. 0.015-0.024), suggesting data used in stock assessments may considerably underestimate fishing mortality. Additionally, our estimates of F were greater than those associated with maximum sustainable yield, suggesting a state of overfishing. This information has direct application to evaluations of stock status and for effective management of populations, and thus satellite tagging studies have potential to provide more accurate estimates of fishing mortality and survival than traditional fisheries-dependent methodology. © 2017 The Author(s).
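The conversion from an observed harvest proportion to an instantaneous rate uses the standard exponential survival relation. This is a simplified sketch: it ignores natural mortality and the varying track durations that the actual analysis accounts for:

```python
import math

def instantaneous_fishing_mortality(prop_harvested: float, years: float) -> float:
    """Solve p = 1 - exp(-F * t) for F, assuming fishing is the only
    source of loss over the tracking period (a simplification)."""
    return -math.log(1.0 - prop_harvested) / years

# 30% of tagged sharks harvested, expressed as a rate over one year of
# exposure (illustrative only; actual exposure times varied per shark).
F = instantaneous_fishing_mortality(0.30, 1.0)
```

Even this crude aggregate lands an order of magnitude above the fisheries-dependent estimates of roughly 0.015-0.024 quoted in the abstract, which is the core of the paper's overfishing argument.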

  3. Theoretical analysis of the distribution of isolated particles in totally asymmetric exclusion processes: Application to mRNA translation rate estimation

    Science.gov (United States)

    Dao Duc, Khanh; Saleem, Zain H.; Song, Yun S.

    2018-01-01

    The Totally Asymmetric Exclusion Process (TASEP) is a classical stochastic model for describing the transport of interacting particles, such as ribosomes moving along the messenger ribonucleic acid (mRNA) during translation. Although this model has been widely studied in the past, the extent of collision between particles and the average distance between a particle to its nearest neighbor have not been quantified explicitly. We provide here a theoretical analysis of such quantities via the distribution of isolated particles. In the classical form of the model in which each particle occupies only a single site, we obtain an exact analytic solution using the matrix ansatz. We then employ a refined mean-field approach to extend the analysis to a generalized TASEP with particles of an arbitrary size. Our theoretical study has direct applications in mRNA translation and the interpretation of experimental ribosome profiling data. In particular, our analysis of data from Saccharomyces cerevisiae suggests a potential bias against the detection of nearby ribosomes with a gap distance of less than approximately three codons, which leads to some ambiguity in estimating the initiation rate and protein production flux for a substantial fraction of genes. Despite such ambiguity, however, we demonstrate theoretically that the interference rate associated with collisions can be robustly estimated and show that approximately 1% of the translating ribosomes get obstructed.
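The quantity analyzed here, the fraction of isolated particles, is easy to probe with a direct Monte Carlo simulation of the classical single-site TASEP on a ring. This is a sketch with random sequential updates; the site count, density, and step count are arbitrary choices:

```python
import random

def tasep_isolated_fraction(n_sites=200, density=0.3, steps=200_000, seed=1):
    """Simulate a ring TASEP (each particle occupies one site and hops
    right when the next site is empty) and return the fraction of
    particles with both neighbouring sites empty, i.e. isolated."""
    rng = random.Random(seed)
    n_part = int(n_sites * density)
    occ = [True] * n_part + [False] * (n_sites - n_part)
    rng.shuffle(occ)
    for _ in range(steps):
        i = rng.randrange(n_sites)          # pick a random site
        j = (i + 1) % n_sites
        if occ[i] and not occ[j]:           # hop right if target is empty
            occ[i], occ[j] = False, True
    isolated = sum(
        1 for i in range(n_sites)
        if occ[i] and not occ[i - 1] and not occ[(i + 1) % n_sites]
    )
    return isolated / n_part
```

On a ring the stationary measure is uniform over particle configurations, so at density ρ the isolated fraction should sit near (1 - ρ)² for large systems; the paper derives the exact open-boundary analogue via the matrix ansatz.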

  4. A theoretical framework for Ångström equation. Its virtues and liabilities in solar energy estimation

    International Nuclear Information System (INIS)

    Stefu, Nicoleta; Paulescu, Marius; Blaga, Robert; Calinoiu, Delia; Pop, Nicolina; Boata, Remus; Paulescu, Eugenia

    2016-01-01

    Highlights: • A self-consistent derivation of the Ångström equation is carried out. • The theoretical assessment of its performance is well supported by the measured data. • The variability in cloud transmittance is a major source of uncertainty for estimates. • The degradation in time and space of the empirical equations' calibration is assessed. - Abstract: The relation between solar irradiation and sunshine duration was investigated from the very beginning of solar radiation measurements. Many studies were devoted to this topic, aiming to capture the complex influence of clouds on solar irradiation in equation form. This study is focused on the linear relationship between the clear sky index and the relative sunshine proposed by the pioneering work of Ångström. A full semi-empirical derivation of the equation, highlighting its virtues and liabilities, is presented. Specific Ångström-type equations for beam and diffuse solar irradiation were derived separately. The sum of the two components recovers the traditional form of the Ångström equation. The physical meaning of the Ångström parameter, as the average of the cloud transmittance, emerges naturally. The theoretical results on the Ångström equation performance are well supported by the tests against measured data. Using long-term records of global solar irradiation and sunshine duration from thirteen European radiometric stations, the influence of the Ångström constraint (slope equals one minus intercept) on the accuracy of the estimates is analyzed. Another focus is on the assessment of the degradation of the equation calibration. The temporal variability in cloud transmittance (both long-term trend and fluctuations) is a major source of uncertainty for Ångström equation estimates.
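    The linear Ångström relation discussed above, clear-sky index k = a + b·s with relative sunshine s = S/S0 and the constraint b = 1 − a, can be calibrated by ordinary least squares. A minimal sketch on synthetic data (the coefficient values 0.25/0.75 are illustrative, not from the paper), using `statistics.linear_regression` from Python 3.10+:

    ```python
    from statistics import linear_regression

    # Synthetic records: relative sunshine s = S/S0 and clear-sky
    # index k = H/H0, generated with a = 0.25 and b = 0.75 so the
    # Angstrom constraint b = 1 - a holds by construction.
    s = [i / 20 for i in range(21)]
    k = [0.25 + 0.75 * si for si in s]

    b, a = linear_regression(s, k)        # returns (slope, intercept)
    constraint_gap = abs((a + b) - 1.0)   # zero when b = 1 - a
    ```

    On measured data the fitted a + b generally deviates from one, and the size of `constraint_gap` is one way to quantify how strongly the constraint is violated at a given station.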

  5. Validation by theoretical approach to the experimental estimation of efficiency for gamma spectrometry of gas in 100 ml standard flask

    International Nuclear Information System (INIS)

    Mohan, V.; Chudalayandi, K.; Sundaram, M.; Krishnamony, S.

    1996-01-01

    Estimation of gaseous activity forms an important component of air monitoring at Madras Atomic Power Station (MAPS). The gases of importance are argon-41, an air activation product, and the fission-product noble gas xenon-133. For estimating the concentration, the experimental method is used in which a grab sample is collected in a 100 ml volumetric standard flask. The activity of the gas is then computed by gamma spectrometry using a predetermined efficiency estimated experimentally. An attempt is made using a theoretical approach to validate the experimental method of efficiency estimation. Two analytical models, named the relative flux model and the absolute activity model, were developed independently of each other. Attention is focused on the efficiencies for 41Ar and 133Xe. Results show that the present method of sampling and analysis using a 100 ml volumetric flask is adequate and acceptable. (author). 5 refs., 2 tabs

  6. Theoretical estimation of Z´ boson mass

    International Nuclear Information System (INIS)

    Maji, Priya; Banerjee, Debika; Sahoo, Sukadev

    2016-01-01

    The discovery of the Higgs boson at the LHC brings a renewed perspective to particle physics. With the help of the Higgs mechanism, the standard model (SM) allows the generation of particle mass. The ATLAS and CMS experiments at the LHC have determined the mass of the Higgs boson as m_H = 125-126 GeV. Recently, it has been claimed that the Higgs boson might interact with dark matter and that a relation exists between the Higgs boson and dark matter (DM). Hertzberg has predicted a correlation between the Higgs mass and the abundance of dark matter; his theoretical result is in good agreement with current data. He has predicted the mass of the Higgs boson as GeV. The Higgs boson could be coupled to the particle that constitutes all or part of the dark matter in the universe. A light Z´ boson could have important implications in dark matter phenomenology.

  7. Theoretical analysis of film condensation in horizontal microfin tubes; Microfin tsuki suihei kannai gyoshuku no riron kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Honda, H; Wang, H [Kyushu University, Fukuoka (Japan). Institute of Advanced Material Study; Nozu, S [Okayama Prefectural University, Okayama (Japan). Faculty of Computer Science and System Engineering

    2000-10-25

    A theoretical study has been made of film condensation in helically-grooved, horizontal microfin tubes. The annular flow regime and the stratified flow regime were considered. For the annular flow regime, a previously developed theoretical model was applied. For the stratified flow regime, the height of stratified condensate was estimated by a modified Taitel and Dukler model. For the upper part of the tube exposed to the vapor flow, numerical calculation of laminar film condensation considering the combined effects of gravity and surface tension forces was conducted. The heat transfer coefficient at the lower part of the tube was estimated by an empirical equation for internally finned tubes developed by Carnavos. The theoretical predictions of the circumferential average heat transfer coefficient by the two theoretical models were compared with available experimental data for four refrigerants and four tubes. Generally, the annular flow model gave a higher heat transfer coefficient than the stratified flow model in the high quality region, whereas the stratified flow model gave a higher heat transfer coefficient in the low quality region. For tubes with fin heights of 0.16-0.24 mm, most of the experimental data agreed within ±20% with the higher of the two theoretical predictions. (author)

  8. Air Space Proportion in Pterosaur Limb Bones Using Computed Tomography and Its Implications for Previous Estimates of Pneumaticity

    Science.gov (United States)

    Martin, Elizabeth G.; Palmer, Colin

    2014-01-01

    Air Space Proportion (ASP) is a measure of how much air is present within a bone, which allows for a quantifiable comparison of pneumaticity between specimens and species. Measured from zero to one, higher ASP means more air and less bone. Conventionally, it is estimated from measurements of the internal and external bone diameter, or by analyzing cross-sections. To date, the only pterosaur ASP study has been carried out by visual inspection of sectioned bones within matrix. Here, computed tomography (CT) scans are used to calculate ASP in a small sample of pterosaur wing bones (mainly phalanges) and to assess how the values change throughout the bone. These results show higher ASPs than previous pterosaur pneumaticity studies, and more significantly, higher ASP values in the heads of wing bones than in the shafts. This suggests that pneumaticity has been underestimated previously in pterosaurs, birds, and other archosaurs when shaft cross-sections are used to estimate ASP. Furthermore, ASP in pterosaurs is higher than that found in birds and most sauropod dinosaurs, giving them some of the highest ASP values of any animals studied so far, and supporting the view that pterosaurs were some of the most pneumatized animals to have lived. The high degree of pneumaticity found in pterosaurs is proposed to be a response to the wing bone bending stiffness requirements of flight rather than a means to reduce mass, as is often suggested. Mass reduction may be a secondary result of pneumaticity that subsequently aids flight. PMID:24817312
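    The conventional diameter-based estimate mentioned above reduces, for a circular cross-section, to the squared ratio of internal to external diameter. A small sketch with hypothetical measurements (the 8 mm / 10 mm values are invented for illustration):

    ```python
    def asp_from_diameters(inner_d: float, outer_d: float) -> float:
        """Air Space Proportion of a circular cross-section from the
        internal and external diameters: the ratio of lumen area to
        total cross-sectional area, (d_i / d_o) ** 2."""
        if not 0 < inner_d < outer_d:
            raise ValueError("need 0 < inner_d < outer_d")
        return (inner_d / outer_d) ** 2

    # hypothetical wing-phalanx shaft measurements (mm)
    asp_shaft = asp_from_diameters(8.0, 10.0)   # 0.64
    ```

    The paper's point is that a single shaft value like this underestimates whole-bone pneumaticity, since CT shows higher ASP toward the bone heads.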

  9. Estimation of Resting Energy Expenditure: Validation of Previous and New Predictive Equations in Obese Children and Adolescents.

    Science.gov (United States)

    Acar-Tek, Nilüfer; Ağagündüz, Duygu; Çelik, Bülent; Bozbulut, Rukiye

    2017-08-01

    Accurate estimation of resting energy expenditure (REE) in children and adolescents is important to establish estimated energy requirements. The aim of the present study was to measure REE in obese children and adolescents by the indirect calorimetry method, compare these values with REE values estimated by equations, and develop the most appropriate equation for this group. One hundred and three obese children and adolescents (57 males, 46 females) between 7 and 17 years (10.6 ± 2.19 years) were recruited for the study. REE measurements of subjects were made with indirect calorimetry (COSMED, FitMatePro, Rome, Italy) and body compositions were analyzed. In females, the percentage of accurate prediction varied from 32.6 (World Health Organization [WHO]) to 43.5 (Molnar and Lazzer). The bias for equations was -0.2% (Kim), 3.7% (Molnar), and 22.6% (Derumeaux-Burel). Kim's, Schmelzle's, and Henry's equations had the lowest root mean square error (RMSE: 266, 267, and 268 kcal/d, respectively). The equation with the highest RMSE among female subjects was the Derumeaux-Burel equation (394 kcal/d). In males, while the Institute of Medicine (IOM) equation had the lowest accurate prediction value (12.3%), the highest values were found using Schmelzle's (42.1%), Henry's (43.9%), and Müller's (fat-free mass, FFM; 45.6%) equations. While Kim's and Müller's equations had the smallest bias (-0.6% and 9.9%), Schmelzle's equation had the smallest RMSE (331 kcal/d). The new specific equation based on FFM was generated as follows: REE = 451.722 + (23.202 * FFM). According to Bland-Altman plots, the new equations are distributed randomly in both males and females. Previously developed predictive equations mostly provided inaccurate and biased estimates of REE. However, the new predictive equations allow clinicians to estimate REE in obese children and adolescents with sufficient and acceptable accuracy.
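    The study's new FFM-based equation and the RMSE used to compare equations are simple to compute. The equation below is taken directly from the abstract; the `rmse` helper is an illustrative implementation of the standard definition, not the authors' code.

    ```python
    def ree_predicted(ffm_kg: float) -> float:
        """Resting energy expenditure (kcal/day) from the study's new
        fat-free-mass equation: REE = 451.722 + 23.202 * FFM."""
        return 451.722 + 23.202 * ffm_kg

    def rmse(measured, predicted):
        """Root mean square error between measured and predicted REE."""
        n = len(measured)
        return (sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n) ** 0.5
    ```

    For example, a subject with 30 kg of fat-free mass would be predicted at about 1148 kcal/day; comparing such predictions against indirect-calorimetry values over a cohort yields the RMSE figures quoted above.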

  10. Theoretical analysis on the probability of initiating persistent fission chain

    International Nuclear Information System (INIS)

    Liu Jianjun; Wang Zhe; Zhang Ben'ai

    2005-01-01

    For a finite multiplying system of fissile material in the presence of a weak neutron source, the authors analyse the probability of initiating a persistent fission chain using the stochastic theory of neutron multiplication. In the theoretical treatment, the conventional point-reactor model is developed into an improved form with position (x) and velocity (v) dependence. The estimated results, including the approximate value of the probability mentioned above and its distribution, are given by means of the diffusion approximation and compared with those of the previous point-reactor model. They are basically consistent; however, the present model can provide details of the distribution. (authors)
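    The quantity analyzed here can be illustrated with the simplest version of the problem: a Galton-Watson branching model without the paper's position and velocity dependence. The extinction probability q is the smallest fixed point of q = Σ p_k q^k, and the persistent-chain probability is 1 − q. The multiplicity distribution below is hypothetical.

    ```python
    def persistent_chain_probability(p, tol=1e-12, max_iter=10000):
        """Probability that one starting neutron initiates a persistent
        chain in a Galton-Watson branching model. p[k] is the chance a
        neutron yields k next-generation neutrons. The extinction
        probability q is the smallest fixed point of q = sum p_k q^k,
        found by iterating the generating function from q = 0."""
        q = 0.0
        for _ in range(max_iter):
            q_next = sum(pk * q ** k for k, pk in enumerate(p))
            if abs(q_next - q) < tol:
                break
            q = q_next
        return 1.0 - q

    # hypothetical multiplicity distribution: 40% absorbed, 60% doubled;
    # here q solves q = 0.4 + 0.6 q^2, giving q = 2/3 and persistence 1/3
    p_persist = persistent_chain_probability([0.4, 0.0, 0.6])
    ```

    The paper's contribution is precisely what this sketch omits: how the persistence probability and its distribution depend on where (x) and with what velocity (v) the source neutron is born.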

  11. A combined crossed molecular beams and theoretical study of the reaction CN + C2H4

    Science.gov (United States)

    Balucani, Nadia; Leonori, Francesca; Petrucci, Raffaele; Wang, Xingan; Casavecchia, Piergiorgio; Skouteris, Dimitrios; Albernaz, Alessandra F.; Gargano, Ricardo

    2015-03-01

    The CN + C2H4 reaction has been investigated experimentally, in crossed molecular beam (CMB) experiments at the collision energy of 33.4 kJ/mol, and theoretically, by electronic structure calculations of the relevant potential energy surface and Rice-Ramsperger-Kassel-Marcus (RRKM) estimates of the product branching ratio. In contrast to previous CMB experiments at lower collision energies, but similarly to a high-energy study, we have some indication that a second reaction channel is open at this collision energy, whose characteristics are consistent with the channel leading to CH2CHNC + H. The RRKM estimates using M06L electronic structure calculations qualitatively support the experimental observation of C2H3NC formation at this and at the higher collision energy of 42.7 kJ/mol of previous experiments.

  12. Theoretical calculations of hardness and metallicity for multibond hexagonal 5d transition metal diborides with ReB2 structure

    International Nuclear Information System (INIS)

    Yang Jun; Gao Fa-Ming; Liu Yong-Shan

    2017-01-01

    The hardness, electronic, and elastic properties of 5d transition metal diborides with the ReB2 structure are studied theoretically using first-principles calculations. The calculated results are in good agreement with previous experimental and theoretical results. Empirical formulas for estimating the hardness and the partial number of effective free electrons for each bond in multibond compounds with metallicity are presented. Based on the formulas, IrB2 has the largest hardness of 21.8 GPa, followed by OsB2 (21.0 GPa) and ReB2 (19.7 GPa), indicating that they are good candidates as hard materials. (paper)

  13. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    Science.gov (United States)

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
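    The Fisher r → Z construction used in the article can be sketched in a few lines. This is the standard textbook interval, not the authors' simulation code; the 1.96 critical value assumes a 95% two-sided interval, and the illustrative r = 0.80, n = 50 values are invented.

    ```python
    import math

    def r_confidence_interval(r: float, n: int, z_crit: float = 1.96):
        """Approximate confidence interval for a correlation coefficient
        via the Fisher r -> Z transform: Z = atanh(r) is roughly normal
        with standard error 1 / sqrt(n - 3); the bounds are mapped back
        to r-space with tanh."""
        z = math.atanh(r)
        half = z_crit / math.sqrt(n - 3)
        return math.tanh(z - half), math.tanh(z + half)

    lo, hi = r_confidence_interval(0.80, 50)
    ```

    The asymmetry of the resulting interval (wider below r than above it) is one reason the transform matters when judging whether a model's r is close to its maximum theoretical value.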

  14. The application of mean field theory to image motion estimation.

    Science.gov (United States)

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.

  15. Exploring super-gaussianity towards robust information-theoretical time delay estimation

    DEFF Research Database (Denmark)

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos

    2013-01-01

    the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced...

  16. A combined crossed molecular beams and theoretical study of the reaction CN + C2H4

    International Nuclear Information System (INIS)

    Balucani, Nadia; Leonori, Francesca; Petrucci, Raffaele; Wang, Xingan; Casavecchia, Piergiorgio; Skouteris, Dimitrios; Albernaz, Alessandra F.; Gargano, Ricardo

    2015-01-01

    Highlights: • The CN + C2H4 reaction was investigated in crossed beam experiments. • Electronic structure calculations of the potential energy surface were performed. • RRKM estimates qualitatively reproduce the experimental C2H3NC yield. - Abstract: The CN + C2H4 reaction has been investigated experimentally, in crossed molecular beam (CMB) experiments at the collision energy of 33.4 kJ/mol, and theoretically, by electronic structure calculations of the relevant potential energy surface and Rice–Ramsperger–Kassel–Marcus (RRKM) estimates of the product branching ratio. In contrast to previous CMB experiments at lower collision energies, but similarly to a high-energy study, we have some indication that a second reaction channel is open at this collision energy, whose characteristics are consistent with the channel leading to CH2CHNC + H. The RRKM estimates using M06L electronic structure calculations qualitatively support the experimental observation of C2H3NC formation at this and at the higher collision energy of 42.7 kJ/mol of previous experiments.

  17. A theoretical approach to the problem of dose-volume constraint estimation and their impact on the dose-volume histogram selection

    International Nuclear Information System (INIS)

    Schinkel, Colleen; Stavrev, Pavel; Stavreva, Nadia; Fallone, B. Gino

    2006-01-01

    This paper outlines a theoretical approach to the problem of estimating and choosing dose-volume constraints. Following this approach, a method of choosing dose-volume constraints based on biological criteria is proposed. This method is called "reverse normal tissue complication probability (NTCP) mapping into dose-volume space" and may be used as general guidance for the problem of dose-volume constraint estimation. Dose-volume histograms (DVHs) are randomly simulated, and those resulting in clinically acceptable levels of complication, such as an NTCP of 5±0.5%, are selected and averaged, producing a mean DVH that is proven to result in the same level of NTCP. The points from the averaged DVH are proposed to serve as physical dose-volume constraints. The population-based critical volume and Lyman NTCP models with parameter sets taken from literature sources were used for the NTCP estimation. The impact of the prescribed value of the maximum dose to the organ, Dmax, on the averaged DVH and the dose-volume constraint points is investigated. Constraint points for 16 organs are calculated. The impact of the number of constraints to be fulfilled, based on the likelihood that a DVH satisfying them will result in an acceptable NTCP, is also investigated. It is theoretically proven that radiation treatment optimization based on physical objective functions can sufficiently well restrict the dose to the organs at risk, resulting in sufficiently low NTCP values, through the employment of several appropriate dose-volume constraints. At the same time, the pure physical approach to optimization is self-restrictive due to the preassignment of acceptable NTCP levels, thus excluding possibly better solutions to the problem.
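    One of the NTCP models named in the abstract, the Lyman model, has a compact closed form: NTCP = Φ((D_eff − TD50)/(m·TD50)), where Φ is the standard-normal CDF. A minimal sketch follows; the TD50 = 65 Gy and m = 0.14 organ parameters are hypothetical placeholders, not values from the paper.

    ```python
    import math

    def lyman_ntcp(d_eff: float, td50: float, m: float) -> float:
        """Lyman NTCP model: standard-normal CDF evaluated at
        t = (D_eff - TD50) / (m * TD50)."""
        t = (d_eff - td50) / (m * td50)
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    # hypothetical organ parameters: TD50 = 65 Gy, m = 0.14
    ntcp_at_td50 = lyman_ntcp(65.0, 65.0, 0.14)   # 0.5 by construction
    ```

    In the reverse-mapping scheme described above, a function like this is evaluated for each randomly simulated DVH (after reducing it to an effective dose), and only DVHs landing near the target NTCP level are kept and averaged.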

  18. Adaptive optimisation-offline cyber attack on remote state estimator

    Science.gov (United States)

    Huang, Xin; Dong, Jiuxiang

    2017-10-01

    Security issues of cyber-physical systems have received increasing attention in recent years. In this paper, deception attacks on a remote state estimator equipped with the chi-squared failure detector are considered, and it is assumed that the attacker can monitor and modify all the sensor data. A novel adaptive optimisation-offline cyber attack strategy is proposed in which, using the current and previous sensor data, the attack can yield the largest estimation error covariance while remaining undetected by the chi-squared monitor. From the attacker's perspective, the attack degrades the system performance more than the existing linear deception attacks. Finally, some numerical examples are provided to demonstrate the theoretical results.

  19. Theoretical study of evaporation heat transfer in horizontal microfin tubes: stratified flow model

    Energy Technology Data Exchange (ETDEWEB)

    Honda, H; Wang, Y S [Kyushu Univ., Inst. for Materials Chemistry and Engineering, Kasuga, Fukuoka (Japan)

    2004-08-01

    The stratified flow model of evaporation heat transfer in helically grooved, horizontal microfin tubes has been developed. The profile of stratified liquid was determined by a theoretical model previously developed for condensation in horizontal microfin tubes. For the region above the stratified liquid, the meniscus profile in the groove between adjacent fins was determined by a force balance between the gravity and surface tension forces. The thin film evaporation model was applied to predict heat transfer in the thin film region of the meniscus. Heat transfer through the stratified liquid was estimated by using an empirical correlation proposed by Mori et al. The theoretical predictions of the circumferential average heat transfer coefficient were compared with available experimental data for four tubes and three refrigerants. A good agreement was obtained for the region of Fr0 < 2.5 as long as partial dryout of the tube surface did not occur. (Author)

  20. [Estimating child mortality using the previous child technique, with data from health centers and household surveys: methodological aspects].

    Science.gov (United States)

    Aguirre, A; Hill, A G

    1988-01-01

    2 trials of the previous child or preceding birth technique in Bamako, Mali, and Lima, Peru, gave very promising results for measurement of infant and early child mortality using data on survivorship of the 2 most recent births. In the Peruvian study, another technique was tested in which each woman was asked about her last 3 births. The preceding birth technique described by Brass and Macrae has rapidly been adopted as a simple means of estimating recent trends in early childhood mortality. The questions formulated and the analysis of results are straightforward when the mothers are visited at the time of birth or soon after. Several technical aspects of the method believed to introduce unforeseen biases have now been studied and found to be relatively unimportant, but the problems arising when the data come from a nonrepresentative fraction of the total fertile-aged population have not been resolved. The analysis based on data from 5 maternity centers, including 1 hospital, in Bamako, Mali, indicated some practical problems, and the information obtained showed the kinds of subtle biases that can result from the effects of selection. The study in Lima tested 2 abbreviated methods for obtaining recent early childhood mortality estimates in countries with deficient vital registration. The basic idea was that a few simple questions added to household surveys on immunization or diarrheal disease control, for example, could produce improved child mortality estimates. The mortality estimates in Peru were based on 2 distinct sources of information in the questionnaire. All women were asked their total number of live-born children and the number still alive at the time of the interview. The proportion of deaths was converted into a measure of child survival using a life table. Then each woman was asked for a brief history of the 3 most recent live births. Dates of birth and death were noted by month and year of occurrence. The interviews took only slightly longer than the basic survey.

  1. Wildlife Loss Estimates and Summary of Previous Mitigation Related to Hydroelectric Projects in Montana, Volume Three, Hungry Horse Project.

    Energy Technology Data Exchange (ETDEWEB)

    Casey, Daniel

    1984-10-01

    This assessment addresses the impacts to wildlife populations and wildlife habitats of the Hungry Horse Dam project on the South Fork of the Flathead River, and previous mitigation of these losses. In order to develop and focus mitigation efforts, it was first necessary to estimate wildlife and wildlife habitat losses attributable to the construction and operation of the project. The purpose of this report was to document the best available information concerning the degree of impacts to target wildlife species. Indirect benefits to wildlife species not listed will be identified during the development of alternative mitigation measures. Wildlife species incurring positive impacts attributable to the project were identified.

  2. Theoretical Study of Penalized-Likelihood Image Reconstruction for Region of Interest Quantification

    International Nuclear Information System (INIS)

    Qi, Jinyi; Huesman, Ronald H.

    2006-01-01

    Region of interest (ROI) quantification is an important task in emission tomography (e.g., positron emission tomography and single photon emission computed tomography). It is essential for exploring clinical factors such as tumor activity, growth rate, and the efficacy of therapeutic interventions. Statistical image reconstruction methods based on the penalized maximum-likelihood (PML) or maximum a posteriori principle have been developed for emission tomography to deal with the low signal-to-noise ratio of the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the regularization parameter in PML reconstruction controls the resolution and noise tradeoff and, hence, affects ROI quantification. In this paper, we theoretically analyze the performance of ROI quantification in PML reconstructions. Building on previous work, we derive simplified theoretical expressions for the bias, variance, and ensemble mean-squared-error (EMSE) of the estimated total activity in an ROI that is surrounded by a uniform background. When the mean and covariance matrix of the activity inside the ROI are known, the theoretical expressions are readily computable and allow for fast evaluation of image quality for ROI quantification with different regularization parameters. The optimum regularization parameter can then be selected to minimize the EMSE. Computer simulations are conducted for small ROIs with variable uniform uptake. The results show that the theoretical predictions match the Monte Carlo results reasonably well

  3. Theoretical analysis of balanced truncation for linear switched systems

    DEFF Research Database (Denmark)

    Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef

    2012-01-01

    In this paper we present theoretical analysis of model reduction of linear switched systems based on balanced truncation, presented in [1,2]. More precisely, (1) we provide a bound on the estimation error using the L2 gain, and (2) we provide a system-theoretic interpretation of grammians and their singular values. The main tool for showing this independence is realization theory of linear switched systems. [1] H. R. Shaker and R. Wisniewski, "Generalized gramian framework for model/controller order reduction of switched systems", International Journal of Systems Science, Vol. 42, Issue 8, 2011, 1277-1291. [2] H. R. Shaker and R. Wisniewski, "Switched Systems Reduction Framework Based on Convex Combination of Generalized Gramians", Journal of Control Science and Engineering, 2009.

  4. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching. Cybernetics is an often-quoted example. The mix of information theory, statistics, and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task for quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  5. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    Science.gov (United States)

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
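    The indirect route described in the abstract ends in a pixel-wise Patlak fit: regressing C_T(t)/C_p(t) on ∫C_p/C_p(t), whose slope is the net influx rate Ki and whose intercept is the initial distribution volume. A minimal sketch on a synthetic time-activity curve (the constant plasma input and the Ki = 0.05, V = 0.3 values are hypothetical), using `statistics.linear_regression` from Python 3.10+:

    ```python
    from statistics import linear_regression

    def patlak_fit(tissue, plasma, integral_plasma):
        """Indirect Patlak analysis for one pixel: regress
        C_T(t)/C_p(t) on (integral of C_p)/C_p(t). The slope is the
        net influx rate Ki; the intercept is the volume term V."""
        x = [ip / p for ip, p in zip(integral_plasma, plasma)]
        y = [ct / p for ct, p in zip(tissue, plasma)]
        slope, intercept = linear_regression(x, y)
        return slope, intercept

    # synthetic TAC with constant plasma input C_p = 1, so the
    # running integral of C_p is simply t (hypothetical values)
    t = [5.0, 10.0, 20.0, 30.0, 45.0, 60.0]
    plasma = [1.0] * len(t)
    integral_plasma = t[:]
    tissue = [0.05 * ti + 0.3 for ti in t]   # true Ki = 0.05, V = 0.3

    ki, v = patlak_fit(tissue, plasma, integral_plasma)
    ```

    The direct method studied in the paper avoids this two-step route by embedding the same linear model inside the reconstruction, which is what changes the noise properties relevant to lesion detection.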

  6. Theoretical vs. measured risk estimates for the external exposure to ionizing radiation pathway - a case study of a major industrial site

    International Nuclear Information System (INIS)

    Dundon, S.T.

    1996-01-01

    Two methods of estimating the risk to industrial receptors from ionizing radiation are presented here. The first method relies on the use of the U.S. Environmental Protection Agency (EPA) external exposure slope factor combined with default exposure parameters for industrial land uses. The second method employs measured exposure rate data and site-specific exposure durations combined with the BEIR V radiological risk coefficient to estimate occupational risk. The uncertainties in each method are described qualitatively. Site-specific information was available for the exposure duration and the exposure frequency, as well as historic dosimetry information. Risk estimates were also generated for the current regulatory cleanup level (removal risks included) and for a no-action scenario. The study showed that uncertainties for risks calculated using measured exposure rates and site-specific exposure parameters were much lower, and the results more defensible, than for risks calculated using EPA slope factors combined with default exposure parameters. The findings call into question the use of a uniform cleanup standard for depleted uranium that does not account for site-specific land uses and relies on theoretical models rather than measured exposure rate information

  7. A game-theoretic framework for estimating a health purchaser's willingness-to-pay for health and for expansion.

    Science.gov (United States)

    Yaesoubi, Reza; Roberts, Stephen D

    2010-12-01

    A health purchaser's willingness-to-pay (WTP) for health is defined as the amount of money the health purchaser (e.g. a health maximizing public agency or a profit maximizing health insurer) is willing to spend for an additional unit of health. In this paper, we propose a game-theoretic framework for estimating a health purchaser's WTP for health in markets where the health purchaser offers a menu of medical interventions, and each individual in the population selects the intervention that maximizes her prospect. We discuss how the WTP for health can be employed to determine medical guidelines, and to price new medical technologies, such that the health purchaser is willing to implement them. The framework further introduces a measure for WTP for expansion, defined as the amount of money the health purchaser is willing to pay per person in the population served by the health provider to increase the consumption level of the intervention by one percent without changing the intervention price. This measure can be employed to find how much to invest in expanding a medical program through opening new facilities, advertising, etc. Applying the proposed framework to colorectal cancer screening tests, we estimate the WTP for health and the WTP for expansion of colorectal cancer screening tests for the 2005 US population.

  8. Methodological Framework for Estimating the Correlation Dimension in HRV Signals

    Directory of Open Access Journals (Sweden)

    Juan Bolea

    2014-01-01

Full Text Available This paper presents a methodological framework for robust estimation of the correlation dimension in HRV signals. It includes (i) a fast algorithm for on-line computation of correlation sums; (ii) fitting of log-log curves to a sigmoidal function for robust maximum-slope estimation, discarding estimates that fail the fitting requirements; (iii) three different approaches for linear-region slope estimation based on the latter; and (iv) exponential fitting for robust estimation of the saturation level of the slope series with increasing embedding dimension, to finally obtain the correlation dimension estimate. Each approach for slope estimation leads to a correlation dimension estimate, called D^2, D^2⊥, and D^2max. D^2 and D^2max estimate the theoretical value of the correlation dimension for the Lorenz attractor with a relative error of 4%, and D^2⊥ with 1%. The three approaches are applied to HRV signals of pregnant women before spinal anesthesia for cesarean delivery in order to identify patients at risk for hypotension. D^2 maintains the 81% accuracy previously described in the literature, while the D^2⊥ and D^2max approaches reach 91% accuracy on the same database.
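Step (i) above is a correlation-sum computation in the Grassberger-Procaccia sense, and steps (ii)-(iii) then read the slope of log C(r) versus log r. A brute-force sketch for a scalar series (the paper's fast on-line algorithm, delay embedding, and sigmoidal fitting are not reproduced here):

```python
import numpy as np

def correlation_sum(x, r):
    # fraction of distinct point pairs closer than r
    d = np.abs(x[:, None] - x[None, :])
    n = len(x)
    return (np.count_nonzero(d < r) - n) / (n * (n - 1))   # subtract the diagonal

def slope_loglog(x, radii):
    # slope of log C(r) vs log r approximates the correlation dimension
    c = np.array([correlation_sum(x, r) for r in radii])
    return np.polyfit(np.log(radii), np.log(c), 1)[0]
```

For points spread uniformly on a line the slope should be close to 1, the set's dimension; in practice the linear region must be chosen carefully, which is exactly what the framework's steps (ii)-(iv) automate.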

  9. Dual-energy X-ray absorptiometry: analysis of pediatric fat estimate errors due to tissue hydration effects.

    Science.gov (United States)

    Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B

    2000-12-01

Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended the phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy attenuation (R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.

  10. Concept of the Cooling System of the ITS for ALICE: Technical Proposals, Theoretical Estimates, Experimental Results

    CERN Document Server

    Godisov, O N; Yudkin, M I; Gerasimov, S F; Feofilov, G A

    1994-01-01

Contradictory demands raised by the use of different types of sensitive detectors in the five layers of the Inner Tracking System (ITS) for ALICE call for the simultaneous use of different heat-drain schemes: gaseous cooling of the 1st layer (uniform heat production over the sensitive surface) and evaporative cooling for the 2nd-5th layers (localized heat production). The latter system is also required for thermostabilization of the Si-drift detectors to within 0.1 degree C. Theoretical estimates of gaseous, evaporative and liquid cooling systems are given for all ITS layers. The results of experiments on evaporative and liquid heat-drain systems are presented and discussed. The major technical problems of evaporative system design are considered: i) control of the liquid supply; ii) vapour pressure control. Two concepts of evaporative system are proposed: 1) a one-channel system for joint transfer of the two phases (liquid + gas); 2) a two-channel system with separate transfer of the phases. Both sy...

  11. Theoretical estimation of proton induced X-ray emission yield of the trace elements present in the lung and breast cancer

    International Nuclear Information System (INIS)

    Manjunatha, H.C.; Sowmya, N.

    2013-01-01

X-rays may be produced following the excitation of target atoms by an energetic incident beam of protons. Proton induced X-ray emission (PIXE) analysis has been used for many years for the determination of the elemental composition of materials using X-rays. Recent interest in proton induced X-ray emission cross sections has arisen due to their importance in the rapidly expanding field of PIXE analysis. One of the steps in the analysis is to fit the measured X-ray spectrum with a theoretical spectrum. Theoretical cross sections and yields are essential for the evaluation of the spectrum. We have theoretically evaluated the PIXE cross sections for trace elements in lung and breast cancer tissues, namely Cl, K, Ca, Ti, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Br, Rb, P, S, Sr, Hg and Pb. The estimated cross sections are used in the evaluation of the proton induced X-ray emission spectrum for the given trace elements. We have also evaluated the proton induced X-ray emission yields in thin and thick targets of the given trace elements. The evaluated proton induced X-ray emission cross sections, spectra and yields are graphically represented, and some of these values are also tabulated. Proton induced X-ray emission cross sections and yields for the given trace elements vary with energy. The PIXE yield depends on the areal density of the target, not on its thickness. (author)

  12. Uncertainty Estimates: A New Editorial Standard

    International Nuclear Information System (INIS)

    Drake, Gordon W.F.

    2014-01-01

Full text: The objective of achieving higher standards for uncertainty estimates in the publication of theoretical data for atoms and molecules requires a concerted effort by both the authors of papers and the editors who send them out for peer review. In April 2011, the editors of Physical Review A published an Editorial announcing a new standard that uncertainty estimates would be required whenever practicable, and in particular in the following circumstances: 1. If the authors claim high accuracy, or improvements on the accuracy of previous work. 2. If the primary motivation for the paper is to make comparisons with present or future high-precision experimental measurements. 3. If the primary motivation is to provide interpolations or extrapolations of known experimental measurements. The new policy means that papers that do not meet these standards are not sent out for peer review until they have been suitably revised, and the authors are notified immediately upon receipt. The policy has now been in effect for three years. (author)

  13. An observer-theoretic approach to estimating neutron flux distribution

    International Nuclear Information System (INIS)

    Park, Young Ho; Cho, Nam Zin

    1989-01-01

    State feedback control provides many advantages such as stabilization and improved transient response. However, when the state feedback control is considered for spatial control of a nuclear reactor, it requires complete knowledge of the distributions of the system state variables. This paper describes a method for estimating the flux spatial distribution using only limited flux measurements. It is based on the Luenberger observer in control theory, extended to the distributed parameter systems such as the space-time reactor dynamics equation. The results of the application of the method to simple reactor models showed that the flux distribution is estimated by the observer very efficiently using information from only a few sensors
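The idea extends the finite-dimensional Luenberger observer, which can be sketched for a discrete-time linear system: predict with the model, then correct with the measured output error. The matrices and gain below are illustrative toy values, not a reactor model:

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.0, 0.8]])          # state dynamics
C = np.array([[1.0, 0.0]])          # only the first state is measured
L = np.array([0.5, 0.3])            # gain chosen so A - L*C is stable

def observer_step(xhat, y):
    # Luenberger update: model prediction plus output-error feedback
    innovation = y - (C @ xhat)[0]
    return A @ xhat + L * innovation

x, xhat = np.array([1.0, -1.0]), np.zeros(2)   # true state vs. estimate
for _ in range(100):
    y = (C @ x)[0]                  # measurement from the true (unknown) state
    xhat = observer_step(xhat, y)
    x = A @ x
```

The estimation error obeys e⁺ = (A − LC)e, so it decays whenever the gain places the eigenvalues of A − LC inside the unit circle. The abstract's extension replaces A and C with the distributed space-time reactor dynamics operator and a few point flux sensors.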

  14. MAGNETIC QUENCHING OF TURBULENT DIFFUSIVITY: RECONCILING MIXING-LENGTH THEORY ESTIMATES WITH KINEMATIC DYNAMO MODELS OF THE SOLAR CYCLE

    International Nuclear Information System (INIS)

    Munoz-Jaramillo, Andres; Martens, Petrus C. H.; Nandy, Dibyendu

    2011-01-01

    The turbulent magnetic diffusivity in the solar convection zone is one of the most poorly constrained ingredients of mean-field dynamo models. This lack of constraint has previously led to controversy regarding the most appropriate set of parameters, as different assumptions on the value of turbulent diffusivity lead to radically different solar cycle predictions. Typically, the dynamo community uses double-step diffusivity profiles characterized by low values of diffusivity in the bulk of the convection zone. However, these low diffusivity values are not consistent with theoretical estimates based on mixing-length theory, which suggest much higher values for turbulent diffusivity. To make matters worse, kinematic dynamo simulations cannot yield sustainable magnetic cycles using these theoretical estimates. In this work, we show that magnetic cycles become viable if we combine the theoretically estimated diffusivity profile with magnetic quenching of the diffusivity. Furthermore, we find that the main features of this solution can be reproduced by a dynamo simulation using a prescribed (kinematic) diffusivity profile that is based on the spatiotemporal geometric average of the dynamically quenched diffusivity. This bridges the gap between dynamically quenched and kinematic dynamo models, supporting their usage as viable tools for understanding the solar magnetic cycle.

  15. Ultra-small time-delay estimation via a weak measurement technique with post-selection

    International Nuclear Information System (INIS)

    Fang, Chen; Huang, Jing-Zheng; Yu, Yang; Li, Qinzheng; Zeng, Guihua

    2016-01-01

    Weak measurement is a novel technique for parameter estimation with higher precision. In this paper we develop a general theory for the parameter estimation based on a weak measurement technique with arbitrary post-selection. The weak-value amplification model and the joint weak measurement model are two special cases in our theory. Applying the developed theory, time-delay estimation is investigated in both theory and experiments. The experimental results show that when the time delay is ultra-small, the joint weak measurement scheme outperforms the weak-value amplification scheme, and is robust against not only misalignment errors but also the wavelength dependence of the optical components. These results are consistent with theoretical predictions that have not been previously verified by any experiment. (paper)

  16. A combined crossed molecular beams and theoretical study of the reaction CN + C{sub 2}H{sub 4}

    Energy Technology Data Exchange (ETDEWEB)

    Balucani, Nadia, E-mail: nadia.balucani@unipg.it [Dipartimento di Chimica, Biologia e Biotecnologie, Università degli Studi di Perugia, Perugia (Italy); Leonori, Francesca; Petrucci, Raffaele [Dipartimento di Chimica, Biologia e Biotecnologie, Università degli Studi di Perugia, Perugia (Italy); Wang, Xingan [Dipartimento di Chimica, Biologia e Biotecnologie, Università degli Studi di Perugia, Perugia (Italy); Department of Chemical Physics, University of Science and Technology of China, Hefei 230026 (China); Casavecchia, Piergiorgio [Dipartimento di Chimica, Biologia e Biotecnologie, Università degli Studi di Perugia, Perugia (Italy); Skouteris, Dimitrios [Scuola Normale Superiore, Pisa (Italy); Albernaz, Alessandra F. [Instituto de Física, Universidade de Brasília, Brasília (Brazil); Gargano, Ricardo [Instituto de Física, Universidade de Brasília, Brasília (Brazil); Departments of Chemistry and Physics, University of Florida, Quantum Theory Project, Gainesville, FL 32611 (United States)

    2015-03-01

    Highlights: • The CN + C{sub 2}H{sub 4} reaction was investigated in crossed beam experiments. • Electronic structure calculations of the potential energy surface were performed. • RRKM estimates qualitatively reproduce the experimental C{sub 2}H{sub 3}NC yield. - Abstract: The CN + C{sub 2}H{sub 4} reaction has been investigated experimentally, in crossed molecular beam (CMB) experiments at the collision energy of 33.4 kJ/mol, and theoretically, by electronic structure calculations of the relevant potential energy surface and Rice–Ramsperger–Kassel–Marcus (RRKM) estimates of the product branching ratio. Differently from previous CMB experiments at lower collision energies, but similarly to a high energy study, we have some indication that a second reaction channel is open at this collision energy, the characteristics of which are consistent with the channel leading to CH{sub 2}CHNC + H. The RRKM estimates using M06L electronic structure calculations qualitatively support the experimental observation of C{sub 2}H{sub 3}NC formation at this and at the higher collision energy of 42.7 kJ/mol of previous experiments.

  17. Statistical methods of parameter estimation for deterministically chaotic time series

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, D.

    2004-03-01

We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments) to parameter estimation for a deterministically chaotic low-dimensional dynamic system (the logistic map) containing observational noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, treated as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit to, but simpler and less biased than, the "multiple shooting" method previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need for a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for estimating the parameter of the logistic map is discussed. This method appears to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
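For intuition, the simplest of these estimators, a least-squares fit of the map parameter from one-step predictions, can be sketched as follows. This is plain conditional least squares via grid search, not the paper's segmentation-fitting ML method, and the grid bounds are illustrative:

```python
import numpy as np

def logistic_trajectory(r, x1, n):
    # iterate x[t+1] = r * x[t] * (1 - x[t])
    x = np.empty(n)
    x[0] = x1
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

def estimate_r(y):
    # grid search minimizing the one-step prediction error
    # y[t+1] ≈ r * y[t] * (1 - y[t])
    rs = np.linspace(3.5, 4.0, 2001)
    sse = [np.sum((y[1:] - r * y[:-1] * (1 - y[:-1])) ** 2) for r in rs]
    return rs[int(np.argmin(sse))]
```

With observational noise this estimator is biased, which is precisely the difficulty the segmentation-fitting approach is designed to reduce.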

  18. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  19. Dimensional accuracy of ceramic self-ligating brackets and estimates of theoretical torsional play.

    Science.gov (United States)

    Lee, Youngran; Lee, Dong-Yul; Kim, Yoon-Ji R

    2016-09-01

    To ascertain the dimensional accuracies of some commonly used ceramic self-ligation brackets and the amount of torsional play in various bracket-archwire combinations. Four types of 0.022-inch slot ceramic self-ligating brackets (upper right central incisor), three types of 0.018-inch ceramic self-ligating brackets (upper right central incisor), and three types of rectangular archwires (0.016 × 0.022-inch beta-titanium [TMA] (Ormco, Orange, Calif), 0.016 × 0.022-inch stainless steel [SS] (Ortho Technology, Tampa, Fla), and 0.019 × 0.025-inch SS (Ortho Technology)) were measured using a stereomicroscope to determine slot widths and wire cross-sectional dimensions. The mean acquired dimensions of the brackets and wires were applied to an equation devised by Meling to estimate torsional play angle (γ). In all bracket systems, the slot tops were significantly wider than the slot bases (P brackets, and Clippy-Cs (Tomy, Futaba, Fukushima, Japan) among the 0.018-inch brackets. The Damon Clear (Ormco) bracket had the smallest dimensional error (0.542%), whereas the 0.022-inch Empower Clear (American Orthodontics, Sheboygan, Wis) bracket had the largest (3.585%). The largest amount of theoretical play is observed using the Empower Clear (American Orthodontics) 0.022-inch bracket combined with the 0.016 × 0.022-inch TMA wire (Ormco), whereas the least amount occurs using the 0.018 Clippy-C (Tomy) combined with 0.016 × 0.022-inch SS wire (Ortho Technology).

  20. Annual Gross Primary Production from Vegetation Indices: A Theoretically Sound Approach

    Directory of Open Access Journals (Sweden)

    María Amparo Gilabert

    2017-02-01

    Full Text Available A linear relationship between the annual gross primary production (GPP and a PAR-weighted vegetation index is theoretically derived from the Monteith equation. A semi-empirical model is then proposed to estimate the annual GPP from commonly available vegetation indices images and a representative PAR, which does not require actual meteorological data. A cross validation procedure is used to calibrate and validate the model predictions against reference data. As the calibration/validation process depends on the reference GPP product, the higher the quality of the reference GPP, the better the performance of the semi-empirical model. The annual GPP has been estimated at 1-km scale from MODIS NDVI and EVI images for eight years. Two reference data sets have been used: an optimized GPP product for the study area previously obtained and the MOD17A3 product. Different statistics show a good agreement between the estimates and the reference GPP data, with correlation coefficient around 0.9 and relative RMSE around 20%. The annual GPP is overestimated in semiarid areas and slightly underestimated in dense forest areas. With the above limitations, the model provides an excellent compromise between simplicity and accuracy for the calculation of long time series of annual GPP.
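The model described reduces to a linear regression of annual GPP on a PAR-weighted vegetation index, as Monteith's equation GPP = ε·fAPAR·PAR suggests when fAPAR is taken as linear in the VI. A sketch of the weighting and calibration steps (the coefficients and array shapes below are synthetic illustrations, not the paper's MODIS data):

```python
import numpy as np

def par_weighted_vi(vi, par):
    # annual PAR-weighted vegetation index: sum over the year of VI * PAR
    return np.sum(vi * par, axis=-1)

def calibrate(x, gpp_ref):
    # fit annual GPP ≈ a * x + b against a reference GPP product
    a, b = np.polyfit(x, gpp_ref, 1)
    return a, b
```

As the abstract notes, the fitted coefficients inherit the quality of whichever reference GPP product is used for calibration.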

  1. Uncertainty estimates for theoretical atomic and molecular data

    International Nuclear Information System (INIS)

    Chung, H-K; Braams, B J; Bartschat, K; Császár, A G; Drake, G W F; Kirchner, T; Kokoouline, V; Tennyson, J

    2016-01-01

    Sources of uncertainty are reviewed for calculated atomic and molecular data that are important for plasma modeling: atomic and molecular structures and cross sections for electron-atom, electron-molecule, and heavy particle collisions. We concentrate on model uncertainties due to approximations to the fundamental many-body quantum mechanical equations and we aim to provide guidelines to estimate uncertainties as a routine part of computations of data for structure and scattering. (topical review)

  2. Devil in the Details: A Critical Review of "Theoretical Loss".

    Science.gov (United States)

    Tom, Matthew A; Shaffer, Howard J

    2016-09-01

    In their review of Internet gambling studies, Auer and Griffiths (J Gambl Stud 30(4), 879-887, 2014) question the validity of using bet size as an indicator of gambling intensity. Instead, in that review and in a response (Auer and Griffiths, J Gambl Stud 31(3), 921-931, 2015) to a previous comment (Braverman et al., J Gambl Stud 31(2), 359-366, 2015), Auer and Griffiths suggested using "theoretical loss" as a preferable measure of gambling intensity. This comment extends and advances the discussion about measures of gambling intensity. In this paper, we describe previously identified problems that Auer and Griffiths need to address to sustain theoretical loss as a viable measure of gambling intensity and add details to the discussion that demonstrate difficulties associated with the use of theoretical loss with certain gambling games.
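For concreteness, "theoretical loss" is conventionally the stake multiplied by the game's house advantage, summed over wagers, so two players with equal total bet size can have very different theoretical losses. A minimal sketch (the games and edge values are illustrative):

```python
def theoretical_loss(wagers):
    # wagers: list of (stake, house_edge) pairs; expected loss is their product, summed
    return sum(stake * edge for stake, edge in wagers)

# same total stake (100), but very different expected losses
player_a = [(100, 0.01)]                 # one low-edge game
player_b = [(50, 0.05), (50, 0.10)]      # two high-edge games
```

This divergence between bet size and theoretical loss is the crux of the measurement debate the comment addresses.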

  3. Theoretical models for recombination in expanding gas

    International Nuclear Information System (INIS)

    Avron, Y.; Kahane, S.

    1978-09-01

In laser isotope separation of atomic uranium, one is confronted with the theoretical problem of estimating the concentration of thermally ionized uranium atoms. To investigate this problem, theoretical models for recombination in an expanding gas in the absence of local thermal equilibrium have been constructed. The expansion of the gas is described by soluble models of the hydrodynamic equation, and the recombination by rate equations. General results for the freezing effect over the relevant ranges of the gas parameters are obtained. The impossibility of thermal equilibrium in expanding two-component systems is proven.

  4. Estimation of the Maximum Theoretical Productivity of Fed-Batch Bioreactors

    Energy Technology Data Exchange (ETDEWEB)

    Bomble, Yannick J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); St. John, Peter C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Crowley, Michael F [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-18

    A key step towards the development of an integrated biorefinery is the screening of economically viable processes, which depends sharply on the yields and productivities that can be achieved by an engineered microorganism. In this study, we extend an earlier method which used dynamic optimization to find the maximum theoretical productivity of batch cultures to explicitly include fed-batch bioreactors. In addition to optimizing the intracellular distribution of metabolites between cell growth and product formation, we calculate the optimal control trajectory of feed rate versus time. We further analyze how sensitive the productivity is to substrate uptake and growth parameters.

  5. Fear of cancer recurrence: a theoretical review and novel cognitive processing formulation

    NARCIS (Netherlands)

    Fardell, J.E.; Thewes, B.; Turner, J.; Gilchrist, J.; Sharpe, L.; Smith, A.; Girgis, A.; Butow, P.

    2016-01-01

    PURPOSE: Fear of cancer recurrence (FCR) is prevalent among survivors. However, a comprehensive and universally accepted theoretical framework of FCR to guide intervention is lacking. This paper reviews theoretical frameworks previously used to explain FCR and describes the formulation of a novel

  6. A methodology for modeling photocatalytic reactors for indoor pollution control using previously estimated kinetic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Passalia, Claudio; Alfano, Orlando M. [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina); Brandi, Rodolfo J., E-mail: rbrandi@santafe-conicet.gov.ar [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina)

    2012-04-15

Highlights: • Indoor pollution control via photocatalytic reactors. • Scaling-up methodology based on previously determined mechanistic kinetics. • Radiation interchange model between catalytic walls using configuration factors. • Modeling and experimental validation of a complex geometry photocatalytic reactor. - Abstract: A methodology for modeling photocatalytic reactors for their application in indoor air pollution control is carried out. The methodology implies, firstly, the determination of intrinsic reaction kinetics for the removal of formaldehyde. This is achieved by means of a simple geometry, continuous reactor operating under kinetic control regime and steady state. The kinetic parameters were estimated from experimental data by means of a nonlinear optimization algorithm. The second step was the application of the obtained kinetic parameters to a very different photoreactor configuration. In this case, the reactor is a corrugated wall type using nanosize TiO{sub 2} as catalyst irradiated by UV lamps that provided a spatially uniform radiation field. The radiative transfer within the reactor was modeled through a superficial emission model for the lamps, the ray tracing method and the computation of view factors. The velocity and concentration fields were evaluated by means of a commercial CFD tool (Fluent 12) where the radiation model was introduced externally. The results of the model were compared experimentally in a corrugated wall, bench scale reactor constructed in the laboratory. The overall pollutant conversion showed good agreement between model predictions and experiments, with a root mean square error less than 4%.

  7. Theoretical estimation of "6"4Cu production with neutrons emitted during "1"8F production with a 30 MeV medical cyclotron

    International Nuclear Information System (INIS)

    Auditore, Lucrezia; Amato, Ernesto; Baldari, Sergio

    2017-01-01

Purpose: This work presents a theoretical estimate of the combined production of ¹⁸F and ⁶⁴Cu isotopes for PET applications. ⁶⁴Cu production is induced in a secondary target by neutrons emitted during routine ¹⁸F production with a 30 MeV cyclotron: protons produce ¹⁸F via the ¹⁸O(p,n)¹⁸F reaction on a [¹⁸O]-H₂O target (primary target), and the emitted neutrons produce ⁶⁴Cu via the ⁶⁴Zn(n,p)⁶⁴Cu reaction on an enriched zinc target (secondary target). Methods: Monte Carlo simulations were carried out using the Monte Carlo N-Particle eXtended (MCNPX) code to evaluate the flux and energy spectra of neutrons produced in the primary (Be+[¹⁸O]-H₂O) target by protons and the attenuation of the neutron flux in the secondary target. The ⁶⁴Cu yield was estimated using an analytical approach based on both the TENDL-2015 data library and experimental data selected from the EXFOR database. Results: Theoretical evaluations indicate that about 3.8 MBq/μA of ⁶⁴Cu can be obtained as a secondary, 'side' production with a 30 MeV cyclotron for 2 h of irradiation of a properly designed zinc target. Irradiating for 2 h with a proton current of 120 μA, a yield of about 457 MBq is expected. Moreover, the most relevant contaminants are found to be ⁶³,⁶⁵Zn, which can be chemically separated from ⁶⁴Cu, in contrast to proton irradiation of an enriched ⁶⁴Ni target, which yields ⁶⁴Cu mixed with other copper isotopes as contaminants. Conclusions: The theoretical study discussed in this paper evaluates the potential of the combined production of ¹⁸F and ⁶⁴Cu for medical purposes by irradiating a properly designed target with 30 MeV protons. Interesting yields of ⁶⁴Cu are obtainable, and the estimation of contaminants in the irradiated zinc target is discussed. - Highlights: • ⁶⁴Cu production with secondary neutrons from ¹⁸F production with protons was investigated. • Neutron reactions induced in enriched ⁶⁴Zn
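Analytical yield estimates of this kind rest on the standard activation formula A = N·φ·σ·(1 − e^(−λt)). A generic single-group sketch (the numbers in the test are placeholders, not the paper's neutron spectrum or cross sections, which would require integrating φ(E)σ(E) over energy):

```python
import math

def activation_activity(n_atoms, flux, sigma_cm2, half_life_s, t_irr_s):
    # activity at end of irradiation, single-group approximation:
    # A = N * phi * sigma * (1 - exp(-lambda * t))
    lam = math.log(2.0) / half_life_s
    return n_atoms * flux * sigma_cm2 * (1.0 - math.exp(-lam * t_irr_s))
```

The saturation factor (1 − e^(−λt)) is why the yield per μA·h falls off for irradiations long compared with the 12.7 h half-life of ⁶⁴Cu.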

  8. A mechanical model of wing and theoretical estimate of taper factor ...

    Indian Academy of Sciences (India)

Likewise, by applying linear regression and curve estimation to the data, as well as estimating the taper factors and the angle between the humerus and the body, we calculated the relationship between wingspan, wing area and the speed necessary to meet the aerodynamic requirements of sustained flight. In addition ...

  9. Intelligence, previous convictions and interrogative suggestibility: a path analysis of alleged false-confession cases.

    Science.gov (United States)

    Sharrock, R; Gudjonsson, G H

    1993-05-01

    The main purpose of this study was to investigate the relationship between interrogative suggestibility and previous convictions among 108 defendants in criminal trials, using a path analysis technique. It was hypothesized that previous convictions, which may provide defendants with interrogative experiences, would correlate negatively with 'shift' as measured by the Gudjonsson Suggestibility Scale (Gudjonsson, 1984a), after intelligence and memory had been controlled for. The hypothesis was partially confirmed and the theoretical and practical implications of the findings are discussed.

  10. Theoretical Study of the Compound Parabolic Trough Solar Collector

    OpenAIRE

    Dr. Subhi S. Mahammed; Dr. Hameed J. Khalaf; Tadahmun A. Yassen

    2012-01-01

A theoretical design of a compound parabolic trough solar collector (CPC) without tracking is presented in this work. The thermal efficiency is obtained using a FORTRAN 90 program. The thermal efficiency is between 60% and 67% at mass flow rates between 0.02 and 0.03 kg/s and a concentration ratio of 3.8, without the need for a tracking system. The total and diffuse radiation is calculated for Tikrit city using theoretical equations. Good agreement is found between the present work and previous work.

  11. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties and can serve as a competitive procedure, especially when the sample size is small and a well-conditioned estimator is required.
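The flavor of such an estimator can be seen in a simplified eigenvalue-clipping sketch: shrink the spectrum of the sample covariance so its condition number cannot exceed a chosen κ_max. Note this clipping rule is an illustration only; the paper's estimator chooses the truncation level by maximum likelihood, which this sketch does not do:

```python
import numpy as np

def cap_condition_number(S, kappa_max):
    # clip eigenvalues below lambda_max / kappa_max up to that floor,
    # keeping the sample eigenvectors unchanged
    w, V = np.linalg.eigh(S)
    floor = w.max() / kappa_max
    w_capped = np.clip(w, floor, None)
    return (V * w_capped) @ V.T      # V @ diag(w_capped) @ V.T
```

The result is guaranteed invertible with condition number at most κ_max, which is the "well-conditioned" property the abstract emphasizes.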

  12. Review of theoretical results

    International Nuclear Information System (INIS)

    Barrett, R.C.

    1979-01-01

    Nowadays the 'experimental' charge densities are produced with convincing error estimates thanks to new methods and techniques. In addition, the accuracy of those experiments means that r.m.s. radii are known to within a few hundredths of a fermi. Because of that accuracy, theorists are left far behind. In order to show which theoretical possibilities exist at the moment, we discuss the single-particle shell model and the Hartree-Fock or mean-field approximation. Corrections to the mean-field approximation are described. Finally, some examples and conclusions are presented. (KBE)

  13. Study of some physical aspects previous to design of an exponential experiment

    International Nuclear Information System (INIS)

    Caro, R.; Francisco, J. L. de

    1961-01-01

    This report presents the theoretical study of some physical aspects previous to the design of an exponential facility. These are: fast and slow flux distribution in the multiplicative medium and in the thermal column; slowing down in the thermal column; geometrical distribution and minimum required intensity of sources; and access channels and the perturbations produced by possible variations in their position and intensity. (Author) 4 refs

  14. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-11-01

    MCNP's criticality methodology and some basic statistics are reviewed. Confidence intervals are discussed, as well as how to build them and their importance in the presentation of a Monte Carlo result. The combination of MCNP's three k-eff estimators is shown, theoretically and empirically, by statistical studies and examples, to be the best k-eff estimator. The method of combining estimators is based on a solid theoretical foundation, namely the Gauss-Markov theorem with regard to the least squares method. The confidence intervals of the combined estimator are also shown to have correct coverage rates for the examples considered.
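    The Gauss-Markov combination of correlated estimators mentioned above can be written down directly; the sketch below is the generic generalized-least-squares formula for combining estimators of a common quantity, not MCNP's specific implementation.

```python
import numpy as np

def combine_estimators(k, cov):
    """Gauss-Markov (GLS) combination of correlated estimators of one quantity.

    k_hat = (1' C^-1 k) / (1' C^-1 1),  var(k_hat) = 1 / (1' C^-1 1).
    """
    ones = np.ones(len(k))
    w = np.linalg.solve(np.asarray(cov, dtype=float), ones)  # C^-1 1
    var = 1.0 / (ones @ w)                                   # combined variance
    return var * (w @ k), var
```

For independent estimators with equal variances this reduces to the plain average, and the combined variance shrinks by the number of estimators.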

  15. Theoretical Study of the Compound Parabolic Trough Solar Collector

    Directory of Open Access Journals (Sweden)

    Dr. Subhi S. Mahammed

    2012-06-01

    Full Text Available A theoretical design of a compound parabolic trough solar collector (CPC) without tracking is presented in this work. The thermal efficiency is obtained using a FORTRAN 90 program and lies between 60% and 67% at mass flow rates between 0.02 and 0.03 kg/s and a concentration ratio of 3.8, without the need for a tracking system. The total and diffuse radiation is calculated for Tikrit city using theoretical equations. Good agreement is found between the present work and previous work.

  16. Bioactivity of Isoflavones: Assessment through a Theoretical Model as a Way to Obtain a “Theoretical Efficacy Related to Estradiol (TERE)”

    Science.gov (United States)

    Campos, Maria da Graça R.; Matos, Miguel Pires

    2010-01-01

    The increase of the human life span will have profound implications for public health in the decades to come. By 2030, there will be an estimated 1.2 billion women in post-menopause. Hormone Replacement Therapy with synthetic hormones is still full of risks and, according to the latest developments, should be used for the shortest time possible. Searching for alternative drugs is inevitable in this scenario, and science must provide physicians with other substances that can be used to treat the same symptoms with fewer side effects. Systematic research in this field is now focusing on isoflavones, but the randomised controlled trials and meta-analyses concerning post-menopause therapy, which could have an important impact on human health, are very controversial. The aim of the present work was to establish a theoretical calculation for estimating the “Theoretical Efficacy (TE)” of a mixture of different bioactive compounds, as a way to obtain a “Theoretical Efficacy Related to Estradiol (TERE)”. The theoretical calculation that we propose in this paper integrates different knowledge about this subject and sets methodological boundaries that can be used to analyse already published data. The outcome should set some consensus for new clinical trials using isoflavones (isolated or included in mixtures) that will be evaluated to assess their therapeutic activity. This theoretical method for evaluating possible efficacy could probably also be applied to other herbal drug extracts when a synergistic or contradictory bio-effect is not verified. In this way, we may contribute to the development of new therapeutic approaches. PMID:20386649

  17. Theoretical investigation of phase-controlled bias effect in capacitively coupled plasma discharges

    International Nuclear Information System (INIS)

    Kwon, Deuk-Chul; Yoon, Jung-Sik

    2011-01-01

    We theoretically investigated the effect of the phase difference between powered electrodes in capacitively coupled plasma (CCP) discharges. Previous experimental results have shown that the plasma potential can be controlled by using a phase-shift controller in CCP discharges. In this work, based on previously developed radio frequency sheath models, we developed a circuit model to self-consistently determine the bias voltage from the plasma parameters. Results show that the present theoretical model explains the experimental results quite well and that there is an optimum value of the phase difference for which the V_dc/V_pp ratio becomes a minimum.

  18. Response to health insurance by previously uninsured rural children.

    Science.gov (United States)

    Tilford, J M; Robbins, J M; Shema, S J; Farmer, F L

    1999-08-01

    To examine the healthcare utilization and costs of previously uninsured rural children. Four years of claims data from a school-based health insurance program located in the Mississippi Delta. All children who were not Medicaid-eligible or were uninsured were eligible for limited benefits under the program. The 1987 National Medical Expenditure Survey (NMES) was used to compare utilization of services. The study represents a natural experiment in the provision of insurance benefits to a previously uninsured population. Premiums for the claims cost were set with little or no information on expected use of services. Claims from the insurer were used to form a panel data set. Mixed model logistic and linear regressions were estimated to determine the response to insurance for several categories of health services. The use of services increased over time and approached the level of utilization in the NMES. Conditional medical expenditures also increased over time. Actuarial estimates of claims cost greatly exceeded actual claims cost. The provision of a limited medical, dental, and optical benefit package cost approximately $20-$24 per member per month in claims paid. An important uncertainty in providing health insurance to previously uninsured populations is whether a pent-up demand exists for health services. Evidence of a pent-up demand for medical services was not supported in this study of rural school-age children. States considering partnerships with private insurers to implement the State Children's Health Insurance Program could lower premium costs by assembling basic data on previously uninsured children.

  19. Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

    Science.gov (United States)

    Simon, Dan; Simon, Donald L.

    2005-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to estimation accuracy, because the unconstrained Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when the confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
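    The tuning idea can be illustrated with a toy weighting scheme (hypothetical, and much simpler than the paper's actual tuning law): trust the unconstrained estimate when the measurement residual is small relative to its theoretical spread, and lean on the constrained estimate otherwise.

```python
import math

def blend_estimates(x_unconstrained, x_constrained, residual, sigma_theory):
    """Blend two state estimates by residual-based confidence (toy scheme).

    conf is near 1 when the residual is small relative to its theoretical
    standard deviation sigma_theory, and near 0 when it is large.
    """
    conf = math.exp(-0.5 * (residual / sigma_theory) ** 2)
    return conf * x_unconstrained + (1.0 - conf) * x_constrained
```

A zero residual returns the unconstrained (optimal) estimate; a residual many sigmas out falls back almost entirely on the constrained estimate.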

  20. Dynamics in Higher Education Politics: A Theoretical Model

    Science.gov (United States)

    Kauko, Jaakko

    2013-01-01

    This article presents a model for analysing dynamics in higher education politics (DHEP). Theoretically the model draws on the conceptual history of political contingency, agenda-setting theories and previous research on higher education dynamics. According to the model, socio-historical complexity can best be analysed along two dimensions: the…

  1. A bias correction for covariance estimators to improve inference with generalized estimating equations that use an unstructured correlation matrix.

    Science.gov (United States)

    Westgate, Philip M

    2013-07-20

    Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.

  2. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

    Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

    The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper where a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness, the authors present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, such as analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detector, truncated data, and dual-source CT.
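    The beam-scatter-kernel superposition approach can be sketched as a convolution of the primary image with a scatter kernel. The snippet below shows the simplest, spatially invariant case (the case the review generalizes beyond), using a circular FFT convolution:

```python
import numpy as np

def scatter_estimate(primary, kernel):
    """Superposition model: scatter image ~ primary convolved with a kernel.

    Assumes one spatially invariant scatter kernel; circular FFT convolution.
    """
    K = np.fft.rfft2(kernel, s=primary.shape)  # zero-pad kernel to image size
    P = np.fft.rfft2(primary)
    return np.fft.irfft2(P * K, s=primary.shape)
```

For a uniform primary image the estimated scatter is uniform and equals the kernel's integral times the primary intensity, a quick sanity check for any superposition implementation.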

  3. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  4. A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data

    Science.gov (United States)

    Barnes, J. R.

    1993-01-01

    Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular, a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will be extensively sampling middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to the estimation of the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model based upon one previously developed and tested with earth satellite temperature data will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations, and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation, the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures. The latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts, as well as more theoretical studies.

  5. Estimated radiation doses to different organs among patients treated for ankylosing spondylitis with a single course of X rays

    International Nuclear Information System (INIS)

    Lewis, C.A.; Smith, P.G.; Stratton, I.M.; Darby, S.C.; Doll, R.

    1988-01-01

    A follow-up study of over 14000 patients treated with a single course of X rays for ankylosing spondylitis demonstrated substantial excess risk of developing cancer. Previously the excess risk of leukaemia has been related to the estimated mean radiation dose to active bone marrow but detailed estimates were not made of the radiation doses to other organs. Data extracted from the original treatment records of a random sample of one in 15 patients have been used to make dose estimates, using Monte Carlo methods, for 30 specific organs or body regions and 12 bone marrow sites. Estimates of mean and median organ doses, standard deviations and ranges have been tabulated. Detailed distributions are presented for six organs (lung, bronchi, stomach, oesophagus, active bone marrow and total body). Comparison with the earlier bone marrow estimates and more recent theoretical estimates shows good agreement. (author)

  6. 3rd Joint Dutch-Brazil School on Theoretical Physics

    CERN Document Server

    2015-01-01

    The Joint Dutch-Brazil School on Theoretical Physics is now in its third edition with previous schools in 2007 and 2011. This edition of the school will feature minicourses by Nima Arkani-Hamed (IAS Princeton), Jan de Boer (University of Amsterdam) and Cumrun Vafa (Harvard University), as well as student presentations. The school is jointly organized with the Dutch Research School of Theoretical Physics (DRSTP) and is intended for graduate students and researchers in the field of high-energy theoretical physics. There is no registration fee and limited funds are available for local and travel support of participants. This school in São Paulo will be preceded by the XVIII J. A. Swieca School in Campos de Jordão.

  7. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    Science.gov (United States)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  8. Dramaturgical and Music-Theoretical Approaches to Improvisation Pedagogy

    Science.gov (United States)

    Huovinen, Erkki; Tenkanen, Atte; Kuusinen, Vesa-Pekka

    2011-01-01

    The aim of this article is to assess the relative merits of two approaches to teaching musical improvisation: a music-theoretical approach, focusing on chords and scales, and a "dramaturgical" one, emphasizing questions of balance, variation and tension. Adult students of music pedagogy, with limited previous experience in improvisation,…

  9. Qualitative methods in theoretical physics

    CERN Document Server

    Maslov, Dmitrii

    2018-01-01

    This book comprises a set of tools which allow researchers and students to arrive at a qualitatively correct answer without undertaking lengthy calculations. In general, Qualitative Methods in Theoretical Physics is about combining approximate mathematical methods with fundamental principles of physics: conservation laws and symmetries. Readers will learn how to simplify problems, how to estimate results, and how to apply symmetry arguments and conduct dimensional analysis. A comprehensive problem set is included. The book will appeal to a wide range of students and researchers.

  10. Poisson sampling - The adjusted and unadjusted estimator revisited

    Science.gov (United States)

    Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas

    1998-01-01

    The prevailing assumption, that for Poisson sampling the adjusted estimator "Y-hat a" is always substantially more efficient than the unadjusted estimator "Y-hat u" , is shown to be incorrect. Some well known theoretical results are applicable since "Y-hat a" is a ratio-of-means estimator and "Y-hat u" a simple unbiased estimator...
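    The two estimators being compared can be written down directly. Under Poisson sampling each unit is included independently with its own probability; the unadjusted estimator is the plain Horvitz-Thompson total, and the adjusted estimator rescales by the expected versus realized sample size. The naming here follows standard survey-sampling usage, not code from the paper:

```python
import numpy as np

def poisson_estimates(y, p, rng):
    """Unadjusted (Horvitz-Thompson) and adjusted totals under Poisson sampling.

    y: unit values; p: per-unit inclusion probabilities; rng: numpy Generator.
    """
    keep = rng.random(len(y)) < p                 # independent inclusion draws
    y_u = np.sum(y[keep] / p[keep])               # unadjusted estimator
    n_obs = keep.sum()
    n_exp = p.sum()                               # expected sample size
    y_a = y_u * n_exp / n_obs if n_obs else 0.0   # adjusted estimator
    return y_u, y_a
```

The adjustment stabilizes the estimate against random fluctuation in the realized sample size, which is the efficiency trade-off the abstract revisits.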

  11. Bootstrap consistency for general semiparametric M-estimation

    KAUST Repository

    Cheng, Guang

    2010-10-01

    Consider M-estimation in a semiparametric model that is characterized by a Euclidean parameter of interest and an infinite-dimensional nuisance parameter. As a general purpose approach to statistical inferences, the bootstrap has found wide applications in semiparametric M-estimation and, because of its simplicity, provides an attractive alternative to the inference approach based on the asymptotic distribution theory. The purpose of this paper is to provide theoretical justifications for the use of bootstrap as a semiparametric inferential tool. We show that, under general conditions, the bootstrap is asymptotically consistent in estimating the distribution of the M-estimate of Euclidean parameter; that is, the bootstrap distribution asymptotically imitates the distribution of the M-estimate. We also show that the bootstrap confidence set has the asymptotically correct coverage probability. These general conclusions hold, in particular, when the nuisance parameter is not estimable at root-n rate, and apply to a broad class of bootstrap methods with exchangeable bootstrap weights. This paper provides a first general theoretical study of the bootstrap in semiparametric models. © Institute of Mathematical Statistics, 2010.
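    As a concrete, much simpler instance of the inferential tool being justified, the nonparametric bootstrap of an estimate and its percentile confidence interval looks like this (a generic sketch, not the paper's semiparametric setting):

```python
import numpy as np

def bootstrap_percentile_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for stat(data)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    # resample with replacement and recompute the statistic each time
    reps = np.array([stat(data[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])
    lo, hi = np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

Resampling indices with replacement corresponds to multinomial bootstrap weights, one member of the exchangeable-weights class the paper covers.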

  12. Research in theoretical nuclear physics

    International Nuclear Information System (INIS)

    Udagawa, T.

    1993-11-01

    This report describes the accomplishments in basic research in nuclear physics carried out by the theoretical nuclear physics group in the Department of Physics at the University of Texas at Austin during the period of November 1, 1992 to October 31, 1993. The work done covers three separate areas: low-energy nuclear reactions, intermediate energy physics, and nuclear structure studies. Although the subjects are thus spread among different areas, they are based on two techniques developed in previous years. These techniques are a powerful method for continuum-random-phase-approximation (CRPA) calculations of nuclear response and the breakup-fusion (BF) approach to incomplete fusion reactions, which allows calculation, on a single footing, of various incomplete fusion reaction cross sections within the framework of direct reaction theories. The approach was developed as part of a more general program for establishing an approach to describing all different types of nuclear reactions, i.e., complete fusion, incomplete fusion and direct reactions, in a systematic way based on a single theoretical framework.

  13. Optimal Measurements for Simultaneous Quantum Estimation of Multiple Phases.

    Science.gov (United States)

    Pezzè, Luca; Ciampini, Mario A; Spagnolo, Nicolò; Humphreys, Peter C; Datta, Animesh; Walmsley, Ian A; Barbieri, Marco; Sciarrino, Fabio; Smerzi, Augusto

    2017-09-29

    A quantum theory of multiphase estimation is crucial for quantum-enhanced sensing and imaging and may link quantum metrology to more complex quantum computation and communication protocols. In this Letter, we tackle one of the key difficulties of multiphase estimation: obtaining a measurement which saturates the fundamental sensitivity bounds. We derive necessary and sufficient conditions for projective measurements acting on pure states to saturate the ultimate theoretical bound on precision given by the quantum Fisher information matrix. We apply our theory to the specific example of interferometric phase estimation using photon number measurements, a convenient choice in the laboratory. Our results thus introduce concepts and methods relevant to the future theoretical and experimental development of multiparameter estimation.
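    For pure states, the quantum Fisher information matrix that the sought measurement must saturate has a closed form, F_ij = 4 Re(⟨∂_i ψ|∂_j ψ⟩ − ⟨∂_i ψ|ψ⟩⟨ψ|∂_j ψ⟩), which is straightforward to evaluate numerically. This is the generic formula, not code from the Letter:

```python
import numpy as np

def qfi_pure(psi, dpsi):
    """Quantum Fisher information matrix of a pure state |psi(theta)>.

    psi: state vector; dpsi: list of derivative vectors d|psi>/d(theta_i).
    F_ij = 4 Re( <d_i psi|d_j psi> - <d_i psi|psi><psi|d_j psi> ).
    """
    a = np.array([np.vdot(psi, d) for d in dpsi])  # <psi|d_i psi>
    g = np.array([[np.vdot(di, dj) for dj in dpsi] for di in dpsi])
    return 4.0 * np.real(g - np.outer(np.conj(a), a))
```

For a single relative phase on an equal superposition of two basis states, the QFI evaluates to 1, matching 4 times the variance of the phase generator.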

  14. Optimal Measurements for Simultaneous Quantum Estimation of Multiple Phases

    Science.gov (United States)

    Pezzè, Luca; Ciampini, Mario A.; Spagnolo, Nicolò; Humphreys, Peter C.; Datta, Animesh; Walmsley, Ian A.; Barbieri, Marco; Sciarrino, Fabio; Smerzi, Augusto

    2017-09-01

    A quantum theory of multiphase estimation is crucial for quantum-enhanced sensing and imaging and may link quantum metrology to more complex quantum computation and communication protocols. In this Letter, we tackle one of the key difficulties of multiphase estimation: obtaining a measurement which saturates the fundamental sensitivity bounds. We derive necessary and sufficient conditions for projective measurements acting on pure states to saturate the ultimate theoretical bound on precision given by the quantum Fisher information matrix. We apply our theory to the specific example of interferometric phase estimation using photon number measurements, a convenient choice in the laboratory. Our results thus introduce concepts and methods relevant to the future theoretical and experimental development of multiparameter estimation.

  15. Theoretical and experimental studies on the daily accumulative heat gain from cool roofs

    International Nuclear Information System (INIS)

    Qin, Yinghong; Zhang, Mingyi; Hiller, Jacob E.

    2017-01-01

    Cool roofs are gaining popularity as passive building cooling techniques, but the correlation between energy savings and rooftop albedo has not been understood completely. Here we theoretically model the daily accumulative inward heat (DAIH) from building roofs with different albedo values, correlating the heat gain of the building roof to both the rooftop albedo and the incident solar radiation. According to this model, the DAIH increases linearly with the daily zenith solar radiation, but decreases linearly with the rooftop albedo. A small building cell was constructed to monitor the heat gain of the building under the conditions of non-insulated and insulated roofs. The observational DAIH is highly coincident with the theoretical one, validating the theoretical model. It was found that insulating the roof, increasing the rooftop albedo, or both options can effectively curtail the heat gain in buildings during the summer season. The proposed theoretical model would be a powerful tool for evaluating the heat gain of the buildings and estimating the energy savings potential of high-reflective cool roofs. - Highlights: • Daily accumulative heat gain from a building roof is theoretically modeled. • Daily accumulative heat gain from a building roof increases linearly with rooftop absorptivity. • Increasing the roof insulation tapers the effect of the rooftop absorptivity. • The theoretical model is powerful for estimating energy savings of reflective roofs.

  16. [Estimating non work-related sickness leave absences related to a previous occupational injury in Catalonia (Spain)].

    Science.gov (United States)

    Molinero-Ruiz, Emilia; Navarro, Albert; Moriña, David; Albertí-Casas, Constança; Jardí-Lliberia, Josefina; de Montserrat-Nonó, Jaume

    2015-01-01

    To estimate the frequency of non-work sickness absence (ITcc) related to previous occupational injuries with (ATB) or without (ATSB) sick leave. Prospective longitudinal study. Workers with ATB or ATSB notified to the Occupational Accident Registry of Catalonia were selected in the last term of 2009. They were followed up for six months after returning to work (ATB) or after the accident (ATSB), by sex and occupation. Official labor and health authority registries were used as information sources. An "injury-associated ITcc" was defined as a sick leave occurring in the following six months and within the same diagnosis group. The absolute and relative frequencies were calculated according to time elapsed and duration (cumulative days, measures of central tendency and dispersion), by diagnosis group or affected body area, as compared to all of Catalonia. 2.9% of ATB (n=627) had an injury-associated ITcc, with differences by diagnosis, sex and occupation; this was also the case for 2.1% of ATSB (n=496). With the same diagnosis, the duration of ITcc was longer among those who had an associated injury, and with respect to all of Catalonia. Some of the under-reporting of occupational pathology corresponds to episodes initially recognized as being work-related. Duration of sickness absence depends not only on diagnosis and clinical course, but also on criteria established by the entities managing the case. This could imply that more complicated injuries are referred to the national health system, resulting in personal, legal, healthcare and economic cost consequences for all involved stakeholders. Copyright belongs to the Societat Catalana de Salut Laboral.

  17. Site characterization: a spatial estimation approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report the application of spatial estimation techniques, or kriging, to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically in a simplified fashion using vector-matrix calculus. The practice of spatial estimation is discussed, and the estimator is then applied to two groundwater aquifer systems and also used to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly.
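    A minimal ordinary-kriging sketch in the generic textbook form, assuming a known variogram (the report's matrix development is along these lines, but this is not its code):

```python
import numpy as np

def ordinary_kriging(xy, z, x0, variogram):
    """Predict z at location x0 from samples (xy, z) given a variogram.

    Solves the ordinary kriging system with a Lagrange multiplier that
    forces the weights to sum to one (unbiasedness).
    """
    n = len(z)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(h)       # variogram between sample pairs
    A[n, n] = 0.0                  # Lagrange multiplier corner
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - x0, axis=-1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z               # kriging weights applied to the data
```

Ordinary kriging is an exact interpolator: predicting at a sampled location returns the observed value there, a useful check on any implementation.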

  18. M-estimator for the 3D symmetric Helmert coordinate transformation

    Science.gov (United States)

    Chang, Guobin; Xu, Tianhe; Wang, Qianxin

    2018-01-01

    The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned. The direction cosine matrix or the quaternion is used to represent the rotation. A 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration, if the outliers are not large. The iteration is carried out using the iteratively reweighted least-squares scheme. In each iteration after the first one, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and then the parameter estimates with their variance-covariance matrix are calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to theoretically show the robustness. In the solution process, the parameter is rescaled in order to improve the numerical stability. Monte Carlo experiments are conducted to check the developed method. Different cases are considered to investigate whether the assumed stochastic model is correct. The results with the simulated data slightly deviating from the true model are used to show the developed method's statistical efficacy at the assumed stochastic model, its robustness against deviations from the assumed stochastic model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
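    The iteratively reweighted least-squares scheme can be shown on the simplest M-estimation problem, a robust location estimate with Huber-type weights. This is illustrative only; the paper applies the same reweighting pattern to the full coordinate-transformation problem.

```python
import numpy as np

def huber_location(x, c=1.345, iters=30):
    """Robust location estimate via IRLS with Huber weights."""
    mu = np.median(x)                                   # robust initial value
    for _ in range(iters):
        r = x - mu                                      # residuals
        scale = np.median(np.abs(r)) / 0.6745           # robust scale (MAD)
        scale = scale if scale > 0 else 1.0
        # Huber weights: 1 for small residuals, downweight large ones
        w = np.minimum(1.0, c * scale / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * x) / np.sum(w)                  # reweighted LS step
    return mu
```

Outliers receive small weights and barely move the estimate, which is exactly the robustness property the influence-function analysis formalizes.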

  19. Monocular distance estimation from optic flow during active landing maneuvers

    International Nuclear Information System (INIS)

    Van Breugel, Floris; Morgansen, Kristi; Dickinson, Michael H

    2014-01-01

    Vision is arguably the most widely used sensor for position and velocity estimation in animals, and it is increasingly used in robotic systems as well. Many animals use stereopsis and object recognition in order to make a true estimate of distance. For a tiny insect such as a fruit fly or honeybee, however, these methods fall short. Instead, an insect must rely on calculations of optic flow, which can provide a measure of the ratio of velocity to distance, but not either parameter independently. Nevertheless, flies and other insects are adept at landing on a variety of substrates, a behavior that inherently requires some form of distance estimation in order to trigger distance-appropriate motor actions such as deceleration or leg extension. Previous studies have shown that these behaviors are indeed under visual control, raising the question: how does an insect estimate distance solely using optic flow? In this paper we use a nonlinear control-theoretic approach to propose a solution for this problem. Our algorithm takes advantage of visually controlled landing trajectories that have been observed in flies and honeybees. Finally, we implement our algorithm, which we term dynamic peering, using a camera mounted to a linear stage to demonstrate its real-world feasibility. (paper)
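
    The core observation that an active maneuver breaks the velocity/distance ambiguity can be worked out in one line. With distance d, approach speed v = -d'(t), and commanded acceleration a = v'(t), the flow f = v/d obeys f' = a/d + f^2, so distance follows from measured flow and its derivative. A hypothetical minimal sketch of that relation (real dynamic-peering algorithms must handle noise and the singularity when f' is close to f^2):

```python
def distance_from_flow(f, f_dot, accel):
    """Recover distance during an active maneuver from optic flow alone.
    Optic flow gives only f = v/d; differentiating with v = -d'(t) and
    a = v'(t) yields f' = a/d + f**2, hence d = a / (f' - f**2)."""
    return accel / (f_dot - f**2)
```

    For example, at d = 2 m, v = 1 m/s, a = 0.5 m/s^2, the measured flow is f = 0.5 and its rate is f' = a/d + f^2 = 0.5, from which the formula returns the true distance of 2 m.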

  20. Accuracy of latent-variable estimation in Bayesian semi-supervised learning.

    Science.gov (United States)

    Yamazaki, Keisuke

    2015-09-01

    Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, is more precise than in unsupervised learning, and one of the concerns is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of the estimation of latent variables. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified.

  1. The Problems of Multiple Feedback Estimation.

    Science.gov (United States)

    Bulcock, Jeffrey W.

    The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that an estimating criterion other than least squares is needed. Theoretically, the variance normalization criterion has…

  2. An M-estimator of multivariate tail dependence

    NARCIS (Netherlands)

    Krajina, A.

    2010-01-01

    Extreme value theory is the part of probability and statistics that provides the theoretical background for modeling events that almost never happen. The estimation of the dependence between two or more such unlikely events (tail dependence) is the topic of this

  3. Lamb shift in muonic hydrogen-I. Verification and update of theoretical predictions

    International Nuclear Information System (INIS)

    Jentschura, U.D.

    2011-01-01

    Research highlights: → The QED theory of muonic hydrogen energy levels is verified and updated. → Previously obtained results of Pachucki and Borie are confirmed. → The influence of the vacuum polarization potential on the Bethe logarithm is calculated nonperturbatively. → A model-independent estimate of the Zemach moment correction is given. → Parametrically, the observed discrepancy of theory and experiment is shown to be substantial and large. - Abstract: In view of the recently observed discrepancy of theory and experiment for muonic hydrogen [R. Pohl et al., Nature 466 (2010) 213], we reexamine the theory on which the quantum electrodynamic (QED) predictions are based. In particular, we update the theory of the 2P-2S Lamb shift, by calculating the self-energy of the bound muon in the full Coulomb + vacuum polarization (Uehling) potential. We also investigate the relativistic two-body corrections to the vacuum polarization shift, and we analyze the influence of the shape of the nuclear charge distribution on the proton radius determination. The uncertainty associated with the third Zemach moment ⟨r³⟩(2) in the determination of the proton radius from the measurement is estimated. An updated theoretical prediction for the 2S-2P transition is given.

  4. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    Science.gov (United States)

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap, open for over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
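
    In the orthonormal-design special case, the one-step local linear approximation update reduces to componentwise soft-thresholding with SCAD-derivative weights. A minimal sketch under that assumption (z denotes the marginal least-squares coefficients X'y; a = 3.7 is the conventional SCAD constant):

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty: lam for small |t|, tapering to 0
    for large |t|, which is what removes the bias on strong signals."""
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1))

def one_step_lla(beta_init, z, lam):
    """One-step LLA for SCAD with an orthonormal design: linearize the
    folded concave penalty at beta_init, so each coordinate is solved by
    soft-thresholding z with weight p'_lam(|beta_init|)."""
    w = scad_deriv(beta_init, lam)
    return np.sign(z) * np.maximum(np.abs(z) - w, 0.0)
```

    Coordinates whose initial estimate is large receive zero penalty weight and are left unshrunk (the oracle, unbiased behavior), while coordinates starting at zero are thresholded exactly like the lasso.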

  5. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

    Source number estimation methods for single-channel signals are investigated and improvements for each method are suggested in this work. Firstly, the single-channel data is converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information-theoretic criteria (ITC), achieves superior performance to GDE at low SNR; however, it cannot handle signals containing colored noise. Conversely, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. To resolve these problems and contradictions, this work makes notable improvements to both methods. A diagonal loading technique is employed to improve the MDL method, and a jackknife technique is applied to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of both original methods is improved substantially.
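
    The classical MDL criterion referenced above picks the source count whose noise-subspace eigenvalues look most equal, trading goodness of fit against a complexity penalty. A sketch of the standard Wax-Kailath array-processing form (shown for context; not the diagonal-loading variant this work proposes):

```python
import numpy as np

def mdl_source_number(eigvals, n_snapshots):
    """Estimate the number of sources from the covariance eigenvalues
    using the MDL information-theoretic criterion. For each candidate k,
    the data term compares geometric and arithmetic means of the p-k
    smallest eigenvalues (equal iff they are all identical), and the
    penalty grows with the number of free parameters."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    p, N = len(lam), n_snapshots
    scores = []
    for k in range(p):                       # candidate source counts 0..p-1
        tail = lam[k:]
        geo = np.exp(np.mean(np.log(tail)))  # geometric mean of noise eigenvalues
        ari = np.mean(tail)                  # arithmetic mean
        scores.append(-N * (p - k) * np.log(geo / ari)
                      + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(scores))
```

    With two dominant eigenvalues above a flat noise floor, the criterion selects k = 2: any smaller k leaves unequal eigenvalues in the "noise" set, and any larger k only pays extra penalty.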

  6. Theoretical treatment of charge transfer processes of relevance to astrophysics

    Energy Technology Data Exchange (ETDEWEB)

    Krstic, P.S.; Stancil, P.C.; Schultz, D.R.

    1997-12-01

    Charge transfer is an important process in many astrophysical and atmospheric environments. While numerous experimental and theoretical studies exist for H and He targets, data on other targets, particularly metals and molecules, are sparse. Using a variety of theoretical methods and computational techniques the authors are developing methods to estimate the cross sections for electron capture (charge transfer) in slow collisions of low charge state ions with heavy (Mg, Ca, Fe, Co, Ni and Zn) neutrals. In this ongoing work particular attention is paid to ascertaining the importance of double electron capture.

  7. Theoretical treatment of charge transfer processes of relevance to astrophysics

    International Nuclear Information System (INIS)

    Krstic, P.S.; Stancil, P.C.; Schultz, D.R.

    1997-12-01

    Charge transfer is an important process in many astrophysical and atmospheric environments. While numerous experimental and theoretical studies exist for H and He targets, data on other targets, particularly metals and molecules, are sparse. Using a variety of theoretical methods and computational techniques the authors are developing methods to estimate the cross sections for electron capture (charge transfer) in slow collisions of low charge state ions with heavy (Mg, Ca, Fe, Co, Ni and Zn) neutrals. In this ongoing work particular attention is paid to ascertaining the importance of double electron capture.

  8. A fluorescent sensor based on dansyl-diethylenetriamine-thiourea conjugate: a thorough theoretical investigation

    International Nuclear Information System (INIS)

    Nguyen Khoa Hien; Nguyen Thi Ai Nhung; Duong Tuan Quang; Ho Quoc Dai; Nguyen Tien Trung

    2015-01-01

    A new dansyl-diethylenetriamine-thiourea conjugate (DT) for detection of Hg²⁺ ions in aqueous solution has been theoretically designed and compared to our previously published results. The synthetic path, the optimized geometric structure and the characteristics of the DT were found by the theoretical calculations at the B3LYP/LanL2DZ level. Accordingly, the DT can react with an Hg²⁺ ion to form a product with quenched fluorescence. It is remarkable that the experimental results are in excellent agreement with the theoretically evaluated data. (author)

  9. A Computable Plug-In Estimator of Minimum Volume Sets for Novelty Detection

    KAUST Repository

    Park, Chiwoo; Huang, Jianhua Z.; Ding, Yu

    2010-01-01

    A minimum volume set of a probability density is a region of minimum size among the regions covering a given probability mass of the density. Effective methods for finding the minimum volume sets are very useful for detecting failures or anomalies in commercial and security applications-a problem known as novelty detection. One theoretical approach of estimating the minimum volume set is to use a density level set where a kernel density estimator is plugged into the optimization problem that yields the appropriate level. Such a plug-in estimator is not of practical use because solving the corresponding minimization problem is usually intractable. A modified plug-in estimator was proposed by Hyndman in 1996 to overcome the computation difficulty of the theoretical approach but is not well studied in the literature. In this paper, we provide theoretical support to this estimator by showing its asymptotic consistency. We also show that this estimator is very competitive to other existing novelty detection methods through an extensive empirical study. ©2010 INFORMS.
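
    The plug-in construction discussed here can be sketched in one dimension: estimate the density with a kernel density estimator, then choose the level so that the desired probability mass of the sample lies above it; the estimated minimum volume set is the region where the density estimate exceeds that level. A minimal illustrative sketch in the Hyndman style (toy bandwidth, 1-D only):

```python
import numpy as np

def kde_density(x, data, bw):
    """Gaussian kernel density estimate at query points x (1-D)."""
    z = (x[:, None] - data[None, :]) / bw
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * bw * np.sqrt(2 * np.pi))

def plugin_mv_set(data, mass=0.9, bw=0.3):
    """Plug-in minimum volume set estimate: pick the density level so a
    fraction `mass` of the sample has estimated density above it; the
    estimated set is {x : fhat(x) >= level}."""
    f = kde_density(data, data, bw)
    level = np.quantile(f, 1 - mass)       # empirical level covering `mass` of points
    return level, f >= level
```

    Points far from the bulk of the data fall below the level and are flagged as novelties, which is exactly the failure/anomaly detection use case the record describes.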

  10. A Computable Plug-In Estimator of Minimum Volume Sets for Novelty Detection

    KAUST Repository

    Park, Chiwoo

    2010-10-01

    A minimum volume set of a probability density is a region of minimum size among the regions covering a given probability mass of the density. Effective methods for finding the minimum volume sets are very useful for detecting failures or anomalies in commercial and security applications-a problem known as novelty detection. One theoretical approach of estimating the minimum volume set is to use a density level set where a kernel density estimator is plugged into the optimization problem that yields the appropriate level. Such a plug-in estimator is not of practical use because solving the corresponding minimization problem is usually intractable. A modified plug-in estimator was proposed by Hyndman in 1996 to overcome the computation difficulty of the theoretical approach but is not well studied in the literature. In this paper, we provide theoretical support to this estimator by showing its asymptotic consistency. We also show that this estimator is very competitive to other existing novelty detection methods through an extensive empirical study. ©2010 INFORMS.

  11. Adaptive Methods for Permeability Estimation and Smart Well Management

    Energy Technology Data Exchange (ETDEWEB)

    Lien, Martha Oekland

    2005-04-01

    The main focus of this thesis is on adaptive regularization methods. We consider two different applications: the inverse problem of absolute permeability estimation and the optimal control problem of smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem; hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts: Part I gives a theoretical background for a collection of research papers written by the candidate in collaboration with others. These constitute the most important part of the thesis and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement

  12. The SENSE-Isomorphism Theoretical Image Voxel Estimation (SENSE-ITIVE) Model for Reconstruction and Observing Statistical Properties of Reconstruction Operators

    Science.gov (United States)

    Bruce, Iain P.; Karaman, M. Muge; Rowe, Daniel B.

    2012-01-01

    The acquisition of sub-sampled data from an array of receiver coils has become a common means of reducing data acquisition time in MRI. Of the various techniques used in parallel MRI, SENSitivity Encoding (SENSE) is one of the most common, making use of a complex-valued weighted least squares estimation to unfold the aliased images. It was recently shown in Bruce et al. [Magn. Reson. Imag. 29(2011):1267–1287] that when the SENSE model is represented in terms of a real-valued isomorphism, it assumes a skew-symmetric covariance between receiver coils, as well as an identity covariance structure between voxels. In this manuscript, we show that not only is the skew-symmetric coil covariance unlike that of real data, but the estimated covariance structure between voxels over a time series of experimental data is not an identity matrix. As such, a new model, entitled SENSE-ITIVE, is described with both revised coil and voxel covariance structures. Both the SENSE and SENSE-ITIVE models are represented in terms of real-valued isomorphisms, allowing for a statistical analysis of reconstructed voxel means, variances, and correlations resulting from the use of different coil and voxel covariance structures used in the reconstruction processes to be conducted. It is shown through both theoretical and experimental illustrations that the mis-specification of the coil and voxel covariance structures in the SENSE model results in a lower standard deviation in each voxel of the reconstructed images, and thus an artificial increase in SNR, compared to the standard deviation and SNR of the SENSE-ITIVE model where both the coil and voxel covariances are appropriately accounted for. It is also shown that there are differences in the correlations induced by the reconstruction operations of both models, and consequently there are differences in the correlations estimated throughout the course of reconstructed time series. These differences in correlations could result in meaningful

  13. Left ventricular asynergy score as an indicator of previous myocardial infarction

    International Nuclear Information System (INIS)

    Backman, C.; Jacobsson, K.A.; Linderholm, H.; Osterman, G.

    1986-01-01

    Sixty-eight patients with coronary heart disease (CHD), i.e. a history of angina of effort and/or previous 'possible infarction', were examined inter alia with ECG and cinecardioangiography. A system of scoring was designed which allowed a semiquantitative estimate of the left ventricular asynergy from cinecardioangiography - the left ventricular motion score (LVMS). The LVMS was associated with the presence of a previous myocardial infarction (MI), as indicated by the history and ECG findings. The ECG changes specific for a previous MI were associated with high LVMS values, and unspecific or absent ECG changes with low LVMS values. Decision thresholds for ECG changes and asynergy in diagnosing a previous MI were evaluated by means of a ROC analysis. The accuracy of ECG in detecting a previous MI was slightly higher when asynergy indicated a 'true MI' than when the autopsy result did so in a comparable group. Therefore the accuracy of asynergy (LVMS ≥ 1) in detecting a previous MI or myocardial fibrosis in patients with CHD should be at least comparable with that of autopsy (scar > 1 cm). (orig.)

  14. Interpolation Inequalities and Spectral Estimates for Magnetic Operators

    Science.gov (United States)

    Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael

    2018-05-01

    We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.

  15. THE DETECTION RATE OF EARLY UV EMISSION FROM SUPERNOVAE: A DEDICATED GALEX/PTF SURVEY AND CALIBRATED THEORETICAL ESTIMATES

    Energy Technology Data Exchange (ETDEWEB)

    Ganot, Noam; Gal-Yam, Avishay; Ofek, Eran O.; Sagiv, Ilan; Waxman, Eli; Lapid, Ofer [Department of Particle Physics and Astrophysics, Faculty of Physics, The Weizmann Institute of Science, Rehovot 76100 (Israel); Kulkarni, Shrinivas R.; Kasliwal, Mansi M. [Cahill Center for Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States); Ben-Ami, Sagi [Smithsonian Astrophysical Observatory, Harvard-Smithsonian Ctr. for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Chelouche, Doron; Rafter, Stephen [Physics Department, Faculty of Natural Sciences, University of Haifa, 31905 Haifa (Israel); Behar, Ehud; Laor, Ari [Physics Department, Technion Israel Institute of Technology, 32000 Haifa (Israel); Poznanski, Dovi; Nakar, Ehud; Maoz, Dan [School of Physics and Astronomy, Tel Aviv University, 69978 Tel Aviv (Israel); Trakhtenbrot, Benny [Institute for Astronomy, ETH Zurich, Wolfgang-Pauli-Strasse 27 Zurich 8093 (Switzerland); Neill, James D.; Barlow, Thomas A.; Martin, Christofer D., E-mail: noam.ganot@gmail.com [California Institute of Technology, 1200 East California Boulevard, MC 278-17, Pasadena, CA 91125 (United States); Collaboration: ULTRASAT Science Team; WTTH consortium; GALEX Science Team; Palomar Transient Factory; and others

    2016-03-20

    The radius and surface composition of an exploding massive star, as well as the explosion energy per unit mass, can be measured using early UV observations of core-collapse supernovae (SNe). We present the first results from a simultaneous GALEX/PTF search for early ultraviolet (UV) emission from SNe. Six SNe II and one Type II superluminous SN (SLSN-II) are clearly detected in the GALEX near-UV (NUV) data. We compare our detection rate with theoretical estimates based on early, shock-cooling UV light curves calculated from models that fit existing Swift and GALEX observations well, combined with volumetric SN rates. We find that our observations are in good agreement with calculated rates assuming that red supergiants (RSGs) explode with fiducial radii of 500 R⊙, explosion energies of 10⁵¹ erg, and ejecta masses of 10 M⊙. Exploding blue supergiants and Wolf–Rayet stars are poorly constrained. We describe how such observations can be used to derive the progenitor radius, surface composition, and explosion energy per unit mass of such SN events, and we demonstrate why UV observations are critical for such measurements. We use the fiducial RSG parameters to estimate the detection rate of SNe during the shock-cooling phase (<1 day after explosion) for several ground-based surveys (PTF, ZTF, and LSST). We show that the proposed wide-field UV explorer ULTRASAT mission is expected to find >85 SNe per year (∼0.5 SN per deg²), independent of host galaxy extinction, down to an NUV detection limit of 21.5 mag AB. Our pilot GALEX/PTF project thus convincingly demonstrates that a dedicated, systematic SN survey at the NUV band is a compelling method to study how massive stars end their life.

  16. Estimating the total number of susceptibility variants underlying complex diseases from genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Hon-Cheong So

    2010-11-01

    Recently genome-wide association studies (GWAS) have identified numerous susceptibility variants for complex diseases. In this study we propose several approaches to estimate the total number of variants underlying these diseases. We assume that the variance explained by genetic markers (Vg) follows an exponential distribution, which is justified by previous studies on theories of adaptation. Our aim is to fit the observed distribution of Vg from GWAS to its theoretical distribution. The number of variants is obtained by dividing the heritability by the estimated mean of the exponential distribution. In practice, due to limited sample sizes, there is insufficient power to detect variants with small effects; therefore the power was taken into account in fitting. Besides considering the most significant variants, we also tried to relax the significance threshold, allowing more markers to be fitted. The effects of false positive variants were removed by considering the local false discovery rates. In addition, we developed an alternative approach by directly fitting the z-statistics from GWAS to their theoretical distribution. In all cases, the "winner's curse" effect was corrected analytically. Confidence intervals were also derived. Simulations were performed to compare and verify the performance of different estimators (which incorporate various means of winner's curse correction) and the coverage of the proposed analytic confidence intervals. Our methodology only requires summary statistics and is able to handle both binary and continuous traits. Finally we applied the methods to a few real disease examples (lipid traits, type 2 diabetes and Crohn's disease) and estimated that hundreds to nearly a thousand variants underlie these traits.
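
    The counting idea at the core of this record is simple: if the per-variant explained variance Vg is exponential with mean m, the implied total number of variants is heritability / m. A deliberately stripped-down sketch of that step alone (ignoring the detection-power truncation and winner's curse corrections the study applies, which matter in practice):

```python
import numpy as np

def estimate_total_variants(vg_observed, heritability):
    """Naive estimate of the total variant count: heritability divided by
    the sample mean of the observed per-variant explained variances.
    Illustrative only; real GWAS data require power and winner's-curse
    corrections before this division is valid."""
    m = np.mean(vg_observed)
    return heritability / m
```

    On simulated data where every variant is observed, the estimate recovers the true count; with power-truncated data the raw mean is biased upward, which is precisely why the study fits the truncated distribution instead.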

  17. Information theoretic analysis of canny edge detection in visual communication

    Science.gov (United States)

    Jiang, Bo; Rahman, Zia-ur

    2011-06-01

    In general edge detection evaluation, the edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission and display processes that do impact the quality of the acquired image and thus, the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. The edge detection algorithm here is considered to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by setting initial conditions of the visual communication system as constant, different edge detection algorithms could be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we will first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we will assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.

  18. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice

    International Nuclear Information System (INIS)

    Ahlroth, S.

    2001-01-01

    This licentiate thesis tries to bridge the gap between the theoretical and the practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO₂ and NOₓ emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.

  19. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice

    Energy Technology Data Exchange (ETDEWEB)

    Ahlroth, S.

    2001-01-01

    This licentiate thesis tries to bridge the gap between the theoretical and the practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO₂ and NOₓ emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.

  20. Methodology for estimating biomass energy potential and its application to Colombia

    International Nuclear Information System (INIS)

    Gonzalez-Salazar, Miguel Angel; Morini, Mirko; Pinelli, Michele; Spina, Pier Ruggero; Venturini, Mauro; Finkenrath, Matthias; Poganietz, Witold-Roger

    2014-01-01

    Highlights: • Methodology to estimate the biomass energy potential and its uncertainty at a country level. • Harmonization of approaches and assumptions in existing assessment studies. • The theoretical and technical biomass energy potential in Colombia are estimated in 2010. - Abstract: This paper presents a methodology to estimate the biomass energy potential and its associated uncertainty at a country level when quality and availability of data are limited. The current biomass energy potential in Colombia is assessed following the proposed methodology and results are compared to existing assessment studies. The proposed methodology is a bottom-up resource-focused approach with statistical analysis that uses a Monte Carlo algorithm to stochastically estimate the theoretical and the technical biomass energy potential. The paper also includes a proposed approach to quantify uncertainty combining a probabilistic propagation of uncertainty, a sensitivity analysis and a set of disaggregated sub-models to estimate reliability of predictions and reduce the associated uncertainty. Results predict a theoretical energy potential of 0.744 EJ and a technical potential of 0.059 EJ in 2010, which might account for 1.2% of the annual primary energy production (4.93 EJ).
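
    The Monte Carlo propagation step can be sketched generically: sample each uncertain input from an assumed distribution, form theoretical potential as residue mass times energy content, apply a recoverability factor for the technical potential, and report percentiles. All distributions and parameter values below are illustrative assumptions, not the study's calibrated Colombian inputs:

```python
import numpy as np

def biomass_potential_mc(rng, n=100_000):
    """Monte Carlo propagation sketch for a single residue stream.
    Hypothetical inputs: residue mass (t/yr), lower heating value (GJ/t),
    and a technically recoverable fraction. Returns 5/50/95 percentiles
    of the theoretical and technical potential in EJ/yr."""
    residue = rng.normal(50e6, 5e6, n)                 # t/yr, assumed
    lhv = rng.uniform(14.0, 18.0, n)                   # GJ/t, assumed
    recover = rng.triangular(0.05, 0.10, 0.20, n)      # recoverable fraction, assumed
    theoretical = residue * lhv * 1e-9                 # GJ -> EJ
    technical = theoretical * recover
    return (np.percentile(theoretical, [5, 50, 95]),
            np.percentile(technical, [5, 50, 95]))
```

    The spread between the 5th and 95th percentiles is the propagated uncertainty band; a sensitivity analysis would then vary one input distribution at a time to see which dominates that band.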

  1. Reserves' potential of sedimentary basin: modeling and estimation; Potentiel de reserves d'un bassin petrolier: modelisation et estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lepez, V

    2002-12-01

    The aim of this thesis is to build a statistical model of oil and gas fields' sizes distribution in a given sedimentary basin, for both the fields that exist in the subsoil and those which have already been discovered. The estimation of all the parameters of the model, via estimation of the density of the observations by model selection of piecewise polynomials using penalized maximum likelihood techniques, provides estimates of the total number of fields which are yet to be discovered, by class of size. We assume that the set of underground fields' sizes is an i.i.d. sample from a Lévy-Pareto law with unknown parameter. The set of already discovered fields is a size-biased sub-sample, drawn without replacement, of the former; the associated inclusion probabilities are to be estimated. We prove that the probability density of the observations is the product of the underlying density and of an unknown weighting function representing the sampling bias. An arbitrary partition of the sizes' interval being set (called a model), the analytical solutions of likelihood maximization enable us to estimate both the parameter of the underlying Lévy-Pareto law and the weighting function, which is assumed to be piecewise constant and based upon the partition. We add a monotonicity constraint over the latter, taking into account the fact that the bigger a field, the higher its probability of being discovered. Horvitz-Thompson-like estimators then complete the estimation. We then allow our partitions to vary inside several classes of models and prove a model selection theorem which aims at selecting the best partition within a class, in terms of both Kullback and Hellinger risk of the associated estimator. We conclude with simulations and various applications to real data from sedimentary basins of four continents, in order to illustrate theoretical as well as practical aspects of our model. (author)
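
    The Horvitz-Thompson step at the end of the abstract can be sketched directly: if the probability that a field of size s has been discovered is pi(s), then each discovered field stands for 1/pi(s) underground fields, and summing these weights per size class estimates the total counts. An illustrative sketch that assumes pi is known (the thesis instead estimates it via penalized maximum likelihood):

```python
import numpy as np

def ht_total_by_class(sizes, incl_prob, bins):
    """Horvitz-Thompson-style estimate of the total number of fields per
    size class from the discovered sample alone. `incl_prob` maps a size
    to its assumed discovery probability pi(s); weights are 1/pi(s)."""
    sizes = np.asarray(sizes, dtype=float)
    w = 1.0 / incl_prob(sizes)                    # each field represents 1/pi(s) fields
    counts, _ = np.histogram(sizes, bins=bins, weights=w)
    return counts
```

    Because larger fields have higher discovery probability, their weights are smaller; the estimator automatically inflates the count of small, hard-to-find fields, which is where most of the undiscovered potential sits under a heavy-tailed size law.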

  2. Frequency Analysis of Gradient Estimators in Volume Rendering

    NARCIS (Netherlands)

    Bentum, Marinus Jan; Lichtenbelt, Barthold B.A.; Malzbender, Tom

    1996-01-01

    Gradient information is used in volume rendering to classify and color samples along a ray. In this paper, we present an analysis of the theoretically ideal gradient estimator and compare it to some commonly used gradient estimators. A new method is presented to calculate the gradient at arbitrary

  3. IASI's sensitivity to near-surface carbon monoxide (CO): Theoretical analyses and retrievals on test cases

    Science.gov (United States)

    Bauduin, Sophie; Clarisse, Lieven; Theunissen, Michael; George, Maya; Hurtmans, Daniel; Clerbaux, Cathy; Coheur, Pierre-François

    2017-03-01

    Separating concentrations of carbon monoxide (CO) in the boundary layer from the rest of the atmosphere with nadir satellite measurements is of particular importance to differentiate emission from transport. Although thermal infrared (TIR) satellite sounders are considered to have limited sensitivity to the composition of the near-surface atmosphere, previous studies show that they can provide information on CO close to the ground in case of high thermal contrast. In this work we investigate the capability of IASI (Infrared Atmospheric Sounding Interferometer) to retrieve near-surface CO concentrations, and we quantitatively assess the influence of thermal contrast on such retrievals. We present a 3-part analysis, which relies on both theoretical forward simulations and retrievals on real data, performed for a large range of negative and positive thermal contrast situations. First, we derive theoretically the IASI detection threshold of CO enhancement in the boundary layer, and we assess its dependence on thermal contrast. Then, using the optimal estimation formalism, we quantify the role of thermal contrast on the error budget and information content of near-surface CO retrievals. We demonstrate that, contrary to what is usually accepted, large negative thermal contrast values (ground cooler than air) lead to a better decorrelation between CO concentrations in the low and the high troposphere than large positive thermal contrast (ground warmer than the air). In the last part of the paper we use Mexico City and Barrow as test cases to contrast our theoretical predictions with real retrievals, and to assess the accuracy of IASI surface CO retrievals through comparisons to ground-based in-situ measurements.
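    The optimal estimation formalism mentioned in this abstract quantifies information content through the averaging kernel and its trace, the degrees of freedom for signal (DOFS). A hedged sketch of that calculation for a toy linear retrieval; the Jacobian, noise covariance, and prior covariance below are invented placeholders, not IASI quantities:

    ```python
    import numpy as np

    # Toy optimal-estimation (Rodgers-style) information-content analysis;
    # K, Se, and Sa are illustrative, not real IASI CO retrieval matrices.
    rng = np.random.default_rng(0)

    n_levels, n_channels = 4, 12               # retrieval grid and channels
    K = rng.random((n_channels, n_levels))     # toy Jacobian d(radiance)/d(CO)
    Se = 0.01 * np.eye(n_channels)             # measurement-noise covariance
    Sa = 1.0 * np.eye(n_levels)                # a priori covariance

    # Gain matrix G and averaging kernel A = G K
    Se_inv = np.linalg.inv(Se)
    G = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(Sa)) @ K.T @ Se_inv
    A = G @ K

    # Degrees of freedom for signal: trace of the averaging kernel
    dofs = np.trace(A)
    print(f"DOFS = {dofs:.2f} (out of {n_levels} retrieval levels)")
    ```

    In this framework, a larger thermal contrast enlarges the near-surface rows of the Jacobian and hence the near-surface contribution to the DOFS.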

  4. A theoretical model for predicting neutron fluxes for cyclic Neutron ...

    African Journals Online (AJOL)

    A theoretical model has been developed for prediction of thermal neutron fluxes required for cyclic irradiations of a sample to obtain the same activity previously used for the detection of any radionuclide of interest. The model is suitable for radiotracer production or for long-lived neutron activation products where the ...

  5. Satellite, climatological, and theoretical inputs for modeling of the diurnal cycle of fire emissions

    Science.gov (United States)

    Hyer, E. J.; Reid, J. S.; Schmidt, C. C.; Giglio, L.; Prins, E.

    2009-12-01

    The diurnal cycle of fire activity is crucial for accurate simulation of the atmospheric effects of fire emissions, especially at finer spatial and temporal scales. Estimating diurnal variability in emissions is also a critical problem for the construction of emissions estimates from multiple sensors with variable coverage patterns. An optimal diurnal emissions estimate will use as much information as possible from satellite fire observations, compensate for known biases in those observations, and use detailed theoretical models of the diurnal cycle to fill in missing information. As part of ongoing improvements to the Fire Location and Monitoring of Burning Emissions (FLAMBE) fire monitoring system, we evaluated several different methods of integrating observations with different temporal sampling. We used geostationary fire detections from WF_ABBA, fire detection data from MODIS, empirical diurnal cycles from TRMM, and simple theoretical diurnal curves based on surface heating. Our experiments integrated these data in different combinations to estimate the diurnal cycles of emissions for each location and time. Hourly emissions estimates derived using these methods were tested using an aerosol transport model. We present the results of this comparison and discuss the implications of our results for the broader problem of multi-sensor data fusion in fire emissions modeling.

  6. Digital Quantum Estimation

    Science.gov (United States)

    Hassani, Majid; Macchiavello, Chiara; Maccone, Lorenzo

    2017-11-01

    Quantum metrology calculates the ultimate precision of all estimation strategies, as measured by their root-mean-square error (RMSE) and their Fisher information. Here, instead, we ask how many bits of the parameter we can recover; namely, we derive an information-theoretic quantum metrology. In this setting, we redefine the "Heisenberg bound" and "standard quantum limit" (the usual benchmarks in quantum estimation theory) and show that the former can be attained only by sequential strategies or parallel strategies that employ entanglement among probes, whereas parallel-separable strategies are limited by the latter. We highlight the differences between this setting and the RMSE-based one.

  7. WAYS HIERARCHY OF ACCOUNTING ESTIMATES

    Directory of Open Access Journals (Sweden)

    ŞERBAN CLAUDIU VALENTIN

    2015-03-01

    Full Text Available Based, on the one hand, on the premise that an estimate is an approximate evaluation, completed by the fact that the term "estimate" is increasingly common and used in a variety of both theoretical and practical areas, particularly in situations where we cannot decide with certainty, it must be said that we are in fact dealing with estimates, and in our case with accounting estimates. Completing, on the other hand, the idea above with the phrase "estimated value", which implies that we are dealing with a value obtained from an evaluation process whose size is not exact but approximate, meaning close to the actual size, it becomes obvious that it is necessary to delimit the hierarchical relationship between evaluation and estimate, while considering the context in which the evaluation activity is carried out at the entity level.

  8. Estimation of DSGE Models under Diffuse Priors and Data-Driven Identification Constraints

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a sequential Monte Carlo (SMC) method augmented with an importance sampling step for estimation of DSGE models. In addition to being theoretically well motivated, the new method facilitates the assessment of estimation accuracy. Furthermore, in order to alleviate the problem of multimodal posterior distributions caused by parameter redundancy, identification constraints derived from the data are imposed. The paper studies the properties of the estimation method and shows how the multimodality of the posterior is eliminated by the identification constraints. Out-of-sample forecast comparisons as well as Bayes factors lend support to the constrained model.

  9. Sufficient Condition for Estimation in Designing H∞ Filter-Based SLAM

    Directory of Open Access Journals (Sweden)

    Nur Aqilah Othman

    2015-01-01

    Full Text Available The extended Kalman filter (EKF) is often employed in determining the position of a mobile robot and landmarks in simultaneous localization and mapping (SLAM). Nonetheless, there are some disadvantages of using the EKF, namely, the requirement of Gaussian distributions for the state and noises, as well as the fact that it requires the smallest possible initial state covariance. This has led researchers to find alternative ways to mitigate the aforementioned shortcomings. Therefore, this study is conducted to propose an alternative technique by implementing the H∞ filter in SLAM instead of the EKF. In implementing the H∞ filter in SLAM, the parameters of the filter, especially γ, need to be properly defined to prevent the finite escape time problem. Hence, this study proposes a sufficient condition for estimation purposes. Two distinct cases of initial state covariance are analysed, considering an indoor environment, to ensure that the best solution for the SLAM problem exists, along with considerations of the statistical behaviour of the process and measurement noises. If the prescribed conditions are not satisfied, then the estimation would exhibit unbounded uncertainties and consequently result in erroneous inference about the robot and landmark estimates. The simulation results have shown the reliability and consistency suggested by the theoretical analysis and our previous findings.
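    The kind of sufficient condition this abstract refers to can be sketched with a standard H∞ filter existence test: the matrix P⁻¹ + Hᵀ R⁻¹ H − γ⁻² I must stay positive definite, otherwise the estimate can exhibit finite escape time. This is a hedged illustration with invented matrices, not the paper's SLAM model or its exact condition:

    ```python
    import numpy as np

    # Check a standard H-infinity filter existence condition by testing
    # positive definiteness via eigenvalues. All matrices are toy values.
    def hinf_condition_holds(P, H, R, gamma):
        n = P.shape[0]
        M = (np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H
             - (1.0 / gamma**2) * np.eye(n))
        return bool(np.all(np.linalg.eigvalsh(M) > 0))

    P = np.diag([0.5, 0.5, 0.1])      # illustrative state covariance
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])   # toy measurement Jacobian
    R = 0.04 * np.eye(2)              # measurement-noise covariance

    print(hinf_condition_holds(P, H, R, gamma=5.0))   # large gamma: holds
    print(hinf_condition_holds(P, H, R, gamma=0.1))   # gamma too small: fails
    ```

    A γ chosen too small violates the condition, which is exactly the failure mode the abstract warns against.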

  10. Cosmological parameter estimation using Particle Swarm Optimization

    Science.gov (United States)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and which make the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
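    The PSO procedure this abstract applies can be illustrated on a toy likelihood surface. A hedged sketch: the quadratic "chi-square" objective, swarm size, and coefficients below are illustrative choices, not the paper's CMB setup:

    ```python
    import numpy as np

    # Minimal particle swarm optimization: particles explore a parameter
    # space and converge on the best-fit point of a toy 2-D objective.
    rng = np.random.default_rng(42)

    def chi2(theta):                       # toy objective, minimum at (0.3, 0.7)
        return np.sum((theta - np.array([0.3, 0.7]))**2)

    n_particles, n_iter, dim = 30, 200, 2
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration coefficients

    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))             # velocities
    pbest = x.copy()                             # per-particle best positions
    pbest_val = np.array([chi2(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()   # swarm-wide best

    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([chi2(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print("best-fit parameters:", np.round(gbest, 3))
    ```

    Unlike MCMC, the swarm only seeks the maximum of the likelihood rather than sampling its full shape, which is why PSO is attractive when the surface is rough or high-dimensional.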

  11. Cosmological parameter estimation using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Prasad, J; Souradeep, T

    2014-01-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and which make the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.

  12. Backtesting Portfolio Value-at-Risk with Estimated Portfolio Weights

    OpenAIRE

    Pei Pei

    2010-01-01

    This paper theoretically and empirically analyzes backtesting portfolio VaR with estimation risk in an intrinsically multivariate framework. For the first time in the literature, it takes into account the estimation of portfolio weights in forecasting portfolio VaR and its impact on backtesting. It shows that the estimation risk from estimating the portfolio weights, as well as that from estimating the multivariate dynamic model of asset returns, makes the existing methods in a univariate framework...

  13. Experimental and theoretical Compton profiles of Be, C and Al

    Energy Technology Data Exchange (ETDEWEB)

    Aguiar, Julio C., E-mail: jaguiar@arn.gob.a [Autoridad Regulatoria Nuclear, Av. Del Libertador 8250, C1429BNP, Buenos Aires (Argentina); Instituto de Fisica 'Arroyo Seco', Facultad de Ciencias Exactas, U.N.C.P.B.A., Pinto 399, 7000 Tandil (Argentina); Di Rocco, Hector O. [Instituto de Fisica 'Arroyo Seco', Facultad de Ciencias Exactas, U.N.C.P.B.A., Pinto 399, 7000 Tandil (Argentina); Arazi, Andres [Laboratorio TANDAR, Comision Nacional de Energia Atomica, Av. General Paz 1499, 1650 San Martin, Buenos Aires (Argentina)

    2011-02-01

    The results of Compton profile measurements, Fermi momentum determinations, and theoretical values obtained from a linear combination of Slater-type orbitals (STO) for core electrons in beryllium, carbon, and aluminium are presented. In addition, a Thomas-Fermi model is used to estimate the contribution of valence electrons to the Compton profile. Measurements were performed using monoenergetic photons of 59.54 keV provided by a low-intensity Am-241 γ-ray source. Scattered photons were detected at 90° from the beam direction using a p-type coaxial high-purity germanium detector (HPGe). The experimental results are in good agreement with theoretical calculations.

  14. Theoretical Study on the Flow of Refilling Stage in a Safety Injection Tank

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jun Sang [Halla Univ. Daejeon (Korea, Republic of)

    2017-10-15

    In this study, a theoretical analysis was performed of the flow in the refilling stage of a safety injection tank, which is a core cooling system of a nuclear power plant in an emergency. A theoretical model was proposed, with a nonlinear governing equation describing the flow of the refilling process of the coolant. Utilizing a Taylor-series expansion, the first-order approximate flow equation was obtained, along with its closed-form analytic solution, which can accurately predict the variations of the free surface height and flow rate of the coolant. The validity of the theoretical result was confirmed by comparison with previous experimental results.

  15. Theoretical NMR and conformational analysis of solvated oximes for organophosphates-inhibited acetylcholinesterase reactivation

    Science.gov (United States)

    da Silva, Jorge Alberto Valle; Modesto-Costa, Lucas; de Koning, Martijn C.; Borges, Itamar; França, Tanos Celmar Costa

    2018-01-01

    In this work, quaternary and non-quaternary oximes designed to bind at the peripheral site of acetylcholinesterase previously inhibited by organophosphates were investigated theoretically. Some of those oximes have a large number of degrees of freedom, thus requiring an accurate method to obtain molecular geometries. For this reason, density functional theory (DFT) was employed to refine their molecular geometries after conformational analysis and to compare their theoretical ¹H and ¹³C nuclear magnetic resonance (NMR) signals in the gas phase and in solvent. Good agreement with experimental data was achieved, and the same theoretical approach was employed to obtain the geometries in a water environment for further studies.

  16. The heat of formation of the acetyl cation: a theoretical evaluation

    Science.gov (United States)

    Smith, Brian J.; Radom, Leo

    1990-12-01

    Ab initio molecular orbital calculations have been used to obtain the heat of formation of the acetyl cation. In one set of calculations, the reverse activation barrier for the production of the acetyl cation from acetaldehyde has been shown to be significantly different from zero, and the value obtained (9.8 kJ mol⁻¹ at 298 K) has been used to correct the ΔH°f,298(CH3CO+) value derived from appearance energy measurements. In a second set of calculations, ΔH°f,298(CH3CO+) has been obtained from the calculated heats of a number of reactions involving the acetyl cation together with experimental heats of formation for the species involved. The best theoretical estimate for ΔH°f,298(CH3CO+), obtained as a mean of the results from the two approaches, is 658 kJ mol⁻¹. The best theoretical estimate for ΔH°f,0(CH3CO+), obtained in a similar manner, is 665 kJ mol⁻¹.
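    The second approach described in this abstract is an application of Hess's law: given a computed reaction enthalpy and experimental heats of formation for the other species in the reaction, the unknown heat of formation falls out by arithmetic. A hedged sketch; the reaction and every number below are illustrative placeholders, not the paper's values:

    ```python
    # Hess's law bookkeeping for a schematic reaction
    #   CH3CHO -> CH3CO+ + H-
    # dH_rxn = dHf(CH3CO+) + dHf(H-) - dHf(CH3CHO)
    # so      dHf(CH3CO+) = dH_rxn - dHf(H-) + dHf(CH3CHO).
    dH_rxn = 880.0          # kJ/mol, hypothetical calculated reaction enthalpy
    dHf_CH3CHO = -166.1     # kJ/mol, illustrative heat of formation
    dHf_Hminus = 145.7      # kJ/mol, illustrative heat of formation

    dHf_acetyl_cation = dH_rxn - dHf_Hminus + dHf_CH3CHO
    print(f"dHf(CH3CO+) = {dHf_acetyl_cation:.1f} kJ/mol")
    ```

    Averaging the values from several such reactions, as the paper does, reduces the sensitivity of the result to the error in any single computed enthalpy.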

  17. Hybrid rocket engine, theoretical model and experiment

    Science.gov (United States)

    Chelaru, Teodor-Viorel; Mingireanu, Florin

    2011-06-01

    The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: scalability, stability/controllability of the operating parameters, and increasing the solid fuel regression rate. At first, we focus on theoretical models for the hybrid rocket motor and compare the results with already available experimental data from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors of different scales and compare them with experimental measurements of those motors. Next the paper focuses on the tribrid rocket motor concept, which can improve thrust controllability through supplementary liquid fuel injection. A complementary computation model is also presented to estimate the regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Lyapunov theory. The stability coefficients obtained depend on the burning parameters, while the stability and command matrices are identified. The paper presents thoroughly the input data of the model, which ensures the reproducibility of the numerical results by independent researchers.

  18. Theoretically Guided Analytical Method Development and Validation for the Estimation of Rifampicin in a Mixture of Isoniazid and Pyrazinamide by UV Spectrophotometer.

    Science.gov (United States)

    Khan, Mohammad F; Rita, Shamima A; Kayser, Md Shahidulla; Islam, Md Shariful; Asad, Sharmeen; Bin Rashid, Ridwan; Bari, Md Abdul; Rahman, Muhammed M; Al Aman, D A Anwar; Setu, Nurul I; Banoo, Rebecca; Rashid, Mohammad A

    2017-01-01

    A simple, rapid, economical, accurate, and precise method for the estimation of rifampicin in a mixture with isoniazid and pyrazinamide by a UV spectrophotometric technique (guided by theoretical investigation of physicochemical properties) was developed and validated. Theoretical investigations revealed that isoniazid and pyrazinamide were both freely soluble in water and slightly soluble in ethyl acetate, whereas rifampicin was practically insoluble in water but freely soluble in ethyl acetate. This indicates that ethyl acetate is an effective solvent for the extraction of rifampicin from a water mixture of isoniazid and pyrazinamide. A computational study indicated that a pH range of 6.0-8.0 would favor the extraction of rifampicin. Rifampicin is separated from isoniazid and pyrazinamide at pH 7.4 ± 0.1 by extraction with ethyl acetate. The ethyl acetate extract was then analyzed at a λmax of 344.0 nm. The developed method was validated for linearity, accuracy, and precision according to ICH guidelines. The proposed method exhibited good linearity over the concentration range of 2.5-35.0 μg/mL. The intraday and inter-day precision in terms of % RSD ranged from 1.09 to 1.70% and 1.63 to 2.99%, respectively. The accuracy (in terms of recovery) of the method varied from 96.7 ± 0.9 to 101.1 ± 0.4%. The LOD and LOQ were found to be 0.83 and 2.52 μg/mL, respectively. In addition, the developed method was successfully applied to determine rifampicin in combination brands (with isoniazid and pyrazinamide) available in Bangladesh.
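    The validation arithmetic behind figures like the LOD and LOQ in this abstract follows the ICH formulas LOD = 3.3σ/S and LOQ = 10σ/S, with S the calibration slope and σ the residual standard deviation. A hedged sketch on synthetic data (the slope, intercept, and noise level are invented, not the paper's rifampicin measurements):

    ```python
    import numpy as np

    # Fit a calibration line (absorbance vs. concentration) and derive
    # LOD/LOQ from the ICH Q2 formulas. Data are synthetic.
    rng = np.random.default_rng(1)

    conc = np.array([2.5, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0])  # ug/mL
    absorbance = 0.028 * conc + 0.01 + rng.normal(0, 0.002, conc.size)

    # Least-squares calibration line
    slope, intercept = np.polyfit(conc, absorbance, 1)
    residuals = absorbance - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)      # residual SD, 2 fitted parameters

    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    print(f"slope={slope:.4f}, LOD={lod:.2f} ug/mL, LOQ={loq:.2f} ug/mL")
    ```

    By construction LOQ/LOD = 10/3.3, which is a quick sanity check on any reported pair of values.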

  19. Estimation of Correlation Functions by Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    This paper illustrates how correlation functions can be estimated by the random decrement technique. Several different formulations of the random decrement technique for estimating the correlation functions are considered. The speed and accuracy of the different formulations of the random decrement technique are investigated, together with the influence of the length of the correlation functions. The accuracy of the estimates with respect to the theoretical correlation functions and the modal parameters is investigated. The modal parameters are extracted from the correlation functions using the polyreference time domain technique.
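    The core of the random decrement technique is to average segments of a stationary response signal that follow a trigger condition; for a zero-mean Gaussian process the resulting signature is proportional to the correlation function. A hedged sketch with one simple triggering condition (x[i] ≥ a) on an AR(1) test signal; the paper compares several formulations, none of which is claimed to be exactly this one:

    ```python
    import numpy as np

    # Random decrement signature of an AR(1) process with rho(tau) = phi**tau.
    rng = np.random.default_rng(0)

    phi, n = 0.9, 200_000
    e = rng.normal(size=n)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + e[i]
    x /= x.std()

    a, n_lags = 1.5, 20                    # trigger level, signature length
    triggers = np.flatnonzero(x[: n - n_lags] >= a)
    segments = np.stack([x[t : t + n_lags] for t in triggers])
    signature = segments.mean(axis=0)      # random decrement signature

    rho_hat = signature / signature[0]     # normalized: estimates rho(tau)
    print("estimated rho(5):", round(rho_hat[5], 3),
          "theory:", round(phi**5, 3))
    ```

    The normalization step works because, for a Gaussian process, E[x(t+τ) | x(t) ≥ a] = ρ(τ)·E[x(t) | x(t) ≥ a].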

  20. Theoretical relation between halo current-plasma energy displacement/deformation in EAST

    Science.gov (United States)

    Khan, Shahab Ud-Din; Khan, Salah Ud-Din; Song, Yuntao; Dalong, Chen

    2018-04-01

    In this paper, a theoretical model for calculating the halo current has been developed. The work is novel in that no theoretical calculations of the halo current have been reported so far; this is the first time a theoretical approach has been used. The research started by calculating points for the plasma energy in terms of poloidal and toroidal magnetic field orientations. While calculating these points, the work was extended to calculate the halo current and to develop the theoretical model. Two cases were considered for analyzing the plasma energy when it flows downward/upward to the divertor. Poloidal as well as toroidal movement of the plasma energy was investigated, and the corresponding mathematical formulations were derived. Two conducting points with respect to (R, Z) were calculated for the halo current calculations and derivations. At first, the halo current was established on the outer plate in the clockwise direction. The maximum halo current was estimated to be about 0.4 times the plasma current. A Matlab program has been developed to calculate the halo current and the plasma energy calculation points. The main objective of the research was to establish a theoretical relation with experimental results so as to evaluate, as a precaution, the plasma behavior in any tokamak.

  1. An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.

    Science.gov (United States)

    Brito da Silva, Leonardo Enzo; Wunsch, Donald C

    2018-06-01

    Improved data visualization will be a significant tool to enhance cluster analysis. In this paper, an information-theoretic-based method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Renyi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the IT-vis cluster revealing capabilities, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.
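    The similarity measure this abstract builds on, the cross information potential (CIP) with Gaussian Parzen windows, reduces to a double sum of Gaussian kernels over all sample pairs, and Rényi's quadratic cross entropy is its negative logarithm. A hedged one-dimensional sketch; the data, bandwidth, and function name are illustrative, not the paper's SOM pipeline:

    ```python
    import numpy as np

    # Cross information potential between two sample sets with Gaussian
    # Parzen windows of width sigma: CIP = mean over pairs of G_{sigma*sqrt(2)}.
    def cip(x, y, sigma=0.5):
        d2 = (x[:, None] - y[None, :]) ** 2
        g = np.exp(-d2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
        return g.mean()

    rng = np.random.default_rng(3)
    a = rng.normal(0.0, 1.0, 500)   # samples tied to one SOM neuron
    b = rng.normal(0.2, 1.0, 500)   # similar neighbouring distribution
    c = rng.normal(5.0, 1.0, 500)   # distribution across a cluster boundary

    h_ab = -np.log(cip(a, b))       # Renyi quadratic cross entropy
    h_ac = -np.log(cip(a, c))
    print(f"H2(a,b)={h_ab:.3f}  H2(a,c)={h_ac:.3f}")
    ```

    Adjacent neurons with similar distributions give a low cross entropy, while neuron pairs straddling a cluster boundary give a high one, which is what makes the boundaries stand out in the visualization.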

  2. Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2015-01-01

    In this work we consider the problem of feature enhancement for noise-robust automatic speech recognition (ASR). We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features, which is based on a minimum number of well-established, theoretically consistent assumptions. We show that state-of-the-art MFCC feature enhancement algorithms within this class of algorithms, while theoretically suboptimal or based on theoretically inconsistent assumptions, perform close to optimally in the MMSE sense.

  3. The Padé approximant in theoretical physics

    CERN Document Server

    Baker, George Allen

    1970-01-01

    In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank mat

  4. Field-widened Michelson interferometer for spectral discrimination in high-spectral-resolution lidar: theoretical framework.

    Science.gov (United States)

    Cheng, Zhongtao; Liu, Dong; Luo, Jing; Yang, Yongying; Zhou, Yudi; Zhang, Yupeng; Duan, Lulin; Su, Lin; Yang, Liming; Shen, Yibing; Wang, Kaiwei; Bai, Jian

    2015-05-04

    A field-widened Michelson interferometer (FWMI) is developed to act as the spectral discriminator in high-spectral-resolution lidar (HSRL). This realization is motivated by the wide-angle Michelson interferometer (WAMI), which has been used broadly in atmospheric wind and temperature detection. This paper describes, for the first time, an independent theoretical framework for the application of the FWMI in HSRL. In the framework, the operation principles and application requirements of the FWMI are discussed in comparison with those of the WAMI. Theoretical foundations for designing this type of interferometer are introduced based on these comparisons. Moreover, a general performance estimation model for the FWMI is established, which can provide common guidelines for the performance budget and evaluation of the FWMI in both the design and operation stages. Examples incorporating many practical imperfections or conditions that may degrade the performance of the FWMI are given to illustrate the implementation of the modeling. This theoretical framework presents a complete and powerful tool for solving most of the theoretical or engineering problems encountered in the FWMI application, including design, parameter calibration, prior performance budgeting, posterior performance estimation, and so on. It will be a valuable contribution to the lidar community in developing a new generation of HSRLs based on the FWMI spectroscopic filter.

  5. Almost Free Modules Set-Theoretic Methods

    CERN Document Server

    Eklof, PC

    1990-01-01

    This is an extended treatment of the set-theoretic techniques which have transformed the study of abelian group and module theory over the last 15 years. Part of the book is new work which does not appear elsewhere in any form. In addition, a large body of material which has appeared previously (in scattered and sometimes inaccessible journal articles) has been extensively reworked and in many cases given new and improved proofs. The set theory required is carefully developed with algebraists in mind, and the independence results are derived from explicitly stated axioms. The book contains exercises.

  6. Single-snapshot DOA estimation by using Compressed Sensing

    Science.gov (United States)

    Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin

    2014-12-01

    This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS), i.e., the classical ℓ1 minimization (or Least Absolute Shrinkage and Selection Operator, LASSO), the fast smooth ℓ0 minimization, the Sparse Iterative Covariance-Based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES), are analyzed, and their statistical properties are investigated and compared with the classical Fourier beamformer (FB) in different simulated scenarios. We show that unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of the adaptive algorithms (e.g., Capon and MUSIC) even in the single-snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
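    The ℓ1 (LASSO) beamformer named in this abstract can be sketched for the single-snapshot case with plain ISTA iterations on a steering-matrix dictionary. This is a hedged illustration under invented conditions (array geometry, grid, source angles, noise level, and regularization are all toy choices, not the paper's sonar setup):

    ```python
    import numpy as np

    # Single-snapshot sparse DOA via l1 minimization, solved with ISTA.
    rng = np.random.default_rng(7)

    m = 16                                   # half-wavelength ULA sensors
    grid = np.deg2rad(np.arange(-60, 61))    # 1-degree DOA grid
    A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(grid)))

    true_idx = [40, 90]                      # sources at -20 and +30 degrees
    s = np.zeros(grid.size, complex)
    s[true_idx] = [1.0, 0.8]
    y = A @ s + 0.05 * (rng.normal(size=m) + 1j * rng.normal(size=m))

    # ISTA: x <- soft(x + (1/L) A^H (y - A x), lam/L)
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    lam = 0.5
    x = np.zeros(grid.size, complex)
    for _ in range(1000):
        r = x + (A.conj().T @ (y - A @ x)) / L
        mag = np.abs(r)
        phase = r / np.maximum(mag, 1e-12)   # complex soft threshold
        x = phase * np.maximum(mag - lam / L, 0)

    peaks = np.argsort(np.abs(x))[-2:]       # two strongest grid points
    print("estimated DOAs (deg):", sorted(np.rad2deg(grid[peaks])))
    ```

    With a single snapshot there is no sample covariance to adapt to, which is exactly where the sparsity prior substitutes for the averaging that Capon and MUSIC rely on.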

  7. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise-bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of the SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  8. Theoretical, observational, and isotopic estimates of the lifetime of the solar nebula

    Science.gov (United States)

    Podosek, Frank A.; Cassen, Patrick

    1994-01-01

    There are a variety of isotopic data for meteorites which suggest that the protostellar nebula existed and was involved in making planetary materials for some 10^7 yr or more. Many cosmochemists, however, advocate alternative interpretations of such data in order to comply with a perceived constraint, from theoretical considerations, that the nebula existed only for a much shorter time, usually stated as less than or equal to 10^6 yr. In this paper, we review evidence relevant to solar nebula duration which is available through three different disciplines: theoretical modeling of star formation, isotopic data from meteorites, and astronomical observations of T Tauri stars. Theoretical models based on observations of present star-forming regions indicate that stars like the Sun form by dynamical gravitational collapse of dense cores of cold molecular clouds in the interstellar medium. The collapse to a star and disk occurs rapidly, on a time scale of the order of 10^5 yr. Disks evolve by dissipating energy while redistributing angular momentum, but it is difficult to predict the rate of evolution, particularly for low mass (compared to the star) disks which nonetheless still contain enough material to account for the observed planetary system. There is no compelling evidence, from available theories of disk structure and evolution, that the solar nebula must have evolved rapidly and could not have persisted for more than 1 Ma. In considering chronologically relevant isotopic data for meteorites, we focus on three methodologies: absolute ages by U-Pb/Pb-Pb, and relative ages by short-lived radionuclides (especially Al-26) and by evolution of Sr-87/Sr-86. Two kinds of meteoritic materials -- refractory inclusions such as CAIs, and differentiated meteorites (eucrites and angrites) -- appear to have experienced potentially dateable nebular events. In both cases, the most straightforward interpretations of the available data indicate
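    The short-lived-radionuclide chronology this abstract leans on works by comparing inferred initial Al-26/Al-27 ratios: if two objects formed from a reservoir with a uniform initial ratio, the difference in their inferred ratios gives a relative formation age through the decay law Δt = ln(R1/R2)/λ. A hedged sketch; the half-life is an approximate literature value and the two ratios are illustrative, not measured data:

    ```python
    import math

    # Relative age from inferred initial 26Al/27Al ratios.
    HALF_LIFE_MYR = 0.717                  # approximate 26Al half-life, Myr
    lam = math.log(2) / HALF_LIFE_MYR      # decay constant per Myr

    ratio_early = 5.0e-5    # canonical-like initial ratio in early solids
    ratio_later = 1.0e-5    # lower ratio inferred in a later-formed object

    delta_t_myr = math.log(ratio_early / ratio_later) / lam
    print(f"relative formation age: {delta_t_myr:.2f} Myr")
    ```

    The short half-life is what makes Al-26 a fine-grained clock for nebular time scales of a few Myr, which is exactly the duration question the paper addresses.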

  9. Reducing Inventory System Costs by Using Robust Demand Estimators

    OpenAIRE

    Raymond A. Jacobs; Harvey M. Wagner

    1989-01-01

    Applications of inventory theory typically use historical data to estimate demand distribution parameters. Imprecise knowledge of the demand distribution adds to the usual replenishment costs associated with stochastic demands. Only limited research has been directed at the problem of choosing cost effective statistical procedures for estimating these parameters. Available theoretical findings on estimating the demand parameters for (s, S) inventory replenishment policies are limited by their...

  10. Reserves' potential of sedimentary basin: modeling and estimation; Potentiel de reserves d'un bassin petrolier: modelisation et estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lepez, V.

    2002-12-01

    The aim of this thesis is to build a statistical model of the size distribution of oil and gas fields in a given sedimentary basin, covering both the fields that exist in the subsoil and those which have already been discovered. Estimating all the parameters of the model, via estimation of the density of the observations by model selection of piecewise polynomials using penalized maximum likelihood techniques, makes it possible to provide estimates of the total number of fields yet to be discovered, by size class. We assume that the set of underground field sizes is an i.i.d. sample from an unknown population following a Levy-Pareto law with unknown parameter. The set of already discovered fields is a 'size-biased' sub-sample without replacement from the former; the associated inclusion probabilities are to be estimated. We prove that the probability density of the observations is the product of the underlying density and an unknown weighting function representing the sampling bias. Given an arbitrary partition of the size interval (called a model), the analytical solutions of likelihood maximization enable estimation of both the parameter of the underlying Levy-Pareto law and the weighting function, which is assumed to be piecewise constant on the partition. We add a monotonicity constraint on the latter, reflecting the fact that the bigger a field, the higher its probability of being discovered. Horvitz-Thompson-like estimators then yield the conclusion. We then allow our partitions to vary within several classes of models and prove a model selection theorem which selects the best partition within a class, in terms of both the Kullback and Hellinger risks of the associated estimator. We conclude with simulations and various applications to real data from sedimentary basins of four continents, in order to illustrate theoretical as well as practical aspects of our model. (author)
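    The size-biased sampling and Horvitz-Thompson step described above can be sketched numerically. The Pareto shape, the piecewise-constant inclusion probabilities, and the true field count below are hypothetical stand-ins for illustration, not values from the thesis:

    ```python
    import random

    random.seed(0)

    ALPHA = 1.2      # assumed Pareto shape parameter
    N_FIELDS = 2000  # assumed true number of fields in the basin

    # True (unobserved) field sizes: Pareto draws with minimum size 1.
    sizes = [random.paretovariate(ALPHA) for _ in range(N_FIELDS)]

    # Size-biased discovery: a monotone, piecewise-constant inclusion
    # probability, mimicking the thesis' weighting function.
    def inclusion_prob(s):
        if s < 2.0:
            return 0.05
        elif s < 10.0:
            return 0.30
        return 0.90

    discovered = [s for s in sizes if random.random() < inclusion_prob(s)]

    # Horvitz-Thompson-like estimate of the total number of fields:
    # each discovered field counts 1/pi_i.
    n_hat = sum(1.0 / inclusion_prob(s) for s in discovered)
    print(f"discovered: {len(discovered)}, HT estimate of total: {n_hat:.0f}")
    ```

    In practice the inclusion probabilities are unknown and must themselves be estimated from the observed size distribution, which is the harder part of the problem the thesis addresses.
    
    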

  11. Prediction of successful trial of labour in patients with a previous caesarean section

    International Nuclear Information System (INIS)

    Shaheen, N.; Khalil, S.; Iftikhar, P.

    2014-01-01

    Objective: To determine the prediction rate of success in trial of labour after one previous caesarean section. Methods: The cross-sectional study was conducted at the Department of Obstetrics and Gynaecology, Cantonment General Hospital, Rawalpindi, from January 1, 2012 to January 31, 2013, and comprised women with one previous Caesarean section and with single alive foetus at 37-41 weeks of gestation. Women with more than one Caesarean section, unknown site of uterine scar, bony pelvic deformity, placenta previa, intra-uterine growth restriction, deep transverse arrest in previous labour and non-reassuring foetal status at the time of admission were excluded. Intrapartum risk assessment included Bishop score at admission, rate of cervical dilatation and scar tenderness. SPSS 21 was used for statistical analysis. Results: Out of a total of 95 women, the trial was successful in 68 (71.6%). Estimated foetal weight and number of prior vaginal deliveries had a high predictive value for successful trial of labour after Caesarean section. Estimated foetal weight had an odds ratio of 0.46 (p<0.001), while number of prior vaginal deliveries had an odds ratio of 0.85 (p=0.010). Other factors found to be predictive of a successful trial included Bishop score at the time of admission (p<0.037) and rate of cervical dilatation in the first stage of labour (p<0.021). Conclusion: History of prior vaginal deliveries, higher Bishop score at the time of admission, rapid rate of cervical dilatation and lower estimated foetal weight were predictive of a successful trial of labour after Caesarean section. (author)

  12. A theoretical model for prediction of deposition efficiency in cold spraying

    International Nuclear Information System (INIS)

    Li Changjiu; Li Wenya; Wang Yuyue; Yang Guanjun; Fukanuma, H.

    2005-01-01

    The deposition behavior of a spray particle stream with a particle size distribution was theoretically examined for cold spraying in terms of deposition efficiency as a function of particle parameters and spray angle. The theoretical relation was established between the deposition efficiency and spray angle. The experiments were conducted by measuring deposition efficiency at different driving gas conditions and different spray angles using gas-atomized copper powder. It was found that the theoretically estimated results agreed reasonably well with the experimental ones. Based on the theoretical model and experimental results, it was revealed that the distribution of particle velocity resulting from the particle size distribution significantly influences the deposition efficiency in cold spraying. It was necessary for the majority of particles to achieve a velocity higher than the critical velocity in order to improve the deposition efficiency. The normal component of particle velocity contributed to the deposition of the particle under the off-normal spray condition. The deposition efficiency of sprayed particles decreased owing to the decrease of the normal velocity component as spraying was performed at an off-normal angle.

  13. Theoretical repeatability assessment without repetitive measurements in gradient high-performance liquid chromatography.

    Science.gov (United States)

    Kotani, Akira; Tsutsumi, Risa; Shoji, Asaki; Hayashi, Yuzuru; Kusu, Fumiyo; Yamamoto, Kazuhiro; Hakamata, Hideki

    2016-07-08

    This paper puts forward a time- and material-saving method for evaluating the repeatability of area measurements in gradient HPLC with UV detection (HPLC-UV), based on the function of mutual information (FUMI) theory, which can theoretically provide the measurement standard deviation (SD) and detection limits through the stochastic properties of baseline noise with no recourse to repetitive measurements of real samples. The chromatographic determination of terbinafine hydrochloride and enalapril maleate is taken as an example. The best choice of the number of noise data points, inevitable for the theoretical evaluation, is shown to be 512 data points (10.24 s at the 50 point/s sampling rate of an A/D converter). Coupled with the relative SD (RSD) of sample injection variability in the instrument used, the theoretical evaluation is proved to give identical values of area measurement RSDs to those estimated by the usual repetitive method (n=6) over a wide concentration range of the analytes, within the 95% confidence intervals of the latter RSD. The FUMI theory is not a statistical one, but the "statistical" reliability of its SD estimates (n=1) is observed to be as high as that attained by thirty-one measurements of the same samples (n=31). Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Experimental and theoretical studies of near-ground acoustic radiation propagation in the atmosphere

    Science.gov (United States)

    Belov, Vladimir V.; Burkatovskaya, Yuliya B.; Krasnenko, Nikolai P.; Rakov, Aleksandr S.; Rakov, Denis S.; Shamanaeva, Liudmila G.

    2017-11-01

    Results of experimental and theoretical studies of near-ground propagation of monochromatic acoustic radiation along atmospheric paths from a source to a receiver are presented. The studies take into account the contribution of multiple scattering from fluctuations of atmospheric temperature and wind velocity, refraction of sound by wind-velocity and temperature gradients, and reflection from the underlying surface, for different models of the atmosphere, depending on the sound frequency, the coefficient of reflection from the underlying surface, the propagation distance, and the source and receiver altitudes. Calculations were performed by the Monte Carlo method, using the local estimation algorithm, with a computer program developed by the authors. Results of experimental investigations under controllable conditions are compared with theoretical estimates and with results of analytical calculations for the Delany-Bazley impedance model. The satisfactory agreement of the data obtained confirms the correctness of the suggested computer program.

  15. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co

  16. Estimation of strong ground motion

    International Nuclear Information System (INIS)

    Watabe, Makoto

    1993-01-01

    A fault model has been developed to estimate strong ground motion in consideration of the characteristics of the seismic source and the propagation path of seismic waves. There are two different approaches in the model: the first is theoretical, while the second is semi-empirical. Though the latter is more practical than the former for estimating input motions, it requires at least records of the small event, the value of the seismic moment of the small event, and the fault model of the large event

  17. A novel approach for absolute radar calibration: formulation and theoretical validation

    Directory of Open Access Journals (Sweden)

    C. Merker

    2015-06-01

    Full Text Available The theoretical framework of a novel approach for absolute radar calibration is presented and its potential analysed by means of synthetic data to lay out a solid basis for future practical application. The method presents the advantage of an absolute calibration with respect to the directly measured reflectivity, without needing a previously calibrated reference device. It requires a setup comprising three radars: two devices oriented towards each other, measuring reflectivity along the same horizontal beam and operating within a strongly attenuated frequency range (e.g. K or X band), and one vertical reflectivity and drop size distribution (DSD) profiler below this connecting line, which is to be calibrated. The absolute determination of the calibration factor is based on attenuation estimates. Using synthetic, smooth and geometrically idealised data, calibration is found to perform best using homogeneous precipitation events with rain rates high enough to ensure a distinct attenuation signal (reflectivity above ca. 30 dBZ). Furthermore, the choice of the interval width (in measuring range gates) around the vertically pointing radar, needed for attenuation estimation, is found to have an impact on the calibration results. Further analysis is done by means of synthetic data with realistic, inhomogeneous precipitation fields taken from measurements. A calibration factor is calculated for each considered case using the presented method. Based on the distribution of the calculated calibration factors, the most probable value is determined by estimating the mode of a fitted shifted logarithmic normal distribution function. After filtering the data set with respect to rain rate and inhomogeneity and choosing an appropriate length of the considered attenuation path, the estimated uncertainty of the calibration factor is of the order of 1 to 11 %, depending on the chosen interval width. Considering stability and accuracy of the method, an interval of

  18. Is BAMM Flawed? Theoretical and Practical Concerns in the Analysis of Multi-Rate Diversification Models.

    Science.gov (United States)

    Rabosky, Daniel L; Mitchell, Jonathan S; Chang, Jonathan

    2017-07-01

    Bayesian analysis of macroevolutionary mixtures (BAMM) is a statistical framework that uses reversible jump Markov chain Monte Carlo to infer complex macroevolutionary dynamics of diversification and phenotypic evolution on phylogenetic trees. A recent article by Moore et al. (MEA) reported a number of theoretical and practical concerns with BAMM. Major claims from MEA are that (i) BAMM's likelihood function is incorrect, because it does not account for unobserved rate shifts; (ii) the posterior distribution on the number of rate shifts is overly sensitive to the prior; and (iii) diversification rate estimates from BAMM are unreliable. Here, we show that these and other conclusions from MEA are generally incorrect or unjustified. We first demonstrate that MEA's numerical assessment of the BAMM likelihood is compromised by their use of an invalid likelihood function. We then show that "unobserved rate shifts" appear to be irrelevant for biologically plausible parameterizations of the diversification process. We find that the purportedly extreme prior sensitivity reported by MEA cannot be replicated with standard usage of BAMM v2.5, or with any other version when conventional Bayesian model selection is performed. Finally, we demonstrate that BAMM performs very well at estimating diversification rate variation across the ~20% of simulated trees in MEA's data set for which it is theoretically possible to infer rate shifts with confidence. Due to ascertainment bias, the remaining 80% of their purportedly variable-rate phylogenies are statistically indistinguishable from those produced by a constant-rate birth-death process and were thus poorly suited for the summary statistics used in their performance assessment. We demonstrate that inferences about diversification rates have been accurate and consistent across all major previous releases of the BAMM software. We recognize an acute need to address the theoretical foundations of rate-shift models for

  19. Estimation error algorithm at analysis of beta-spectra

    International Nuclear Information System (INIS)

    Bakovets, N.V.; Zhukovskij, A.I.; Zubarev, V.N.; Khadzhinov, E.M.

    2005-01-01

    This work describes an algorithm for estimating errors in the analysis of beta spectra, and compares the theoretical and experimental errors obtained in processing beta-channel data. (authors)

  20. Experimental and theoretical analysis of cracking in drying soils

    OpenAIRE

    Lakshmikantha, M.R.

    2009-01-01

    The thesis focuses on the experimental and theoretical aspects of the process of cracking in drying soils. The results and conclusions were drawn from an exhaustive experimental campaign characterised by innovative multidisciplinary aspects incorporating Fracture Mechanics and classical Soil mechanics, aided with image analysis techniques. A detailed study of the previous works on the topic showed the absence of large scale fully monitored laboratory tests, while the existing studies were per...

  1. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model

    OpenAIRE

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S.; Breen, Lauren J.; Witt, Regina R.; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

    Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework titled the Health Services Workplace Environmental Resilience Model (HSWERM) is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of p...

  2. Honesty-humility in contemporary students: manipulations of self-image by inflated IQ estimations.

    Science.gov (United States)

    Kajonius, P J

    2014-08-01

    The HEXACO model offers a complement to the Big Five model, including a sixth factor, Honesty-Humility, and its four facets (Sincerity, Fairness, Greed-avoidance, and Modesty). The four facets of Honesty-Humility and three indicators of intelligence (one performance-based cognitive ability test, one self-estimated academic potential, and one self-report of previous IQ test results) were assessed in students entering higher education (N = 187). A significant negative correlation was observed between Honesty-Humility and self-reported intelligence (r = -.37), most evident in the Modesty facet. These results may be interpreted as tendencies of exaggeration, using a theoretical frame of psychological image-management, concluding that the Honesty-Humility trait captures students' self-ambitions, particularly within the context of an individualistic, competitive culture such as Sweden.

  3. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains (1990)) and E. Gassiat and S. Boucheron (Optimal error exp...

  4. Theoretical investigation of shock stand-off distance for non-equilibrium flows over spheres

    KAUST Repository

    Shen, Hua; WEN, Chih-Yung

    2018-01-01

    We derived a theoretical solution of the shock stand-off distance for a non-equilibrium flow over spheres based on Wen and Hornung’s solution and Olivier’s solution. Compared with previous approaches, the main advantage of the present approach

  5. A theoretical framework to study variations in workplace violence experienced by emergency responders

    NARCIS (Netherlands)

    L. van Reemst (Lisa)

    2016-01-01

    Emergency responders are often sent to the front line and are often confronted with aggression and violence in interaction with citizens. According to previous studies, some professionals experience more workplace violence than others. In this article, the theoretical framework to

  6. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable for those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly-used accuracy measures (e.g. sensitivity, PPV, MCC and F-score) and can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
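    The MEA principle described above can be illustrated with the gamma-centroid rule from this line of work: threshold each posterior marginal probability at 1/(gamma + 1), so larger gamma trades precision for sensitivity. The marginal probabilities below are invented for illustration:

    ```python
    # Sketch of a gamma-centroid (MEA-style) estimator on a binary space:
    # given posterior marginals p_i that bit i equals 1, the estimator
    # maximizing gamma-weighted expected accuracy keeps bit i iff
    # p_i > 1 / (gamma + 1).
    def gamma_centroid(marginals, gamma=1.0):
        threshold = 1.0 / (gamma + 1.0)
        return [1 if p > threshold else 0 for p in marginals]

    marginals = [0.9, 0.55, 0.4, 0.2]
    print(gamma_centroid(marginals, gamma=1.0))  # threshold 0.5 -> [1, 1, 0, 0]
    print(gamma_centroid(marginals, gamma=4.0))  # threshold 0.2 -> [1, 1, 1, 0]
    ```

    With gamma = 1 this reduces to the ordinary centroid (majority) estimator; raising gamma admits lower-confidence bits, which is how such estimators are tuned toward accuracy measures like sensitivity or F-score.
    
    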

  7. Economic impact of feeding a phenylalanine-restricted diet to adults with previously untreated phenylketonuria.

    Science.gov (United States)

    Brown, M C; Guest, J F

    1999-02-01

    The aim of the present study was to estimate the direct healthcare cost of managing adults with previously untreated phenylketonuria (PKU) for one year before any dietary restrictions and for the first year after a phenylalanine- (PHE-) restricted diet was introduced. The resource use and corresponding costs were estimated from medical records and interviews with health care professionals experienced in caring for adults with previously untreated PKU. The mean annual cost of caring for a client being fed an unrestricted diet was estimated to be £83 996. In the first year after introducing a PHE-restricted diet, the mean annual cost was reduced by £20 647 to £63 348 as a result of a reduction in nursing time, hospitalizations, outpatient clinic visits and medications. However, the economic benefit of the diet depended on whether the clients were previously high or low users of nursing care. Nursing time was the key cost-driver, accounting for 79% of the cost of managing high users and 31% of the management cost for low users. In contrast, the acquisition cost of a PHE-restricted diet accounted for up to 6% of the cost for managing high users and 15% of the management cost for low users. Sensitivity analyses showed that introducing a PHE-restricted diet reduces the annual cost of care, provided that annual nursing time is reduced by more than 8% or more than 5% of clients respond to the diet. The clients showed fewer negative behaviours when being fed a PHE-restricted diet, which may account for the observed reduction in nursing time needed to care for these clients. In conclusion, feeding a PHE-restricted diet to adults with previously untreated PKU leads to economic benefits to the UK's National Health Service and society in general.

  8. Theoretical study on the inverse modeling of deep body temperature measurement

    International Nuclear Information System (INIS)

    Huang, Ming; Chen, Wenxi

    2012-01-01

    We evaluated the theoretical aspects of monitoring the deep body temperature distribution with the inverse modeling method. A two-dimensional model was built based on anatomical structure to simulate the human abdomen. By integrating biophysical and physiological information, the deep body temperature distribution was estimated from cutaneous surface temperature measurements using an inverse quasilinear method. Simulations were conducted with and without the heat effect of blood perfusion in the muscle and skin layers. The results of the simulations showed consistently that the noise characteristics and arrangement of the temperature sensors were the major factors affecting the accuracy of the inverse solution. With temperature sensors of 0.05 °C systematic error and an optimized 16-sensor arrangement, the inverse method could estimate the deep body temperature distribution with an average absolute error of less than 0.20 °C. The results of this theoretical study suggest that it is possible to reconstruct the deep body temperature distribution with the inverse method and that this approach merits further investigation. (paper)
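    As a toy illustration of the kind of inverse step described (not the paper's 2-D anatomical model), one can recover two interior temperatures from three surface readings by Tikhonov-regularized least squares; the forward matrix A, the temperatures, and the regularization strength below are all assumed:

    ```python
    # Toy regularized linear inverse problem: surface readings y = A t,
    # with A a hypothetical forward (heat-transfer) matrix mapping two
    # deep-body temperatures t to three cutaneous sensor readings.
    A = [[1.0, 0.2],
         [0.8, 0.5],
         [0.3, 1.0]]
    t_true = [37.0, 36.5]
    y = [sum(A[i][j] * t_true[j] for j in range(2)) for i in range(3)]

    lam = 0.01  # assumed Tikhonov regularization strength

    # Normal equations (A^T A + lam I) t = A^T y, solved in closed form (2x2).
    M = [[sum(A[i][r] * A[i][c] for i in range(3)) + (lam if r == c else 0.0)
          for c in range(2)] for r in range(2)]
    b = [sum(A[i][r] * y[i] for i in range(3)) for r in range(2)]

    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    t_hat = [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
             (M[0][0] * b[1] - M[1][0] * b[0]) / det]
    print([round(t, 2) for t in t_hat])
    ```

    The small bias of the recovered values relative to t_true is the cost of regularization; in the paper's setting the analogous trade-off is between sensor noise amplification and reconstruction accuracy.
    
    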

  9. Dynamic estimator for determining operating conditions in an internal combustion engine

    Science.gov (United States)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-01-05

    Methods and systems are provided for estimating engine performance information for a combustion cycle of an internal combustion engine. Estimated performance information for a previous combustion cycle is retrieved from memory. The estimated performance information includes an estimated value of at least one engine performance variable. Actuator settings applied to engine actuators are also received. The performance information for the current combustion cycle is then estimated based, at least in part, on the estimated performance information for the previous combustion cycle and the actuator settings applied during the previous combustion cycle. The estimated performance information for the current combustion cycle is then stored to the memory to be used in estimating performance information for a subsequent combustion cycle.
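    The cycle-to-cycle loop described above can be sketched as follows; the prediction model and its coefficients are hypothetical stand-ins, since the record does not specify them:

    ```python
    # Sketch of recursive cycle-to-cycle estimation: the estimate for the
    # current cycle is computed from the stored previous-cycle estimate and
    # the actuator settings applied during that previous cycle.
    def predict_next(prev_estimate, actuator_settings):
        # Toy first-order model (assumed coefficients): the performance
        # variable relaxes toward a level set by the actuator command.
        a, b = 0.8, 0.2
        return a * prev_estimate + b * actuator_settings["fuel_cmd"]

    memory = {"estimate": 0.0}  # estimated performance variable, cycle k-1

    for cycle in range(5):
        settings = {"fuel_cmd": 10.0}  # settings applied last cycle
        est = predict_next(memory["estimate"], settings)
        memory["estimate"] = est       # stored for the next cycle's update
        print(cycle, round(est, 3))
    ```

    The essential structure is the memory round-trip: each iteration reads the stored estimate, updates it with the applied actuator settings, and writes it back for the subsequent cycle.
    
    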

  10. A Comparative Study of Theoretical Graph Models for Characterizing Structural Networks of Human Brain

    Directory of Open Access Journals (Sweden)

    Xiaojin Li

    2013-01-01

    Full Text Available Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs) are localized by a recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL) to address the limitations in the identification of brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties between two graph models, namely, the stickiness-index-based model (STICKY) and the scale-free gene duplication model (SF-GD), that have higher similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that among the seven theoretical graph models compared in this study, the STICKY and SF-GD models perform better in characterizing the structural human brain network.

  11. Transport simulations TFTR: Theoretically-based transport models and current scaling

    International Nuclear Information System (INIS)

    Redi, M.H.; Cummings, J.C.; Bush, C.E.; Fredrickson, E.; Grek, B.; Hahm, T.S.; Hill, K.W.; Johnson, D.W.; Mansfield, D.K.; Park, H.; Scott, S.D.; Stratton, B.C.; Synakowski, E.J.; Tang, W.M.; Taylor, G.

    1991-12-01

    In order to study the microscopic physics underlying observed L-mode current scaling, 1-1/2-d BALDUR has been used to simulate density and temperature profiles for high and low current, neutral beam heated discharges on TFTR with several semi-empirical, theoretically-based models previously compared for TFTR, including several versions of trapped electron drift wave driven transport. Experiments at TFTR, JET and DIII-D show that the I_p scaling of τ_E does not arise from edge modes as previously thought, and is most likely to arise from nonlocal processes or from the I_p-dependence of local plasma core transport. Consistent with this, it is found that strong current scaling does not arise from any of several edge models of resistive ballooning. Simulations with the profile-consistent drift wave model and with a new model for toroidal collisionless trapped electron mode core transport in a multimode formalism lead to strong current scaling of τ_E for the L-mode cases on TFTR. None of the theoretically-based models succeeded in simulating the measured temperature and density profiles for both high and low current experiments.

  12. How to prevent type 2 diabetes in women with previous gestational diabetes?

    DEFF Research Database (Denmark)

    Pedersen, Anne Louise Winkler; Terkildsen Maindal, Helle; Juul, Lise

    2017-01-01

    OBJECTIVES: Women with previous gestational diabetes (GDM) have a seven times higher risk of developing type 2 diabetes (T2DM) than women without. We aimed to review the evidence for effective behavioural interventions seeking to prevent T2DM in this high-risk group. METHODS: A systematic review of RCTs in several databases in March 2016. RESULTS: No specific intervention or intervention components were found superior. The pooled effect on diabetes incidence (four trials) was estimated to be: -5.02 per 100 (95% CI: -9.24; -0.80). CONCLUSIONS: This study indicates that intervention is superior to no intervention in the prevention of T2DM among women with previous GDM.

  13. Channel estimation in DFT-based offset-QAM OFDM systems.

    Science.gov (United States)

    Zhao, Jian

    2014-10-20

    Offset quadrature amplitude modulation (offset-QAM) orthogonal frequency division multiplexing (OFDM) exhibits enhanced net data rates compared to conventional OFDM, and reduced complexity compared to Nyquist FDM (N-FDM). However, channel estimation in discrete-Fourier-transform (DFT) based offset-QAM OFDM is different from that in conventional OFDM and requires particular study. In this paper, we derive a closed-form expression for the demultiplexed signal in DFT-based offset-QAM systems and show that although the residual crosstalk is orthogonal to the decoded signal, its existence degrades the channel estimation performance when the conventional least-square method is applied. We propose and investigate four channel estimation algorithms for offset-QAM OFDM that vary in terms of performance, complexity, and tolerance to system parameters. It is theoretically and experimentally shown that simple channel estimation can be realized in offset-QAM OFDM with the achieved performance close to the theoretical limit. This, together with the existing advantages over conventional OFDM and N-FDM, makes this technology very promising for optical communication systems.
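    A minimal sketch of the naive least-square (LS) pilot estimate that, per the abstract, is degraded by residual crosstalk. The channel coefficient, pilot symbol, and crosstalk value are invented for illustration; this shows the problem, not the paper's proposed algorithms:

    ```python
    import cmath

    # Single-subcarrier LS channel estimation: H_hat = Y / X for a known
    # pilot X. In offset-QAM OFDM a residual crosstalk term survives
    # demultiplexing and biases this naive estimate.
    H_true = 0.7 * cmath.exp(1j * 0.3)  # assumed channel coefficient
    X_pilot = 1 + 1j                    # known pilot symbol
    crosstalk = 0.05j                   # toy residual crosstalk term

    Y = H_true * X_pilot + crosstalk    # received pilot sample
    H_ls = Y / X_pilot                  # naive LS estimate

    print(abs(H_ls - H_true))           # residual bias left by the crosstalk
    ```

    The printed bias is exactly |crosstalk / X_pilot|, which is why the paper's dedicated estimation algorithms are needed even though the crosstalk is orthogonal to the decoded data.
    
    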

  14. Do Indonesian Children's Experiences with Large Currency Units Facilitate Magnitude Estimation of Long Temporal Periods?

    Science.gov (United States)

    Cheek, Kim A.

    2017-08-01

    Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated if they estimate long time periods accurately and if they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence instead of multiplicatively related when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.

  15. A Theoretical Framework to Study Variations in Workplace Violence Experienced by Emergency Responders

    NARCIS (Netherlands)

    L. van Reemst (Lisa)

    2016-01-01

    Emergency responders are often sent to the front line and are often confronted with aggression and violence in interaction with citizens. According to previous studies, some professionals experience more workplace violence than others. In this article, the theoretical framework to

  16. Estimating Functions of Distributions Defined over Spaces of Unknown Size

    Directory of Open Access Journals (Sweden)

    David H. Wolpert

    2013-10-01

    Full Text Available We consider Bayesian estimation of information-theoretic quantities from data, using a Dirichlet prior. Acknowledging the uncertainty of the event space size m and the Dirichlet prior’s concentration parameter c, we treat both as random variables set by a hyperprior. We show that the associated hyperprior, P(c, m), obeys a simple “Irrelevance of Unseen Variables” (IUV) desideratum iff P(c, m) = P(c)P(m). Thus, requiring IUV greatly reduces the number of degrees of freedom of the hyperprior. Some information-theoretic quantities can be expressed multiple ways, in terms of different event spaces, e.g., mutual information. With all hyperpriors (implicitly) used in earlier work, different choices of this event space lead to different posterior expected values of these information-theoretic quantities. We show that there is no such dependence on the choice of event space for a hyperprior that obeys IUV. We also derive a result that allows us to exploit IUV to greatly simplify calculations, like the posterior expected mutual information or posterior expected multi-information. We also use computer experiments to favorably compare an IUV-based estimator of entropy to three alternative methods in common use. We end by discussing how seemingly innocuous changes to the formalization of an estimation problem can substantially affect the resultant estimates of posterior expectations.
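
    For fixed event-space size m and concentration c (i.e., without the paper's hyperprior treatment), the posterior expected entropy under a Dirichlet prior has a closed form due to Wolpert and Wolf. A sketch comparing it with the biased plug-in estimate, on invented counts:

```python
import numpy as np
from scipy.special import digamma

def posterior_mean_entropy(counts, alpha=1.0):
    """Posterior expected Shannon entropy (nats) under a symmetric
    Dirichlet(alpha) prior with fixed event-space size; closed form due to
    Wolpert and Wolf."""
    a = np.asarray(counts, dtype=float) + alpha   # posterior Dirichlet parameters
    A = a.sum()
    return float(digamma(A + 1.0) - np.sum((a / A) * digamma(a + 1.0)))

def plugin_entropy(counts):
    """Maximum-likelihood (plug-in) entropy estimate; biased low."""
    p = np.asarray(counts, dtype=float) / np.sum(counts)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

counts = [10, 5, 2, 1, 0, 0]                      # invented event counts, m = 6
print("posterior mean:", posterior_mean_entropy(counts))
print("plug-in:       ", plugin_entropy(counts))
```

The paper goes further by also averaging over c and m under an IUV-obeying hyperprior; this fixed-(c, m) estimator is only the inner step of that calculation.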

  17. Representational Change and Children's Numerical Estimation

    Science.gov (United States)

    Opfer, John E.; Siegler, Robert S.

    2007-01-01

    We applied overlapping waves theory and microgenetic methods to examine how children improve their estimation proficiency, and in particular how they shift from reliance on immature to mature representations of numerical magnitude. We also tested the theoretical prediction that feedback on problems on which the discrepancy between two…

  18. INFANTILISM: THEORETICAL CONSTRUCT AND OPERATIONALIZATION

    Directory of Open Access Journals (Sweden)

    Yelena V. Sabelnikova

    2016-01-01

    Full Text Available The aim of the presented research is to define and theoretically operationalize the concept of infantilism and its construct. The content of the theoretical construct «infantilism» is analyzed. Methods. The methods of theoretical research involve analysis and synthesis. The age and content criteria are analysed in the context of childhood and adulthood. The traits which can be interpreted as adult infantile traits are described. Results. The characteristics of adult infantilism in the modern world are defined, taking into account increasing information flows and socio-economic changes. A definition of the concept «infantilism», including its main features, is given. Infantilism is defined as a personal organization that includes features and models of a previous age period that are not adequate to the person's real age stage, with emphasis on immaturity of the emotional and volitional sphere. Scientific novelty. The main psychological characteristics of adulthood are described: reflection, the need for work and professional activity, professional self-determination, possession of labor skills, the need for self-realization, and maturity of the emotional and volitional sphere. The following are considered objective characteristics of adulthood: transition to economic and territorial independence from the parental family, and the adoption of new social roles such as worker, spouse, and parent. Two options for operationalizing the concept are distinguished: objective (the presence/absence in a person's real life of objective criteria of adulthood) and subjective (self-reported presence/absence of the psychological characteristics of adulthood). Practical significance consists in the operationalization of the construct «infantilism», which at present has many interpretations; that operationalization is necessary for further analysis and for carrying out various studies.

  19. Improved Estimates of Thermodynamic Parameters

    Science.gov (United States)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using a parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by the improved method and compared with previously reported values. The technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

  20. Estimating the effect of current, previous and never use of drugs in studies based on prescription registries

    DEFF Research Database (Denmark)

    Nielsen, Lars Hougaard; Løkkegaard, Ellen; Andreasen, Anne Helms

    2009-01-01

    PURPOSE: Many studies which investigate the effect of drugs categorize the exposure variable into never, current, and previous use of the study drug. When prescription registries are used to make this categorization, the exposure variable possibly gets misclassified, since the registries do not carry any information on the time of discontinuation of treatment. In this study, we investigated the amount of misclassification of exposure (never, current, previous use) to hormone therapy (HT) when the exposure variable was based on prescription data. Furthermore, we evaluated the significance of this misclassification for analysing the risk of breast cancer. MATERIALS AND METHODS: Prescription data were obtained from the Danish Registry of Medicinal Products Statistics and we applied various methods to approximate treatment episodes. We analysed the duration of HT episodes to study the ability to identify…

  1. Cost-estimating relationships for space programs

    Science.gov (United States)

    Mandell, Humboldt C., Jr.

    1992-01-01

    Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed examining the sources of errors in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture single-system methods; and (2) the Price paradigms that incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.

  2. Information-Theoretic Bounded Rationality and ε-Optimality

    Directory of Open Access Journals (Sweden)

    Daniel A. Braun

    2014-08-01

    Full Text Available Bounded rationality concerns the study of decision makers with limited information processing resources. Previously, the free energy difference functional has been suggested to model bounded rational decision making, as it provides a natural trade-off between an energy or utility function that is to be optimized and information processing costs that are measured by entropic search costs. The main question of this article is how the information-theoretic free energy model relates to simple ε-optimality models of bounded rational decision making, where the decision maker is satisfied with any action in an ε-neighborhood of the optimal utility. We find that the stochastic policies that optimize the free energy trade-off comply with the notion of ε-optimality. Moreover, this optimality criterion even holds when the environment is adversarial. We conclude that the study of bounded rationality based on ε-optimality criteria that abstract away from the particulars of the information processing constraints is compatible with the information-theoretic free energy model of bounded rationality.
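
    The free-energy trade-off described above yields a softmax ("Boltzmann") policy that interpolates between the prior and the utility maximizer. A small sketch with an invented utility vector, showing how the shortfall from the optimal utility (the ε in ε-optimality) shrinks as the inverse temperature β grows:

```python
import numpy as np

# Invented utilities for a discrete action set, with a uniform prior policy.
U = np.array([1.0, 0.8, 0.5, 0.1])
p0 = np.full(U.size, 1.0 / U.size)

def free_energy_policy(beta):
    """Policy maximizing E[U] - (1/beta) * KL(p || p0): a softmax over U."""
    w = p0 * np.exp(beta * U)
    return w / w.sum()

gaps = []
for beta in (0.1, 1.0, 10.0, 100.0):
    p = free_energy_policy(beta)
    gap = float(U.max() - p @ U)   # shortfall from the optimal utility
    gaps.append(gap)
    print(f"beta={beta:6.1f}  E[U]={p @ U:.4f}  epsilon-gap={gap:.4g}")
```

Each β thus induces a policy that is ε-optimal for some ε, which is the correspondence the article works out formally.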

  3. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-01-01

    MCNP has three different, but correlated, estimators for calculating k-eff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined k-eff estimator, is shown to be the best k-eff estimator available in MCNP for estimating k-eff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the individual estimator with the smallest variance. The importance of MCNP's batch statistics is demonstrated by an investigation of the effects of individual estimator variance bias on the combination of estimators, both heuristically with the analytical study and empirically with MCNP.

  4. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-01-01

    The Monte Carlo code MCNP has three different, but correlated, estimators for calculating k-eff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined k-eff estimator, is shown to be the best k-eff estimator available in MCNP for estimating k-eff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the estimator with the smallest variance. Empirically, MCNP examples for several physical systems demonstrate the three-combined estimator's superiority over each of the three individual estimators and its correct coverage rates. Additionally, the importance of MCNP's statistical checks is demonstrated.
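
    Under the Gauss-Markov theorem, the three-combined estimator is the covariance-weighted average of the three correlated estimators. A minimal sketch of that combination, with invented estimates and an invented covariance matrix standing in for MCNP's batch statistics:

```python
import numpy as np

# Hypothetical k-eff estimates from the collision, absorption, and track
# length estimators, with an assumed known covariance matrix.  The numbers
# are illustrative only, not MCNP output.
k = np.array([1.0012, 0.9995, 1.0003])
cov = np.array([[4.0, 2.5, 2.0],
                [2.5, 5.0, 2.2],
                [2.0, 2.2, 3.0]]) * 1e-6

# Gauss-Markov best linear unbiased combination: weights sum to one and
# minimize the variance of the combined estimator.
ones = np.ones(3)
w = np.linalg.solve(cov, ones)
w /= ones @ w

k_combined = float(w @ k)
var_combined = float(w @ cov @ w)
print(f"combined k-eff = {k_combined:.5f} +/- {np.sqrt(var_combined):.2e}")
```

By construction the combined variance never exceeds the smallest individual variance, which is the sense in which the three-combined estimator is "best."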

  5. Strategy for a numerical Rock Mechanics Site Descriptive Model. Further development of the theoretical/numerical approach

    International Nuclear Information System (INIS)

    Olofsson, Isabelle; Fredriksson, Anders

    2005-05-01

    The Swedish Nuclear and Fuel Management Company (SKB) is conducting Preliminary Site Investigations at two different locations in Sweden in order to study the possibility of a Deep Repository for spent fuel. In the frame of these Site Investigations, Site Descriptive Models are achieved. These products are the result of an interaction of several disciplines such as geology, hydrogeology, and meteorology. The Rock Mechanics Site Descriptive Model constitutes one of these models. Before the start of the Site Investigations a numerical method using Discrete Fracture Network (DFN) models and the 2D numerical software UDEC was developed. Numerical simulations were the tool chosen for applying the theoretical approach for characterising the mechanical rock mass properties. Some shortcomings were identified when developing the methodology. Their impacts on the modelling (in term of time and quality assurance of results) were estimated to be so important that the improvement of the methodology with another numerical tool was investigated. The theoretical approach is still based on DFN models but the numerical software used is 3DEC. The main assets of the programme compared to UDEC are an optimised algorithm for the generation of fractures in the model and for the assignment of mechanical fracture properties. Due to some numerical constraints the test conditions were set-up in order to simulate 2D plane strain tests. Numerical simulations were conducted on the same data set as used previously for the UDEC modelling in order to estimate and validate the results from the new methodology. A real 3D simulation was also conducted in order to assess the effect of the '2D' conditions in the 3DEC model. Based on the quality of the results it was decided to update the theoretical model and introduce the new methodology based on DFN models and 3DEC simulations for the establishment of the Rock Mechanics Site Descriptive Model. 

  6. Performance evaluation of the spectral centroid downshift method for attenuation estimation.

    Science.gov (United States)

    Samimi, Kayvan; Varghese, Tomy

    2015-05-01

    Estimation of frequency-dependent ultrasonic attenuation is an important aspect of tissue characterization. Along with other acoustic parameters studied in quantitative ultrasound, the attenuation coefficient can be used to differentiate normal and pathological tissue. The spectral centroid downshift (CDS) method is one of the most common frequency-domain approaches applied to this problem. In this study, a statistical analysis of this method's performance was carried out based on a parametric model of the signal power spectrum in the presence of electronic noise. The parametric model used for the power spectrum of received RF data assumes a Gaussian spectral profile for the transmit pulse, and incorporates effects of attenuation, windowing, and electronic noise. Spectral moments were calculated and used to estimate second-order centroid statistics. A theoretical expression for the variance of a maximum likelihood estimator of the attenuation coefficient was derived in terms of the centroid statistics and other model parameters, such as transmit pulse center frequency and bandwidth, RF data window length, SNR, and number of regression points. Theoretically predicted estimation variances were compared with experimentally estimated variances on RF data sets from both computer-simulated and physical tissue-mimicking phantoms. Scan parameter ranges for this study were electronic SNR from 10 to 70 dB, transmit pulse standard deviation from 0.5 to 4.1 MHz, transmit pulse center frequency from 2 to 8 MHz, and data window length from 3 to 17 mm. Acceptable agreement was observed between theoretical predictions and experimentally estimated values, with differences smaller than 0.05 dB/cm/MHz across the parameter ranges investigated. This model helps predict the best attenuation estimation variance achievable with the CDS method in terms of these scan parameters.
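
    For a Gaussian transmit spectrum, the spectral centroid of the attenuated echoes shifts down linearly with depth, and the CDS method inverts that relation. A sketch under that standard assumption, with invented scan parameters drawn from the ranges quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed scan parameters, drawn from the ranges quoted in the abstract.
f0, sigma = 5.0, 1.0          # transmit center frequency, spectral std (MHz)
alpha_db = 0.5                # "true" attenuation coefficient (dB/cm/MHz)
alpha_np = alpha_db / 8.686   # convert dB to Np (1 Np = 8.686 dB)

# For a Gaussian pulse spectrum, round-trip attenuation shifts the spectral
# centroid down linearly with depth: f_c(z) = f0 - 4*alpha*sigma^2*z.
depths = np.linspace(1.0, 4.0, 13)                       # regression points (cm)
centroids = f0 - 4.0 * alpha_np * sigma**2 * depths
centroids += rng.normal(scale=0.005, size=depths.size)   # centroid estimation noise

# Recover the attenuation coefficient from the regression slope.
slope = np.polyfit(depths, centroids, 1)[0]
alpha_hat = -slope / (4.0 * sigma**2) * 8.686            # back to dB/cm/MHz
print(f"estimated alpha = {alpha_hat:.3f} dB/cm/MHz (true {alpha_db})")
```

The paper's contribution is the variance of exactly this kind of regression estimate as a function of SNR, bandwidth, window length, and the number of regression points.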

  7. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    Science.gov (United States)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as experimental subjects, or derived from mathematical equations. However, determining the calibration curve using animals is time-consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques are widely used in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. Optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20 % interval of SaO2 are presented, with a maximum error of 2.17 % when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.

  8. Theoretical model for the mechanical behavior of prestressed beams under torsion

    Directory of Open Access Journals (Sweden)

    Sérgio M.R. Lopes

    2014-12-01

    Full Text Available In this article, a global theoretical model previously developed and validated by the authors for reinforced concrete beams under torsion is reviewed and corrected in order to predict the global behavior of beams under torsion with uniform longitudinal prestress. These corrections are based on the introduction of prestress factors and on the modification of the equilibrium equations in order to incorporate the contribution of the prestressing reinforcement. The theoretical results obtained with the new model are compared with some available results of prestressed concrete (PC beams under torsion found in the literature. The results obtained in this study validate the proposed computing procedure to predict the overall behavior of PC beams under torsion.

  9. The theoretical tensile strength of fcc crystals predicted from shear strength calculations

    International Nuclear Information System (INIS)

    Cerny, M; Pokluda, J

    2009-01-01

    This work presents a simple way of estimating uniaxial tensile strength on the basis of theoretical shear strength calculations, taking into account its dependence on a superimposed normal stress. The presented procedure enables us to avoid complicated and time-consuming analyses of elastic stability of crystals under tensile loading. The atomistic simulations of coupled shear and tensile deformations in cubic crystals are performed using first principles computational code based on pseudo-potentials and the plane wave basis set. Six fcc crystals are subjected to shear deformations in convenient slip systems and a special relaxation procedure controls the stress tensor. The obtained dependence of the ideal shear strength on the normal tensile stress seems to be almost linearly decreasing for all investigated crystals. Taking these results into account, the uniaxial tensile strength values in three crystallographic directions were evaluated by assuming a collapse of the weakest shear system. Calculated strengths for and loading were found to be mostly lower than previously calculated stresses related to tensile instability but rather close to those obtained by means of the shear instability analysis. On the other hand, the strengths for loading almost match the stresses related to tensile instability.

  10. Theoretical physics 3 electrodynamics

    CERN Document Server

    Nolting, Wolfgang

    2016-01-01

    This textbook offers a clear and comprehensive introduction to electrodynamics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series. The first part of the book describes the interaction of electric charges and magnetic moments by introducing electro- and magnetostatics. The second part of the book establishes deeper understanding of electrodynamics with the Maxwell equations, quasistationary fields and electromagnetic fields. All sections are accompanied by a detailed introduction to the math needed. Ideally suited to undergraduate students with some grounding in classical and analytical mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful Germa...

  11. Theoretical physics 5 thermodynamics

    CERN Document Server

    Nolting, Wolfgang

    2017-01-01

    This concise textbook offers a clear and comprehensive introduction to thermodynamics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series, defining macroscopic variables, such as internal energy, entropy and pressure, together with thermodynamic principles. The first part of the book introduces the laws of thermodynamics and thermodynamic potentials. More complex themes are covered in the second part of the book, which describes phases and phase transitions in depth. Ideally suited to undergraduate students with some grounding in classical mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful German editions, the eight volumes of this series cove...

  12. Weak Learner Method for Estimating River Discharges using Remotely Sensed Data: Central Congo River as a Testbed

    Science.gov (United States)

    Kim, D.; Lee, H.; Yu, H.; Beighley, E.; Durand, M. T.; Alsdorf, D. E.; Hwang, E.

    2017-12-01

    River discharge is a prerequisite for an understanding of flood hazard and water resource management, yet we have poor knowledge of it, especially over remote basins. Previous studies have successfully used classic hydraulic geometry, at-many-stations hydraulic geometry (AMHG), and Manning's equation to estimate river discharge. The theoretical bases of these empirical methods were introduced by Leopold and Maddock (1953) and Manning (1889), and they have long been used in the fields of hydrology, water resources, and geomorphology. However, methods to estimate river discharge from remotely sensed data essentially require bathymetric information of the river or are not applicable to braided rivers. Furthermore, the methods used in previous studies adopted assumptions of steady and uniform river conditions. Consequently, those methods have limitations in estimating river discharge in complex and unsteady flow in nature. In this study, we developed a novel approach to estimating river discharge by applying the weak learner method (here termed WLQ), one of the ensemble methods using multiple classifiers, to remotely sensed measurements of water levels from Envisat altimetry, effective river widths from PALSAR images, and multi-temporal surface water slopes over a part of the mainstem Congo. Compared with the methods used in previous studies, the root mean square error (RMSE) decreased from 5,089 m³ s⁻¹ to 3,701 m³ s⁻¹, and the relative RMSE (RRMSE) improved from 12% to 8%. It is expected that our method can provide improved estimates of river discharge in complex and unsteady flow conditions based on a data-driven prediction model built by machine learning (i.e. WLQ), even when bathymetric data are not available or the river is braided. Moreover, it is expected that the WLQ can be applied to measurements of river levels, slopes and widths from the future Surface Water Ocean Topography (SWOT) mission to be
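
    The abstract does not specify the WLQ algorithm in detail; a generic stand-in for a weak-learner ensemble is gradient boosting of regression stumps on residuals. The sketch below applies that idea to synthetic level/width/slope features and a Manning-like synthetic discharge (all values invented, not Envisat/PALSAR data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observations": water level (m), effective width (m), slope (-).
n = 400
level = rng.uniform(0, 5, n)
width = rng.uniform(500, 1500, n)
slope = rng.uniform(1e-5, 1e-4, n)
X = np.column_stack([level, width, slope])
# A Manning-like synthetic discharge plus multiplicative noise.
Q = 0.05 * width * level**(5 / 3) * np.sqrt(slope) * 1e3
Q *= rng.normal(1.0, 0.05, n)

def fit_stump(X, r):
    """Best single-split regression stump for residuals r."""
    best = (np.inf, None)
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= t
            pred = np.where(left, r[left].mean(), r[~left].mean())
            sse = np.sum((r - pred) ** 2)
            if sse < best[0]:
                best = (sse, (j, t, r[left].mean(), r[~left].mean()))
    return best[1]

def boost(X, y, rounds=50, lr=0.3):
    """Additive ensemble of weak stumps fit to successive residuals."""
    f = np.full(len(y), y.mean())
    for _ in range(rounds):
        j, t, vl, vr = fit_stump(X, y - f)
        f += lr * np.where(X[:, j] <= t, vl, vr)
    return f

f = boost(X, Q)
rmse0 = np.sqrt(np.mean((Q - Q.mean()) ** 2))   # constant-mean baseline
rmse1 = np.sqrt(np.mean((Q - f) ** 2))
print("baseline RMSE:", rmse0, " ensemble RMSE:", rmse1)
```

The point of the design is the same as in the abstract: many weak models combined additively can capture complex, nonlinear discharge relations without an explicit hydraulic model.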

  13. An in-depth analysis of theoretical frameworks for the study of care coordination

    Directory of Open Access Journals (Sweden)

    Sabine Van Houdt

    2013-06-01

    Full Text Available Introduction: Complex chronic conditions often require long-term care from various healthcare professionals. Thus, maintaining quality care requires care coordination. Concepts for the study of care coordination require clarification to develop, study and evaluate coordination strategies. In 2007, the Agency for Healthcare Research and Quality defined care coordination and proposed five theoretical frameworks for exploring care coordination. This study aimed to update current theoretical frameworks and clarify key concepts related to care coordination. Methods: We performed a literature review to update existing theoretical frameworks. An in-depth analysis of these theoretical frameworks was conducted to formulate key concepts related to care coordination. Results: Our literature review found seven previously unidentified theoretical frameworks for studying care coordination. The in-depth analysis identified fourteen key concepts that the theoretical frameworks addressed. These were ‘external factors’, ‘structure’, ‘task characteristics’, ‘cultural factors’, ‘knowledge and technology’, ‘need for coordination’, ‘administrative operational processes’, ‘exchange of information’, ‘goals’, ‘roles’, ‘quality of relationship’, ‘patient outcome’, ‘team outcome’, and ‘(inter)organizational outcome’. Conclusion: These 14 interrelated key concepts provide a base from which to develop or choose a framework for studying care coordination. The relational coordination theory and the multi-level framework are the most interesting, as they are the most comprehensive.

  14. A Theoretical Framework for Soft-Information-Based Synchronization in Iterative (Turbo Receivers

    Directory of Open Access Journals (Sweden)

    Lottici Vincenzo

    2005-01-01

    Full Text Available This contribution considers turbo synchronization, that is to say, the use of soft data information to estimate parameters like the carrier phase, frequency, or timing offsets of a modulated signal within an iterative data demodulator. In turbo synchronization, the receiver exploits the soft decisions computed at each turbo decoding iteration to provide a reliable estimate of some signal parameters. The aim of our paper is to show that such a "turbo-estimation" approach can be regarded as a special case of the expectation-maximization (EM) algorithm. This leads to a general theoretical framework for turbo synchronization that allows one to derive parameter estimation procedures for carrier phase and frequency offset, as well as for timing offset and signal amplitude. The proposed mathematical framework is illustrated by simulation results reported for the particular case of carrier phase and frequency offset estimation of a turbo-coded 16-QAM signal.
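
    For carrier phase, the M-step of an EM-style turbo estimator reduces to correlating the received samples against the decoder's soft symbol estimates. A simplified sketch in which genie-aided (perfect) soft decisions stand in for the turbo decoder's output, with an invented phase and QPSK symbols:

```python
import numpy as np

rng = np.random.default_rng(4)
theta = 0.3                                  # unknown carrier phase (rad)
sym = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), 200) / np.sqrt(2)
noise = 0.05 * (rng.normal(size=200) + 1j * rng.normal(size=200))
r = sym * np.exp(1j * theta) + noise         # received samples

# Phase update: correlate received samples with the soft symbol estimates.
# Here perfect decisions replace the decoder's posterior symbol means.
soft = sym
theta_hat = float(np.angle(np.sum(r * np.conj(soft))))
print(f"estimated phase = {theta_hat:.4f} rad (true {theta})")
```

In a real turbo receiver, `soft` would be the posterior symbol means from the current decoding iteration, and the estimate is refined across iterations.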

  15. The influence of selection on the evolutionary distance estimated from the base changes observed between homologous nucleotide sequences.

    Science.gov (United States)

    Otsuka, J; Kawai, Y; Sugaya, N

    2001-11-21

    In most studies of molecular evolution, the nucleotide base at a site is assumed to change with the apparent rate under functional constraint, and the comparison of base changes between homologous genes is thought to yield the evolutionary distance corresponding to the site-average change rate multiplied by the divergence time. However, this view is not sufficiently successful in estimating the divergence time of species, but mostly results in the construction of tree topology without a time-scale. In the present paper, this problem is investigated theoretically by considering that observed base changes are the results of comparing the survivals through selection of mutated bases. In the case of weak selection, the time course of base changes due to mutation and selection can be obtained analytically, leading to a theoretical equation showing how the selection has influence on the evolutionary distance estimated from the enumeration of base changes. This result provides a new method for estimating the divergence time more accurately from the observed base changes by evaluating both the strength of selection and the mutation rate. The validity of this method is verified by analysing the base changes observed at the third codon positions of amino acid residues with four-fold codon degeneracy in the protein genes of mammalian mitochondria; i.e. the ratios of estimated divergence times are fairly well consistent with a series of fossil records of mammals. Throughout this analysis, it is also suggested that the mutation rates in mitochondrial genomes are almost the same in different lineages of mammals and that the lineage-specific base-change rates indicated previously are due to the selection probably arising from the preference of transfer RNAs to codons.
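
    The classical neutral-rate correction that the paper refines is the Jukes-Cantor distance, which maps the observed fraction of differing sites to substitutions per site under equal rates and no selection:

```python
import math

def jukes_cantor_distance(p):
    """Evolutionary distance (substitutions per site) from the observed
    fraction p of differing sites, assuming equal substitution rates and
    no selection (Jukes-Cantor model)."""
    if p >= 0.75:
        raise ValueError("sites are saturated; distance undefined")
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# Invented example: 30 of 200 fourfold-degenerate third-codon sites differ.
p = 30 / 200
print(round(jukes_cantor_distance(p), 4))
```

The paper's argument is precisely that selection biases this kind of estimate, so the observed base changes must be corrected for both selection strength and mutation rate before dividing out the divergence time.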

  16. Theoretical calculations of positron lifetimes for metal oxides

    International Nuclear Information System (INIS)

    Mizuno, Masataka; Araki, Hideki; Shirai, Yasuharu

    2004-01-01

    Our recent positron lifetime measurements for metal oxides suggest that positron lifetimes of the bulk state in metal oxides are shorter than previously reported values. We have performed theoretical calculations of positron lifetimes for bulk and vacancy states in MgO and ZnO using first-principles electronic structure calculations, and discuss the validity of positron lifetime calculations for insulators. By comparing the calculated positron lifetimes to the experimental values, it was found that the semiconductor model reproduces the experimental positron lifetime well. The longer positron lifetimes previously reported can be considered to arise not only from the bulk but also from vacancies induced by impurities. In the case of the cation vacancy, the calculated positron lifetime based on the semiconductor model is shorter than the experimental value, which suggests that inward relaxation occurs around the cation vacancy trapping the positron. (author)

  17. Improving statistical reasoning theoretical models and practical implications

    CERN Document Server

    Sedlmeier, Peter

    1999-01-01

    This book focuses on how statistical reasoning works and on training programs that can exploit people's natural cognitive capabilities to improve their statistical reasoning. Training programs that take into account findings from evolutionary psychology and instructional theory are shown to have substantially larger effects, more stable over time, than previous training regimens. The theoretical implications are traced in a neural network model of human performance on statistical reasoning problems. This book appeals to judgment and decision making researchers and other cognitive scientists, as well as to teachers of statistics and probabilistic reasoning.

  18. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield better-calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should be able to locate a similar (unknown) optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study of surface temperature forecasts at different sites in Europe confirms these results, but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
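
    The claim that both scores locate a similar optimum under a correct distributional assumption can be checked on synthetic Gaussian data. The sketch below compares the Gaussian maximum-likelihood fit with a minimum-CRPS fit found by grid search, using the closed-form CRPS of a Gaussian forecast (Gneiting and Raftery); all data are invented:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
y = rng.normal(3.0, 1.5, 500)            # synthetic observations

def crps_gaussian(mu, sigma, y):
    """Mean CRPS of a N(mu, sigma^2) forecast; closed form for the Gaussian."""
    z = (y - mu) / sigma
    return np.mean(sigma * (z * (2 * norm.cdf(z) - 1)
                            + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi)))

# Maximum likelihood fit of a Gaussian: sample mean and standard deviation.
mu_ml, sig_ml = y.mean(), y.std()

# Minimum-CRPS fit via a simple grid search over (mu, sigma).
mus = np.linspace(2.5, 3.5, 41)
sigmas = np.linspace(1.0, 2.0, 41)
crps, mu_c, sig_c = min((crps_gaussian(m, s, y), m, s)
                        for m in mus for s in sigmas)
print(f"MLE:  mu={mu_ml:.3f} sigma={sig_ml:.3f}")
print(f"CRPS: mu={mu_c:.3f} sigma={sig_c:.3f}")
```

With a correctly specified Gaussian, the two fits nearly coincide; the study's point is that they diverge when the distributional assumption is wrong.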

  19. Estimates for the parameters of the heavy quark expansion

    Energy Technology Data Exchange (ETDEWEB)

    Heinonen, Johannes; Mannel, Thomas [Universitaet Siegen (Germany)

    2015-07-01

    We give improved estimates for the non-perturbative parameters appearing in the heavy quark expansion for inclusive decays. While the parameters appearing in low orders of this expansion can be extracted from data, the number of parameters in higher orders proliferates strongly, making a determination of these parameters from data impossible. Thus, one has to rely on theoretical estimates which may be obtained from an insertion of intermediate states. We refine this method and attempt to estimate the uncertainties of this approach.

  20. Theoretical stellar luminosity functions and globular cluster ages and compositions

    International Nuclear Information System (INIS)

    Ratcliff, S.J.

    1985-01-01

    The ages and chemical compositions of the stars in globular clusters are of great interest, particularly because age estimates from the well-known exercise of fitting observed color-magnitude diagrams to theoretical predictions tend to yield ages in excess of the Hubble time (an estimate of the age of the Universe) in standard cosmological models, for currently proposed high values of Hubble's constant (VandenBerg 1983). Relatively little use has been made of stellar luminosity functions of the globular clusters, for which reliable observations are now becoming available, to constrain the ages or compositions. The comparison of observed luminosity functions to theoretical ones allows one to take advantage of information not usually used, and has the advantage of being relatively insensitive to our lack of knowledge of the detailed structure of stellar envelopes and atmospheres. A computer program was developed to apply standard stellar evolutionary theory, using the most recently available input physics (opacities, nuclear reaction rates), to the calculation of the evolution of low-mass Population II stars. An algorithm for computing luminosity functions from the evolutionary tracks was applied to sets of tracks covering a broad range of chemical compositions and ages, such as may be expected for globular clusters

  1. Estimates of cost-effectiveness of prehospital continuous positive airway pressure in the management of acute pulmonary edema.

    Science.gov (United States)

    Hubble, Michael W; Richards, Michael E; Wilfong, Denise A

    2008-01-01

    To estimate the cost-effectiveness of continuous positive airway pressure (CPAP) in managing prehospital acute pulmonary edema in an urban EMS system. Using estimates from published reports on prehospital and emergency department CPAP, a cost-effectiveness model of implementing CPAP in a typical urban EMS system was derived from the societal perspective as well as the perspective of the implementing EMS system. To assess the robustness of the model, a series of univariate and multivariate sensitivity analyses was performed on the input variables. The cost of consumables, equipment, and training yielded a total cost of $89 per CPAP application. The theoretical system would be expected to use CPAP 4 times per 1000 EMS patients and is expected to save 0.75 additional lives per 1000 EMS patients at a cost of $490 per life saved. CPAP is also expected to result in approximately one less intubation per 6 CPAP applications and reduce hospitalization costs by $4075 per year for each CPAP application. Through sensitivity analyses the model was verified to be robust across a wide range of input variable assumptions. Previous studies have demonstrated the clinical effectiveness of CPAP in the management of acute pulmonary edema. Through a theoretical analysis which modeled the costs and clinical benefits of implementing CPAP in an urban EMS system, prehospital CPAP appears to be a cost-effective treatment.
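
    The headline cost-effectiveness figure follows from the reported inputs by simple arithmetic, sketched below with the rounded numbers quoted in the abstract (the small difference from the reported $490 per life saved reflects rounding of the published inputs):

```python
# Reported model inputs, per 1000 EMS patients (rounded values from the abstract)
cost_per_application = 89.0    # USD: consumables, equipment, and training
applications_per_1000 = 4.0    # expected CPAP uses per 1000 EMS patients
lives_saved_per_1000 = 0.75    # additional lives saved per 1000 EMS patients

program_cost_per_1000 = cost_per_application * applications_per_1000
cost_per_life_saved = program_cost_per_1000 / lives_saved_per_1000
print(round(cost_per_life_saved))  # -> 475, close to the reported ~$490
```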

  2. Algorithms and programs of dynamic mixture estimation unified approach to different types of components

    CERN Document Server

    Nagy, Ivan

    2017-01-01

    This book provides a general theoretical background for constructing recursive Bayesian estimation algorithms for mixture models. It collects the recursive algorithms for estimating dynamic mixtures of various distributions and brings them into a unified form, providing a scheme for constructing the estimation algorithm for a mixture of components modeled by distributions with reproducible statistics. It offers recursive estimation of dynamic mixtures that is free of iterative processing and as close to an analytical solution as possible. In addition, these methods can be used online and perform learning simultaneously, which improves their efficiency during estimation. The book includes detailed program codes for solving the presented theoretical tasks. The codes are implemented in an open-source platform for engineering computations and serve to illustrate the theory and demonstrate the work of the included algorithms.
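
    The flavor of such recursive (online) mixture estimation can be sketched for the simplest case: a two-component Gaussian mixture with known variances, where each new observation updates the component means through its responsibilities, with no iterative pass over old data. This is an illustrative stochastic-approximation sketch, not the book's algorithms; all numbers are hypothetical.

```python
import math
import random

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

random.seed(7)
# Data from the mixture 0.5*N(0,1) + 0.5*N(5,1)
data = [random.gauss(0.0 if random.random() < 0.5 else 5.0, 1.0)
        for _ in range(4000)]

means = [1.0, 4.0]   # initial component means (deliberately off)
counts = [1.0, 1.0]  # effective data counts per component

for x in data:
    # Responsibilities of each component for the new point,
    # computed from the current estimates
    p = [normal_pdf(x, m, 1.0) for m in means]
    s = p[0] + p[1]
    r = [pi / s for pi in p]
    # Recursive mean update weighted by responsibility; old data never revisited
    for k in (0, 1):
        counts[k] += r[k]
        means[k] += r[k] * (x - means[k]) / counts[k]

print([round(m, 2) for m in means])
```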

  3. Estimating Collisionally-Induced Escape Rates of Light Neutrals from Early Mars

    Science.gov (United States)

    Gacesa, M.; Zahnle, K. J.

    2016-12-01

    Collisions of atmospheric gases with hot oxygen atoms constitute an important non-thermal mechanism of escape of light atomic and molecular species at Mars. In this study, we present revised theoretical estimates of non-thermal escape rates of neutral O, H, He, and H2 based on recent atmospheric density profiles obtained from the NASA Mars Atmosphere and Volatile Evolution (MAVEN) mission and related theoretical models. As primary sources of hot oxygen, we consider dissociative recombination of O2+ and CO2+ molecular ions. We also consider hot oxygen atoms energized in primary and secondary collisions with energetic neutral atoms (ENAs) produced in charge-exchange of solar wind H+ and He+ ions with atmospheric gases [1,2]. Scattering of hot oxygen and the atmospheric species of interest is modeled using a fully-quantum reactive scattering formalism [3]. This approach allows us to construct distributions of vibrationally and rotationally excited states and predict the products' emission spectra. In addition, we estimate formation rates of excited, translationally hot hydroxyl molecules in the upper atmosphere of Mars. The escape rates are calculated from the kinetic energy distributions of the reaction products using an enhanced 1D model of the atmosphere for a range of orbital and solar parameters. Finally, by considering different scenarios, we estimate the influence of these escape mechanisms on the evolution of Mars's atmosphere throughout previous epochs and their impact on the atmospheric D/H ratio. M.G.'s research was supported by an appointment to the NASA Postdoctoral Program at the NASA Ames Research Center, administered by Universities Space Research Association under contract with NASA. [1] N. Lewkow and V. Kharchenko, "Precipitation of Energetic Neutral Atoms and Escape Fluxes induced from the Mars Atmosphere", Astrophys. J., 790, 98 (2014). [2] M. Gacesa, N. Lewkow, and V. Kharchenko, "Non-thermal production and escape of OH from the upper atmosphere of Mars", arXiv:1607

  4. A theoretical study of the structure and thermochemical properties of alkali metal fluoroplumbates MPbF3.

    Science.gov (United States)

    Boltalin, A I; Korenev, Yu M; Sipachev, V A

    2007-07-19

    Molecular constants of MPbF3 (M=Li, Na, K, Rb, and Cs) were calculated theoretically at the MP2(full) and B3LYP levels with the SDD (Pb, K, Rb, and Cs) and aug-cc-pVQZ (F, Li, and Na) basis sets to determine the thermochemical characteristics of the substances. Satisfactory agreement with experiment was obtained, including the unexpected nonmonotonic dependence of the dissociation energies on the alkali metal atomic number. The bond lengths of the theoretical CsPbF3 model were substantially elongated compared with experimental estimates, likely because of errors in both the theoretical calculations and the electron diffraction data processing.

  5. Kidnapping Detection and Recognition in Previous Unknown Environment

    Directory of Open Access Journals (Sweden)

    Yang Tian

    2017-01-01

    Full Text Available An unaware event referred to as kidnapping makes the estimation result of localization incorrect. In a previously unknown environment, an incorrect localization result caused by kidnapping produces an incorrect mapping result in Simultaneous Localization and Mapping (SLAM). In this situation, the explored and unexplored areas are divided, making kidnapping recovery difficult. To provide sufficient information on kidnapping, a framework is proposed to judge whether kidnapping has occurred and to identify the type of kidnapping within filter-based SLAM. The framework, called double kidnapping detection and recognition (DKDR), performs two checks, before and after the “update” process, with different metrics in real time. To explain one of the principles of DKDR, we describe a property of filter-based SLAM that corrects the mapping result of the environment using the current observations after the “update” process. Two classical filter-based SLAM algorithms, Extended Kalman Filter (EKF) SLAM and Particle Filter (PF) SLAM, are modified to show that DKDR can be simply and widely applied in existing filter-based SLAM algorithms. Furthermore, a technique to determine the adapted thresholds of the metrics in real time without previous data is presented. Both simulated and experimental results demonstrate the validity and accuracy of the proposed method.

  6. Theoretical Estimation of Thermal Effects in Drilling of Woven Carbon Fiber Composite

    Directory of Open Access Journals (Sweden)

    José Díaz-Álvarez

    2014-06-01

    Full Text Available Carbon Fiber Reinforced Polymer (CFRP) composites are extensively used in structural applications due to their attractive properties. Although the components are usually made near net shape, machining processes are needed to achieve dimensional tolerance and assembly requirements. Drilling is a common operation required for the subsequent mechanical joining of the components. CFRPs are vulnerable to processing-induced damage, mainly delamination, fiber pull-out, and thermal degradation; drilling-induced defects are one of the main causes of component rejection during manufacturing. Despite the importance of analyzing the thermal phenomena involved in the machining of composites, only a few authors have focused their attention on this problem, most of them using an experimental approach. The temperature of the workpiece can affect the surface quality of the component, yet its measurement during processing is difficult. Estimating the amount of heat generated during drilling is therefore important; however, numerical modeling of drilling processes involves a high computational cost. This paper presents a combined approach to the thermal analysis of composite drilling, using both an analytical estimation of the heat generated during drilling and numerical modeling of heat propagation. Promising results for indirect detection of the risk of thermal damage, through the measurement of thrust force and cutting torque, are obtained.

  7. Angle of arrival estimation using spectral interferometry

    International Nuclear Information System (INIS)

    Barber, Z.W.; Harrington, C.; Thiel, C.W.; Babbitt, W.R.; Krishna Mohan, R.

    2010-01-01

    We have developed a correlative signal processing concept based on a Mach-Zehnder interferometer and spatial-spectral (S2) materials that enables direct mapping of RF spectral phase as well as power spectral recording. This configuration can be used for precise frequency resolved time delay estimation between signals received by a phased antenna array system that in turn could be utilized to estimate the angle of arrival. We present an analytical theoretical model and a proof-of-principle demonstration of the concept of time difference of arrival estimation with a cryogenically cooled Tm:YAG crystal that operates on microwave signals modulated onto a stabilized optical carrier at 793 nm.
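
    The final step named in the abstract — turning a time difference of arrival into an angle of arrival — follows from simple far-field geometry: a plane wave reaching two antennas a baseline d apart is delayed by τ = d·sin(θ)/c. A minimal sketch of that geometry (the baseline and angle below are hypothetical, not values from the paper):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def angle_of_arrival(tau, d):
    """Far-field angle of arrival (radians) from the time difference of
    arrival tau between two antennas separated by baseline d (meters)."""
    return math.asin(C * tau / d)

# Hypothetical example: 0.5 m baseline, wavefront arriving 30 deg off broadside
d = 0.5
tau = d * math.sin(math.radians(30.0)) / C  # the delay such a wave produces
print(round(math.degrees(angle_of_arrival(tau, d)), 1))
```

    The contribution of the S2 spectral-interferometry approach is precisely the frequency-resolved estimation of such sub-nanosecond delays.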

  8. Angle of arrival estimation using spectral interferometry

    Energy Technology Data Exchange (ETDEWEB)

    Barber, Z.W.; Harrington, C.; Thiel, C.W.; Babbitt, W.R. [Spectrum Lab, Montana State University, Bozeman, MT 59717 (United States); Krishna Mohan, R., E-mail: krishna@spectrum.montana.ed [Spectrum Lab, Montana State University, Bozeman, MT 59717 (United States)

    2010-09-15

    We have developed a correlative signal processing concept based on a Mach-Zehnder interferometer and spatial-spectral (S2) materials that enables direct mapping of RF spectral phase as well as power spectral recording. This configuration can be used for precise frequency resolved time delay estimation between signals received by a phased antenna array system that in turn could be utilized to estimate the angle of arrival. We present an analytical theoretical model and a proof-of-principle demonstration of the concept of time difference of arrival estimation with a cryogenically cooled Tm:YAG crystal that operates on microwave signals modulated onto a stabilized optical carrier at 793 nm.

  9. Systematic review of the risk of uterine rupture with the use of amnioinfusion after previous cesarean delivery.

    Science.gov (United States)

    Hicks, Paul

    2005-04-01

    Amnioinfusion is commonly used for the intrapartum treatment of women with pregnancy complicated by thick meconium or oligohydramnios with deep variable fetal heart rate decelerations. Its benefit in women with previous cesarean deliveries is less well established. Theoretically, rapid increases in intrauterine volume would lead to a higher risk of uterine rupture. Searches of the Cochrane Library from inception to the third quarter of 2001 and of MEDLINE, 1966 to November 2001, were performed using the keywords "cesarean" and "amnioinfusion." Search terms were expanded to maximize results. All languages were included. Review articles, editorials, and data previously published in other sites were not analyzed. Four studies with unduplicated data were retrieved describing amnioinfusion in women attempting a trial of labor after previous cesarean section. As the studies were of disparate types, meta-analysis was not possible. The use of amnioinfusion in women with previous cesarean delivery who are undergoing a trial of labor may be a safe procedure, but confirmatory large, controlled prospective studies are needed before definitive recommendations can be made.

  10. Monte Carlo simulation for the estimation of iron in human whole ...

    Indian Academy of Sciences (India)

    The simulation shows that the obtained results are in good agreement with experimental data, and better than the theoretical XCOM values. The study indicates that MCNP simulation is an excellent tool to estimate the iron concentration in blood samples. The MCNP code can also be utilized to estimate other trace ...

  11. Theoretical nuclear physics

    CERN Document Server

    Blatt, John M

    1979-01-01

    A classic work by two leading physicists and scientific educators endures as an uncommonly clear and cogent investigation and correlation of key aspects of theoretical nuclear physics. It is probably the most widely adopted book on the subject. The authors approach the subject as "the theoretical concepts, methods, and considerations which have been devised in order to interpret the experimental material and to advance our ability to predict and control nuclear phenomena." The present volume does not pretend to cover all aspects of theoretical nuclear physics. Its coverage is restricted to

  12. Estimating cotton canopy ground cover from remotely sensed scene reflectance

    International Nuclear Information System (INIS)

    Maas, S.J.

    1998-01-01

    Many agricultural applications require spatially distributed information on growth-related crop characteristics that could be supplied through aircraft or satellite remote sensing. A study was conducted to develop and test a methodology for estimating plant canopy ground cover for cotton (Gossypium hirsutum L.) from scene reflectance. Previous studies indicated that a relatively simple relationship between ground cover and scene reflectance could be developed based on linear mixture modeling. Theoretical analysis indicated that the effects of shadows in the scene could be compensated for by averaging the results obtained using scene reflectance in the red and near-infrared wavelengths. The methodology was tested using field data collected over several years from cotton test plots in Texas and California. Results of the study appear to verify the utility of this approach. Since the methodology relies on information that can be obtained solely through remote sensing, it would be particularly useful in applications where other field information, such as plant size, row spacing, and row orientation, is unavailable
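
    The linear-mixture relationship described above can be sketched in a few lines: scene reflectance in each band is modeled as a cover-weighted mix of plant and soil endmember reflectances, solved for ground cover, with the red and near-infrared estimates averaged to compensate for shadow effects. The endmember reflectance values below are hypothetical, not the study's field data.

```python
def ground_cover(scene, soil, plant):
    """Linear mixture model: scene = gc*plant + (1 - gc)*soil, solved for gc."""
    return (scene - soil) / (plant - soil)

# Hypothetical endmember reflectances (bare soil vs. full canopy)
soil = {"red": 0.20, "nir": 0.25}
plant = {"red": 0.05, "nir": 0.45}

# Synthetic scene with 60% cover, mixed linearly in each band
scene = {b: 0.6 * plant[b] + 0.4 * soil[b] for b in ("red", "nir")}

# Average the red and NIR estimates to compensate for shadows in the scene
gc_red = ground_cover(scene["red"], soil["red"], plant["red"])
gc_nir = ground_cover(scene["nir"], soil["nir"], plant["nir"])
gc = 0.5 * (gc_red + gc_nir)
print(round(gc, 2))  # -> 0.6
```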

  13. Theoretical chemistry in Belgium a topical collection from theoretical chemistry accounts

    CERN Document Server

    Champagne, Benoît; De Proft, Frank; Leyssens, Tom

    2014-01-01

    Readers of this volume can take a tour around the research locations in Belgium which are active in theoretical and computational chemistry. Selected researchers from Belgium present research highlights of their work. Originally published in the journal Theoretical Chemistry Accounts, these outstanding contributions are now available in a hardcover print format. This volume will be of benefit in particular to those research groups and libraries that have chosen to have only electronic access to the journal. It also provides valuable content for all researchers in theoretical chemistry.

  14. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated in different situations, and the various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time.
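
    The first two tasks listed above reduce to elementary sample statistics, sketched below with hypothetical measurement values: the random error variance comes from the scatter of replicate measurements of one item, and the systematic component from the biases observed when measuring a standard of known value.

```python
from statistics import mean, variance

# Hypothetical replicate measurements of the same item (random error)
replicates = [10.02, 9.98, 10.05, 9.97, 10.03]
random_error_variance = variance(replicates)  # sample variance, n-1 divisor

# Hypothetical repeated measurements of a standard of known value;
# the per-run biases estimate the systematic error and its variance
standard_value = 50.0
standard_runs = [50.12, 50.08, 50.15, 50.05]
biases = [x - standard_value for x in standard_runs]
bias_estimate = mean(biases)
systematic_error_variance = variance(biases)

print(round(random_error_variance, 5), round(bias_estimate, 3))
```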

  15. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    Science.gov (United States)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
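
    The exhaustive-search idea can be illustrated with a toy linear model (not the paper's engine model): each candidate sensor observes h·x plus noise, the error covariance of the estimated health parameters is the inverse Fisher information, and the suite minimizing the summed squared estimation error (the trace of that covariance) wins. Sensor names, measurement vectors, and noise variances below are all hypothetical.

```python
from itertools import combinations

# Hypothetical linear sensor model: each sensor measures h.x + noise of
# variance r, where x is a 2-element health-parameter vector.
sensors = {
    "T48": ((1.0, 0.2), 0.04),
    "N2":  ((0.3, 1.0), 0.09),
    "Wf":  ((0.8, 0.8), 0.16),
    "P25": ((0.1, 0.4), 0.01),
}

def trace_error_cov(suite):
    # Fisher information M = sum h h^T / r; error covariance P = M^{-1} (2x2).
    a = b = c = 0.0
    for name in suite:
        (h1, h2), r = sensors[name]
        a += h1 * h1 / r
        b += h1 * h2 / r
        c += h2 * h2 / r
    det = a * c - b * b
    return (a + c) / det  # trace of the inverse of [[a, b], [b, c]]

# Exhaustive search over all two-sensor suites (the selection metric)
best = min(combinations(sensors, 2), key=trace_error_cov)
print(sorted(best))
```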

  16. Assessment of two theoretical methods to estimate potentiometric titration curves of peptides: comparison with experiment.

    Science.gov (United States)

    Makowska, Joanna; Bagińska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyński, Lech; Scheraga, Harold A

    2006-03-09

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric titration curves of two model peptides: Ac-K5-NHMe in a 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH), and dimethyl sulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the electrostatically driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, molecular dynamics (MD) simulations are run with the AMBER force field and the generalized Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of the acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method. The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach and the titration curve in water calculated using the MD-based approach have smooth shapes characteristic of the titration of weak multifunctional acids with small differences

  17. Theoretical and practical significance of formal reasoning

    Science.gov (United States)

    Linn, Marcia C.

    Piaget's theory has profoundly influenced science education research. Following Piaget, researchers have focused on content-free strategies, developmentally based mechanisms, and structural models of each stage of reasoning. In practice, factors besides those considered in Piaget's theory influence whether or not a theoretically available strategy is used. Piaget's focus has minimized the research attention placed on what could be called practical factors in reasoning. Practical factors are factors that influence application of a theoretically available strategy, for example, previous experience with the task content, familiarity with task instructions, or personality style of the student. Piagetian theory has minimized the importance of practical factors and discouraged investigation of (1) the role of factual knowledge in reasoning, (2) the diagnosis of specific, task-based errors in reasoning, (3) the influence of individual aptitudes on reasoning (e.g., field dependence-independence), and (4) the effect of educational interventions designed to change reasoning. This article calls for new emphasis on practical factors in reasoning and suggests why research on practical factors in reasoning will enhance our understanding of how scientific reasoning is acquired and of how science education programs can foster it.

  18. Theoretical Model for Volume Fraction of UC, 235U Enrichment, and Effective Density of Final U 10Mo Alloy

    Energy Technology Data Exchange (ETDEWEB)

    Devaraj, Arun [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Environmental Molecular Sciences Lab. (EMSL); Prabhakaran, Ramprashad [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Environmental Molecular Sciences Lab. (EMSL); Joshi, Vineet V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Environmental Molecular Sciences Lab. (EMSL); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Environmental Molecular Sciences Lab. (EMSL); McGarrah, Eric J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Environmental Molecular Sciences Lab. (EMSL); Lavender, Curt A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Environmental Molecular Sciences Lab. (EMSL)

    2016-04-12

    The purpose of this document is to provide a theoretical framework for (1) estimating uranium carbide (UC) volume fraction in a final alloy of uranium with 10 weight percent molybdenum (U-10Mo) as a function of final alloy carbon concentration, and (2) estimating effective 235U enrichment in the U-10Mo matrix after accounting for the loss of 235U in forming UC. This report will also serve as a theoretical baseline for the effective density of as-cast low-enriched U-10Mo alloy, and therefore as the baseline for quality control of final alloy carbon content.
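
    The stoichiometric core of item (1) can be sketched as follows, under the simplifying assumption (mine, not necessarily the report's) that all carbon in the final alloy is bound as stoichiometric UC; the densities used are approximate handbook-style values supplied here as assumptions.

```python
# Assumed constants: standard molar masses (g/mol) and approximate
# densities (g/cm^3) for UC and the U-10Mo matrix.
M_C, M_U = 12.011, 238.03
RHO_UC, RHO_U10MO = 13.6, 17.1

def uc_volume_fraction(w_carbon):
    """Volume fraction of UC for a given carbon mass fraction of the alloy,
    assuming all carbon is bound as stoichiometric UC."""
    w_uc = w_carbon * (M_U + M_C) / M_C   # mass fraction of UC in the alloy
    v_uc = w_uc / RHO_UC                  # volumes per gram of alloy
    v_matrix = (1.0 - w_uc) / RHO_U10MO
    return v_uc / (v_uc + v_matrix)

# Illustrative example: 1000 ppm (0.1 wt%) carbon
print(round(uc_volume_fraction(0.001), 3))
```

    Because the carbide is much less dense than the matrix, even a small carbon mass fraction translates into a noticeably larger UC volume fraction.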

  19. Theoretical study of rock mass investigation efficiency

    International Nuclear Information System (INIS)

    Holmen, Johan G.; Outters, Nils

    2002-05-01

    The study concerns mathematical modelling of a fractured rock mass and its investigation by use of theoretical boreholes and rock surfaces, with the purpose of analysing the efficiency (precision) of such investigations and determining the amount of investigation necessary to obtain reliable estimates of the structural-geological parameters of the studied rock mass. The study is not about estimating suitable sample sizes to be used in site investigations. The purpose of the study is to analyse the amount of information necessary for deriving estimates of the geological parameters studied, within defined confidence intervals and at a defined confidence level; in other words, how the confidence in models of the rock mass (considering a selected number of parameters) changes with the amount of information collected from boreholes and surfaces. The study is limited to a selected number of geometrical structural-geological parameters: fracture orientation (mean direction and dispersion: Fisher Kappa and SRI); different measures of fracture density (P10, P21 and P32); and fracture trace-length and strike distributions as seen on horizontal windows. A numerical Discrete Fracture Network (DFN) was used for representation of a fractured rock mass. The DFN-model was primarily based on the properties of an actual fracture network investigated at the Aespoe Hard Rock Laboratory. The rock mass studied (DFN-model) contained three different fracture sets with different orientations and fracture densities. The rock unit studied was statistically homogeneous. The study includes a limited sensitivity analysis of the properties of the DFN-model. The study is a theoretical and computer-based comparison between samples of fracture properties of a theoretical rock unit and the known true properties of the same unit. The samples are derived from numerically generated boreholes and surfaces that intersect the DFN-network. Two different boreholes are analysed: a vertical borehole and a borehole that is

  20. Theoretical and Experimental Investigation of Force Estimation Errors Using Active Magnetic Bearings with Embedded Hall Sensors

    DEFF Research Database (Denmark)

    Voigt, Andreas Jauernik; Santos, Ilmar

    2012-01-01

    ... of AMBs by embedding Hall sensors instead of mounting these directly on the pole surfaces, force estimation errors are investigated both numerically and experimentally. A linearized version of the conventionally applied quadratic correspondence between measured Hall voltage and applied AMB force ... to ∼ 20% of the nominal air gap the force estimation error is found to be reduced by the linearized force equation as compared to the quadratic force equation, which is supported by experimental results. Additionally the FE model is employed in a comparative study of the force estimation error behavior ...

  1. Experimental Verification of a Global Exponentially Stable Nonlinear Wave Encounter Frequency Estimator

    DEFF Research Database (Denmark)

    Belleter, Dennis J.W.; Galeazzi, Roberto; Fossen, Thor Inge

    2015-01-01

    towing tank experiments using a container ship scale model. The estimates for both regular and irregular waves confirm the results. Finally, the estimator is applied to full-scale data gathered from a container ship operating in the Atlantic Ocean during a storm. Again the theoretical results...

  2. Analysis of the maximum likelihood channel estimator for OFDM systems in the presence of unknown interference

    Science.gov (United States)

    Dermoune, Azzouz; Simon, Eric Pierre

    2017-12-01

    This paper is a theoretical analysis of the maximum likelihood (ML) channel estimator for orthogonal frequency-division multiplexing (OFDM) systems in the presence of unknown interference. The following theoretical results are presented. Firstly, the uniqueness of the ML solution for practical applications, i.e., when thermal noise is present, is analytically demonstrated when the number of transmitted OFDM symbols is strictly greater than one. The ML solution is then derived from the iterative conditional ML (CML) algorithm. Secondly, it is shown that the channel estimate can be described as an algebraic function whose inputs are the initial value and the means and variances of the received samples. Thirdly, it is theoretically demonstrated that the channel estimator is not biased. The second and the third results are obtained by employing oblique projection theory. Furthermore, these results are confirmed by numerical results.

  3. A Theoretically Consistent Method for Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2014-01-01

    We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others. The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCC's), cepstral mean-subtracted MFCC's (CMS-MFCC's), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account other compressive non-linearities than the logarithmic which is usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non...

  4. Theoretical epidemiology applied to health physics: estimation of the risk of radiation-induced breast cancer

    International Nuclear Information System (INIS)

    Sutherland, J.V.

    1983-01-01

    Indirect estimation of low-dose radiation hazards is possible using the multihit model of carcinogenesis. This model is based on cancer incidence data collected over many decades on tens of millions of people. Available data on human radiation effects can be introduced into the modeling process without the requirement that these data precisely define the model to be used. This reduction in the information demanded from the limited data on human radiation effects allows a more rational approach to estimation of low-dose radiation hazards and helps to focus attention on research directed towards understanding the process of carcinogenesis, rather than on repeating human or animal experiments that cannot provide sufficient data to resolve the low-dose estimation problem. Assessment of the risk of radiation-induced breast cancer provides an excellent example of the utility of multihit modeling procedures

  5. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
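As an illustrative aside (the log-likelihood values below are invented for the example), AIC = 2k - 2 ln L trades goodness of fit against parameter count; the candidate with the smallest AIC is preferred.

```python
def aic(log_likelihood, k):
    """Akaike's Information Criterion: AIC = 2k - 2 ln(L_hat),
    where k is the number of estimated parameters."""
    return 2 * k - 2 * log_likelihood

# Hypothetical fits: model A (3 parameters) vs. model B (5 parameters).
aic_a = aic(log_likelihood=-120.4, k=3)
aic_b = aic(log_likelihood=-119.8, k=5)

# Here the two extra parameters of model B do not buy enough
# likelihood to justify them, so model A is selected.
best = "A" if aic_a < aic_b else "B"
```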

  6. Improved theoretical model of InN optical properties

    International Nuclear Information System (INIS)

    Ferreira da Silva, A.; Chubaci, J.F.D.; Matsuoka, M.; Freitas, J.A. Jr.; Tischler, J.G.; Baldissera, G.; Persson, C.

    2014-01-01

    The optical properties of InN are investigated theoretically by employing the projector augmented wave (PAW) method within Green's function and the screened Coulomb interaction approximation (GW₀). The calculated results are compared to previously reported calculations which use the local density approximation combined with the scissors-operator approximation. The results of the present calculation are compared with reported values of the InN bandgap and with low temperature near infrared luminescence measurements of InN films deposited by a modified Ion Beam Assisted Deposition technique. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  7. Patient Autonomy in a High-Tech Care Context - A Theoretical Framework.

    Science.gov (United States)

    Lindberg, Catharina; Fagerström, Cecilia; Willman, Ania

    2018-06-12

    To synthesise and interpret previous findings with the aim of developing a theoretical framework for patient autonomy in a high-tech care context. Putting the somewhat abstract concept of patient autonomy into practice can prove difficult, since when it is highlighted in the healthcare literature the patient perspective is often invisible. Autonomy presumes that a person has experience, education, self-discipline and decision-making capacity. Reference to autonomy in relation to patients in high-tech care environments could therefore be considered paradoxical, as in most cases these persons are vulnerable, with impaired physical and/or metacognitive capacity, making extended knowledge of patient autonomy for these persons even more important. Theory development. The basic approaches in theory development by Walker and Avant were used to create a theoretical framework through an amalgamation of the results from three qualitative studies conducted previously by the same research group. A theoretical framework, the control-partnership-transition framework, was delineated, disclosing different parts that co-create the prerequisites for patient autonomy in high-tech care environments. Assumptions and propositional statements that guide theory development were also outlined, as were guiding principles for use in day-to-day nursing care. Four strategies used by patients were revealed: the strategy of control, the strategy of partnership, the strategy of trust, and the strategy of transition. An extended knowledge base, founded on theoretical reasoning about patient autonomy, could facilitate nursing care that allows people to remain or become autonomous in the role of patient in high-tech care environments. The control-partnership-transition framework would help support and defend patient autonomy when caring for individual patients, as it provides an understanding of the strategies employed by patients to achieve autonomy in high-tech care contexts.

  8. Theoretical aspects of an electrostatic aerosol filter for civilian turbofan engines

    Directory of Open Access Journals (Sweden)

    Valeriu DRAGAN

    2012-03-01

    The paper addresses the problem of aerosol filtration in turbofan engines. The current problem with very fine aerosols is that mechanical filtration is impossible; another aspect of the problem is the high mass flow of air to be filtered. Left unattended, aerosol ingestion can, and usually does, lead to clogging of turbine cooling passages and can damage the engine completely. The approach is theoretical and relies on the principles of electrostatic dust collectors known in other industries. An estimative equation is deduced in order to quantify the electrical charge required to obtain the desired filtration. Although the device still needs more theoretical and experimental work, it could one day be used as a means of increasing the safety of airplanes passing through an aerosol-laden mass of air.

  9. The Theoretical and Empirical Approaches to the Definition of Audit Risk

    Directory of Open Access Journals (Sweden)

    Berezhniy Yevgeniy B.

    2017-12-01

    The risk category is one of the key factors in planning the audit and assessing its results. The article is aimed at generalizing the theoretical and empirical approaches to the definition of audit risk and methods for its reduction. The structure of audit risk was analyzed, and it was determined that each researcher has approached the structuring of audit risk from a subjective point of view. The author's own model of audit risk is proposed. The basic methods for assessing audit risk are generalized and the theoretical and empirical approaches to its definition are identified; it is also noted that any of the given models is suitable for approximate estimation rather than for exact calculation of audit risk, as each is accompanied by certain shortcomings.

  10. An assessment of some theoretical models used for the calculation of the refractive index of InxGa1-xAs

    Science.gov (United States)

    Engelbrecht, J. A. A.

    2018-04-01

    Theoretical models used for the determination of the refractive index of InxGa1-xAs are reviewed and compared. Attention is drawn to some problems experienced with some of the models. The models are also extended to the mid-infrared region of the electromagnetic spectrum. Theoretical results in the mid-infrared region are then compared to previously published experimental results.

  11. A comparison of theoretical and solar-flare intensity ratios for the Fe XIX X-ray lines

    International Nuclear Information System (INIS)

    Bhatia, A.K.; Mason, H.E.; Fawcett, B.C.; Phillips, K.J.H.

    1989-04-01

    Atomic data consisting of energy levels, gf-values and wavelengths are presented for the Fe XIX 2s²2p⁴-2s²2p³3s, 2s²2p³3d arrays that give rise to lines in solar flare and active-region X-ray spectra. Collision strengths and theoretical intensity ratios are given for the 2s²2p⁴-2s²2p³3d lines, which occur in the 13.2-14.3 Å range. Solar spectra in this range include a large number of other intense lines, notably those due to He-like Ne (Ne IX). Although the Ne IX lines are potentially the most useful indicators of electron density in solar X-ray spectra, blending with the Fe XIX lines has been a major problem for previous analyses. Comparison of observed spectra with those calculated from the Fe XIX atomic data presented here and Ne IX lines from other work indicates that there is generally good agreement. We use the calculated Fe XIX and Ne IX line spectra and several observed spectra during a flare previously analysed to estimate electron density from Ne IX line ratios, thus for the first time properly taking into account blends with Fe XIX lines. (author)

  12. Uncertainty Assessment for Theoretical Atomic and Molecular Scattering Data. Summary Report of a Joint IAEA-ITAMP Technical Meeting

    International Nuclear Information System (INIS)

    Chung, Hyun-Kyung; Bartschat, Klaus; Tennyson, Jonathan; Schultz, David R.

    2014-10-01

    This report summarizes the proceedings of the Joint IAEA-ITAMP Technical Meeting on “Uncertainty Assessment for Theoretical Atomic and Molecular Scattering Data”, held on 7-9 July 2014. Twenty-five participants from ten Member States and one from the IAEA attended the three-day meeting held at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, USA and hosted by the Institute of Theoretical Atomic, Molecular and Optical Physics (ITAMP). The report includes discussions of the issues of uncertainty estimates for theoretical atomic and molecular scattering data. The abstracts of the presentations given at the meeting are attached in the Appendix. (author)

  13. Ultrasensitive Detection of Infrared Photon Using Microcantilever: Theoretical Analysis

    International Nuclear Information System (INIS)

    Li-Xin, Cao; Feng-Xin, Zhang; Yin-Fang, Zhu; Jin-Ling, Yang

    2010-01-01

    We present a new method for detecting near-infrared, mid-infrared, and far-infrared photons with an ultrahigh sensitivity. The infrared photon detection was carried out by monitoring the displacement change of a vibrating microcantilever under light pressure using a laser Doppler vibrometer. Ultrathin silicon cantilevers with high sensitivity were produced using micro/nano-fabrication technology. The photon detection system was set up. The response of the microcantilever to the photon illumination is theoretically estimated, and a nanowatt resolution for the infrared photon detection is expected at room temperature with this method
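As an illustrative aside (not taken from the record above), the scale of the expected signal can be checked with the textbook radiation-pressure relation F = (1 + R)P/c: a fully reflected beam transfers twice the photon momentum flux P/c, an absorbed beam once.

```python
c = 2.998e8  # speed of light, m/s

def radiation_force(power_w, reflectivity=1.0):
    """Force exerted on a mirror-like cantilever by a light beam.

    F = (1 + R) * P / c, where R = 1 for perfect reflection and
    R = 0 for pure absorption. A back-of-envelope sketch, not the
    authors' detection model.
    """
    return (1.0 + reflectivity) * power_w / c

f = radiation_force(1e-9)  # 1 nW on a perfect reflector
```

At nanowatt powers the force is of order 10⁻¹⁷ N, which is why an ultrathin cantilever and interferometric (laser Doppler) readout are needed.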

  14. Wettability of graphitic-carbon and silicon surfaces: MD modeling and theoretical analysis

    International Nuclear Information System (INIS)

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.

    2015-01-01

    The wettability of graphitic carbon and silicon surfaces was investigated numerically and theoretically. A multi-response method was developed for the analysis of conventional molecular dynamics (MD) simulations of droplet wettability. The contact angle and indicators of the quality of the computations are tracked as a function of the data sets analyzed over time. This method of analysis allows accurate calculation of the contact angle obtained from the MD simulations. Analytical models were also developed for the calculation of the work of adhesion using mean-field theory, accounting for the interfacial entropy changes. A calibration method is proposed to provide better predictions of the respective contact angles under different solid-liquid interaction potentials. Estimates of the binding energy between a water monomer and graphite match those previously reported. In addition, a breakdown in the relationship between the binding energy and the contact angle was observed. The macroscopic contact angles obtained from the MD simulations were found to match those predicted by the mean-field model for graphite under different wettability conditions, as well as the contact angles of Si(100) and Si(111) surfaces. Finally, an assessment of the effect of the Lennard-Jones cutoff radius was conducted to provide guidelines for future comparisons between numerical simulations and analytical models of wettability.

  15. Simple theoretical models for composite rotor blades

    Science.gov (United States)

    Valisetty, R. R.; Rehfield, L. W.

    1984-01-01

    The development of theoretical rotor blade structural models for designs based upon composite construction is discussed. Care was exercised to include a number of nonclassical effects that previous experience indicated would be potentially important to account for. A model representative of the size of a main rotor blade is analyzed in order to assess the importance of various influences. The findings of this model study suggest that for the slenderness and closed-cell construction considered, the refinements are of little importance and a classical type of theory is adequate. The potential of elastic tailoring is dramatically demonstrated, so the generality of arbitrary ply layup in the cell wall is needed to exploit this opportunity.

  16. Comparison of theoretical estimates and experimental measurements of fatigue crack growth under severe thermal shock conditions (part two - theoretical assessment and comparison with experiment)

    International Nuclear Information System (INIS)

    Green, D.; Marsh, D.; Parker, R.

    1984-01-01

    This paper reports the theoretical assessment of cracking which may occur when a severe cycle comprising alternate upshocks and downshocks is applied to an axisymmetric feature with an internal, partial-penetration weld and crevice. The experimental observations of cracking are reported separately. Good agreement was noted even though extensive cyclic plasticity occurred at the location of cracking. It is concluded that the LEFM solution correlated with the experiment mainly because of the axisymmetric geometry, which allows a large hydrostatic stress to exist at the internal weld crevice end. Thus the stress at the crevice can approach the singular solution required for LEFM correlations without contributing to yielding

  17. Theoretical physics 8 statistical physics

    CERN Document Server

    Nolting, Wolfgang

    2018-01-01

    This textbook offers a clear and comprehensive introduction to statistical physics, one of the core components of advanced undergraduate physics courses. It follows on naturally from the previous volumes in this series, using methods of probability theory and statistics to solve physical problems. The first part of the book gives a detailed overview on classical statistical physics and introduces all mathematical tools needed. The second part of the book covers topics related to quantized states, gives a thorough introduction to quantum statistics, followed by a concise treatment of quantum gases. Ideally suited to undergraduate students with some grounding in quantum mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successf...

  18. Theoretical implications for the estimation of dinitrogen fixation by large perennial plant species using isotope dilution

    Science.gov (United States)

    Dwight D. Baker; Maurice Fried; John A. Parrotta

    1995-01-01

    Estimation of symbiotic N2 fixation associated with large perennial plant species, especially trees, poses special problems because the process must be followed over a potentially long period of time to integrate the total amount of fixation. Estimations using isotope dilution methodology have begun to be used for trees in field studies. Because...

  19. Discretization of Lévy semistationary processes with application to estimation

    DEFF Research Database (Denmark)

    Bennedsen, Mikkel; Lunde, Asger; Pakkanen, Mikko

    Motivated by the construction of the Ito stochastic integral, we consider a step function method to discretize and simulate volatility modulated Lévy semistationary processes. Moreover, we assess the accuracy of the method with a particular focus on integrating kernels with a singularity at the origin. Using the simulation method, we study the finite sample properties of some recently developed estimators of realized volatility and associated parametric estimators for Brownian semistationary processes. Although the theoretical properties of these estimators have been established under high...

  20. Estimating the burden of disease attributable to high cholesterol in ...

    African Journals Online (AJOL)

    Monte Carlo simulation-modelling techniques were used for uncertainty .... risk factor is estimated by comparing current local health status with a theoretical .... Normal probability distributions were specified around the mean TC levels by age, ...

  1. Discrete Choice Experiments: A Guide to Model Specification, Estimation and Software.

    Science.gov (United States)

    Lancsar, Emily; Fiebig, Denzil G; Hole, Arne Risa

    2017-07-01

    We provide a user guide on the analysis of data (including best-worst and best-best data) generated from discrete-choice experiments (DCEs), comprising a theoretical review of the main choice models followed by practical advice on estimation and post-estimation. We also provide a review of standard software. In providing this guide, we endeavour to not only provide guidance on choice modelling but to do so in a way that provides a 'way in' for researchers to the practicalities of data analysis. We argue that choice of modelling approach depends on the research questions, study design and constraints in terms of quality/quantity of data and that decisions made in relation to analysis of choice data are often interdependent rather than sequential. Given the core theory and estimation of choice models is common across settings, we expect the theoretical and practical content of this paper to be useful to researchers not only within but also beyond health economics.
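As a hedged illustration of the core model family such guides review (the deterministic utilities below are invented for the example), the multinomial logit assigns choice probabilities P(j) = exp(V_j) / Σ_k exp(V_k):

```python
import math

def logit_probs(utilities):
    """Multinomial logit choice probabilities,
    P(j) = exp(V_j) / sum_k exp(V_k),
    computed with max-subtraction for numerical stability."""
    m = max(utilities)
    e = [math.exp(v - m) for v in utilities]
    s = sum(e)
    return [x / s for x in e]

# Hypothetical deterministic utilities for three alternatives.
p = logit_probs([1.0, 0.5, -0.2])
```

Estimation then maximizes the log-likelihood of observed choices under these probabilities; the alternative with the highest utility receives the highest choice probability.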

  2. Towards A Theoretical Biology: Reminiscences

    Indian Academy of Sciences (India)

    engaged in since the start of my career at the University of Chicago. Theoretical biology was ... research on theoretical problems in biology. Waddington, an ... aimed at stimulating the development of such a theoretical biology. The role the ...

  3. Nonlinear estimation and control of automotive drivetrains

    CERN Document Server

    Chen, Hong

    2014-01-01

    Nonlinear Estimation and Control of Automotive Drivetrains discusses the control problems involved in automotive drivetrains, particularly in hydraulic Automatic Transmission (AT), Dual Clutch Transmission (DCT) and Automated Manual Transmission (AMT). Challenging estimation and control problems, such as driveline torque estimation and gear shift control, are addressed by applying the latest nonlinear control theories, including constructive nonlinear control (Backstepping, Input-to-State Stable) and Model Predictive Control (MPC). The estimation and control performance is improved while the calibration effort is reduced significantly. The book presents many detailed examples of design processes and thus enables the readers to understand how to successfully combine purely theoretical methodologies with actual applications in vehicles. The book is intended for researchers, PhD students, control engineers and automotive engineers. Hong Chen is a professor at the State Key Laboratory of Automotive Simulation and...

  4. Theoretical estimation of absorbed dose to organs in radioimmunotherapy using radionuclides with multiple unstable daughters

    International Nuclear Information System (INIS)

    Hamacher, K.A.; Sgouros, G.

    2001-01-01

    The toxicity and clinical utility of long-lived alpha emitters such as Ac-225 and Ra-223 will depend upon the fate of the alpha-particle emitting unstable intermediates generated after decay of the conjugated parent. For example, decay of Ac-225 to a stable element yields four alpha particles and seven radionuclides. Each of these progeny has its own free-state biodistribution and characteristic half-life. Therefore, their inclusion for a more accurate prediction of absorbed dose and potential toxicity requires a formalism that takes these factors into consideration as well. To facilitate the incorporation of such intermediates into the dose calculation, a previously developed methodology (model 1) has been extended. Two new models (models 2 and 3) for the allocation of daughter products are introduced and compared with the previously developed model. Model 1 restricts the transport to a function that yields either the place of origin or the place(s) of biodistribution, depending on the half-life of the parent radionuclide. Model 2 includes the transit time within the bloodstream, and model 3 incorporates additional binding at or within the tumor. This means that model 2 also allows for radionuclide decay and further daughter production while moving from one location to the next, and that model 3 relaxes the constraint that the residence time within the tumor is based solely on the half-life of the parent. The models are used to estimate normal organ absorbed doses for the following parent radionuclides: Ac-225, Pb-212, At-211, Ra-223, and Bi-213. Model simulations are for a 0.1 g rapidly accessible tumor and a 10 g solid tumor. Additionally, the effects of varying radiolabeled carrier molecule purity and amount of carrier molecules, as well as tumor cell antigen saturation, are examined. The results indicate that there is a distinct advantage in using parent radionuclides such as Ac-225 or Ra-223, each having a half-life of more than 10 days and yielding four alpha

  5. Estimation of state and material properties during heat-curing molding of composite materials using data assimilation: A numerical study

    Directory of Open Access Journals (Sweden)

    Ryosuke Matsuzaki

    2018-03-01

    Accurate simulations of carbon fiber-reinforced plastic (CFRP) molding are vital for the development of high-quality products. However, such simulations are challenging, and previous attempts to improve their accuracy by incorporating data acquired from mold monitoring have not been completely successful. Therefore, in the present study, we developed a method to accurately predict various CFRP thermoset molding characteristics based on data assimilation, a process that combines theoretical and experimental values. The degree of cure as well as temperature and thermal conductivity distributions during the molding process were estimated using both temperature data and numerical simulations. An initial numerical experiment demonstrated that the internal mold state could be determined solely from the surface temperature values. A subsequent numerical experiment to validate this method showed that estimations based on surface temperatures were highly accurate for the degree of cure and internal temperature, although predictions of thermal conductivity were more difficult. Keywords: Engineering, Materials science, Applied mathematics
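As an illustrative aside (not the authors' algorithm), the elementary building block of many data-assimilation schemes is a variance-weighted blend of a model forecast with an observation; the temperatures below are hypothetical.

```python
def assimilate(forecast, forecast_var, obs, obs_var):
    """Scalar analysis step common to Kalman-type assimilation:
    gain K = P / (P + R) weights forecast and observation in
    inverse proportion to their error variances."""
    gain = forecast_var / (forecast_var + obs_var)
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * forecast_var
    return analysis, analysis_var

# Hypothetical mold-surface temperature: model forecasts 140 C with
# variance 4; a sensor reads 150 C with variance 1.
t, v = assimilate(forecast=140.0, forecast_var=4.0, obs=150.0, obs_var=1.0)
```

The analysis lands closer to the more trusted source (here the sensor) and its variance is smaller than either input's, which is what makes repeated assimilation of surface readings informative about the interior state.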

  6. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results into agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful use of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
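A minimal sketch of the kind of forward model such a framework is fitted around (parameter values here are illustrative, not those estimated for Belgium, the Netherlands or Portugal):

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of the SIR equations in proportions:
    dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I,
    so S + I + R is conserved."""
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

# Hypothetical parameters (R0 = beta/gamma = 3) and initial state.
s, i, r = 0.99, 0.01, 0.0
for _ in range(1000):  # integrate 100 time units with dt = 0.1
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1, dt=0.1)
```

A Bayesian calibration would treat beta and gamma as uncertain, compare the simulated incidence against observed data, and update their distributions accordingly.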

  7. Theoretical model estimation of guest diffusion in Metal-Organic Frameworks (MOFs)

    KAUST Repository

    Zheng, Bin

    2015-08-11

    Characterizing molecule diffusion in nanoporous matrices is critical to understanding the novel chemical and physical properties of metal-organic frameworks (MOFs). In this paper, we developed a theoretical model to quickly and accurately compute the diffusion rate of guest molecules in a zeolitic imidazolate framework-8 (ZIF-8). The ideal gas or equilibrium solution diffusion model was modified to account for the periodic medium by introducing the probability of guests passing through the framework gate. The only input to our model is the energy barrier for guests passing through the MOF's gate. Molecular dynamics (MD) methods were employed to gather the guest density profile, which was then used to deduce the energy barrier values. This produced reliable results from a simulation time of 5 picoseconds, much shorter than that required by pure MD methods (on the millisecond scale). We also used density functional theory (DFT) methods to obtain the energy profile of guests passing through gates, as this does not require specification of a force field for the MOF degrees of freedom. In the DFT calculation, we considered only one gate of the MOF at a time, which greatly reduced the computational cost. Based on the obtained energy barrier values we computed the diffusion rates of alkanes and alcohols in ZIF-8 using our model, which were in good agreement with experimental results and values calculated from the standard MD model. Our model has the advantage of obtaining accurate diffusion rates for guests in MOFs at lower computational cost and shorter calculation time. Thus, our analytic model calculation is especially attractive for high-throughput computational screening of the dynamic performance of guests in a framework.
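The role of the gate energy barrier can be illustrated with a generic Arrhenius-type rate expression (a sketch consistent with, but not taken from, the paper; the attempt frequency and barrier values are hypothetical placeholders):

```python
import math

KB = 8.617333262e-5  # Boltzmann constant, eV/K

def hopping_rate(barrier_ev, attempt_freq_hz=1e12, temp_k=300.0):
    """Arrhenius-type rate for a guest crossing a framework gate:
    k = nu * exp(-E_a / (k_B * T)).
    The 1 THz attempt frequency is an illustrative assumption."""
    return attempt_freq_hz * math.exp(-barrier_ev / (KB * temp_k))

# A higher gate barrier gives an exponentially slower crossing rate.
k_low = hopping_rate(0.2)   # hypothetical 0.2 eV barrier
k_high = hopping_rate(0.5)  # hypothetical 0.5 eV barrier
```

This exponential sensitivity is why an accurate barrier (from the MD density profile or from DFT) is the single decisive input to such a model.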

  8. Experimental and theoretical investigation of an evaporative fuel system for heat engines

    International Nuclear Information System (INIS)

    Thern, Marcus; Lindquist, Torbjoern; Torisson, Tord

    2007-01-01

    The evaporative gas turbine (EvGT) pilot plant has been in operation at Lund University in Sweden since 1997. This project has led to improved knowledge of evaporative techniques and the concept of introducing fuel into gas turbines by evaporation, which yields, among other benefits, power augmentation, increased efficiency and lower emissions. This article presents the experimental and theoretical results of the evaporation of a mixture of ethanol and water into an air stream at elevated pressures and temperatures. A theoretical model has been established for the simultaneous heat and mass transfer occurring in the ethanol humidification tower. The model has been validated through experiments at several operating conditions. It has been shown that the air, water and ethanol profiles can be calculated throughout the column in a satisfactory way. The height of the column can be estimated within an error of 15% compared with measurements. The results from the model are most sensitive to the diffusion coefficient, viscosity, thermal conductivity and activity coefficient, owing to the complexity of the polar gas mixture of water and air

  9. Experimental and Theoretical Analysis of Headlight Surface Temperature in an Infrared Heated Stress Relieving Oven

    Directory of Open Access Journals (Sweden)

    Mustafa MUTLU

    2016-04-01

    In this study, an IR-heated stress-relieving oven was examined experimentally and theoretically. In the experimental measurements, temperature was measured on a headlight surface placed in the IR oven at various conveyor speeds and various distances between the IR lamps and the headlight surface. In the theoretical study, a mathematical model of the headlight surface temperature was developed using heat transfer theory. The results obtained from the mathematical model and the measurements showed very good agreement, with a 6.5% average error. It is shown that such mathematical models can be used to estimate surface temperatures when the oven is operated under different conditions.

  10. Satellite Angular Velocity Estimation Based on Star Images and Optical Flow Techniques

    Directory of Open Access Journals (Sweden)

    Giancarmine Fasano

    2013-09-01

    An optical flow-based technique is proposed to estimate spacecraft angular velocity from sequences of star-field images. It does not require star identification and can thus deliver angular rate information even when attitude determination is not possible, such as during platform detumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in estimating the angular velocity component along the boresight is about one order of magnitude worse than for the other two components.
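As a simplified illustration of the least-squares step (a sketch for pure rotation about the boresight only, not the paper's full Poisson-equation formulation; the image points and rate are invented):

```python
def estimate_boresight_rate(points, flows):
    """Least-squares estimate of the roll rate omega about the boresight.

    For pure rotation about the optical axis, the image-plane flow at
    (x, y) is (u, v) = (-omega*y, omega*x). Stacking both components
    of every star gives the closed-form solution
        omega = sum(x*v - y*u) / sum(x^2 + y^2).
    """
    num = sum(x * v - y * u for (x, y), (u, v) in zip(points, flows))
    den = sum(x * x + y * y for (x, y) in points)
    return num / den

pts = [(1.0, 0.0), (0.0, 2.0), (-1.5, 0.5)]   # hypothetical star positions
true_omega = 0.01                             # rad/frame, hypothetical
flo = [(-true_omega * y, true_omega * x) for (x, y) in pts]
omega_hat = estimate_boresight_rate(pts, flo)
```

With noise-free flow the estimator recovers the rate exactly; in practice the flow vectors carry measurement noise, which is what the paper's error budget quantifies.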

  11. Theoretical Physics Division

    International Nuclear Information System (INIS)

    This report is a survey of the studies done in the Theoretical Physics Division of the Nuclear Physics Institute; the subjects studied in theoretical nuclear physics were the few-nucleon problem, nuclear structure, nuclear reactions, weak interactions, intermediate energy and high energy physics. In this last field, the subjects studied were field theory, group theory, symmetry and strong interactions [fr

  12. External cephalic version among women with a previous cesarean delivery: report on 36 cases and review of the literature.

    Science.gov (United States)

    Abenhaim, Haim A; Varin, Jocelyne; Boucher, Marc

    2009-01-01

    Whether or not women with a previous cesarean section should be considered for an external cephalic version remains unclear. In our study, we sought to examine the relationship between a history of previous cesarean section and outcomes of external cephalic version for pregnancies at 36 completed weeks of gestation or more. Data on obstetrical history and on external cephalic version outcomes were obtained from the C.H.U. Sainte-Justine External Cephalic Version Database. Baseline clinical characteristics were compared among women with and without a history of previous cesarean section. We used logistic regression analysis to evaluate the effect of previous cesarean section on the success of external cephalic version while adjusting for parity, maternal body mass index, gestational age, estimated fetal weight, and amniotic fluid index. Over a 15-year period, 1425 external cephalic versions were attempted, of which 36 (2.5%) were performed on women with a previous cesarean section. Although women with a history of previous cesarean section were more likely to be older and para >2 (38.93% vs. 15.0%), there was no difference in gestational age, estimated fetal weight, or amniotic fluid index. Women with a prior cesarean section had a success rate similar to that of women without [50.0% vs. 51.6%, adjusted OR: 1.31 (0.48-3.59)]. Women with a previous cesarean section who undergo an external cephalic version have success rates similar to those of women without. Concern about procedural success in women with a previous cesarean section is unwarranted and should not deter attempting an external cephalic version.

  13. Self-learning estimation of quantum states

    International Nuclear Information System (INIS)

    Hannemann, Th.; Reiss, D.; Balzer, Ch.; Neuhauser, W.; Toschek, P.E.; Wunderlich, Ch.

    2002-01-01

    We report the experimental estimation of arbitrary qubit states using a succession of N measurements on individual qubits, where the measurement basis is changed during the estimation procedure conditioned on the outcome of previous measurements (self-learning estimation). Two hyperfine states of a single trapped 171Yb+ ion serve as a qubit. It is demonstrated that the difference in fidelity between this adaptive strategy and passive strategies increases in the presence of decoherence.

  14. Linear Estimation of Standard Deviation of Logistic Distribution ...

    African Journals Online (AJOL)

    The paper presents a theoretical method based on order statistics and a FORTRAN program for computing the variance and relative efficiencies of the standard deviation of the logistic population with respect to the Cramer-Rao lower variance bound and the best linear unbiased estimators (BLUEs) when the mean is ...

  15. Information-theoretic decomposition of embodied and situated systems.

    Science.gov (United States)

    Da Rold, Federico

    2018-07-01

    The embodied and situated view of cognition stresses the importance of real-time and nonlinear bodily interaction with the environment for developing concepts and structuring knowledge. In this article, populations of robots controlled by an artificial neural network learn a wall-following task through artificial evolution. At the end of the evolutionary process, time series are recorded from perceptual and motor neurons of selected robots. Information-theoretic measures are estimated on pairings of variables to unveil nonlinear interactions that structure the agent-environment system. Specifically, the mutual information is utilized to quantify the degree of dependence and the transfer entropy to detect the direction of the information flow. Furthermore, the system is analyzed with the local form of such measures, thus capturing the underlying dynamics of information. Results show that different measures are interdependent and complementary in uncovering aspects of the robots' interaction with the environment, as well as characteristics of the functional neural structure. Therefore, the set of information-theoretic measures provides a decomposition of the system, capturing the intricacy of nonlinear relationships that characterize robots' behavior and neural dynamics. Copyright © 2018 Elsevier Ltd. All rights reserved.
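    The mutual-information and transfer-entropy measures described above can be sketched with plain plug-in (histogram) estimators on discretized time series. This is a minimal illustration of the measures themselves, not the paper's implementation; all function names are ours.

    ```python
    import numpy as np
    from collections import Counter

    def entropy(samples):
        # Plug-in Shannon entropy (bits) of a sequence of hashable symbols.
        n = len(samples)
        p = np.array([c / n for c in Counter(samples).values()])
        return float(-(p * np.log2(p)).sum())

    def mutual_information(x, y):
        # I(X;Y) = H(X) + H(Y) - H(X,Y): degree of dependence between variables.
        return entropy(list(x)) + entropy(list(y)) - entropy(list(zip(x, y)))

    def transfer_entropy(src, dst):
        # TE(src -> dst) = I(dst_{t+1}; src_t | dst_t), expanded via joint entropies;
        # it is asymmetric, so it indicates the direction of information flow.
        d_next, d_now, s_now = dst[1:], dst[:-1], src[:-1]
        return (entropy(list(zip(d_now, s_now)))
                + entropy(list(zip(d_next, d_now)))
                - entropy(list(zip(d_next, d_now, s_now)))
                - entropy(list(d_now)))
    ```

    For example, if `dst` is simply `src` delayed by one step, `transfer_entropy(src, dst)` approaches 1 bit for uniform binary `src`, while the reverse direction stays near zero.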

  16. EEC-sponsored theoretical studies of gas cloud explosion pressure loadings

    International Nuclear Information System (INIS)

    Briscoe, F.; Curtress, N.; Farmer, C.L.; Fogg, G.J.; Vaughan, G.J.

    1979-01-01

    Estimates of the pressure loadings produced by unconfined gas cloud explosions on the surface of structures are required to assist the design of strong secondary containments in countries where the protection of nuclear installations against these events is considered to be necessary. At the present time, one difficulty in the specification of accurate pressure loadings arises from our lack of knowledge concerning the interaction between the incident pressure waves produced by unconfined gas cloud explosions and large structures. Preliminary theoretical studies include (i) general theoretical considerations, especially with regard to scaling, (ii) investigations of the deflagration wave interaction with a wall, based on an analytic solution for situations with planar symmetry and the application of an SRD gas cloud explosion code (GASEX 1) for situations with planar and spherical symmetry, and (iii) investigations of the interaction between shock waves and structures for situations with two-dimensional symmetry, based on the application of another SRD gas cloud explosion code (GASEX 2).

  17. Theoretical and experimental study of fenofibrate and simvastatin

    Science.gov (United States)

    Nicolás Vázquez, Inés; Rodríguez-Núñez, Jesús Rubén; Peña-Caballero, Vicente; Ruvalcaba, Rene Miranda; Aceves-Hernandez, Juan Manuel

    2017-12-01

    Fenofibrate, an oral fibrate lipid-lowering agent, and simvastatin, which reduces plasma levels of low-density lipoprotein cholesterol, are active pharmaceutical ingredients (APIs) currently on the market. We characterized these APIs by thermal analysis and X-ray powder diffraction techniques. Studies should be carried out in the formulation stage before the final composition of a polypill may be established. Thus, thermochemical studies found that the two compounds present no chemical interactions in an equimolar mixture of solid samples at room temperature. Theoretical studies were employed to determine possible interactions between fenofibrate and simvastatin. A very weak intermolecular hydrogen bond is formed between the hydroxyl group (O5H5) of simvastatin and the chlorine and carbonyl groups (C11O4, C1O2) of the fenofibrate molecule. These weak hydrogen bonds have no effect on the chemical stability of the compounds studied. The results were obtained using Density Functional Theory methods, particularly the PBE1PBE and B3LYP functionals with the 6-31++G** basis set. The energy values show good agreement when compared with similar calculations previously reported. Infrared spectra of monomers and dimers were obtained via theoretical calculations.

  18. Theoretical expectations for the muon's electric dipole moment

    International Nuclear Information System (INIS)

    Feng, Jonathan L.; Matchev, Konstantin T.; Shadmi, Yael

    2001-01-01

    We examine the muon's electric dipole moment d_μ from a variety of theoretical perspectives. We point out that the reported deviation in the muon's g-2 can be due partially or even entirely to a new physics contribution to the muon's electric dipole moment. In fact, the recent g-2 measurement provides the most stringent bound on d_μ to date. This ambiguity could be definitively resolved by the dedicated search for d_μ recently proposed. We then consider both model-independent and supersymmetric frameworks. Under the assumptions of scalar degeneracy, proportionality, and flavor conservation, the theoretical expectations for d_μ in supersymmetry fall just below the proposed sensitivity. However, nondegeneracy can give an order of magnitude enhancement, and lepton flavor violation can lead to d_μ ∼ 10^(-22) e cm, two orders of magnitude above the sensitivity of the d_μ experiment. We present compact expressions for leptonic dipole moments and lepton flavor violating amplitudes. We also derive new limits on the amount of flavor violation allowed and demonstrate that approximations previously used to obtain such limits are highly inaccurate in much of parameter space.

  19. Theoretical physics 1 classical mechanics

    CERN Document Server

    Nolting, Wolfgang

    2016-01-01

    This textbook offers a clear and comprehensive introduction to classical mechanics, one of the core components of undergraduate physics courses. The book starts with a thorough introduction to the mathematical tools needed, to make this textbook self-contained for learning. The second part of the book introduces the mechanics of the free mass point and details conservation principles. The third part extends this to the mechanics of many-particle systems. Finally, the mechanics of the rigid body is illustrated with rotational forces, inertia and gyroscope movement. Ideally suited to undergraduate students in their first year, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful German editions, the eight volumes of this series...

  20. Estimation of biochemical variables using quantum-behaved particle ...

    African Journals Online (AJOL)

    To generate a more efficient neural network estimator, we employed the previously proposed quantum-behaved particle swarm optimization (QPSO) algorithm for neural network training. The experimental results of the L-glutamic acid fermentation process showed that our established estimator could predict variables such as the ...

  1. New Theoretical Estimates of the Contribution of Unresolved Star-Forming Galaxies to the Extragalactic Gamma-Ray Background (EGB) as Measured by EGRET and the Fermi-LAT

    Science.gov (United States)

    Venters, Tonia M.

    2011-01-01

    We present new theoretical estimates of the contribution of unresolved star-forming galaxies to the extragalactic gamma-ray background (EGB) as measured by EGRET and the Fermi-LAT. We employ several methods for determining the star-forming galaxy contribution to the EGB, including a method positing a correlation between the gamma-ray luminosity of a galaxy and its rate of star formation as calculated from the total infrared luminosity, and a method that makes use of a model of the evolution of the galaxy gas mass with cosmic time. We find that depending on the model, unresolved star-forming galaxies could contribute significantly to the EGB as measured by the Fermi-LAT at energies between approx. 300 MeV and approx. few GeV. However, the overall spectrum of unresolved star-forming galaxies can explain neither the EGRET EGB spectrum at energies between 50 and 200 MeV nor the Fermi-LAT EGB spectrum at energies above approx. few GeV.

  2. Differences between previously married and never married 'gay' men: family background, childhood experiences and current attitudes.

    Science.gov (United States)

    Higgins, Daryl J

    2004-01-01

    Despite a large body of literature on the development of sexual orientation, little is known about why some gay men have been (or remain) married to a woman. In the current study, a self-selected sample of 43 never married gay men ('never married') and 26 gay men who were married to a woman ('previously married') completed a self-report questionnaire. Hypotheses were based on five possible explanations for gay men's marriages: (a) differences in sexual orientation (i.e., bisexuality); (b) internalized homophobia; (c) religious intolerance; (d) confusion created because of childhood/adolescent sexual experiences; and/or (e) poor psychological adjustment. Previously married men described their families' religious beliefs as more fundamentalist than never married men did. No differences were found between previously married and never married men's ratings of their sexual orientation and identity, or their levels of homophobia and self-depreciation. Family adaptability, family cohesion, and the degree to which respondents reported having experienced child maltreatment did not distinguish between previously married and never married men. The results highlight how little is understood of the reasons why gay men marry, and the need to develop an adequate theoretical model.

  3. Accounting for subgroup structure in line-transect abundance estimates of false killer whales (Pseudorca crassidens) in Hawaiian waters.

    Directory of Open Access Journals (Sweden)

    Amanda L Bradford

    Full Text Available For biological populations that form aggregations (or clusters) of individuals, cluster size is an important parameter in line-transect abundance estimation and should be accurately measured. Cluster size in cetaceans has traditionally been represented as the total number of individuals in a group, but group size may be underestimated if group members are spatially diffuse. Groups of false killer whales (Pseudorca crassidens) can comprise numerous subgroups that are dispersed over tens of kilometers, leading to a spatial mismatch between a detected group and the theoretical framework of line-transect analysis. Three stocks of false killer whales are found within the U.S. Exclusive Economic Zone of the Hawaiian Islands (Hawaiian EEZ): an insular main Hawaiian Islands stock, a pelagic stock, and a Northwestern Hawaiian Islands (NWHI) stock. A ship-based line-transect survey of the Hawaiian EEZ was conducted in the summer and fall of 2010, resulting in six systematic-effort visual sightings of pelagic (n = 5) and NWHI (n = 1) false killer whale groups. The maximum number and spatial extent of subgroups per sighting was 18 subgroups and 35 km, respectively. These sightings were combined with data from similar previous surveys and analyzed within the conventional line-transect estimation framework. The detection function, mean cluster size, and encounter rate were estimated separately to appropriately incorporate data collected using different methods. Unlike previous line-transect analyses of cetaceans, subgroups were treated as the analytical cluster instead of groups because subgroups better conform to the specifications of line-transect theory. Bootstrap values (n = 5,000) of the line-transect parameters were randomly combined to estimate the variance of stock-specific abundance estimates. Hawai'i pelagic and NWHI false killer whales were estimated to number 1,552 (CV = 0.66; 95% CI = 479-5,030) and 552 (CV = 1.09; 95% CI = 97
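    The conventional line-transect estimator combines the three separately estimated components (encounter rate, mean cluster size, detection function), and the bootstrap pairing described above propagates their variance into the abundance estimate. A minimal sketch, with illustrative function names and synthetic numbers rather than the study's data:

    ```python
    import numpy as np

    def abundance(area, encounter_rate, mean_cluster, eff_halfwidth):
        # Conventional line-transect estimator: density = (n/L) * E[s] / (2*mu),
        # abundance = density * area, where mu is the effective strip half-width.
        return area * encounter_rate * mean_cluster / (2.0 * eff_halfwidth)

    def bootstrap_cv(area, enc_draws, size_draws, mu_draws, seed=0):
        # Randomly pair bootstrap draws of the three components (each estimated
        # from different data) to estimate the CV of the abundance estimate.
        rng = np.random.default_rng(seed)
        n = min(map(len, (enc_draws, size_draws, mu_draws)))
        draws = abundance(area,
                          rng.choice(enc_draws, n),
                          rng.choice(size_draws, n),
                          rng.choice(mu_draws, n))
        return float(draws.std(ddof=1) / draws.mean())
    ```

    With constant component draws the CV is zero; variability in any component inflates it, which is the point of the random pairing.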

  4. Spectral estimates for Dirichlet Laplacians on perturbed twisted tubes

    Czech Academy of Sciences Publication Activity Database

    Exner, Pavel; Barseghyan, Diana

    2014-01-01

    Roč. 8, č. 1 (2014), s. 167-183 ISSN 1846-3886 R&D Projects: GA ČR GAP203/11/0701 Institutional support: RVO:61389005 Keywords : Dirichlet Laplacian * twisted tube * discrete spectrum * eigenvalue estimates Subject RIV: BE - Theoretical Physics Impact factor: 0.583, year: 2014

  5. On the stability of estimation of AR(1) coefficient in the presence of ...

    African Journals Online (AJOL)

    The aim of our paper is to present an exhaustive study of the estimation of first order autoregressive models with exponential white noise under innovation contamination. Some theoretical aspects and Monte Carlo results are presented in the study of the stability of this estimator when the model is contaminated. Using the ...

  6. Theoretical stability in coefficient inverse problems for general hyperbolic equations with numerical reconstruction

    Science.gov (United States)

    Yu, Jie; Liu, Yikan; Yamamoto, Masahiro

    2018-04-01

    In this article, we investigate the determination of the spatial component in the time-dependent second order coefficient of a hyperbolic equation from both theoretical and numerical aspects. By the Carleman estimates for general hyperbolic operators and an auxiliary Carleman estimate, we establish local Hölder stability with either partial boundary or interior measurements under certain geometrical conditions. For numerical reconstruction, we minimize a Tikhonov functional which penalizes the gradient of the unknown function. Based on the resulting variational equation, we design an iteration method which is updated by solving a Poisson equation at each step. One-dimensional prototype examples illustrate the numerical performance of the proposed iteration.

  7. The history of the UV radiation climate of the earth--theoretical and space-based observations.

    Science.gov (United States)

    Cockell, C S; Horneck, G

    2001-04-01

    In the Archean era (3.8-2.5 Ga ago) the Earth probably lacked a protective ozone column. Using data obtained in Earth orbit on the inactivation of Bacillus subtilis spores, we quantitatively estimate the potential biological effects of such an environment. We combine these practical data with theoretical calculations to propose a history of the potential UV stress on the surface of the Earth over time. The data suggest that an effective ozone column was established at a pO2 of approximately 5 × 10^(-3) of the present atmospheric level. The improvement in the UV environment on the early Proterozoic Earth might have been a much more rapid event than has previously been supposed, with DNA damage rates dropping by two orders of magnitude in the space of just a few tens of millions of years. We postulate that a coupling between reduced UV stress and increased pO2 production could have contributed toward a positive feedback in the production of ozone in the early Proterozoic atmosphere. This would contribute to the apparent rapidity of the oxidation event. The data provide an evolutionary perspective on present-day Antarctic ozone depletion.

  8. Hospital employees' theoretical knowledge on what to do in an in-hospital cardiac arrest

    Directory of Open Access Journals (Sweden)

    Herlitz Johan

    2010-08-01

    Full Text Available Abstract Background Guidelines recommend that all health care professionals should be able to perform cardiopulmonary resuscitation (CPR), including the use of an automated external defibrillator. Theoretical knowledge of CPR is therefore necessary. The aim of this study was to investigate how much theoretical knowledge of CPR would increase among all categories of health care professionals lacking training in CPR, in an intervention hospital, after systematic standardised training. Their results were compared with the staff at a control hospital with an ongoing annual CPR training programme. Methods Health care professionals at two hospitals, with a total of 3144 employees, answered a multiple-choice questionnaire before and after training in CPR. Bootstrapped chi-square tests and Fisher's exact test were used for the statistical analyses. Results In the intervention hospital, physicians had the highest knowledge pre-test, but other health care professionals including nurses and assistant nurses reached a relatively high level post-test. Improvement was inversely related to the level of previous knowledge and was thus most marked among other health care professionals and least marked among physicians. The staff at the control hospital had a significantly higher level of knowledge pre-test than the intervention hospital, whereas the opposite was found post-test. Conclusions Overall theoretical knowledge increased after systematic standardised training in CPR. The increase was more pronounced for those without previous training and for those staff categories with the least medical education.

  9. Theoretical study on a multivariate feedback control of a sodium-heated steam generator

    International Nuclear Information System (INIS)

    Takahashi, R.; Maruyama, Y.; Oikawa, T.

    1984-01-01

    This paper applies the connection of a multivariate feedback controller with a state estimator to a 1-MW sodium-heated steam generator for LMFBRs theoretically, to obtain a control strategy which emphasizes, from the viewpoint of safety and availability of the FBR plant, that a superheat of 30 °C should be required for the evaporator steam. This involves a trial to study the feasibility of estimating such an inaccessible variable as the dry-out location of the tubes and to utilize the state estimate to design a feedback controller for steam generators. The Kalman filter tested was found to generate reasonable estimates of the transient process variables of the steam generator, and it can provide the major advantage of regulating the steam condition of the system even in the presence of contamination by a rather high level of measurement noise, from the viewpoint of economical use of micro- and/or minicomputers. (orig.)
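    The Kalman filter used for such state estimation follows a generic predict/update recursion. A minimal linear sketch, with placeholder matrices rather than the steam-generator model:

    ```python
    import numpy as np

    def kalman_filter(zs, F, H, Q, R, x0, P0):
        # Linear Kalman filter: predict with the state model (F, Q), then
        # correct with each measurement z through observation matrix H and
        # measurement-noise covariance R.
        x, P = np.array(x0, float), np.array(P0, float)
        estimates = []
        for z in zs:
            # Predict step: propagate state and covariance.
            x = F @ x
            P = F @ P @ F.T + Q
            # Update step: blend in the measurement via the Kalman gain.
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.atleast_1d(z) - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            estimates.append(x.copy())
        return np.array(estimates)
    ```

    Even with heavy measurement noise (R large relative to Q), the estimate converges toward the true state, which is the property the abstract highlights.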

  10. Semileptonic (Λ_b → Λ_c e ν) decay in a field theoretic quark model

    International Nuclear Information System (INIS)

    Das, R.K.; Panda, A.R.; Sahoo, R.K.; Swain, M.R.

    2002-01-01

    The semileptonic decay width of heavy baryons such as (Λ_b → Λ_c e ν) has been estimated in the framework of a nonrelativistic field theoretic quark model where four-component quark field operators along with a harmonic oscillator wave function are used to describe translationally invariant hadronic states. The present estimation does not make explicit use of heavy quark symmetry and has a reasonable agreement with the experimentally measured decay width, polarisation ratio and form factors, with the harmonic oscillator radii and quark momentum distribution inside the hadron as free parameters. (author)

  11. Homicide and domestic violence. Are there different psychological profiles mediated by previous violence exerted on the victim?

    Directory of Open Access Journals (Sweden)

    Montserrat Yepes

    2009-07-01

    Full Text Available A sample of 46 men was evaluated with the DAPP (Questionnaire of Domestic Aggressor Psychological Profile). All were inmates convicted for various degrees of violence against their wives in different prisons. The sample was divided into three groups: homicides without previous violence against their wives (H; n=11), homicides with previous violence (VH; n=9), and domestic batterers without previous homicide attempts against their partners (B; n=26). The aim of the study was to analyze the possible existence of three different kinds of profiles and, more specifically, whether it is possible to obtain an independent profile for domestic homicides with previous episodes of violence against their wives. The results confirm the hypothesis neither as a whole nor for the violent homicides. However, differences between groups were obtained in the admission and description of the facts, in the risk of future violence, in some sociodemographical characteristics (i.e., level of education, social status), in the couple relationship, in the dissatisfaction concerning the unachieved ideal woman, in the use of extreme physical force during the aggression, the time of the first aggression, the use of verbal threats during the aggression, explanation of the events to the family, and the period of time between the beginning of the romantic relationship and the manifestation of violence. The implications of the results for the theoretical frameworks proposed and future research are discussed.

  12. Comments on mutagenesis risk estimation

    International Nuclear Information System (INIS)

    Russell, W.L.

    1976-01-01

    Several hypotheses and concepts have tended to oversimplify the problem of mutagenesis and can be misleading when used for genetic risk estimation. These include: the hypothesis that radiation-induced mutation frequency depends primarily on the DNA content per haploid genome, the extension of this concept to chemical mutagenesis, the view that, since DNA is DNA, mutational effects can be expected to be qualitatively similar in all organisms, the REC unit, and the view that mutation rates from chronic irradiation can be theoretically and accurately predicted from acute irradiation data. Therefore, direct determination of frequencies of transmitted mutations in mammals continues to be important for risk estimation, and the specific-locus method in mice is shown to be not as expensive as is commonly supposed for many of the chemical testing requirements

  13. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Directory of Open Access Journals (Sweden)

    Weiqiang Pan

    2015-03-01

    Full Text Available In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated; hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only the flat-fading channel. First, the theoretical received sequence is composed from the training symbols. Next, the least-squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least-squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.

  14. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Science.gov (United States)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated; hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only the flat-fading channel. First, the theoretical received sequence is composed from the training symbols. Next, the least-squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least-squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.
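    The outer/inner structure described — an outer search over Doppler candidates with an inner closed-form least-squares solution for the complex channel gain — can be sketched for a simple frequency-shift Doppler model. This is a simplification under our own signal model, not the paper's algorithm; all names are illustrative.

    ```python
    import numpy as np

    def ls_doppler(received, pilots, fs, freq_grid):
        # Flat-fading model: r[n] ≈ a * pilots[n] * exp(j*2*pi*fd*n/fs).
        # Outer loop: scan candidate Doppler shifts fd.
        # Inner step: closed-form LS gain a = s^H r / s^H s for each candidate.
        n = np.arange(len(pilots))
        best_err, best_fd, best_a = np.inf, None, None
        for fd in freq_grid:
            s = pilots * np.exp(2j * np.pi * fd * n / fs)
            a = np.vdot(s, received) / np.vdot(s, s)  # np.vdot conjugates s
            err = np.linalg.norm(received - a * s)
            if err < best_err:
                best_err, best_fd, best_a = err, fd, a
        return best_fd, best_a
    ```

    On a noiseless synthetic signal whose true Doppler shift lies on the grid, the residual vanishes at the true candidate, so both the shift and the complex gain are recovered exactly.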

  15. Use of prompt gamma emissions from polyethylene to estimate neutron ambient dose equivalent

    Energy Technology Data Exchange (ETDEWEB)

    Priyada, P.; Sarkar, P.K., E-mail: pradip.sarkar@manipal.edu

    2015-06-11

    The possibility of using measured prompt gamma emissions from polyethylene to estimate neutron ambient dose equivalent is explored theoretically. Monte Carlo simulations have been carried out using the FLUKA code to calculate the response of a high-density polyethylene cylinder emitting prompt gammas from the interaction of neutrons with the hydrogen and carbon nuclei present in polyethylene. The neutron-energy-dependent responses of the hydrogen and carbon nuclei are combined appropriately to match the energy-dependent neutron fluence to ambient dose equivalent conversion coefficients. The proposed method is tested initially with simulated spectra and then validated using experimental measurements with an Am–Be neutron source. Experimental measurements and theoretical simulations have established the feasibility of estimating neutron ambient dose equivalent using measured neutron-induced prompt gammas emitted from polyethylene, with an overestimation of neutron dose at very low energies. - Highlights: • A new method for estimating H*(10) using prompt gamma emissions from HDPE. • Linear combination of 2.2 MeV and 4.4 MeV gamma intensities approximates DCC (ICRP). • Feasibility of the method was established theoretically and experimentally. • The response of the present technique is very similar to that of the rem meters.
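    The highlighted idea — weighting the 2.2 MeV (hydrogen capture) and 4.4 MeV (carbon) gamma responses so that their combination tracks the fluence-to-dose conversion coefficients across neutron energies — amounts to a least-squares fit. A sketch with synthetic response curves (illustrative shapes, not FLUKA output):

    ```python
    import numpy as np

    def fit_dose_weights(resp_h, resp_c, dcc):
        # Solve min || w[0]*resp_h + w[1]*resp_c - dcc ||_2 for the weights
        # that combine the two gamma-line responses into an approximation of
        # the dose conversion coefficient curve over neutron energy.
        A = np.column_stack([resp_h, resp_c])
        w, *_ = np.linalg.lstsq(A, dcc, rcond=None)
        return w
    ```

    If the target curve is an exact linear combination of the two responses, the fit recovers the weights exactly; in practice the residual measures how well two gamma lines can mimic the ICRP coefficients.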

  16. Headphone-To-Ear Transfer Function Estimation Using Measured Acoustic Parameters

    Directory of Open Access Journals (Sweden)

    Jinlin Liu

    2018-06-01

    Full Text Available This paper proposes to use an optimal five-microphone array method to measure the headphone acoustic reflectance and equivalent sound sources needed in the estimation of headphone-to-ear transfer functions (HpTFs). The performance of this method is theoretically analyzed and experimentally investigated. With the measured acoustic parameters, HpTFs for different headphones and ear canal area functions are estimated based on a computational acoustic model. The estimation results show that HpTFs vary considerably with headphones and ear canals, which suggests that individualized compensation for HpTFs is necessary for headphones to reproduce desired sounds for different listeners.

  17. New population-based exome data question the pathogenicity of some genetic variants previously associated with Marfan syndrome

    DEFF Research Database (Denmark)

    Yang, Ren-Qiang; Jabbari, Javad; Cheng, Xiao-Shu

    2014-01-01

    BACKGROUND: Marfan syndrome (MFS) is a rare autosomal dominantly inherited connective tissue disorder with an estimated prevalence of 1:5,000. More than 1000 variants have been previously reported to be associated with MFS. However, the disease-causing effect of these variants may be questionable...

  18. Regional differences of outpatient physician supply as a theoretical economic and empirical generalized linear model.

    Science.gov (United States)

    Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang

    2015-11-17

    Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for the physicians' decision on office allocation, covering demand-side factors and a consumption time function. To test the propositions following the theoretical model, generalized linear models were estimated to explain differences in 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association to higher populated districts than GPs. Although indicators for regional preferences are significantly correlated with physician density, their coefficients are not as high as population density. If regional disparities should be addressed by political actions, the focus should be to counteract those parameters representing physicians' preferences in over- and undersupplied regions.

  19. Fourier-Malliavin volatility estimation theory and practice

    CERN Document Server

    Mancino, Maria Elvira; Sanfelici, Simona

    2017-01-01

    This volume is a user-friendly presentation of the main theoretical properties of the Fourier-Malliavin volatility estimation, allowing the readers to experience the potential of the approach and its application in various financial settings. Readers are given examples and instruments to implement this methodology in various financial settings and applications of real-life data. A detailed bibliographic reference is included to permit an in-depth study.

  20. Dramatic nondipole effects in low-energy photoionization: Experimental and theoretical study of Xe 5s

    International Nuclear Information System (INIS)

    Hemmers, O.; Lindle, D.W.; Baker, J.; Hudson, A.; Lotrakul, M.; Tran, I.C.; Guillemin, R.; Stolte, W.C.; Wolska, A.; Yu, S.W.; Kanter, E.P.; Kraessig, B.; Southworth, S.H.; Wehlitz, R.; Rolles, D.; Amusia, M.Ya.; Cheng, K.T.; Chernysheva, L.V.; Johnson, W.R.; Manson, S.T.

    2003-01-01

    The Xe 5s nondipole photoelectron parameter γ is obtained experimentally and theoretically from threshold to ∼200 eV photon energy. Significant nondipole effects are seen even in the threshold region of this valence shell photoionization. In addition, contrary to previous understanding, clear evidence of interchannel coupling among quadrupole photoionization channels is found

  1. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans

    Directory of Open Access Journals (Sweden)

    Alizé Lacoste Jeanson

    2017-05-01

    Full Text Available Background Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. Methods We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). Results and Discussion The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results.
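    The single-slice prediction equations described are ordinary least-squares fits of whole-body volume on slice area, evaluated with a percent prediction error. A minimal sketch with synthetic numbers (illustrative, not the study's coefficients):

    ```python
    import numpy as np

    def fit_slice_equation(slice_areas, body_volumes):
        # OLS fit of volume ~ b0 + b1 * slice_area (e.g., AT area at L4-L5).
        X = np.column_stack([np.ones_like(slice_areas), slice_areas])
        beta, *_ = np.linalg.lstsq(X, body_volumes, rcond=None)
        return beta

    def percent_pe(beta, area, true_volume):
        # %PE as reported in the abstract: 100 * |predicted - true| / true.
        pred = beta[0] + beta[1] * area
        return 100.0 * abs(pred - true_volume) / true_volume
    ```

    On exactly linear synthetic data the fit recovers the intercept and slope, and the %PE on any in-sample point is zero up to rounding; on real data the residual %PE is the quantity used to rank candidate slices.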

  2. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans.

    Science.gov (United States)

    Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara; Brůžek, Jaroslav

    2017-01-01

    Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results.

  3. Practical estimate of gradient nonlinearity for implementation of apparent diffusion coefficient bias correction.

    Science.gov (United States)

    Malyarenko, Dariya I; Chenevert, Thomas L

    2014-12-01

    To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. Empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% down to 2% at clinically relevant offsets both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice-water. Direction-average trace image correction was adequate only for isotropic medium. Empiric scalar adjustment of an independent gradient nonlinearity model adequately described DWI bias for a clinical scanner. Observed efficiency of implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
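
    The corrected-b-value route to removing the ADC bias can be illustrated with a toy mono-exponential example; the nonlinearity factor and signal values below are assumptions for illustration, not the paper's measured maps:

```python
import math

# Sketch of the b-value route to ADC bias correction (assumed form, not
# the paper's exact model): gradient nonlinearity scales the local gradient
# by a factor c, so the actual b-value is b_nominal * c**2. Computing ADC
# with the nominal b then biases the estimate by the factor c**2.
def adc(s0, sb, b):
    """Mono-exponential ADC from two diffusion-weighted intensities."""
    return math.log(s0 / sb) / b

true_adc = 1.1e-3       # mm^2/s, roughly the ice-water value near 0 degrees C
b_nominal = 1000.0      # s/mm^2
c = 1.1                 # hypothetical 10% gradient overshoot at an offset
b_actual = b_nominal * c ** 2

s0 = 1000.0
sb = s0 * math.exp(-b_actual * true_adc)   # signal produced by the *actual* b

biased = adc(s0, sb, b_nominal)            # high by the factor c**2
corrected = adc(s0, sb, b_actual)          # bias removed by correcting b
print(round(biased / true_adc, 2), round(corrected / true_adc, 2))
```

    The equivalent intensity-based correction rescales the DWI signals instead of the b-values; as the abstract notes, the two routes perform identically.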

  4. On the estimation of the volatility-growth link

    DEFF Research Database (Denmark)

    Launov, Andrey; Posch, Olaf; Wälde, Klaus

    It is common practice to estimate the volatility-growth link by specifying a standard growth equation such that the variance of the error term appears as an explanatory variable in this growth equation. The variance in turn is modelled by a second equation. Hardly any of the existing applications of this framework include exogenous controls in this second variance equation. Our theoretical findings suggest that the absence of relevant explanatory variables in the variance equation leads to a biased and inconsistent estimate of the volatility-growth link. Our simulations show that this effect is large. Once the appropriate controls are included in the variance equation, consistency is restored. In short, we suggest that the variance equation must include relevant control variables to estimate the volatility-growth link.

  5. Laparoscopy After Previous Laparotomy

    Directory of Open Access Journals (Sweden)

    Zulfo Godinjak

    2006-11-01

    Full Text Available Following abdominal surgery, extensive adhesions often occur and can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered a contraindication for laparoscopy. The aim of this study is to show that insertion of a Veress needle in the umbilical region is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who had previously undergone one or two laparotomies. Pathology of the digestive system, genital organs, Cesarean section or abdominal war injuries were the most common causes of previous laparotomy. We experienced no complications during those operations or while entering the abdominal cavity, while in 7 patients we performed conversion to laparotomy following diagnostic laparoscopy. In all patients, the Veress needle and trocar were inserted in the umbilical region, i.e. the closed laparoscopy technique. In no patient were adhesions found in the region of the umbilicus, and no abdominal organs were injured.

  6. Estimation and Application of Ecological Memory Functions in Time and Space

    Science.gov (United States)

    Itter, M.; Finley, A. O.; Dawson, A.

    2017-12-01

    A common goal in quantitative ecology is the estimation or prediction of ecological processes as a function of explanatory variables (or covariates). Frequently, the ecological process of interest and associated covariates vary in time, space, or both. Theory indicates many ecological processes exhibit memory to local, past conditions. Despite such theoretical understanding, few methods exist to integrate observations from the recent past or within a local neighborhood as drivers of these processes. We build upon recent methodological advances in ecology and spatial statistics to develop a Bayesian hierarchical framework to estimate so-called ecological memory functions; that is, weight-generating functions that specify the relative importance of local, past covariate observations to ecological processes. Memory functions are estimated using a set of basis functions in time and/or space, allowing for flexible ecological memory based on a reduced set of parameters. Ecological memory functions are entirely data driven under the Bayesian hierarchical framework—no a priori assumptions are made regarding functional forms. Memory function uncertainty follows directly from posterior distributions for model parameters allowing for tractable propagation of error to predictions of ecological processes. We apply the model framework to simulated spatio-temporal datasets generated using memory functions of varying complexity. The framework is also applied to estimate the ecological memory of annual boreal forest growth to local, past water availability. Consistent with ecological understanding of boreal forest growth dynamics, memory to past water availability peaks in the year previous to growth and slowly decays to zero in five to eight years. The Bayesian hierarchical framework has applicability to a broad range of ecosystems and processes allowing for increased understanding of ecosystem responses to local and past conditions and improved prediction of ecological
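
    The weight-generating memory functions described above can be sketched as follows; the Gaussian basis, coefficients, and data are illustrative assumptions, not the paper's fitted model:

```python
import math

# Sketch of a weight-generating "memory function": weights over past lags
# are built from a small set of Gaussian basis functions, so a few
# coefficients control a flexible lag profile (here fixed, rather than
# estimated in a Bayesian hierarchy as in the paper).
def memory_weights(n_lags, centers, widths, coefs):
    raw = []
    for lag in range(n_lags):
        raw.append(sum(c * math.exp(-((lag - m) ** 2) / (2 * s ** 2))
                       for m, s, c in zip(centers, widths, coefs)))
    total = sum(raw)
    return [r / total for r in raw]  # normalize so weights sum to 1

# Memory peaking at lag 1 (the year before growth) and decaying by ~lag 6,
# loosely mimicking the boreal-forest water-availability result.
w = memory_weights(n_lags=8, centers=[1, 3], widths=[1.0, 2.0], coefs=[1.0, 0.3])

# The process covariate is then a weighted sum of past observations:
past_water = [0.9, 1.4, 1.0, 0.8, 1.1, 1.0, 0.9, 1.0]  # lags 0..7 (made up)
effective_water = sum(wi * xi for wi, xi in zip(w, past_water))
print(round(sum(w), 3), max(range(8), key=lambda i: w[i]))
```

    In the full framework the basis coefficients get posterior distributions, so uncertainty in the memory profile propagates directly to predictions of the ecological process.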

  7. Theoretical evaluation of matrix effects on trapped atomic levels

    Energy Technology Data Exchange (ETDEWEB)

    Das, G.P.; Gruen, D.M.

    1986-06-01

    We suggest a theoretical model for calculating the matrix perturbation on the spectra of atoms trapped in rare gas systems. The model requires the ''potential curves'' of the diatomic system consisting of the trapped atom interacting with one from the matrix and relies on the approximation that the total matrix perturbation is a scalar sum of the pairwise interactions with each of the lattice sites. Calculations are presented for the prototype systems Na in Ar. Attempts are made to obtain ab initio estimates of the Jahn-Teller effects for excited states. Comparison is made with our recent Matrix-Isolation Spectroscopic (MIS) data. 10 refs., 3 tabs.

  8. Theoretical evaluation of matrix effects on trapped atomic levels

    International Nuclear Information System (INIS)

    Das, G.P.; Gruen, D.M.

    1986-06-01

    We suggest a theoretical model for calculating the matrix perturbation on the spectra of atoms trapped in rare gas systems. The model requires the ''potential curves'' of the diatomic system consisting of the trapped atom interacting with one from the matrix and relies on the approximation that the total matrix perturbation is a scalar sum of the pairwise interactions with each of the lattice sites. Calculations are presented for the prototype systems Na in Ar. Attempts are made to obtain ab initio estimates of the Jahn-Teller effects for excited states. Comparison is made with our recent Matrix-Isolation Spectroscopic (MIS) data. 10 refs., 3 tabs

  9. The cluster bootstrap consistency in generalized estimating equations

    KAUST Repository

    Cheng, Guang

    2013-03-01

    The cluster bootstrap resamples clusters or subjects instead of individual observations in order to preserve the dependence within each cluster or subject. In this paper, we provide a theoretical justification of using the cluster bootstrap for the inferences of the generalized estimating equations (GEE) for clustered/longitudinal data. Under the general exchangeable bootstrap weights, we show that the cluster bootstrap yields a consistent approximation of the distribution of the regression estimate, and a consistent approximation of the confidence sets. We also show that a computationally more efficient one-step version of the cluster bootstrap provides asymptotically equivalent inference. © 2012.
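
    A minimal sketch of the cluster-level resampling idea, with a plain mean standing in for the GEE fit and made-up data:

```python
import random

# Minimal sketch of the cluster bootstrap for clustered data (illustrative,
# not the paper's GEE machinery): resample whole clusters with replacement,
# recompute the statistic on each resample, and use the spread across
# resamples for inference. Resampling clusters rather than individual
# observations preserves the within-cluster dependence.
random.seed(1)
clusters = [[1.0, 1.2, 0.9], [2.1, 2.0], [1.5, 1.7, 1.6, 1.4], [0.8, 1.1]]

def overall_mean(cl):
    obs = [x for c in cl for x in c]
    return sum(obs) / len(obs)

boot_stats = []
for _ in range(2000):
    resample = [random.choice(clusters) for _ in clusters]  # clusters, not points
    boot_stats.append(overall_mean(resample))

boot_stats.sort()
lo, hi = boot_stats[50], boot_stats[1949]   # ~95% percentile interval
print(round(overall_mean(clusters), 2), lo < overall_mean(clusters) < hi)
```

    The one-step version mentioned in the abstract avoids refitting from scratch on every resample, which is what makes it computationally cheaper.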

  10. Resistance and Renewal in Theoretical Psychology

    DEFF Research Database (Denmark)

    Theoretical psychologists continue to challenge psychology, related disciplines and the work of other theoretical psychologists through a wide variety of activities that include conceptual clarification and creative theorizing. In many cases, these activities are experienced by the relevant ... Resistance and renewal form the overall theme for a selection of theoretical papers that is framed, in this iteration of the International Society for Theoretical Psychology's (ISTP) proceedings, by reflections on the 30-year history of the ISTP as well as by considerations of the future. The diversity and creativity of the work undertaken within theoretical psychology is further exemplified by papers on the history of the ISTP and theoretical psychology, a new paradigm for functional disorders, experimental introspection and techniques of self, and the performativity of psychological science ...

  11. Theoretical Studies of TE-Wave Propagation as a Diagnostic for Electron Cloud

    International Nuclear Information System (INIS)

    Penn, Gregory E.; Vay, Jean-Luc

    2010-01-01

    The propagation of TE waves is sensitive to the presence of an electron cloud primarily through phase shifts generated by the altered dielectric function, but can also lead to polarization changes and other effects, especially in the presence of magnetic fields. These effects are studied theoretically and also through simulations using WARP. Examples are shown related to CesrTA parameters, and used to observe different regimes of operation as well as to validate estimates of the phase shift.

  12. A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Lili; Tian, Li; Wang, Desheng

    2008-10-31

    In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R^3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.

  13. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    KAUST Repository

    Pan, B.

    2016-03-22

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work. © 2016 Elsevier Ltd. All rights reserved.

  14. Theoretical Support of Heat Exchanger Experiments of the EU-CONGA Project

    International Nuclear Information System (INIS)

    Herranz, L. E.; Lopez Jimenez, J.; Munoz-Cobo, J. L.; Palomo, M. J.

    1999-01-01

    In this report the work carried out within Work Package 5 of the CONGA project, under the auspices of the European Union, is presented. Primarily focused on studying, from a theoretical perspective, the degradation of heat exchangers to be used in the next generation of European reactor containments under accident conditions, and particularly the effect of aerosols, the objective has been met quite satisfactorily and the results can be summed up in three specific items: - A mathematical model of a mechanistic nature that has been encapsulated into a FORTRAN code (HTCFOUL) capable of simulating condensation heat transfer to an internally cooled horizontal finned tube. - A theoretical correlation depending upon non-dimensional variables and numbers which embodies most of the HTCFOUL physics and gives results within 20% of actual HTCFOUL estimates. - A reasonable interpretation of the major measurements and observations obtained in the heat exchanger experiments performed within Work Package 2 of the CONGA project. (Author) 55 refs

  15. Two-compartment, two-sample technique for accurate estimation of effective renal plasma flow: Theoretical development and comparison with other methods

    International Nuclear Information System (INIS)

    Lear, J.L.; Feyerabend, A.; Gregory, C.

    1989-01-01

    Discordance between effective renal plasma flow (ERPF) measurements from radionuclide techniques that use single versus multiple plasma samples was investigated. In particular, the authors determined whether effects of variations in distribution volume (Vd) of iodine-131 iodohippurate on measurement of ERPF could be ignored, an assumption implicit in the single-sample technique. The influence of Vd on ERPF was found to be significant, a factor indicating an important and previously unappreciated source of error in the single-sample technique. Therefore, a new two-compartment, two-plasma-sample technique was developed on the basis of the observations that while variations in Vd occur from patient to patient, the relationship between intravascular and extravascular components of Vd and the rate of iodohippurate exchange between the components are stable throughout a wide range of physiologic and pathologic conditions. The new technique was applied in a series of 30 studies in 19 patients. Results were compared with those achieved with the reference, single-sample, and slope-intercept techniques. The new two-compartment, two-sample technique yielded estimates of ERPF that more closely agreed with the reference multiple-sample method than either the single-sample or slope-intercept techniques

  16. Theoretical Studies of Strongly Interacting Fine Particle Systems

    Science.gov (United States)

    Fearon, Michael

    Available from UMI in association with The British Library. A theoretical analysis of the time dependent behaviour of a system of fine magnetic particles as a function of applied field and temperature was carried out. The model used was based on a theory assuming Neel relaxation with a distribution of particle sizes. This theory predicted a linear variation of S_{max} with temperature and a finite intercept, which is not reflected by experimental observations. The remanence curves of strongly interacting fine-particle systems were also investigated theoretically. It was shown that the Henkel plot of the dc demagnetisation remanence vs the isothermal remanence is a useful representation of interactions. The form of the plot was found to be a reflection of the magnetic and physical microstructure of the material, which is consistent with experimental data. The relationship between the Henkel plot and the noise of a particulate recording medium, another property dependent on the microstructure, is also considered. The Interaction Field Factor (IFF), a single parameter characterising the non-linearity of the Henkel plot, is investigated. These results are consistent with a previous experimental study. Finally the results of the noise power spectral density for erased and saturated recording media are presented, so that characterisation of interparticle interactions may be carried out with greater accuracy.

  17. Capacity Estimation and Near-Capacity Achieving Techniques for Digitally Modulated Communication Systems

    DEFF Research Database (Denmark)

    Yankov, Metodi Plamenov

    This thesis studies potential improvements that can be made to the current data rates of digital communication systems. The physical layer of the system will be investigated in band-limited scenarios, where high spectral efficiency is necessary in order to meet the ever-growing data rate demand. Several issues are tackled, both with theoretical and more practical aspects. The theoretical part is mainly concerned with estimating the constellation constrained capacity (CCC) of channels with discrete input, which is an inherent property of digital communication systems. The channels under investigation will include linear interference channels of high dimensionality (such as multiple-input multiple-output), and the non-linear optical fiber channel, which has been gathering more and more attention from the information theory community in recent years. In both cases novel CCC estimates and lower ...

  18. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  19. PREFACE: Conference of Theoretical Physics and Nonlinear Phenomena (CTPNP) 2014: ''From Universe to String's Scale''

    Science.gov (United States)

    2014-10-01

    Theoretical physics is the first step in the development of science and technology. For more than 100 years it has delivered new and sophisticated discoveries which have changed human views of their surroundings and the universe. Theoretical physics has also revealed that the governing law in our universe is not deterministic, and it is undoubtedly the foundation of our modern civilization. Despite its importance, research in theoretical physics is not well advanced in some developing countries such as Indonesia. This workshop provides a formal meeting in Indonesia devoted to the field of theoretical physics and is organized to cover all subjects of theoretical physics as well as nonlinear phenomena, in order to create a gathering place for theorists in Indonesia and surrounding countries, to motivate young physicists to keep doing active research in the field, and to encourage constructive communication among the community members. Following the success of the ten previous meetings in this conference series, the eleventh conference was held at Sebelas Maret University (UNS), Surakarta, Indonesia on 15 February 2014, and was followed by the School of Advanced Physics at Gadjah Mada University (UGM), Yogyakarta, on 16-17 February 2014. The conference is expected to provide distinguished experts and students from various research fields of theoretical physics and nonlinear phenomena in Indonesia, as well as from other continents, the opportunity to present their work and to enhance contacts among them. The introduction to the conference is continued in the pdf.

  20. Parametric cost estimation for space science missions

    Science.gov (United States)

    Lillie, Charles F.; Thompson, Bruce E.

    2008-07-01

    Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs of future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
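
    A hedged sketch of a mass-based CER of the common power-law form cost = a * mass^b, fit in log-log space to synthetic "historical" missions; none of the numbers are Northrop Grumman's, and real CERs involve many more parameters:

```python
import math

# Sketch of a mass-based cost estimating relationship (CER): assume the
# common power-law form cost = a * mass**b and fit it by OLS in log-log
# space to (synthetic) historical missions. Data and coefficients are
# illustrative only.
history = [(250, 120.0), (500, 210.0), (900, 340.0), (1500, 520.0)]  # (kg, $M)

xs = [math.log(m) for m, _ in history]
ys = [math.log(c) for _, c in history]
n = len(history)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

def cer_cost(mass_kg):
    """Most-probable cost estimate ($M) for a new mission of the given mass."""
    return a * mass_kg ** b

print(round(b, 2), round(cer_cost(700)))
```

    A cost range, rather than a point value, would come from the scatter of the historical points about the fitted line.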

  1. Time-varying coefficient estimation in SURE models. Application to portfolio management

    DEFF Research Database (Denmark)

    Casas, Isabel; Ferreira, Eva; Orbe, Susan

    This paper provides a detailed analysis of the asymptotic properties of a kernel estimator for a Seemingly Unrelated Regression Equations model with time-varying coefficients (tv-SURE) under very general conditions. Theoretical results together with a simulation study differentiate the cases ...

  2. Determination of the Boltzmann constant with cylindrical acoustic gas thermometry: new and previous results combined

    Science.gov (United States)

    Feng, X. J.; Zhang, J. T.; Lin, H.; Gillis, K. A.; Mehl, J. B.; Moldover, M. R.; Zhang, K.; Duan, Y. N.

    2017-10-01

    We report a new determination of the Boltzmann constant k_B using a cylindrical acoustic gas thermometer. We determined the length of the copper cavity from measurements of its microwave resonance frequencies. This contrasts with our previous work (Zhang et al 2011 Int. J. Thermophys. 32 1297; Lin et al 2013 Metrologia 50 417; Feng et al 2015 Metrologia 52 S343) that determined the length of a different cavity using two-color optical interferometry. In this new study, the half-widths of the acoustic resonances are closer to their theoretical values than in our previous work. Despite significant changes in resonator design and the way in which the cylinder length is determined, the value of k_B is substantially unchanged. We combined this result with our four previous results to calculate a global weighted mean of our k_B determinations. The calculation follows CODATA's method (Mohr and Taylor 2000 Rev. Mod. Phys. 72 351) for obtaining the weighted mean value of k_B that accounts for the correlations among the measured quantities in this work and in our four previous determinations of k_B. The weighted mean k̂_B is 1.380 6484(28) × 10^-23 J K^-1 with a relative standard uncertainty of 2.0 × 10^-6. The corresponding value of the universal gas constant is 8.314 459(17) J K^-1 mol^-1 with a relative standard uncertainty of 2.0 × 10^-6.
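
    A simplified sketch of the weighted-mean combination, ignoring the inter-determination correlations that the CODATA method accounts for; the values below are made up for illustration:

```python
# Sketch of combining repeated determinations into a weighted mean. For
# independent results the weights are inverse variances; the paper's
# CODATA-style calculation additionally accounts for correlations among
# the determinations, which this simplified sketch omits.
results = [(1.3806490e-23, 3.7e-29), (1.3806484e-23, 3.0e-29),
           (1.3806479e-23, 4.1e-29)]  # (k_B estimate, standard uncertainty)

weights = [1.0 / u ** 2 for _, u in results]
wmean = sum(w * k for (k, _), w in zip(results, weights)) / sum(weights)
wunc = (1.0 / sum(weights)) ** 0.5   # uncertainty of the weighted mean

# The combined uncertainty is smaller than any single determination's
print(f"{wmean:.7e}", wunc < min(u for _, u in results))
```

    With correlated inputs the simple inverse-variance weights are replaced by weights derived from the full covariance matrix, which generally enlarges the combined uncertainty relative to this naive formula.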

  3. Jumps and Betas: A New Framework for Disentangling and Estimating Systematic Risks

    DEFF Research Database (Denmark)

    Todorov, Viktor; Bollerslev, Tim

    We provide a new theoretical framework for disentangling and estimating sensitivity towards systematic diffusive and jump risks in the context of factor pricing models. Our estimates of the sensitivities towards systematic risks, or betas, are based on the notion of increasingly finer sampled ... market portfolio, we find the estimated diffusive and jump betas with respect to the market to be quite different for many of the stocks. Our findings have direct and important implications for empirical asset pricing finance and practical portfolio and risk management decisions.

  4. Worldwide Anti-Money Laundering Regulation: Estimating the Costs and Benefits

    OpenAIRE

    D. Masciandaro; R. Barone

    2008-01-01

    The aim of this article is to offer a simple framework for estimating the benefits and costs of anti-ML regulation, based on a prudent estimation of the economic value of worldwide money laundering. Using the multiplier model of the relationship between criminal markets revenues and money laundering activities and data for 2004, the value of money laundering is equal to US$1.2 trillion (2.7% of the world GDP), while the maximum theoretical benefit in combating money laundering using financial...

  5. Approximation to estimation of critical state

    International Nuclear Information System (INIS)

    Orso, Jose A.; Rosario, Universidad Nacional

    2011-01-01

    The position of the control rod for the critical state of a nuclear reactor depends on several factors, including, but not limited to, the temperature and the configuration of the fuel elements inside the core. Therefore, the position cannot be known in advance. In this paper, theoretical estimates are developed to obtain an equation for calculating the control rod position at the critical state (approximation to critical) of the RA-4 nuclear reactor; the equation will be used to create software that performs the estimation from the count rate of the reactor pulse channel and the control rod length (in cm). For the final estimate of the approximation to critical, an experimentally obtained function giving the control rod reactivity as a function of position is used; this is manipulated mathematically to obtain a linear function that yields the length of control rod that must be withdrawn to bring the reactor to the critical position. (author) [es]
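
    Assuming the standard inverse-multiplication (1/M) extrapolation that approach-to-critical estimates typically use (an assumption here, since the abstract does not spell out the formula), the idea can be sketched as:

```python
# Sketch of the inverse-multiplication (1/M) approach-to-critical estimate:
# as the control rod is withdrawn, 1/M = C0/C falls roughly linearly toward
# zero, and the critical rod position is the extrapolated zero crossing.
# Positions and count rates below are made-up data.
positions_cm = [10.0, 20.0, 30.0, 40.0]        # rod withdrawal
counts = [100.0, 133.0, 200.0, 400.0]          # pulse-channel count rates

inv_m = [counts[0] / c for c in counts]        # 1/M relative to the first point

# Linear extrapolation of 1/M vs position through the last two points
(x1, y1), (x2, y2) = (positions_cm[-2], inv_m[-2]), (positions_cm[-1], inv_m[-1])
slope = (y2 - y1) / (x2 - x1)
critical_position = x2 - y2 / slope            # position where 1/M -> 0
print(round(critical_position, 1))
```

    In practice the extrapolation is repeated after each withdrawal step, and the predicted critical position converges as the reactor approaches criticality.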

  6. Wireless Information-Theoretic Security in an Outdoor Topology with Obstacles: Theoretical Analysis and Experimental Measurements

    Directory of Open Access Journals (Sweden)

    Dagiuklas Tasos

    2011-01-01

    Full Text Available This paper presents a Wireless Information-Theoretic Security (WITS) scheme, which has recently been introduced as a robust physical layer-based security solution, especially for infrastructureless networks. An autonomic network of moving users was implemented via 802.11n nodes of an ad hoc network for an outdoor topology with obstacles. Obstructed-Line-of-Sight (OLOS) and Non-Line-of-Sight (NLOS) propagation scenarios were examined. Low-speed user movement was considered, so that Doppler spread could be discarded. A transmitter and a legitimate receiver exchanged information in the presence of a moving eavesdropper. Average Signal-to-Noise Ratio (SNR) values were acquired for both the main and the wiretap channel, and the Probability of Nonzero Secrecy Capacity was calculated based on the theoretical formula. Experimental results validate the theoretical findings, stressing the importance of user location and mobility schemes for the robustness of Wireless Information-Theoretic Security, and call for further theoretical analysis.
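
    For independent Rayleigh-fading main and wiretap channels, a well-known closed form (Barros and Rodrigues, 2006) gives the Probability of Nonzero Secrecy Capacity from the two average SNRs; whether the paper uses exactly this formula is an assumption here, and the SNR values are made up:

```python
# Hedged illustration of a closed form for the Probability of Nonzero
# Secrecy Capacity: for independent Rayleigh-fading main and wiretap
# channels, P(Cs > 0) = avg_snr_main / (avg_snr_main + avg_snr_eve).
def prob_nonzero_secrecy(avg_snr_main, avg_snr_eve):
    return avg_snr_main / (avg_snr_main + avg_snr_eve)

# Legitimate receiver enjoys a 10 dB better average SNR than the eavesdropper
main_snr = 10 ** (20 / 10)   # 20 dB average SNR, linear scale
eve_snr = 10 ** (10 / 10)    # 10 dB
print(round(prob_nonzero_secrecy(main_snr, eve_snr), 3))
```

    The formula makes the paper's point about user location directly visible: the probability depends only on the ratio of the two average SNRs, which user positions and mobility determine.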

  7. Strength Estimation of Die Cast Beams Considering Equivalent Porous Defects

    Energy Technology Data Exchange (ETDEWEB)

    Park, Moon Shik [Hannam Univ., Daejeon (Korea, Republic of)

    2017-05-15

    As a shop practice, a strength estimation method for die cast parts is suggested, in which various defects such as pores can be allowed. The equivalent porosity is evaluated by combining the stiffness data from a simple elastic test at the part level during the shop practice and the theoretical stiffness data, which are defect free. A porosity equation is derived from Eshelby's inclusion theory. Then, using the Mori-Tanaka method, the porosity value is used to draw a stress-strain curve for the porous material. In this paper, the Hollomon equation is used to capture the strain hardening effect. This stress-strain curve can be used to estimate the strength of a die cast part with porous defects. An elastoplastic theoretical solution is derived for the three-point bending of a die cast beam by using the plastic hinge method as a reference solution for a part with porous defects.
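
    A sketch of the Hollomon strain-hardening law mentioned above, with a crude (1 - porosity) knock-down standing in for the Mori-Tanaka reduction; all constants are illustrative assumptions, not the paper's fitted values:

```python
# Sketch of the Hollomon law used to capture strain hardening,
# sigma = K * eps_plastic**n, with a porosity knock-down applied to the
# flow stress. The (1 - porosity) scaling is a crude placeholder for the
# Mori-Tanaka homogenization; K, n and porosity are illustrative.
def hollomon_stress(eps_plastic, K=300.0, n=0.2):
    """Flow stress (MPa) of the defect-free material."""
    return K * eps_plastic ** n

def porous_stress(eps_plastic, porosity):
    """Flow stress (MPa) reduced by an equivalent porosity fraction."""
    return (1.0 - porosity) * hollomon_stress(eps_plastic)

dense = hollomon_stress(0.05)
porous = porous_stress(0.05, porosity=0.04)   # 4% equivalent porosity
print(round(dense, 1), round(porous, 1))
```

    The resulting stress-strain curve for the porous material is what feeds the plastic-hinge solution for the three-point bending strength estimate.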

  8. Comparisons Between Experimental and Semi-theoretical Cutting Forces of CCS Disc Cutters

    Science.gov (United States)

    Xia, Yimin; Guo, Ben; Tan, Qing; Zhang, Xuhui; Lan, Hao; Ji, Zhiyong

    2018-05-01

    This paper focuses on comparisons between the experimental and semi-theoretical forces of CCS disc cutters acting on different rocks. The experimental forces obtained from LCM tests were used to evaluate the prediction accuracy of a semi-theoretical CSM model. The results show that the CSM model reliably predicts the normal forces acting on red sandstone and granite, but underestimates the normal forces acting on marble. Some additional LCM test data from the literature were collected to further explore the ability of the CSM model to predict the normal forces acting on rocks of different strengths. The CSM model underestimates the normal forces acting on soft rocks, semi-hard rocks and hard rocks by approximately 38, 38 and 10%, respectively, but very accurately predicts those acting on very hard and extremely hard rocks. A calibration factor is introduced to modify the normal forces estimated by the CSM model. The overall trend of the calibration factor is characterized by an exponential decrease with increasing rock uniaxial compressive strength. The mean fitting ratios between the normal forces estimated by the modified CSM model and the experimental normal forces acting on soft rocks, semi-hard rocks and hard rocks are 1.076, 0.879 and 1.013, respectively. The results indicate that the prediction accuracy and the reliability of the CSM model have been improved.

  9. Studies of Credit and Equity Markets with Concepts of Theoretical Physics

    CERN Document Server

    Münnix, Michael C

    2011-01-01

    Financial markets are becoming increasingly complex. The financial crisis of 2008 to 2009 has demonstrated that an improved understanding of the mechanisms embedded in the market is a key requirement for the estimation of financial risk. Recently, concepts of theoretical physics, in particular concepts of complex systems, have proven to be very useful in this regard. Michael C. Münnix analyses the statistical dependencies in financial markets and develops mathematical models using concepts and methods from physics. The author focuses on aspects that played a key role in the emergence of the recent financial crisis: estimation of credit risk, dynamics of statistical dependencies, and correlations on small time-scales. He visualizes the findings for various large-scale empirical studies of market data. The results give novel insights into the mechanisms of financial markets and allow conclusions on how to reduce financial risk significantly.

  10. Preoperative screening: value of previous tests.

    Science.gov (United States)

    Macpherson, D S; Snow, R; Lofgren, R P

    1990-12-15

    To determine the frequency of tests done in the year before elective surgery that might substitute for preoperative screening tests, and to determine the frequency of test results that change from a normal value to a value likely to alter perioperative management. Retrospective cohort analysis of computerized laboratory data (complete blood count; sodium, potassium, and creatinine levels; prothrombin time; and partial thromboplastin time). Urban tertiary care Veterans Affairs Hospital. Consecutive sample of 1109 patients who had elective surgery in 1988. At admission, 7549 preoperative tests were done, 47% of which duplicated tests performed in the previous year. Of 3096 previous results that were normal as defined by the hospital reference range and done closest to, but before, admission (median interval, 2 months), 13 (0.4%; 95% CI, 0.2% to 0.7%) repeat values were outside a range considered acceptable for surgery. Most of the abnormalities were predictable from the patient's history, and most were not noted in the medical record. Of 461 previous tests that were abnormal, 78 (17%; CI, 13% to 20%) repeat values at admission were outside a range considered acceptable for surgery (P less than 0.001 for the comparison of the frequency of clinically important abnormalities between patients with normal and with abnormal previous results). Physicians evaluating patients preoperatively could safely substitute the previous test results analyzed in this study for preoperative screening tests if the previous tests are normal and no obvious indication for retesting is present.

  11. Phase difference estimation method based on data extension and Hilbert transform

    International Nuclear Information System (INIS)

    Shen, Yan-lin; Tu, Ya-qing; Chen, Lin-jun; Shen, Ting-ao

    2015-01-01

    To improve the precision and anti-interference performance of phase difference estimation for non-integer periods of sampling signals, a phase difference estimation method based on data extension and Hilbert transform is proposed. Estimated phase difference is obtained by means of data extension, Hilbert transform, cross-correlation, auto-correlation, and weighted phase average. Theoretical analysis shows that the proposed method suppresses the end effects of Hilbert transform effectively. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of phase difference estimation and has better performance of phase difference estimation than the correlation, Hilbert transform, and data extension-based correlation methods, which contribute to improving the measurement precision of the Coriolis mass flowmeter. (paper)
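    As a rough illustration of the Hilbert-transform stage of such a method, the sketch below recovers the phase difference between two sampled sinusoids covering a non-integer number of periods. Trimming the ends stands in (crudely) for the paper's data-extension step, and all signal parameters are invented:

```python
import numpy as np
from scipy.signal import hilbert

def phase_difference(x, y, trim=0.1):
    # Build analytic signals with the Hilbert transform, then read the
    # phase difference off the averaged cross-product. A fraction 'trim'
    # of samples is dropped at each end to suppress the transform's end
    # effects (the paper's data extension serves the same purpose).
    zx, zy = hilbert(x), hilbert(y)
    n = len(x)
    k = int(trim * n)
    cross = zx[k:n - k] * np.conj(zy[k:n - k])
    return np.angle(np.mean(cross))

fs, f0, true_phi = 1000.0, 13.7, 0.8     # non-integer number of periods
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * f0 * t)
y = np.sin(2 * np.pi * f0 * t - true_phi)
print(phase_difference(x, y))            # approximately 0.8
```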

  12. Mesoscopic structure prediction of nanoparticle assembly and coassembly: Theoretical foundation

    KAUST Repository

    Hur, Kahyun

    2010-01-01

    In this work, we present a theoretical framework that unifies polymer field theory and density functional theory in order to efficiently predict ordered nanostructure formation of systems having considerable complexity in terms of molecular structures and interactions. We validate our approach by comparing its predictions with previous simulation results for model systems. We illustrate the flexibility of our approach by applying it to hybrid systems composed of block copolymers and ligand coated nanoparticles. We expect that our approach will enable the treatment of multicomponent self-assembly with a level of molecular complexity that approaches experimental systems. © 2010 American Institute of Physics.

  13. Development and validation of a theoretical test of proficiency for video-assisted thoracoscopic surgery (VATS) lobectomy

    DEFF Research Database (Denmark)

    Savran, Mona M; Hansen, Henrik Jessen; Horsleben Petersen, René

    2015-01-01

    BACKGROUND: Testing stimulates learning, improves long-term retention, and promotes technical performance. No purpose-orientated test of competence in the theoretical aspects of VATS lobectomy has previously been presented. The purpose of this study was, therefore, to develop and gather validity evidence for such a test. … The experienced surgeons performed significantly better than the novices.

  14. Signal Validation: A Survey of Theoretical and Experimental Studies at the KFKI Atomic Energy Research Institute

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.

    1996-07-01

    The aim of this survey paper is to collect the results of the theoretical and experimental work that has been done on early failure and change detection, signal/detector validation, parameter estimation, and system identification problems in the Applied Reactor Physics Department of the KFKI-AEI. The present paper reports different applications of the theoretical methods using real and computer-simulated data. The final goal is twofold: 1) to better understand the mathematical/physical background of the applied methods and 2) to integrate the useful algorithms into a large, complex diagnostic software system. The software is under development; a preliminary version (called JEDI) has already been completed. (author)

  15. Comparisons of theoretically predicted transport from ion temperature gradient instabilities to L-mode tokamak experiments

    International Nuclear Information System (INIS)

    Kotschenreuther, M.; Wong, H.V.; Lyster, P.L.; Berk, H.L.; Denton, R.; Miner, W.H.; Valanju, P.

    1991-12-01

    The theoretical transport from kinetic micro-instabilities driven by ion temperature gradients in a sheared slab is compared to experimentally inferred transport in L-mode tokamaks. Low-noise gyrokinetic simulation techniques are used to obtain the ion thermal transport coefficient χ. This χ is much smaller than in experiments, and so cannot explain L-mode confinement. Previous predictions based on fluid models gave much greater χ than experiments. Linear and nonlinear comparisons with the fluid model show that it greatly overestimates transport for experimental parameters. In addition, disagreements among previous analytic and simulation calculations of χ in the fluid model are reconciled.

  16. Interior Gradient Estimates for Nonuniformly Parabolic Equations II

    Directory of Open Access Journals (Sweden)

    Lieberman Gary M

    2007-01-01

    We prove interior gradient estimates for a large class of parabolic equations in divergence form. Using some simple ideas, we prove these estimates for several types of equations that are not amenable to previous methods. In particular, we have no restrictions on the maximum eigenvalue of the coefficient matrix, and we obtain interior gradient estimates for the so-called false mean curvature equation.

  17. Hash functions and information theoretic security

    DEFF Research Database (Denmark)

    Bagheri, Nasoor; Knudsen, Lars Ramkilde; Naderi, Majid

    2009-01-01

    Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic...

  18. Thermal remote sensing data for estimating evapotranspiration on a ...

    African Journals Online (AJOL)

    As an alternative to in-situ hydro-physical measurements and theoretical, computer-based models, a method that applies the thermal infrared band (band 6) of Landsat TM data to the estimation of ET on a basin-wide scale is presented. Journal of Applied Science and Technology (JAST), Vol. 5, Nos. 1 & 2, 2000, pp. 98 - 107

  19. Bayesian estimation of the discrete coefficient of determination.

    Science.gov (United States)

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
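    For intuition, the classical plug-in estimator of the discrete CoD, against which Bayesian estimators like those above are typically compared, can be sketched in a few lines; the toy data are invented:

```python
import numpy as np

def discrete_cod(x, y):
    # Plug-in estimate of CoD = 1 - eps/eps0: eps0 is the error of the
    # best constant (majority-vote) predictor of y, and eps the error of
    # the majority-vote predictor within each bin of the predictor x.
    x, y = np.asarray(x), np.asarray(y)
    eps0 = 1.0 - np.bincount(y).max() / len(y)
    eps = 0.0
    for v in np.unique(x):
        yy = y[x == v]
        eps += (len(yy) - np.bincount(yy).max()) / len(y)
    return 1.0 - eps / eps0 if eps0 > 0 else 0.0

x = np.array([0, 0, 0, 1, 1, 1, 1, 0])
y = np.array([0, 0, 1, 1, 1, 1, 0, 0])
print(discrete_cod(x, y))   # prints 0.5
```

    The Bayesian MMSE and OBP estimators of the paper replace these empirical bin counts with posterior expectations under a prior over the joint distribution of x and y.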

  20. Risks of cardiovascular adverse events and death in patients with previous stroke undergoing emergency noncardiac, nonintracranial surgery

    DEFF Research Database (Denmark)

    Christiansen, Mia N.; Andersson, Charlotte; Gislason, Gunnar H.

    2017-01-01

    Background: The outcomes of emergent noncardiac, nonintracranial surgery in patients with previous stroke remain unknown. Methods: All emergency surgeries performed in Denmark (2005 to 2011) were analyzed according to the time elapsed between previous ischemic stroke and surgery. The risks of 30-day mortality and major adverse cardiovascular events were estimated as odds ratios (ORs) and 95% CIs using adjusted logistic regression models in a priori defined groups (reference was no previous stroke). In patients undergoing surgery immediately (within 1 to 3 days) or early after stroke (within 4 to 14 days) … and general anesthesia was less frequent in patients with previous stroke (all P …). Risks of major adverse cardiovascular events and mortality were high for patients with stroke less than 3 months before surgery (20.7 and 16.4% events; OR = 4.71 [95% CI, 4.18 to 5.32] and 1.65 [95% CI, 1.45 to 1.88]), and remained…

  1. Systematic Testing of Belief-Propagation Estimates for Absolute Free Energies in Atomistic Peptides and Proteins.

    Science.gov (United States)

    Donovan-Maiye, Rory M; Langmead, Christopher J; Zuckerman, Daniel M

    2018-01-09

    Motivated by the extremely high computing costs associated with estimates of free energies for biological systems using molecular simulations, we further the exploration of existing "belief propagation" (BP) algorithms for fixed-backbone peptide and protein systems. The precalculation of pairwise interactions among discretized libraries of side-chain conformations, along with representation of protein side chains as nodes in a graphical model, enables direct application of the BP approach, which requires only ∼1 s of single-processor run time after the precalculation stage. We use a "loopy BP" algorithm, which can be seen as an approximate generalization of the transfer-matrix approach to highly connected (i.e., loopy) graphs, and it has previously been applied to protein calculations. We examine the application of loopy BP to several peptides as well as the binding site of the T4 lysozyme L99A mutant. The present study reports on (i) the comparison of the approximate BP results with estimates from unbiased estimators based on the Amber99SB force field; (ii) investigation of the effects of varying library size on BP predictions; and (iii) a theoretical discussion of the discretization effects that can arise in BP calculations. The data suggest that, despite their approximate nature, BP free-energy estimates are highly accurate; indeed, they never fall outside confidence intervals from unbiased estimators for the systems where independent results could be obtained. Furthermore, we find that libraries of sufficiently fine discretization (which diminish library-size sensitivity) can be obtained with standard computing resources in most cases. Altogether, the extremely low computing times and accurate results suggest the BP approach warrants further study.

  2. Experimental validation of pulsed column inventory estimators

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Weh, R.; Eiben, K.; Dander, T.; Hakkila, E.A.

    1991-01-01

    Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contactors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and have applied computer codes developed at Clemson University to analyze these data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs

  3. Low-dose computed tomography image restoration using previous normal-dose scan

    International Nuclear Information System (INIS)

    Ma, Jianhua; Huang, Jing; Feng, Qianjin; Zhang, Hua; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2011-01-01

    Purpose: In current computed tomography (CT) examinations, the associated x-ray radiation dose is of a significant concern to patients and operators. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) or kVp parameter (or delivering less x-ray energy to the body) as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and the noise would propagate into the CT image if no adequate noise control is applied during image reconstruction. Since a normal-dose high diagnostic CT image scanned previously may be available in some clinical applications, such as CT perfusion imaging and CT angiography (CTA), this paper presents an innovative way to utilize the normal-dose scan as a priori information to induce signal restoration of the current low-dose CT image series. Methods: Unlike conventional local operations on neighboring image voxels, nonlocal means (NLM) algorithm utilizes the redundancy of information across the whole image. This paper adapts the NLM to utilize the redundancy of information in the previous normal-dose scan and further exploits ways to optimize the nonlocal weights for low-dose image restoration in the NLM framework. The resulting algorithm is called the previous normal-dose scan induced nonlocal means (ndiNLM). Because of the optimized nature of nonlocal weights calculation, the ndiNLM algorithm does not depend heavily on image registration between the current low-dose and the previous normal-dose CT scans. Furthermore, the smoothing parameter involved in the ndiNLM algorithm can be adaptively estimated based on the image noise relationship between the current low-dose and the previous normal-dose scanning protocols. Results: Qualitative and quantitative evaluations were carried out on a physical phantom as well as clinical abdominal and brain perfusion CT scans in terms of accuracy and resolution properties. 
The gain by the use…
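    The core of the ndiNLM idea, replacing each low-dose pixel by a patch-similarity-weighted average of pixels drawn from the previous normal-dose image, can be sketched as below. This is a bare-bones nonlocal-means loop, not the authors' optimized weight calculation or their adaptive choice of the smoothing parameter; the window sizes and `h` are arbitrary:

```python
import numpy as np

def ndi_nlm(low, prior, patch=1, search=2, h=0.15):
    # Restore each low-dose pixel as a weighted average of pixels from
    # the previous normal-dose image 'prior'; weights compare the
    # (2*patch+1)^2 patch around the low-dose pixel with patches around
    # candidate normal-dose pixels inside a (2*search+1)^2 window.
    pad = patch + search
    L = np.pad(np.asarray(low, dtype=float), pad, mode="reflect")
    P = np.pad(np.asarray(prior, dtype=float), pad, mode="reflect")
    out = np.zeros(np.asarray(low).shape, dtype=float)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            ci, cj = i + pad, j + pad
            ref = L[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            num = den = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = P[ni - patch:ni + patch + 1,
                             nj - patch:nj + patch + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    num += w * P[ni, nj]
                    den += w
            out[i, j] = num / den
    return out
```

    Because the weights only ask whether patches look alike, the method tolerates modest misregistration between the two scans, which is the property the abstract highlights.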

  4. Theoretical aspects of fibre laser cutting

    Energy Technology Data Exchange (ETDEWEB)

    Mahrle, A; Beyer, E, E-mail: achim.mahrle@iws.fraunhofer.d [University of Technology Dresden, Institute for Surface and Manufacturing Technology, PO Box, 01062 Dresden (Germany)

    2009-09-07

    Fibre lasers offer distinct advantages over established laser systems with respect to power efficiency, beam guidance and beam quality. Consequently, the potential of these new laser beam sources will be increasingly exploited for laser cutting applications that are conventionally carried out with CO{sub 2} lasers. However, theoretical estimates of the effective absorptivity at the cut front suggest that the shorter wavelength of the fibre laser, in combination with its high focusability, is primarily advantageous for thin sheet metal cutting, whereas the CO{sub 2} laser is probably still capable of cutting thicker materials more efficiently. This surprising result is a consequence of the absorptivity behaviour of metals, which shows essential quantitative differences at the corresponding wavelengths of the two laser sources as a function of the angle of incidence between the laser beam and the material to be cut. In light of the revealed dependences, solution strategies for improving the efficiency of fibre laser cutting of thicker metal sheets are suggested.

  5. Ant-inspired density estimation via random walks.

    Science.gov (United States)

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
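    The encounter-rate idea can be simulated directly. In the sketch below (all parameters arbitrary), anonymous walkers on a torus grid count co-located others after each step; dividing by the number of steps recovers, on average, the expected number of other agents per cell, (k - 1)/n^2:

```python
import random

def estimate_density(n=50, agents=250, steps=400, seed=1):
    # 'agents' anonymous walkers on an n-by-n torus; each counts how
    # many others share its cell after every step, then divides its
    # encounter total by the number of steps.
    rng = random.Random(seed)
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    pos = [(rng.randrange(n), rng.randrange(n)) for _ in range(agents)]
    encounters = [0] * agents
    for _ in range(steps):
        new_pos = []
        for (x, y) in pos:
            dx, dy = rng.choice(moves)
            new_pos.append(((x + dx) % n, (y + dy) % n))
        pos = new_pos
        occ = {}
        for p in pos:
            occ[p] = occ.get(p, 0) + 1
        for i, p in enumerate(pos):
            encounters[i] += occ[p] - 1   # others in the same cell
    # Population-average of the per-agent estimates:
    return sum(e / steps for e in encounters) / agents

print(estimate_density())   # near 249 / 50**2 = 0.0996
```

    The paper's contribution is precisely that individual agents (not just the population average) converge quickly despite the correlated re-collisions this simulation exhibits.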

  6. Black hole state counting in loop quantum gravity: a number-theoretical approach.

    Science.gov (United States)

    Agulló, Iván; Barbero G, J Fernando; Díaz-Polo, Jacobo; Fernández-Borja, Enrique; Villaseñor, Eduardo J S

    2008-05-30

    We give an efficient method, combining number-theoretic and combinatorial ideas, to exactly compute black hole entropy in the framework of loop quantum gravity. Along the way we provide a complete characterization of the relevant sector of the spectrum of the area operator, including degeneracies, and explicitly determine the number of solutions to the projection constraint. We use a computer implementation of the proposed algorithm to confirm and extend previous results on the detailed structure of the black hole degeneracy spectrum.

  7. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model.

    Science.gov (United States)

    Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S; Breen, Lauren J; Witt, Regina R; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin

    2016-01-01

    Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework titled the Health Services Workplace Environmental Resilience Model (HSWERM) is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed using a literary synthesis drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with key constructs for psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking to the nurse and then impacting on personal resilience and workplace outcomes; its use has the potential to increase staff retention and quality of patient care.

  8. Theoretical physics 6 quantum mechanics : basics

    CERN Document Server

    Nolting, Wolfgang

    2017-01-01

    This textbook offers a clear and comprehensive introduction to the basics of quantum mechanics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series, thus developing the physical understanding further on to quantized states. The first part of the book introduces wave equations while exploring the Schrödinger equation and the hydrogen atom. More complex themes are covered in the second part of the book, which describes the Dirac formulism of quantum mechanics. Ideally suited to undergraduate students with some grounding in classical mechanics and electrodynamics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful German editions, the eight volumes of this...

  9. Pain judgements of patients' relatives: examining the use of social contract theory as theoretical framework.

    Science.gov (United States)

    Kappesser, Judith; de C Williams, Amanda C

    2008-08-01

    Observer underestimation of others' pain was studied using a concept from evolutionary psychology: a cheater detection mechanism from social contract theory, applied to relatives and friends of chronic pain patients. 127 participants estimated characters' pain intensity and fairness of behaviour after reading four vignettes describing characters suffering from pain. Four cues were systematically varied: the character continuing or stopping liked tasks; continuing or stopping disliked tasks; availability of medical evidence; and pain intensity as rated by characters. Results revealed that pain intensity and the two behavioural variables had an effect on pain estimates: high pain self-reports and stopping all tasks led to high pain estimates; pain was estimated to be lowest when characters stopped disliked but continued with liked tasks. This combination was also rated least fair. Results support the use of social contract theory as a theoretical framework to explore pain judgements.

  10. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    Science.gov (United States)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l =m =2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ , m ). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
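    The "linear superposition of exponentially damped sinusoids" can be written down directly. The sketch below builds a two-mode ringdown and measures how much of the waveform is lost by keeping only the fundamental; the amplitudes, frequencies and damping times are illustrative stand-ins, not real quasinormal-mode values:

```python
import numpy as np

def ringdown(t, modes):
    # Linear superposition of exponentially damped sinusoids
    # (quasinormal modes); each mode is (A, f, tau, phi).
    h = np.zeros_like(t)
    for A, f, tau, phi in modes:
        h += A * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi)
    return h

t = np.linspace(0.0, 0.05, 5000)          # seconds
fundamental = (1.0, 250.0, 4.0e-3, 0.0)   # A, f [Hz], tau [s], phase
overtone    = (0.8, 240.0, 1.3e-3, 1.0)   # overtones damp faster
signal = ringdown(t, [fundamental, overtone])
model = ringdown(t, [fundamental])
# Fraction of the waveform unaccounted for by the fundamental alone:
mismatch = np.linalg.norm(signal - model) / np.linalg.norm(signal)
print(mismatch)
```

    Because overtones decay fastest, the residual is concentrated at early times, which is why fits restricted to the fundamental bias the inferred frequency and damping time there.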

  11. Estimating the prevalence of infertility in Canada

    Science.gov (United States)

    Bushnik, Tracey; Cook, Jocelynn L.; Yuzpe, A. Albert; Tough, Suzanne; Collins, John

    2012-01-01

    BACKGROUND Over the past 10 years, there has been a significant increase in the use of assisted reproductive technologies in Canada, however, little is known about the overall prevalence of infertility in the population. The purpose of the present study was to estimate the prevalence of current infertility in Canada according to three definitions of the risk of conception. METHODS Data from the infertility component of the 2009–2010 Canadian Community Health Survey were analyzed for married and common-law couples with a female partner aged 18–44. The three definitions of the risk of conception were derived sequentially starting with birth control use in the previous 12 months, adding reported sexual intercourse in the previous 12 months, then pregnancy intent. Prevalence and odds ratios of current infertility were estimated by selected characteristics. RESULTS Estimates of the prevalence of current infertility ranged from 11.5% (95% CI 10.2, 12.9) to 15.7% (95% CI 14.2, 17.4). Each estimate represented an increase in current infertility prevalence in Canada when compared with previous national estimates. Couples with lower parity (0 or 1 child) had significantly higher odds of experiencing current infertility when the female partner was aged 35–44 years versus 18–34 years. Lower odds of experiencing current infertility were observed for multiparous couples regardless of age group of the female partner, when compared with nulliparous couples. CONCLUSIONS The present study suggests that the prevalence of current infertility has increased since the last time it was measured in Canada, and is associated with the age of the female partner and parity. PMID:22258658

  12. Theoretical analysis about early detection of hepatocellular carcinoma by medical imaging procedure

    Energy Technology Data Exchange (ETDEWEB)

    Odano, Ikuo; Hinata, Hiroshi; Hara, Keiji; Sakai, Kunio [Niigata Univ. (Japan). School of Medicine

    1983-04-01

    It is well known that patients with chronic hepatitis and liver cirrhosis are frequently affected by hepatocellular carcinoma (hepatoma); they are called the high-risk group for hepatoma. In order to detect a small hepatoma, it is reasonable to perform screening examinations on these high-risk patients. The optimal screening interval, however, has not been established. In this report, a theoretical analysis was made to estimate the optimal screening interval for imaging procedures such as ultrasonography, x-ray computed tomography and scintigraphy. By the analysis of eight cases, the mean doubling time of hepatoma was estimated at about four months (73 - 143 days). If we want to detect a hepatoma not greater than 3.0 cm in diameter, a medical screening procedure combining ultrasonography and scintigraphy should be performed about once per nine months.
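    The arithmetic behind an interval of this order can be reconstructed under stated assumptions. Since volume scales as the cube of diameter, a volume doubling time \(T_d\) means the diameter doubles every \(3\,T_d\); assuming (hypothetically) a detection limit of about 2.0 cm, the time to grow to the 3.0 cm target is

```latex
t \;=\; 3\,T_d \,\log_2\!\frac{d_2}{d_1}
  \;=\; 3 \times 4~\text{months} \times \log_2\!\frac{3.0~\text{cm}}{2.0~\text{cm}}
  \;\approx\; 7~\text{months},
```

    of the same order as the nine-month interval; a smaller detection limit would lengthen the growth window and hence the allowable interval.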

  13. Econometric estimation of investment utilization, adjustment costs, and technical efficiency in Danish pig farms using hyperbolic distance functions

    DEFF Research Database (Denmark)

    Henningsen, Arne; Fabricius, Ole; Olsen, Jakob Vesterlund

    2014-01-01

    Based on a theoretical microeconomic model, we econometrically estimate investment utilization, adjustment costs, and technical efficiency in Danish pig farms based on a large unbalanced panel dataset. As our theoretical model indicates that adjustment costs are caused both by increased inputs … of investment activities by the maximum likelihood method so that we can estimate the adjustment costs that occur in the year of the investment and the three following years. Our results show that investments are associated with significant adjustment costs, especially in the year in which the investment…

  14. A comparison of SAR ATR performance with information theoretic predictions

    Science.gov (United States)

    Blacknell, David

    2003-09-01

    Performance assessment of automatic target detection and recognition algorithms for SAR systems (or indeed any other sensors) is essential if the military utility of the system / algorithm mix is to be quantified. This is a relatively straightforward task if extensive trials data from an existing system are used. However, a crucial requirement is to assess the potential performance of novel systems as a guide to procurement decisions. This task is no longer straightforward since a hypothetical system cannot provide experimental trials data. QinetiQ has previously developed a theoretical technique for classification algorithm performance assessment based on information theory. The purpose of the study presented here has been to validate this approach. To this end, experimental SAR imagery of targets has been collected using the QinetiQ Enhanced Surveillance Radar to allow algorithm performance assessments as a number of parameters are varied. In particular, performance comparisons can be made for (i) resolutions up to 0.1 m, (ii) single channel versus polarimetric, (iii) targets in the open versus targets in scrubland, and (iv) use versus non-use of camouflage. The change in performance as these parameters are varied has been quantified from the experimental imagery whilst the information theoretic approach has been used to predict the expected variation of performance with parameter value. A comparison of these measured and predicted assessments has revealed the strengths and weaknesses of the theoretical technique as will be discussed in the paper.

  15. Exchange coupling interactions in a Fe6 complex: A theoretical study using density functional theory

    International Nuclear Information System (INIS)

    Cauchy, Thomas; Ruiz, Eliseo; Alvarez, Santiago

    2006-01-01

    Theoretical methods based on density functional theory have been employed to analyze the exchange interactions in an Fe6 complex. The calculated exchange coupling constants are consistent with an S=5 ground state and agree well with those reported previously for other Fe(III) polynuclear complexes. Ferromagnetic interactions may appear through exchange pathways formed by two bridging hydroxo or oxo ligands

  16. Theoretical extension and experimental demonstration of spectral compression in second-harmonic generation by Fresnel-inspired binary phase shaping

    Science.gov (United States)

    Li, Baihong; Dong, Ruifang; Zhou, Conghua; Xiang, Xiao; Li, Yongfang; Zhang, Shougang

    2018-05-01

    Selective two-photon microscopy and high-precision nonlinear spectroscopy rely on efficient spectral compression at the desired frequency. Previously, a Fresnel-inspired binary phase shaping (FIBPS) method was theoretically proposed for spectral compression of two-photon absorption and second-harmonic generation (SHG) with a square-chirped pulse. Here, we theoretically show that the FIBPS can introduce a negative quadratic frequency phase (negative chirp) by analogy with the spatial-domain phase function of a Fresnel zone plate. Thus, the previous theoretical model can be extended to the case where the pulse is transform-limited and of any symmetric spectral shape. As an example, we experimentally demonstrate spectral compression in SHG by FIBPS for a Gaussian transform-limited pulse and show good agreement with the theory. Given the fundamental pulse bandwidth, a narrower SHG bandwidth with relatively high intensity can be obtained by simply increasing the number of binary phases. The experimental results also verify that our method is superior to that proposed in [Phys. Rev. A 46, 2749 (1992), 10.1103/PhysRevA.46.2749]. This method will significantly facilitate the applications of selective two-photon microscopy and spectroscopy. Moreover, since it can introduce negative dispersion, it can also be generalized to other applications in the field of dispersion compensation.

  17. ON ESTIMATING FORCE-FREENESS BASED ON OBSERVED MAGNETOGRAMS

    International Nuclear Information System (INIS)

    Zhang, X. M.; Zhang, M.; Su, J. T.

    2017-01-01

    It is a common practice in the solar physics community to test whether or not measured photospheric or chromospheric vector magnetograms are force-free, using the Maxwell stress as a measure. Some previous studies have suggested that magnetic fields of active regions in the solar chromosphere are close to being force-free whereas there is no consistency among previous studies on whether magnetic fields of active regions in the solar photosphere are force-free or not. Here we use three kinds of representative magnetic fields (analytical force-free solutions, modeled solar-like force-free fields, and observed non-force-free fields) to discuss how measurement issues such as limited field of view (FOV), instrument sensitivity, and measurement error could affect the estimation of force-freeness based on observed magnetograms. Unlike previous studies that focus on discussing the effect of limited FOV or instrument sensitivity, our calculation shows that just measurement error alone can significantly influence the results of estimates of force-freeness, due to the fact that measurement errors in horizontal magnetic fields are usually ten times larger than those in vertical fields. This property of measurement errors, interacting with the particular form of a formula for estimating force-freeness, would result in wrong judgments of the force-freeness: a truly force-free field may be mistakenly estimated as being non-force-free and a truly non-force-free field may be estimated as being force-free. Our analysis calls for caution when interpreting estimates of force-freeness based on measured magnetograms, and also suggests that the true photospheric magnetic field may be further away from being force-free than it currently appears to be.
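The force-freeness estimate discussed above is commonly formed from the net Lorentz force implied by a magnetogram (a Low-1985-style criterion). A toy simulation, with invented field statistics rather than a real solar field, of the abstract's central point: horizontal-field noise roughly ten times the vertical-field noise can push an otherwise balanced field past the usual threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

def force_free_metric(bx, by, bz):
    """Normalized net Lorentz force implied by a vector magnetogram
    (a Low-1985-style criterion): the field is usually judged close to
    force-free when this ratio is small (a common threshold is ~0.1)."""
    fx = np.sum(bx * bz)
    fy = np.sum(by * bz)
    fz = np.sum(bz**2 - bx**2 - by**2) / 2
    f0 = np.sum(bx**2 + by**2 + bz**2) / 2
    return (abs(fx) + abs(fy) + abs(fz)) / f0

# Toy magnetogram whose horizontal and vertical magnetic energies balance,
# so the true metric is small (illustrative statistics, not a real field).
n = 256
bz = rng.normal(0.0, 100.0, (n, n))
bx = rng.normal(0.0, 100.0 / np.sqrt(2), (n, n))
by = rng.normal(0.0, 100.0 / np.sqrt(2), (n, n))
clean = force_free_metric(bx, by, bz)

# Measurement error with the property the abstract highlights: horizontal
# noise ~10x the vertical noise biases sum(bx^2 + by^2) upward and drags
# the metric past the force-free threshold.
noisy = force_free_metric(bx + rng.normal(0, 50, (n, n)),
                          by + rng.normal(0, 50, (n, n)),
                          bz + rng.normal(0, 5, (n, n)))
print(clean < 0.1 < noisy)
```

The bias enters through the fz term: squared noise in the horizontal components does not average to zero, so a truly force-free field is mistakenly reported as non-force-free, exactly the failure mode the abstract warns about.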

  18. Theoretical chemistry advances and perspectives

    CERN Document Server

    Eyring, Henry

    1980-01-01

    Theoretical Chemistry: Advances and Perspectives, Volume 5 covers articles concerning all aspects of theoretical chemistry. The book discusses the mean spherical approximation for simple electrolyte solutions; the representation of lattice sums as Mellin-transformed products of theta functions; and the evaluation of two-dimensional lattice sums by number theoretic means. The text also describes an application of contour integration; a lattice model of quantum fluid; as well as the computational aspects of chemical equilibrium in complex systems. Chemists and physicists will find the book useful.

  19. Event-based state estimation a stochastic perspective

    CERN Document Server

    Shi, Dawei; Chen, Tongwen

    2016-01-01

    This book explores event-based estimation problems. It shows how several stochastic approaches are developed to maintain estimation performance when sensors perform their updates at slower rates only when needed. The self-contained presentation makes this book suitable for readers with no more than a basic knowledge of probability analysis, matrix algebra and linear systems. The introduction and literature review provide information, while the main content deals with estimation problems from four distinct angles in a stochastic setting, using numerous illustrative examples and comparisons. The text elucidates both theoretical developments and their applications, and is rounded out by a review of open problems. This book is a valuable resource for researchers and students who wish to expand their knowledge and work in the area of event-triggered systems. At the same time, engineers and practitioners in industrial process control will benefit from the event-triggering technique that reduces communication costs ...

  20. Theoretical and experimental study of the dark signal in CMOS image sensors affected by neutron radiation from a nuclear reactor

    Science.gov (United States)

    Xue, Yuanyuan; Wang, Zujun; He, Baoping; Yao, Zhibin; Liu, Minbo; Ma, Wuying; Sheng, Jiangkun; Dong, Guantao; Jin, Junshan

    2017-12-01

    The CMOS image sensors (CISs) are irradiated with neutrons from a nuclear reactor. The dark signal in CISs affected by neutron radiation is studied theoretically and experimentally. The primary knock-on atom (PKA) energy spectra for 1 MeV incident neutrons are simulated by Geant4, and theoretical models for the mean dark signal, dark signal non-uniformity (DSNU) and dark signal distribution versus neutron fluence are established. The results are found to be in good agreement with the experimental outputs. Finally, the dark signal in the CISs under different neutron fluence conditions is estimated. This study provides theoretical and experimental evidence for the displacement damage effects on the dark signal of CISs.

  1. Theoretical model of ruminant adipose tissue metabolism in relation to the whole animal.

    Science.gov (United States)

    Baldwin, R L; Yang, Y T; Crist, K; Grichting, G

    1976-09-01

    Based on theoretical considerations and experimental data, estimates of the contributions of adipose tissue to energy expenditures in a lactating cow and a growing steer were developed. The estimates indicate that adipose energy expenditures range between 5 and 10% of total animal heat production, depending on productive function and diet. These energy expenditures can be partitioned among maintenance (3%), lipogenesis (1-5%), and lipolysis and triglyceride resynthesis (less than 1.0%). Specific sites at which acute and chronic effectors can act to produce changes in adipose function, and changes in adipose function produced by diet and during pregnancy, lactation and aging, were discussed, with emphasis placed on the need for additional definitive studies of specific interactions among pregnancy, diet, age, lactation and growth in producing ruminants.

  2. Maximum-likelihood estimation of recent shared ancestry (ERSA).

    Science.gov (United States)

    Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B

    2011-05-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.
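The core idea, estimating relationship degree by maximum likelihood over IBD segment data, can be caricatured in a few lines. This toy model treats segment lengths as exponential with mean ~100/meioses centimorgans; that constant, the invented data, and the omission of segment counts and background IBD are all illustrative simplifications, not ERSA's actual model:

```python
import math

def log_likelihood(segment_lengths_cm, meioses):
    """Toy log-likelihood of observed IBD segment lengths, modeled as
    exponential with mean ~100/meioses centimorgans (an illustrative
    simplification of ERSA's segment-length model)."""
    mean_len = 100.0 / meioses
    return sum(-math.log(mean_len) - length / mean_len
               for length in segment_lengths_cm)

# Invented segment data for a hypothetical pair; pick the meiosis count
# that maximizes the likelihood, mirroring ERSA's ML strategy in miniature.
segments = [42.0, 35.5, 51.2, 28.9]
best = max(range(2, 10), key=lambda a: log_likelihood(segments, a))
print(best)  # 3
```

Longer shared segments pull the estimate toward fewer meioses (closer relatives), which is the intuition behind using segment lengths, not just totals, as the data.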

  3. Department of Theoretical Physics - Overview

    International Nuclear Information System (INIS)

    Kwiecinski, J.

    2002-01-01

    Full text: Research activity of the Department of Theoretical Physics concerns theoretical high energy and elementary particle physics, intermediate energy particle physics, theoretical nuclear physics, theory of nuclear matter, theory of quark-gluon plasma and of relativistic heavy ion collisions, theoretical astrophysics and general physics. There is some emphasis on the phenomenological applications of the theoretical research yet the more formal problems are also considered. The detailed summary of the research projects and of the results obtained in various fields is given in the abstracts. Our Department successfully collaborates with other Departments of the Institute as well as with several scientific institutions both in Poland and abroad. In particular, members of our Department participate in the EC network which allows for the mobility of researchers. Several members of our Department have also participated in the research projects funded by the State Committee for Scientific Research. Besides pure research, members of our Department are also involved in graduate and undergraduate teaching activity both at our Institute and at other academic institutions in Cracow. At present, eight students are working towards their Ph.D. degrees under the supervision of senior members of the Department. (author)

  4. Department of Theoretical Physics - Overview

    International Nuclear Information System (INIS)

    Kwiecinski, J.

    2001-01-01

    Full text: Research activity of the Department of Theoretical Physics concerns theoretical high-energy and elementary particle physics, intermediate energy particle physics, theoretical nuclear physics, theory of nuclear matter, theory of quark-gluon plasma and relativistic heavy-ion collisions, theoretical astrophysics and general physics. There is some emphasis on the phenomenological applications of the theoretical research yet more formal problems are also considered. A detailed summary of the research projects and of the results obtained in various fields is given in the abstracts. Our Department actively collaborates with other Departments of the Institute as well as with several scientific institutions both in Poland and abroad. In particular, members of our Department participate in the EC network, which stimulates the mobility of researchers. Several members of our Department also participated in the research projects funded by the Polish Committee for Scientific Research (KBN). Besides pure research, members of our Department are also involved in graduate and undergraduate teaching activity at our Institute as well as at other academic institutions in Cracow. At present, nine students are working on their Ph.D. degrees under the supervision of senior members of the Department. (author)

  5. Estimating costs in the economic evaluation of medical technologies.

    Science.gov (United States)

    Luce, B R; Elixhauser, A

    1990-01-01

    The complexities and nuances of evaluating the costs associated with providing medical technologies are often underestimated by analysts engaged in economic evaluations. This article describes the theoretical underpinnings of cost estimation, emphasizing the importance of accounting for opportunity costs and marginal costs. The various types of costs that should be considered in an analysis are described; a listing of specific cost elements may provide a helpful guide to analysis. The process of identifying and estimating costs is detailed, and practical recommendations for handling the challenges of cost estimation are provided. The roles of sensitivity analysis and discounting are characterized, as are determinants of the types of costs to include in an analysis. Finally, common problems facing the analyst are enumerated with suggestions for managing these problems.

  6. An Entropic Estimator for Linear Inverse Problems

    Directory of Open Access Journals (Sweden)

    Amos Golan

    2012-05-01

    In this paper we examine an information-theoretic method for solving noisy linear inverse estimation problems which encompasses under a single framework a whole class of estimation methods. Under this framework, the prior information about the unknown parameters (when such information exists) and constraints on the parameters can be incorporated in the statement of the problem. The method builds on the basics of the maximum entropy principle and consists of transforming the original problem into the estimation of a probability density on an appropriate space naturally associated with the statement of the problem. This estimation method is generic in the sense that it provides a framework for analyzing non-normal models, is easy to implement, and is suitable for all types of inverse problems, such as small and/or ill-conditioned, noisy-data problems. First-order approximation, large-sample properties and convergence in distribution are developed as well. Analytical examples, and statistics for model comparisons and evaluations that are inherent to this method, are discussed and complemented with explicit examples.

  7. A Game Theoretical Approach to Hacktivism: Is Attack Likelihood a Product of Risks and Payoffs?

    Science.gov (United States)

    Bodford, Jessica E; Kwan, Virginia S Y

    2018-02-01

    The current study examines hacktivism (i.e., hacking to convey a moral, ethical, or social justice message) through a general game theoretic framework; that is, as a product of costs and benefits. Given the inherent risk of carrying out a hacktivist attack (e.g., legal action, imprisonment), it would be rational for the user to weigh these risks against perceived benefits of carrying out the attack. As such, we examined computer science students' estimations of risks, payoffs, and attack likelihood through a game theoretic design. Furthermore, this study aims at constructing a descriptive profile of potential hacktivists, exploring two predicted covariates of attack decision making, namely, peer prevalence of hacking and sex differences. Contrary to expectations, results suggest that participants' estimations of attack likelihood stemmed solely from expected payoffs, rather than subjective risks. Peer prevalence significantly predicted increased payoffs and attack likelihood, suggesting an underlying descriptive norm in social networks. Notably, we observed no sex differences in the decision to attack, nor in the factors predicting attack likelihood. Implications for policymakers and the understanding and prevention of hacktivism are discussed, as are the possible ramifications of widely communicated payoffs over potential risks in hacking communities.
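The cost-benefit framing above can be written as a one-line expected-utility model; the probabilities and magnitudes below are invented purely for illustration:

```python
def expected_utility(p_success, payoff, p_caught, cost):
    """Rational-choice sketch: an attack is 'worth it' only when the
    expected benefit exceeds the expected cost."""
    return p_success * payoff - p_caught * cost

# Illustrative numbers only; the study's finding was that participants'
# stated attack likelihood tracked the payoff term and largely ignored
# the risk term of a model like this one.
u = expected_utility(p_success=0.6, payoff=100, p_caught=0.3, cost=500)
print(u)  # -90.0: attacking is irrational under these numbers
```

A fully rational actor would decline whenever the value is negative; the abstract's result is that observed decisions were driven by the first term alone.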

  8. Photoelectron Angular Distributions of Transition Metal Dioxide Anions - a joint experimental and theoretical study

    Science.gov (United States)

    Iordanov, Ivan; Gunaratne, Dasitha; Harmon, Christopher; Sofo, Jorge; Castleman, A. W., Jr.

    2012-02-01

    Angular-resolved photoelectron spectroscopy (PES) studies of the MO2- (M=Ti, Zr, Hf, Co, Rh) clusters are presented for the first time along with theoretical calculations of their properties. We confirm previously reported non-angular PES results for the vertical detachment energies (VDE), vibrational energies and geometric structures of these clusters and further explore the effect of the 'lanthanide contraction' on the MO2- clusters by comparing the electronic spectra of 4d and 5d transition metal dioxides. Angular-resolved PES provides the angular momentum contributions to the HOMO of these clusters and we use theoretical calculations to examine the HOMO and compare to our experimental results. First-principles calculations are done using both density functional theory (DFT) and the coupled-cluster, singles, doubles and triples (CCSD(T)) methods.

  9. Observation of theoretical power saturation by the KHI free electron laser device

    International Nuclear Information System (INIS)

    Oda, Fumihiko; Yokoyama, Minoru; Kawai, Masayuki; Miura, Hidenori; Koike, Hidehito; Sobajima, Masaaki; Nomaru, Keiji; Kuroda, Haruo

    2002-01-01

    The saturation of free electron laser (FEL) output power by the KHI-FEL device was achieved on 3 October 2000 at the wavelength of 9.3 μm. The FEL device has since operated successfully in the wavelength region between 4.0 and 16.0 μm. The macropulse average FEL power of 37.5 kW, which is the theoretical saturation level, has been obtained at the wavelength of 7.9 μm. The net FEL gain was estimated to be 16%. (author)

  10. Thermodynamic estimation: Ionic materials

    International Nuclear Information System (INIS)

    Glasser, Leslie

    2013-01-01

    Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy
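Volume-Based Thermodynamics reduces to a linear correlation with formula-unit volume. A sketch using the commonly quoted coefficients for standard entropy of ionic solids (quoted from memory, so verify against the VBT literature before relying on them):

```python
def standard_entropy_vbt(v_formula_nm3, k=1360.0, c=15.0):
    """Volume-Based Thermodynamics (VBT) estimate of standard molar
    entropy for an ionic solid: S(298 K) ~ k*V_m + c, with V_m the
    formula-unit volume in nm^3 and S in J K^-1 mol^-1. The default
    coefficients are the commonly quoted literature values for ionic
    salts, reproduced here from memory."""
    return k * v_formula_nm3 + c

# NaCl: cubic cell edge ~0.564 nm containing 4 formula units, so the
# formula-unit volume is a^3 / 4; the estimate lands near the measured
# standard entropy of ~72 J K^-1 mol^-1.
v_nacl = 0.564**3 / 4
print(round(standard_entropy_vbt(v_nacl), 1))  # 76.0
```

The appeal, as the abstract notes, is that a single easily measured quantity (the formula volume) stands in for detailed structural information.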

  11. Malware Function Estimation Using API in Initial Behavior

    OpenAIRE

    KAWAGUCHI, Naoto; OMOTE, Kazumasa

    2017-01-01

    Malware proliferation has become a serious threat to the Internet in recent years. Most current malware are subspecies of existing malware that have been automatically generated by illegal tools. To conduct an efficient analysis of malware, estimating their functions in advance is effective when prioritizing which malware to analyze. However, estimating malware functions has been difficult due to the increasing sophistication of malware. Indeed, previous studies do not estimate the...

  12. Theoretical and observational assessments of flare efficiencies

    International Nuclear Information System (INIS)

    Leahey, D.M.; Preston, K.; Strosher, M.

    2000-01-01

    During the processing of hydrocarbon materials, gaseous wastes are flared in an effort to burn the waste material completely and thereby leave behind very few by-products. Complete combustion, however, is rarely achieved because entrainment of air into the region of combusting gases restricts flame sizes to less than optimum values. The resulting flames are often too small to dissipate the amount of heat associated with complete (100 per cent) combustion efficiency. Flaring therefore often results in emissions of gases with more complex molecular structures than just carbon dioxide and water. Polycyclic aromatic hydrocarbons and volatile organic compounds, which are indicative of incomplete combustion, are often associated with flaring. This theoretical study of flame efficiencies was based on knowledge of the full range of chemical reactions and associated kinetics. In this study, equations developed by Leahey and Schroeder were used to estimate flame lengths, areas and volumes as functions of flare stack exit velocity, stoichiometric mixing ratio and wind speed. This was followed by an estimate of the heat released as part of the combustion process, derived from the flame dimensions together with an assumed flame temperature of 1200 K. Combustion efficiencies were then obtained by taking the ratio of estimated actual heat release values to those associated with complete combustion. It was concluded that combustion efficiency decreases significantly as wind speed increases from 1 to 6 m/s; thereafter, combustion efficiencies level off at values between 10 and 15 per cent. Propane and ethane were found to burn more efficiently than methane or hydrogen sulfide. 24 refs., 4 tabs., 1 fig., 1 append

  13. Revisiting maximum-a-posteriori estimation in log-concave models: from differential geometry to decision theory

    OpenAIRE

    Pereyra, Marcelo

    2016-01-01

    Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in many areas of data science such as mathematical imaging and machine learning, where high dimensionality is addressed by using models that are log-concave and whose posterior mode can be computed efficiently by using convex optimisation algorithms. However, despite its success and rapid adoption, MAP estimation is not theoretically well understood yet, and the prevalent view is that it is generally not proper ...

  14. Theoretical investigations on the high light yield of the LuI3:Ce scintillator

    International Nuclear Information System (INIS)

    Vasil'ev, A.N.; Iskandarova, I.M.; Scherbinin, A.V.; Markov, I.A.; Bagatur'yants, A.A.; Potapkin, B.V.; Srivastava, A.M.; Vartuli, J.S.; Duclos, S.J.

    2009-01-01

    The extremely high scintillation efficiency of lutetium iodide doped with cerium is explained as the result of at least three factors controlling the energy transfer from the host matrix to the activator. We propose and theoretically validate the possibility of a new channel of energy transfer to excitons and directly to cerium, namely an Auger process in which a Lu 4f hole relaxes to a valence-band hole with simultaneous creation of an additional exciton or excitation of cerium. This process should be efficient in LuI3 and inefficient in LuCl3. To justify this channel, we perform calculations of the density of states using a periodic plane-wave density functional approach. The second factor is the increase of the efficiency of valence hole capture by cerium in the row LuCl3-LuBr3-LuI3. The third is the increase of the efficiency of energy transfer from self-trapped excitons to cerium ions in the same row. The latter two factors are verified by cluster ab initio calculations. We estimate both the relaxation of these excitations and the barriers for the diffusion of self-trapped holes (STH) and self-trapped excitons (STE). The performed estimations theoretically justify the high yield of the LuI3:Ce3+ scintillator.

  15. Methods for Estimation of Market Power in Electric Power Industry

    Science.gov (United States)

    Turcik, M.; Oleinikova, I.; Junghans, G.; Kolcun, M.

    2012-01-01

    The article addresses the topical issue of the newly arisen market power phenomenon in the electric power industry. The authors point out the importance of effective instruments and methods for credible estimation of market power on a liberalized electricity market, as well as the forms and consequences of market power abuse. The fundamental principles and methods of market power estimation are given along with the most common relevant indicators. Furthermore, a proposal for determining the relevant market, taking into account the specific features of the power system, and a theoretical example of estimating the residual supply index (RSI) in the electricity market are given.
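The residual supply index mentioned above has a simple closed form: the fraction of demand the rest of the market could cover without a given supplier. A sketch with hypothetical market figures:

```python
def residual_supply_index(total_capacity_mw, supplier_capacity_mw, demand_mw):
    """RSI for one supplier: the fraction of demand that could be met
    without that supplier's capacity. RSI < 1 means the supplier is
    pivotal (demand cannot be met without it); values below ~1.1 are
    often read as a sign of potential market power."""
    return (total_capacity_mw - supplier_capacity_mw) / demand_mw

# Hypothetical market: 10 GW of total capacity, a dominant supplier
# owning 3 GW, and demand of 8 GW.
rsi = residual_supply_index(10_000, 3_000, 8_000)
print(rsi)  # 0.875 -> the supplier is pivotal
```

The index is attractive for power markets precisely because it uses capacity and demand data rather than observed prices, which can already be distorted by the market power being measured.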

  16. Theoretical Study of Irradiation Effects in Close Binaries

    Directory of Open Access Journals (Sweden)

    Srinivasa Rao, M.

    2009-06-01

    The effect of irradiation is studied in a close binary system assuming that the secondary component is a point source moving in a circular orbit. The irradiation effects are calculated on the atmosphere of the primary component in a 3-dimensional Cartesian coordinate geometry. In treating the reflection effect theoretically, the total radiation $S_\mathrm{T}$ is obtained as the sum of the radiation of (1) the effect of irradiation on the primary component, which is calculated by using a one-dimensional rod model ($S_\mathrm{r}$), and (2) the self radiation of the primary component, which is calculated by using the solution of the radiative transfer equation in spherical symmetry ($S_\mathrm{s}$). The radiation field is estimated along the line of sight of the observer at infinity. It is shown how the radiation field changes depending on the position of the secondary component.

  17. Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.

    Science.gov (United States)

    Nguyen, Hien D; Wood, Ian A

    2016-04-01

    Boltzmann machines (BMs) are a class of binary neural networks for which there have been numerous proposed methods of estimation. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates, which are consistent in the probabilistic sense. In this brief, we investigate the properties of MPLE for the fully visible BMs further, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to the current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.
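The pseudolikelihood objective whose maximizer the brief analyzes can be written down directly for a small fully visible BM with ±1 units. This sketch grid-searches the MPLE of a single coupling on synthetic data; it is a toy illustration of the estimator, not the paper's implementation:

```python
import math

def log_pseudolikelihood(data, w, b):
    """Log-pseudolikelihood of a fully visible Boltzmann machine with
    +/-1 units: sum over samples and units of
    log sigma(2 * x_i * (b_i + sum_j w_ij x_j)),
    where `w` is a symmetric coupling matrix with zero diagonal."""
    n = len(b)
    ll = 0.0
    for x in data:
        for i in range(n):
            field = b[i] + sum(w[i][j] * x[j] for j in range(n) if j != i)
            ll += -math.log1p(math.exp(-2.0 * x[i] * field))
    return ll

# Two positively coupled units: samples mostly agree. Grid-search the MPLE
# of the single coupling w12 (biases fixed at 0 for the illustration).
data = [(1, 1)] * 40 + [(-1, -1)] * 40 + [(1, -1)] * 10 + [(-1, 1)] * 10
grid = [k / 100 for k in range(0, 201)]
w_hat = max(grid, key=lambda v: log_pseudolikelihood(data, [[0, v], [v, 0]], [0, 0]))
print(w_hat)  # 0.69 on this grid (the analytic optimum is ln(4)/2 ~ 0.693)
```

Because each conditional is a logistic regression, the objective is concave in the parameters; the consistency and asymptotic normality results in the brief describe the sampling behavior of exactly this kind of maximizer.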

  18. Set-theoretic methods in control

    CERN Document Server

    Blanchini, Franco

    2015-01-01

    The second edition of this monograph describes the set-theoretic approach for the control and analysis of dynamic systems, both from a theoretical and practical standpoint.  This approach is linked to fundamental control problems, such as Lyapunov stability analysis and stabilization, optimal control, control under constraints, persistent disturbance rejection, and uncertain systems analysis and synthesis.  Completely self-contained, this book provides a solid foundation of mathematical techniques and applications, extensive references to the relevant literature, and numerous avenues for further theoretical study. All the material from the first edition has been updated to reflect the most recent developments in the field, and a new chapter on switching systems has been added.  Each chapter contains examples, case studies, and exercises to allow for a better understanding of theoretical concepts by practical application. The mathematical language is kept to the minimum level necessary for the adequate for...

  19. Quantitative Estimation of Transmitted and Reflected Lamb Waves at Discontinuity

    International Nuclear Information System (INIS)

    Lim, Hyung Jin; Sohn, Hoon

    2010-01-01

    For the application of Lamb wave to structural health monitoring(SHM), understanding its physical characteristic and interaction between Lamb wave and defect of the host structure is an important issue. In this study, reflected, transmitted and mode converted Lamb waves at discontinuity of a plate structure were simulated and the amplitude ratios are calculated theoretically using Modal decomposition method. The predicted results were verified comparing with finite element method(FEM) and experimental results simulating attached PZTs. The result shows that the theoretical prediction is close to the FEM and the experimental verification. Moreover, quantitative estimation method was suggested using amplitude ratio of Lamb wave at discontinuity

  20. Robust bearing estimation for 3-component stations

    International Nuclear Information System (INIS)

    CLAASSEN, JOHN P.

    2000-01-01

    A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The algorithm was applied to a small but challenging set of events in a seismically active region. It demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings
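    The polarization-analysis core of single-station bearing estimation can be sketched in a few lines (a generic eigenvector method, not the SEEC algorithm itself; the synthetic arrival, noise level, and 40-degree back-azimuth below are assumed for illustration):

```python
import numpy as np

# Synthetic horizontal-component P-wave arrival from an assumed
# 40-degree back-azimuth, with additive noise.
rng = np.random.default_rng(2)
true_az = np.deg2rad(40.0)
t = np.linspace(0.0, 1.0, 200)
amp = np.cos(2 * np.pi * 5 * t)                      # rectilinear pulse
north = amp * np.cos(true_az) + 0.05 * rng.normal(size=t.size)
east = amp * np.sin(true_az) + 0.05 * rng.normal(size=t.size)

# The dominant eigenvector of the horizontal covariance matrix points
# along the particle-motion direction (a 180-degree ambiguity remains,
# which real algorithms resolve with the vertical component).
C = np.cov(np.vstack([north, east]))
w, v = np.linalg.eigh(C)
vn, ve = v[:, np.argmax(w)]
bearing = np.degrees(np.arctan2(ve, vn)) % 180.0
print(round(float(bearing), 1))   # close to the true 40 degrees
```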

  1. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    Directory of Open Access Journals (Sweden)

    Ross S Williamson

    2015-04-01

    Full Text Available Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
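    The equivalence can be illustrated numerically: maximizing the Poisson log-likelihood of an LNP model recovers the informative stimulus axis that MID's single-spike information criterion targets. A minimal sketch with simulated data, assuming an exponential nonlinearity and illustrative optimizer settings:

```python
import numpy as np

# Simulate a linear-nonlinear-Poisson (LNP) neuron: rate = exp(k.x - 1).
rng = np.random.default_rng(0)
D, N = 8, 20000
k_true = rng.normal(size=D)
k_true /= np.linalg.norm(k_true)
X = rng.normal(size=(N, D))                 # white-noise stimuli
y = rng.poisson(np.exp(X @ k_true - 1.0))   # spike counts per bin

# Normalized-gradient ascent on the Poisson log-likelihood
#   sum_t [ y_t * eta_t - exp(eta_t) ],  eta_t = k . x_t - 1.
k_hat = np.zeros(D)
for _ in range(400):
    eta = X @ k_hat - 1.0
    grad = X.T @ (y - np.exp(eta))
    k_hat += 0.02 * grad / np.linalg.norm(grad)

k_hat /= np.linalg.norm(k_hat)
print(float(np.dot(k_hat, k_true)))   # near 1: ML recovers the MID axis
```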

  2. Factors determining early internationalization of entrepreneurial SMEs: Theoretical approach

    Directory of Open Access Journals (Sweden)

    Agne Matiusinaite

    2015-12-01

    Full Text Available Purpose – This study extends the scientific discussion of the early internationalization of SMEs. The main purpose of this paper is to develop a theoretical framework for investigating the factors determining early internationalization of international new ventures. Design/methodology/approach – The conceptual framework is built on the analysis and synthesis of the scientific literature. Findings – This paper presents different factors that determine early internationalization of international new ventures. These factors are divided into entrepreneurial, organizational and contextual factors. We argue that early internationalization of international new ventures is defined by the entrepreneurial characteristics and previous experience of the entrepreneur, opportunity recognition and exploitation, risk tolerance, the specifics of the organization, involvement in networks, and contextual factors. The study showed that only the interaction between factors and categories has an effect on business development and the successful implementation of early internationalization. Research limitations/implications – The research was conducted on the theoretical basis of the scientific literature. Future studies could include a practical confirmation or refutation of this allocation of factors. Originality/value – The originality of this study lies in the finding that a factor by itself has a limited effect on early internationalization. Only the interplay of categories and factors has a positive impact on the early internationalization of entrepreneurial SMEs.

  3. Fuel Burn Estimation Using Real Track Data

    Science.gov (United States)

    Chatterji, Gano B.

    2011-01-01

    A procedure for estimating fuel burned based on actual flight track data and on drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust or a calibrated airspeed/Mach profile, which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show a negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
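    The fuel-flow step can be sketched as follows. The Base of Aircraft Data (BADA) models jet thrust-specific fuel consumption as growing linearly with true airspeed; the coefficients Cf1 and Cf2 below are illustrative placeholders, not real BADA values for any aircraft type:

```python
# BADA-style jet fuel flow: thrust-specific fuel consumption rises
# linearly with true airspeed. Cf1, Cf2 are illustrative placeholders.
def fuel_flow_kg_per_min(thrust_kN, tas_kt, Cf1=0.7, Cf2=1000.0):
    eta = Cf1 * (1.0 + tas_kt / Cf2)   # kg/(min*kN)
    return eta * thrust_kN

def fuel_burned(samples, dt_min=1.0):
    """Integrate fuel flow along track points of (thrust_kN, tas_kt)."""
    return sum(fuel_flow_kg_per_min(T, v) * dt_min for T, v in samples)

# Thrust would come from the estimated drag in steady flight.
track = [(60.0, 450.0), (58.0, 452.0), (57.0, 451.0)]
print(round(fuel_burned(track), 1))   # 177.7 kg over three minutes
```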

  4. Modeling goals and functions of control and safety systems - theoretical foundations and extensions of MFM

    International Nuclear Information System (INIS)

    Lind, M.

    2005-10-01

    Multilevel Flow Modeling (MFM) has proven to be an effective modeling tool for reasoning about plant failure and control strategies and is currently exploited for operator support in diagnosis and on-line alarm analysis. Previous MFM research focused on representing goals and functions of process plants which generate, transform and distribute mass and energy. However, only limited consideration has been given to the problems of modeling the control systems. Control functions are indispensable for operating any industrial plant, but modeling of control system functions has proven to be a more challenging problem than modeling functions of energy and mass processes. The problems were discussed by Lind, and tentative solutions have been proposed but were not investigated in depth until recently, partly due to the lack of an appropriate theoretical foundation. The purposes of the present report are to show that such a theoretical foundation for modeling goals and functions of control systems can be built from concepts and theories of action developed by Von Wright, and to show how this theoretical foundation can be used to extend MFM with concepts for modeling control systems. The theoretical foundations have been presented in detail elsewhere by the present author without the particular focus on modeling control actions and MFM adopted here. (au)

  5. Studies of the tautomeric equilibrium of 1,3-thiazolidine-2-thione: Theoretical and experimental approaches

    Energy Technology Data Exchange (ETDEWEB)

    Abbehausen, Camilla; Paiva, Raphael E.F. de [Institute of Chemistry, University of Campinas - UNICAMP, P.O. Box 6154, 13083-970 Campinas, SP (Brazil); Formiga, Andre L.B., E-mail: formiga@iqm.unicamp.br [Institute of Chemistry, University of Campinas - UNICAMP, P.O. Box 6154, 13083-970 Campinas, SP (Brazil); Corbi, Pedro P. [Institute of Chemistry, University of Campinas - UNICAMP, P.O. Box 6154, 13083-970 Campinas, SP (Brazil)

    2012-10-26

    Highlights: ► Tautomeric equilibrium in solution. ► Spectroscopic and theoretical studies. ► UV-Vis theoretical and experimental spectra. ► {sup 1}H NMR theoretical and experimental spectra. -- Abstract: The tautomeric equilibrium of the thione/thiol forms of 1,3-thiazolidine-2-thione was studied by nuclear magnetic resonance, infrared and ultraviolet-visible spectroscopies. Density functional theory was used to support the experimental data and indicates the predominance of the thione tautomer in the solid state, in agreement with previously reported crystallographic data. In solution, the tautomeric equilibrium was evaluated using {sup 1}H NMR at different temperatures in four deuterated solvents: acetonitrile, dimethylsulfoxide, chloroform and methanol. The equilibrium constants, K = (thiol)/(thione), and Gibbs free energies were obtained by integration of the N-bonded hydrogen signals at each temperature for each solvent, excluding methanol. The endothermic tautomerization is entropy-driven, and the combined effect of solvent and temperature can be used to achieve almost 50% thiol concentration in solution. The nature of the electronic transitions was investigated theoretically, and the assignment of the bands was made using time-dependent DFT, as was the influence of solvent on the energy of the most important bands of the spectra.

  6. Modeling goals and functions of control and safety systems -theoretical foundations and extensions of MFM

    Energy Technology Data Exchange (ETDEWEB)

    Lind, M. [Oersted - DTU, Kgs. Lyngby (Denmark)

    2005-10-01

    Multilevel Flow Modeling (MFM) has proven to be an effective modeling tool for reasoning about plant failure and control strategies and is currently exploited for operator support in diagnosis and on-line alarm analysis. Previous MFM research focused on representing goals and functions of process plants which generate, transform and distribute mass and energy. However, only limited consideration has been given to the problems of modeling the control systems. Control functions are indispensable for operating any industrial plant, but modeling of control system functions has proven to be a more challenging problem than modeling functions of energy and mass processes. The problems were discussed by Lind, and tentative solutions have been proposed but were not investigated in depth until recently, partly due to the lack of an appropriate theoretical foundation. The purposes of the present report are to show that such a theoretical foundation for modeling goals and functions of control systems can be built from concepts and theories of action developed by Von Wright, and to show how this theoretical foundation can be used to extend MFM with concepts for modeling control systems. The theoretical foundations have been presented in detail elsewhere by the present author without the particular focus on modeling control actions and MFM adopted here. (au)

  7. Theoretical model of gravitational perturbation of current collector axisymmetric flow field

    Science.gov (United States)

    Walker, John S.; Brown, Samuel H.; Sondergaard, Neal A.

    1990-05-01

    Some designs of liquid-metal current collectors in homopolar motors and generators are essentially rotating liquid-metal fluids in cylindrical channels with free surfaces and will, at critical rotational speeds, become unstable. An investigation at David Taylor Research Center is being performed to understand the role of gravity in modifying this ejection instability. Some gravitational effects can be theoretically treated by perturbation techniques on the axisymmetric base flow of the liquid metal. This leads to a modification of previously calculated critical-current-collector ejection values neglecting gravity effects. The purpose of this paper is to document the derivation of the mathematical model which determines the perturbation of the liquid-metal base flow due to gravitational effects. Since gravity is a small force compared with the centrifugal effects, the base flow solutions can be expanded in inverse powers of the Froude number and modified liquid-flow profiles can be determined as a function of the azimuthal angle. This model will be used in later work to theoretically study the effects of gravity on the ejection point of the current collector.

  8. Theoretical models to predict the mechanical behavior of thick composite tubes

    Directory of Open Access Journals (Sweden)

    Volnei Tita

    2012-02-01

    Full Text Available This paper presents theoretical models (analytical formulations) to predict the mechanical behavior of thick composite tubes and shows how some parameters can influence this behavior. First, analytical formulations were developed for a pressurized tube made of composite material with a single thick ply and only one lamination angle. For this case, the stress distribution and the displacement fields are investigated as functions of different lamination angles and reinforcement volume fractions. The results obtained by the theoretical model are physically consistent and coherent with the literature. The formulations are then extended to predict the mechanical behavior of a thick laminated tube. Both analytical formulations are implemented as a computational tool via Matlab code. The results obtained by the computational tool are compared to finite element analyses, and the stress distribution is considered coherent. Moreover, the engineering computational tool is used to perform failure analysis, using different types of failure criteria, which identifies the damaged ply and the mode of failure.
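    For orientation, the classical Lamé solution for an isotropic thick-walled cylinder under internal pressure (the textbook special case underlying such tube formulations, not the paper's anisotropic laminated model) can be coded directly; the radii and pressure below are illustrative:

```python
# Lamé thick-walled cylinder: radial and hoop stresses at radius r for
# inner radius a, outer radius b, internal pressure p (isotropic case).
def lame_stresses(r, a, b, p):
    k = p * a**2 / (b**2 - a**2)
    sigma_r = k * (1.0 - b**2 / r**2)    # radial stress
    sigma_t = k * (1.0 + b**2 / r**2)    # hoop (tangential) stress
    return sigma_r, sigma_t

a, b, p = 0.05, 0.08, 10e6               # m, m, Pa (illustrative)
sr_in, st_in = lame_stresses(a, a, b, p)
print(round(sr_in / 1e6, 2), round(st_in / 1e6, 2))   # -10.0 22.82
```

At the inner wall the radial stress equals -p, a standard sanity check for such formulations.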

  9. A Framework for Estimating Long Term Driver Behavior

    Directory of Open Access Journals (Sweden)

    Vijay Gadepally

    2017-01-01

    Full Text Available We present a framework for estimation of long term driver behavior for autonomous vehicles and vehicle safety systems. The Hybrid State System and Hidden Markov Model (HSS+HMM) system discussed in this article is capable of describing the hybrid characteristics of driver and vehicle coupling. In our model, driving observations follow a continuous trajectory that can be measured to create continuous state estimates. These continuous state estimates can then be used to estimate the most likely driver state using the decision-behavior coupling inherent to the HSS+HMM system. The HSS+HMM system is encompassed in an HSS structure, and intersystem connectivity is determined by using signal processing and pattern recognition techniques. The proposed method is suitable for a number of autonomous and vehicle safety scenarios such as estimating the intent of other vehicles near intersections or avoiding hazardous driving events such as unexpected lane changes. The long term driver behavior estimation system involves an extended HSS+HMM structure that is capable of including external information in the estimation process. Through the grafting and pruning of metastates, the HSS+HMM system can be dynamically updated to best represent driver choices given external information. Three application examples are also provided to elucidate the theoretical system.
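    The HMM layer of such a system can be sketched with the standard Viterbi algorithm: hidden driver states (e.g., "keep lane" vs. "prepare lane change") are decoded from discretized observations. The two-state transition and emission matrices below are illustrative, not values from the article:

```python
import numpy as np

# Two hidden driver states (0 = "keep lane", 1 = "prepare lane change")
# decoded from discretized observations with the Viterbi algorithm.
A = np.array([[0.9, 0.1],    # state-transition probabilities
              [0.3, 0.7]])
B = np.array([[0.8, 0.2],    # emission probabilities P(obs | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])    # initial state distribution

def viterbi(obs):
    T = len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, len(pi)), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)   # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):             # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1, 1, 1]))   # [0, 0, 1, 1, 1]
```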

  10. Estimation of Oil Production Rates in Reservoirs Exposed to Focused Vibrational Energy

    KAUST Repository

    Jeong, Chanseok; Kallivokas, Loukas F.; Huh, Chun; Lake, Larry W.

    2014-01-01

    the production rate of remaining oil from existing oil fields. To date, there are few theoretical studies on estimating how much bypassed oil within an oil reservoir could be mobilized by such vibrational stimulation. To fill this gap, this paper presents a

  11. Studying Economic Space: Synthesis of Balance and Game-Theoretic Methods of Modelling

    Directory of Open Access Journals (Sweden)

    Natalia Gennadyevna Zakharchenko

    2015-12-01

    Full Text Available The article introduces questions about the development of models used to study economic space. The author proposes a model that combines balance and game-theoretic methods for estimating the system effects of economic agents' interactions in a multi-level economic space. The model is applied to study interactions between economic agents that are spatially heterogeneous within the Russian Far East. In the model, the economic space of a region is considered in a territorial dimension (the first level of decomposing space) and also in territorial and product dimensions (the second level of decomposing space). The paper shows the mechanism by which system effects form in the economic space of a region. The author estimates the system effects, analyses the real allocation of these effects between economic agents, and identifies three types of local industrial markets: with zero, positive and negative system effects.

  12. Theoretical study of the ionization of B2H5

    International Nuclear Information System (INIS)

    Curtiss, L.A.; Pople, J.A.

    1989-01-01

    Ab initio molecular orbital calculations at the G1 level of theory have been carried out on the neutral B2H5 radical, the doubly bridged B2H5+ cation, and the first triplet excited state of B2H5+. Singly bridged B2H5 is 4.0 kcal/mol (without zero-point energies) more stable than doubly bridged B2H5. Based on this work and previous theoretical work on triply bridged B2H5+, ionization potentials (vertical and adiabatic) are determined for B2H5. The adiabatic ionization potentials of the two B2H5 structures are 6.94 eV (singly bridged) and 7.53 eV (doubly bridged). A very large difference (3.37 eV) is found between the vertical and adiabatic ionization potentials of the singly bridged B2H5 structure. The first triplet state of B2H5+ is found to be 4.55 eV higher in energy than the lowest-energy B2H5+ cation (triply bridged). The results of this theoretical study support the interpretation by Ruscic, Schwarz, and Berkowitz of their recent photoionization measurements on B2H5.

  13. Theoretical investigation of the secondary ionization in krypton and xenon

    International Nuclear Information System (INIS)

    Saffo, M.E.

    1986-01-01

    A theoretical investigation of the secondary ionization processes responsible for the pre-breakdown ionization current growth in a uniform electric field was carried out in krypton and xenon gases, especially at low values of E/P0 (corresponding to high values of pressure), since there are a number of possible secondary ionization processes. It is of interest to carry out a quantitative analysis of the generalized secondary ionization coefficient obtained previously by many workers in terms of the production of excited states, their diffusion to the cathode, and their destruction rate in the gas body. From the energy balance equation for the electrons in the discharge, the fractional percentage energy losses to ionization, excitation, and elastic collisions, relative to the total energy gained by the electron from the field, have been calculated for krypton and xenon. As a result of these calculations, the conclusion drawn is that at low values of E/P0 the main energy loss of electrons is in exciting collisions. Therefore, a theoretical calculation of ω/α is adopted under the assumption that photoelectron emission at the cathode is the predominant secondary ionization process. 14 tabs.; 12 figs.; 64 refs

  14. Using Simplified Thermal Inertia to Determine the Theoretical Dry Line in Feature Space for Evapotranspiration Retrieval

    Directory of Open Access Journals (Sweden)

    Sujuan Mi

    2015-08-01

    Full Text Available With the development of quantitative remote sensing, regional evapotranspiration (ET) modeling based on the feature space has made substantial progress. Among those feature-space-based evapotranspiration models, accurate determination of the dry/wet lines remains a challenging task. This paper reports the development of a new model, named DDTI (Determination of Dry line by Thermal Inertia), which determines the theoretical dry line based on the relationship between thermal inertia and soil moisture. The Simplified Thermal Inertia value estimated in the North China Plain is consistent with the value measured in the laboratory. Three evaluation methods, based on comparison of the locations of the theoretical dry line determined by the two models (the DDTI model and the heat energy balance model), comparison of the ET results, and comparison of the evaporative fraction between the estimates from the two models and the in situ measurements, were used to assess the performance of the new model. The location of the theoretical dry line determined by DDTI is more reasonable than that determined by the heat energy balance model. ET estimated from DDTI has an RMSE (Root Mean Square Error) of 56.77 W/m2 and a bias of 27.17 W/m2, while the heat energy balance model estimated ET with an RMSE of 83.36 W/m2 and a bias of −38.42 W/m2. Comparing the coefficient of determination for the two models against the observations from Yucheng, DDTI estimated ET with an R2 of 0.9065, while the heat energy balance model has an R2 of 0.7729. When compared with the in situ measurements of evaporative fraction (EF) at Yucheng Experimental Station, the ET model based on DDTI reproduces the pixel-scale EF with an RMSE of 0.149, much lower than that based on the heat energy balance model, which has an RMSE of 0.220. Also, the EF bias between the DDTI model and the in situ measurements is 0.064, lower than the EF bias of the heat energy balance model.

  15. Spatial Working Memory Capacity Predicts Bias in Estimates of Location

    Science.gov (United States)

    Crawford, L. Elizabeth; Landy, David; Salthouse, Timothy A.

    2016-01-01

    Spatial memory research has attributed systematic bias in location estimates to a combination of a noisy memory trace with a prior structure that people impose on the space. Little is known about intraindividual stability and interindividual variation in these patterns of bias. In the current work, we align recent empirical and theoretical work on…

  16. Cardinality Estimation Algorithm in Large-Scale Anonymous Wireless Sensor Networks

    KAUST Repository

    Douik, Ahmed

    2017-08-30

    Consider a large-scale anonymous wireless sensor network with unknown cardinality. In such graphs, each node has no information about the network topology and only possesses a unique identifier. This paper introduces a novel distributed algorithm for cardinality estimation and topology discovery, i.e., estimating the number of nodes and the structure of the graph, by querying a small number of nodes and applying statistical inference methods. While the cardinality estimation allows the design of more efficient coding schemes for the network, the topology discovery provides a reliable way to route packets. The proposed algorithm is shown to produce a cardinality estimate proportional to the best linear unbiased estimator for dense graphs and specific running times. Simulation results confirm the theoretical results and reveal that, for a reasonable running time, querying a small group of nodes is sufficient to estimate 95% of the whole network. Applications of this work include estimating the number of Internet of Things (IoT) sensor devices, online social users, active protein cells, etc.
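    The idea of inferring cardinality from a small number of queried nodes can be illustrated with a classical capture-recapture (Lincoln-Petersen) estimator; this is a stand-in for intuition, not the paper's distributed algorithm:

```python
import random

# Capture-recapture (Lincoln-Petersen) sketch: query two small random
# batches of node identifiers and estimate the total from the overlap.
random.seed(1)

def estimate_cardinality(ids_a, ids_b):
    overlap = len(set(ids_a) & set(ids_b))
    if overlap == 0:
        raise ValueError("no overlap; query more nodes")
    return len(ids_a) * len(ids_b) / overlap

N = 10000                          # true cardinality (unknown in practice)
a = random.sample(range(N), 800)   # first batch of queried node IDs
b = random.sample(range(N), 800)   # second, independent batch
est = estimate_cardinality(a, b)
print(round(est))                  # estimate near the true N
```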

  17. Tracking using motion estimation with physically motivated inter-region constraints

    KAUST Repository

    Arif, Omar; Sundaramoorthi, Ganesh; Hong, Byungwoo; Yezzi, Anthony J.

    2014-01-01

    We propose a method for tracking structures (e.g., ventricles and myocardium) in cardiac images (e.g., magnetic resonance) by propagating forward in time a previous estimate of the structures using a new physically motivated motion estimation scheme

  18. Estimation and Properties of a Time-Varying GQARCH(1,1)-M Model

    Directory of Open Access Journals (Sweden)

    Sofia Anyfantaki

    2011-01-01

    analysis of these models computationally infeasible. This paper outlines the issues and suggests employing a Markov chain Monte Carlo algorithm which allows the calculation of a classical estimator via the simulated EM algorithm, or a simulated Bayesian solution, in only O(T) computational operations, where T is the sample size. Furthermore, the theoretical dynamic properties of a time-varying GQARCH(1,1)-M model are derived. We discuss them and apply the suggested Bayesian estimation to three major stock markets.
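    The O(T) cost of a single likelihood evaluation comes from the one-pass variance recursion. A minimal sketch for a Gaussian GARCH(1,1)-in-mean model (illustrative parameters; the GQARCH specification adds an asymmetry term not modeled here):

```python
import math

# One-pass O(T) Gaussian log-likelihood for a GARCH(1,1)-in-mean model:
#   r_t = lam * sigma_t^2 + eps_t,   eps_t ~ N(0, sigma_t^2),
#   sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2.
def garch_m_loglik(r, omega, alpha, beta, lam):
    sigma2 = omega / (1.0 - alpha - beta)   # start at unconditional variance
    ll, eps_prev = 0.0, 0.0
    for rt in r:                            # single pass over the sample
        sigma2 = omega + alpha * eps_prev ** 2 + beta * sigma2
        eps = rt - lam * sigma2
        ll += -0.5 * (math.log(2 * math.pi * sigma2) + eps ** 2 / sigma2)
        eps_prev = eps
    return ll

returns = [0.01, -0.02, 0.015, 0.0, -0.01]
print(garch_m_loglik(returns, omega=1e-5, alpha=0.05, beta=0.9, lam=0.5))
```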

  19. The Affordable Care Act, Insurance Coverage, and Health Care Utilization of Previously Incarcerated Young Men: 2008-2015.

    Science.gov (United States)

    Winkelman, Tyler N A; Choi, HwaJung; Davis, Matthew M

    2017-05-01

    To estimate health insurance and health care utilization patterns among previously incarcerated men following implementation of the Affordable Care Act's (ACA's) Medicaid expansion and Marketplace plans in 2014. We performed serial cross-sectional analyses using data from the National Survey of Family Growth between 2008 and 2015. Our sample included men aged 18 to 44 years with (n = 3476) and without (n = 8702) a history of incarceration. Uninsurance declined significantly among previously incarcerated men after ACA implementation (-5.9 percentage points; 95% confidence interval [CI] = -11.5, -0.4), primarily because of an increase in private insurance (6.8 percentage points; 95% CI = 0.1, 13.3). Previously incarcerated men accounted for a large proportion of the remaining uninsured (38.6%) in 2014 to 2015. Following ACA implementation, previously incarcerated men continued to be significantly less likely to report a regular source of primary care and more likely to report emergency department use than were never-incarcerated peers. Health insurance coverage improved among previously incarcerated men following ACA implementation. However, these men account for a substantial proportion of the remaining uninsured. Previously incarcerated men continue to lack primary care and frequently utilize acute care services.

  20. Initial results of CyberKnife treatment for recurrent previously irradiated head and neck cancer

    International Nuclear Information System (INIS)

    Himei, Kengo; Katsui, Kuniaki; Yoshida, Atsushi

    2003-01-01

    The purpose of this study was to evaluate the efficacy of CyberKnife treatment for recurrent previously irradiated head and neck cancer. Thirty-one patients with recurrent previously irradiated head and neck cancer who were treated with the CyberKnife from July 1999 to March 2002 at Okayama Kyokuto Hospital were retrospectively studied. The accumulated dose was 28-80 Gy (median 60 Gy). The interval between CyberKnife treatment and previous radiotherapy was 0.4-429.5 months (median 16.3 months). Primary lesions were nasopharynx: 7, maxillary sinus: 6, tongue: 5, ethmoid sinus: 3, and others: 1. The pathology was squamous cell carcinoma: 25, adenoid cystic carcinoma: 4, and others: 2. Symptoms were pain: 8, and nasal bleeding: 2. The prescribed dose was 15.0-40.3 Gy (median 32.3 Gy) as the marginal dose. The response rate (complete response (CR) + partial response (PR)) and local control rate (CR + PR + no change (NC)) were 74% and 94%, respectively. As for improvement of symptoms, pain disappeared in 4 cases, relief was obtained in 4 cases, and there was no change in 2 cases; nasal bleeding disappeared in 2 cases. Adverse effects were observed as mucositis in 5 cases and neck swelling in one case. The prognosis of recurrent previously irradiated head and neck cancer is estimated to be poor. Our early experience shows that CyberKnife treatment is expected to be feasible for recurrent previously irradiated head and neck cancer, reducing adverse effects and maintaining useful quality of life (QOL) for patients. (author)

  1. Information and crystal structure estimation

    International Nuclear Information System (INIS)

    Wilkins, S.W.; Commonwealth Scientific and Industrial Research Organization, Clayton; Varghese, J.N.; Steenstrup, S.

    1984-01-01

    The conceptual foundations of a general information-theoretic approach to X-ray structure estimation are reexamined with a view to clarifying some of the subtleties inherent in the approach and to enhancing the scope of the method. More particularly, general reasons for choosing the minimum of the Shannon-Kullback measure of information as the criterion for inference are discussed, and it is shown that the minimum information (or maximum entropy) principle enters the present treatment of the structure estimation problem in at least two quite separate ways, and that three formally similar but conceptually quite different expressions for relative information appear at different points in the theory. One of these is the general Shannon-Kullback expression, while the second is a derived form pertaining only under the restrictive assumptions of the present stochastic model for allowed structures, and the third is a measure of the additional information involved in accepting a fluctuation relative to an arbitrary mean structure. (orig.)

  2. Towards integrating control and information theories from information-theoretic measures to control performance limitations

    CERN Document Server

    Fang, Song; Ishii, Hideaki

    2017-01-01

    This book investigates the performance limitation issues in networked feedback systems. The fact that networked feedback systems consist of control and communication devices and systems calls for the integration of control theory and information theory. The primary contributions of this book lie in two aspects: the newly-proposed information-theoretic measures and the newly-discovered control performance limitations. We first propose a number of information notions to facilitate the analysis. Using those notions, classes of performance limitations of networked feedback systems, as well as state estimation systems, are then investigated. In general, the book presents a unique, cohesive treatment of performance limitation issues of networked feedback systems via an information-theoretic approach. This book is believed to be the first to treat the aforementioned subjects systematically and in a unified manner, offering a unique perspective differing from existing books.

  3. Photophysical characteristics of three novel benzanthrone derivatives: Experimental and theoretical estimation of dipole moments

    International Nuclear Information System (INIS)

    Siddlingeshwar, B.; Hanagodimath, S.M.; Kirilova, E.M.; Kirilov, Georgii K.

    2011-01-01

    The effect of solvents on the absorption and fluorescence spectra and dipole moments of novel benzanthrone derivatives such as 3-N-(N',N'-Dimethylformamidino)benzanthrone (1), 3-N-(N',N'-Diethylacetamidino)benzanthrone (2) and 3-morpholinobenzanthrone (3) has been studied in various solvents. The fluorescence lifetimes of the dyes (1-3) in chloroform were also recorded. The bathochromic shift observed in the absorption and fluorescence spectra of these molecules with increasing solvent polarity indicates that the transitions involved are π→π*. Using the theory of solvatochromism, the difference between the excited-state (μe) and ground-state (μg) dipole moments was estimated from the Lippert-Mataga, Bakhshiev, Kawski-Chamma-Viallet, and McRae equations, using the variation of the Stokes shift with the solvent's relative permittivity and refractive index. AM1 and PM6 semiempirical molecular calculations using MOPAC, and ab initio calculations at the B3LYP/6-31G* level of theory using Gaussian 03 software, were carried out to estimate the ground-state dipole moments and some other physicochemical properties. Further, the change in dipole moment (Δμ) was also calculated using the variation of the Stokes shift with the molecular-microscopic empirical solvent polarity parameter (E_T^N). The excited-state dipole moments observed are larger than their ground-state counterparts, indicating a substantial redistribution of the π-electron densities in the more polar excited state for all the systems investigated.
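    The Lippert-Mataga step can be sketched as a linear fit of the Stokes shift against the solvent orientation polarizability Δf, whose slope yields Δμ for an assumed Onsager cavity radius. The SI form of the equation is used; the solvent data and cavity radius below are illustrative, not the paper's measured values:

```python
import numpy as np

h, c = 6.626e-34, 2.998e10        # J*s; speed of light in cm/s (shifts in cm^-1)
eps0 = 8.854e-12                  # vacuum permittivity, F/m

def delta_f(eps, n):
    """Solvent orientation polarizability."""
    return (eps - 1) / (2 * eps + 1) - (n**2 - 1) / (2 * n**2 + 1)

# (relative permittivity, refractive index, Stokes shift in cm^-1)
solvents = [(4.81, 1.446, 2100.0),    # chloroform-like
            (37.5, 1.344, 3050.0),    # acetonitrile-like
            (46.7, 1.479, 3180.0)]    # DMSO-like
df = np.array([delta_f(e, n) for e, n, _ in solvents])
shift = np.array([s for _, _, s in solvents])

slope, _ = np.polyfit(df, shift, 1)   # cm^-1 per unit delta_f

a = 4.0e-10                           # Onsager cavity radius, m (assumed)
dmu = np.sqrt(slope * h * c * a**3 * 4 * np.pi * eps0 / 2)   # C*m
print(dmu / 3.336e-30)                # delta-mu in Debye
```

With measured Stokes shifts for the actual solvent set, the same fit gives the Δμ values reported from the Lippert-Mataga equation.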

  4. Estimating Utility

    DEFF Research Database (Denmark)

    Arndt, Channing; Simler, Kenneth R.

    2010-01-01

    A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones.

  5. Critical review of methods for the estimation of actual evapotranspiration in hydrological models

    CSIR Research Space (South Africa)

    Jovanovic, Nebojsa

    2012-01-01

    Full Text Available The chapter is structured in three parts, namely: i) A theoretical overview of evapotranspiration processes, including the principle of atmospheric demand-soil water supply, ii) A review of methods and techniques to measure and estimate actual...

  6. Wolf Attack Probability: A Theoretical Security Measure in Biometric Authentication Systems

    Science.gov (United States)

    Une, Masashi; Otsuka, Akira; Imai, Hideki

    This paper will propose a wolf attack probability (WAP) as a new measure for evaluating security of biometric authentication systems. The wolf attack is an attempt to impersonate a victim by feeding “wolves” into the system to be attacked. The “wolf” means an input value which can be falsely accepted as a match with multiple templates. WAP is defined as a maximum success probability of the wolf attack with one wolf sample. In this paper, we give a rigorous definition of the new security measure which gives strength estimation of an individual biometric authentication system against impersonation attacks. We show that if one reestimates using our WAP measure, a typical fingerprint algorithm turns out to be much weaker than theoretically estimated by Ratha et al. Moreover, we apply the wolf attack to a finger-vein-pattern based algorithm. Surprisingly, we show that there exists an extremely strong wolf which falsely matches all templates for any threshold value.
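    The WAP definition can be sketched on a toy matcher: the maximum, over candidate inputs, of the fraction of enrolled templates that a single input falsely matches. The integer "biometric" and all values below are illustrative assumptions, not the paper's fingerprint or finger-vein algorithms.

    ```python
    def matches(sample, template, threshold=2):
        """Hypothetical matcher: accept when |sample - template| <= threshold."""
        return abs(sample - template) <= threshold

    def wolf_attack_probability(candidates, templates, threshold=2):
        """WAP = max over candidate inputs of the fraction of enrolled
        templates that the single input falsely matches."""
        return max(
            sum(matches(c, t, threshold) for t in templates) / len(templates)
            for c in candidates
        )

    templates = [10, 12, 30, 50, 52]   # enrolled users
    candidates = range(0, 60)          # wolf search space
    print(wolf_attack_probability(candidates, templates))  # -> 0.4
    ```

    An "extremely strong wolf" in the paper's sense is an input for which this fraction reaches 1.0 at any threshold.
    
    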

  7. On the degrees of freedom of reduced-rank estimators in multivariate regression.

    Science.gov (United States)

    Mukherjee, A; Chen, K; Wang, N; Zhu, J

    We study the effective degrees of freedom of a general class of reduced-rank estimators for multivariate regression in the framework of Stein's unbiased risk estimation. A finite-sample exact unbiased estimator is derived that admits a closed-form expression in terms of the thresholded singular values of the least-squares solution and hence is readily computable. The results continue to hold in the high-dimensional setting where both the predictor and the response dimensions may be larger than the sample size. The derived analytical form facilitates the investigation of theoretical properties and provides new insights into the empirical behaviour of the degrees of freedom. In particular, we examine the differences and connections between the proposed estimator and a commonly-used naive estimator. The use of the proposed estimator leads to efficient and accurate prediction risk estimation and model selection, as demonstrated by simulation studies and a data example.

  8. Study of some physical aspects previous to design of an exponential experiment; Estudio de algunos aspectos fisicos previos al diseno de una experiencia exponencial

    Energy Technology Data Exchange (ETDEWEB)

    Caro, R; Francisco, J L. de

    1961-07-01

    This report presents the theoretical study of some physical aspects previous to the design of an exponential facility. These are: fast and slow flux distribution in the multiplicative medium and in the thermal column, slowing down in the thermal column, geometrical distribution and minimum required intensity of sources, access channels, and perturbations produced by possible variations in their position and intensity. (Author) 4 refs.

  9. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    Science.gov (United States)

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research

  10. Microbial growth yield estimates from thermodynamics and its importance for degradation of pesticides and formation of biogenic non-extractable residues

    DEFF Research Database (Denmark)

    Brock, Andreas Libonati; Kästner, M.; Trapp, Stefan

    2017-01-01

    NER. Formation of microbial mass can be estimated from the microbial growth yield, but experimental data is rare. Instead, we suggest using prediction methods for the theoretical yield based on thermodynamics. Recently, we presented the Microbial Turnover to Biomass (MTB) method that needs a minimum...... and using the released CO2 as a measure for microbial activity, we predicted a range for the formation of biogenic NER. For the majority of the pesticides, a considerable fraction of the NER was estimated to be biogenic. This novel approach provides a theoretical foundation applicable to the evaluation...

  11. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation.

    Science.gov (United States)

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-05-23

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
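    The spatial-sparsity argument can be illustrated by computing the mutual coherence of a linear-array steering matrix: closely spaced sources yield nearly collinear steering vectors and high coherence. The array geometry (8 sensors at half-wavelength spacing) and the angles below are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np

    def steering_matrix(n_sensors, angles_deg, spacing=0.5):
        """Far-field steering vectors of a uniform linear array
        (sensor spacing given in wavelengths)."""
        k = np.arange(n_sensors)[:, None]
        theta = np.deg2rad(np.asarray(angles_deg))[None, :]
        return np.exp(2j * np.pi * spacing * k * np.sin(theta))

    def coherence(A):
        """Largest off-diagonal entry of the normalized Gram matrix."""
        A = A / np.linalg.norm(A, axis=0)
        G = np.abs(A.conj().T @ A)
        np.fill_diagonal(G, 0.0)
        return G.max()

    A = steering_matrix(8, [-10, 0, 15, 40])
    print(round(coherence(A), 3))  # smaller values favor sparse recovery
    ```

    Widening the angular separation between any two sources lowers the coherence, which is the intuition behind the separation constraint derived in the paper.
    
    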

  12. Theoretical study of the properties of X-ray diffraction moiré fringes. I

    International Nuclear Information System (INIS)

    Yoshimura, Jun-ichi

    2015-01-01

    A detailed and comprehensive theoretical description of X-ray diffraction moiré fringes for a bicrystal specimen is given on the basis of a calculation by plane-wave dynamical diffraction theory. Firstly, prior to discussing the main subject of the paper, a previous article [Yoshimura (1997). Acta Cryst. A53, 810–812] on the two-dimensionality of diffraction moiré patterns is restated on the basis of a thorough calculation of the moiré interference phase. Then, the properties of moiré fringes derived from the above theory are explained for the case of a plane-wave diffraction image, where the significant effect of Pendellösung intensity oscillation on the moiré pattern when the crystal is strained is described in detail with theoretically simulated moiré images. Although such plane-wave moiré images are not widely observed in a nearly pure form, knowledge of their properties is essential for the understanding of diffraction moiré fringes in general

  13. Generalized synchronization-based multiparameter estimation in modulated time-delayed systems

    Science.gov (United States)

    Ghosh, Dibakar; Bhattacharyya, Bidyut K.

    2011-09-01

    We propose a nonlinear active observer based generalized synchronization scheme for multiparameter estimation in time-delayed systems with periodic time delay. A sufficient condition for parameter estimation is derived using Krasovskii-Lyapunov theory. The suggested scheme proves to be globally and asymptotically stable by means of the Krasovskii-Lyapunov method. With this effective method, parameter identification and generalized synchronization of modulated time-delayed systems, with all the system parameters unknown, can be achieved simultaneously. We restrict our study to multiple-parameter estimation in modulated time-delayed systems with a single state variable only. Theoretical proof and numerical simulation demonstrate the effectiveness and feasibility of the proposed technique. The block diagram of an electronic circuit for the multiple time-delay system shows that the method is easily applicable to practical communication problems.

  14. Prevalence of pain in the head, back and feet in refugees previously exposed to torture: a ten-year follow-up study

    DEFF Research Database (Denmark)

    Olsen, Dorthe Reff; Montgomery, Edith; Bøjholm, Søren

    2007-01-01

    AIM: To estimate change over 10 years concerning the prevalence of pain in the head, back and feet, among previously tortured refugees settled in Denmark, and to compare associations between methods of torture and prevalent pain at baseline and at 10-year follow-up. METHODS: 139 refugees previous...... associated with the type and bodily focus of the torture. This presents a considerable challenge to future evidence-based development of effective treatment programs....

  15. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until present. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows quantifying the accuracy of the solutions. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by the decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid

  16. Ibrutinib versus previous standard of care: an adjusted comparison in patients with relapsed/refractory chronic lymphocytic leukaemia.

    Science.gov (United States)

    Hansson, Lotta; Asklid, Anna; Diels, Joris; Eketorp-Sylvan, Sandra; Repits, Johanna; Søltoft, Frans; Jäger, Ulrich; Österborg, Anders

    2017-10-01

    This study explored the relative efficacy of ibrutinib versus previous standard-of-care treatments in relapsed/refractory patients with chronic lymphocytic leukaemia (CLL), using multivariate regression modelling to adjust for baseline prognostic factors. Individual patient data were collected from an observational Stockholm cohort of consecutive patients (n = 144) diagnosed with CLL between 2002 and 2013 who had received at least second-line treatment. Data were compared with results of the RESONATE clinical trial. A multivariate Cox proportional hazards regression model was used which estimated the hazard ratio (HR) of ibrutinib versus previous standard of care. The adjusted HR of ibrutinib versus the previous standard-of-care cohort was 0.15 (p ibrutinib in the RESONATE study were significantly longer than with previous standard-of-care regimens used in second or later lines in routine healthcare. The approach used, which must be interpreted with caution, compares patient-level data from a clinical trial with outcomes observed in a daily clinical practice and may complement results from randomised trials or provide preliminary wider comparative information until phase 3 data exist.

  17. A Novel Methodology for Estimating State-Of-Charge of Li-Ion Batteries Using Advanced Parameters Estimation

    Directory of Open Access Journals (Sweden)

    Ibrahim M. Safwat

    2017-11-01

    Full Text Available State-of-charge (SOC) estimation of Li-ion batteries has been the focus of many research studies in previous years. Many articles discuss the estimation of the dynamic model parameters of the Li-ion battery, where a fixed-forgetting-factor recursive least squares estimation methodology is employed. However, the rate at which each parameter converges to its true value is not taken into consideration, which may lead to poor estimation. This article discusses this issue and proposes two solutions to solve it. The first solution is the use of a variable forgetting factor instead of a fixed one, while the second solution is defining a vector of forgetting factors, meaning one factor for each parameter. After parameter estimation, a new idea is proposed to estimate the state-of-charge (SOC) of the Li-ion battery based on Newton's method. Also, the error percentage and computational cost are discussed and compared with those of nonlinear Kalman filters. This methodology is applied to a 36 V 30 A Li-ion pack to validate the idea.
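    A minimal sketch of recursive least squares with a variable forgetting factor, in the spirit of the first proposed solution: the error-driven forgetting schedule, the data model and all constants are illustrative assumptions, and the article's exact update law and per-parameter factor vector are not reproduced here.

    ```python
    import numpy as np

    def rls_variable_forgetting(X, y, lam_min=0.95, lam_max=0.999, sigma=1.0):
        """RLS where the forgetting factor shrinks toward lam_min when the
        a priori error is large, speeding up adaptation."""
        n = X.shape[1]
        theta = np.zeros(n)
        P = 1e3 * np.eye(n)          # large initial covariance
        lam_hist = []
        for x, yt in zip(X, y):
            e = yt - x @ theta                                   # a priori error
            lam = lam_min + (lam_max - lam_min) * np.exp(-(e / sigma) ** 2)
            k = P @ x / (lam + x @ P @ x)                        # gain vector
            theta = theta + k * e
            P = (P - np.outer(k, x @ P)) / lam
            lam_hist.append(lam)
        return theta, lam_hist

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    true_theta = np.array([1.5, -0.7])          # hypothetical model parameters
    y = X @ true_theta + 0.01 * rng.normal(size=500)
    theta, _ = rls_variable_forgetting(X, y)
    print(np.round(theta, 2))
    ```

    With a fixed factor, a fast-changing parameter and a slow one are forced to share one memory length; making the factor variable (or a vector) removes that compromise.
    
    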

  18. Theoretical clarity is not “Manicheanism”

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2011-01-01

    It is argued that in order to establish a new theoretical approach to information science it is necessary to express disagreement with some established views. The “social turn” in information science is not just exemplified in relation to the works of Marcia Bates but in relation to many different...... researchers in the field. Therefore it should not be taken personally, and the debate should focus on the substance. Marcia Bates has contributed considerably to information science. In spite of this some of her theoretical points of departure may be challenged. It is important to seek theoretical clarity...... and this may involve a degree of schematic confrontation that should not be confused with theoretical one-sidedness, “Manicheanism” or lack of respect....

  19. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing...... these criticisms by incorporating recent developments in configuration theory, in particular application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and empirically demonstrating equifinal paths to maturity. Specifically...... methodological guidelines consisting of detailed procedures to systematically apply set theoretic approaches for maturity model research and provides demonstrations of it application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper...

  20. Theoretical aspects of studies of oxide and semiconductor surfaces using low energy positrons

    Science.gov (United States)

    Fazleev, N. G.; Maddox, W. B.; Weiss, A. H.

    2011-01-01

    This paper presents the results of a theoretical study of positron surface and bulk states and annihilation characteristics of surface trapped positrons at the oxidized Cu(100) single crystal and at both As- and Ga-rich reconstructed GaAs(100) surfaces. The variations in atomic structure and chemical composition of the topmost layers of the surfaces associated with oxidation and reconstructions and the charge redistribution at the surfaces are found to affect localization and spatial extent of the positron surface-state wave functions. The computed positron binding energy, work function, and annihilation characteristics reveal their sensitivity to charge transfer effects, atomic structure and chemical composition of the topmost layers of the surfaces. Theoretical positron annihilation probabilities with relevant core electrons computed for the oxidized Cu(100) surface and the As- and Ga-rich reconstructed GaAs(100) surfaces are compared with experimental ones estimated from the positron annihilation induced Auger peak intensities measured from these surfaces.

  1. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    Energy Technology Data Exchange (ETDEWEB)

    Paiz, Mary Rose [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
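    A minimal sketch of the lower-threshold idea, assuming a stationary Gumbel distribution (the shape-zero member of the GEV family) fitted by the method of moments to simulated daily minima; the report's non-stationary GEV fit requires a likelihood-based implementation not shown here.

    ```python
    import math, random

    EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

    def gumbel_lower_threshold(daily_minima, alpha=0.01):
        """Threshold t with P(daily minimum < t) ~= alpha, from a Gumbel
        fit to the negated minima (block minima -> block maxima)."""
        y = [-x for x in daily_minima]
        n = len(y)
        mean = sum(y) / n
        var = sum((v - mean) ** 2 for v in y) / (n - 1)
        beta = math.sqrt(6 * var) / math.pi        # Gumbel scale (moments)
        mu = mean - EULER_GAMMA * beta             # Gumbel location (moments)
        # Gumbel quantile of the negated series, mapped back to minima.
        return -(mu - beta * math.log(-math.log(1 - alpha)))

    random.seed(1)
    # Simulated daily minimum response times (ms): baseline 120 with jitter.
    minima = [120 + random.gauss(0, 5) for _ in range(365)]
    t = gumbel_lower_threshold(minima)
    print(round(t, 1))  # transactions completing below t are flagged
    ```

    The threshold sits below the bulk of the observed minima, so only anomalously fast (likely failed) transactions fall under it.
    
    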

  2. New Optics in Analysis and Work Estimation of Abroad Romanian Citizens

    Directory of Open Access Journals (Sweden)

    Nita Dobrota

    2006-09-01

    In this study we try to show the necessity of a national strategy for full and efficient employment of the whole Romanian labour potential, with emphasis on the active population. At the same time, through indicators based on theoretical-methodological analyses and empirical estimates, we suggest some possible coordinates of this strategy.

  3. A theoretical and experimental dose rate study at a multipurpose gamma irradiation facility in Ghana

    International Nuclear Information System (INIS)

    Sackey, Tracey A.

    2015-01-01

    Sv/h and 50mSv/yr. The practical dose rate measurements were consistent with previous measurements. Theoretical calculations of dose rates in the irradiation chamber were computed using the F-line software provided by the Hungarian suppliers of the facility. It computes dose rates based on the dimensions and parameters of the source and uses the line source approximation method. Respective values obtained for the personnel and goods door, rooftop, deionizer room and outside the chamber were 0.082μSv/h, 0.076μSv/h, 0.080μSv/h, 0.193μSv/h and 0.07μSv/h, which indicates that the theoretical estimations of dose rates were generally lower than the measured values. Personnel dose history (thermoluminescent dosimeters) for a period of 12 months (January to December 2013) was analyzed to estimate yearly doses received by radiation workers of the facility. Collective doses of Hp(10): 2.22mSv and Hp(0.07): 2.29mSv were obtained from the analysis. They were well below limits approved by the regulatory authority. (au)

  4. The power of theoretical knowledge.

    Science.gov (United States)

    Alligood, Martha Raile

    2011-10-01

    Nursing theoretical knowledge has demonstrated powerful contributions to education, research, administration and professional practice for guiding nursing thought and action. That knowledge has shifted the primary focus of the nurse from nursing functions to the person. Theoretical views of the person raise new questions, create new approaches and instruments for nursing research, and expand nursing scholarship throughout the world.

  5. On the estimation of channel power distribution for PHWRs (Paper No. HMT-66-87)

    International Nuclear Information System (INIS)

    Parikh, M.V.; Kumar, A.N.; Krishnamohan, B.; Bhaskara Rao, P.

    1987-01-01

    In the case of PHWRs the estimation of channel power distribution is an important safety criterion. In this paper two methods, one based on theoretical estimation and one on a measured parameter, are described. The comparison made shows good agreement in the prediction of channel power by both methods. A parametric study in one of the measured parameters is also made, which gives better agreement in the results obtained. (author). 3 tabs

  6. Theoretical foundation, goals, and methodology of a new science--biospherics

    Science.gov (United States)

    Shaffer, J A

    1994-01-01

    Scientific endeavor is motivated by mankind's needs, desires, and inherent nature to explore. The history of scientific revolutions involves paradigmatic breakthroughs that uncover previously unknown perspectives by which a phenomenon can be viewed. In this issue a noted scientist, Nickolai Pechurkin, gives a seminal brief on the theoretical foundation, goals, and methodology leading to a new science--biospherics. While biospherics has so far eluded a simple definition, it is not something taken from "whole cloth." Biospherics has many antecedents, but most noticeably arises from the global scale research and theory associated with the technological advances of the Space-Age. The Space-Age also created the need for totally closed life-support systems which involve experimentation with artificial biospheres.

  7. Research in theoretical nuclear physics. Final report, April 1, 1993 - March 31, 1996

    International Nuclear Information System (INIS)

    Udagawa, Takeshi

    1997-08-01

    This report describes the accomplishments in basic research in nuclear physics carried out by the theoretical nuclear physics group in the Department of Physics at the University of Texas at Austin, during the period of April 1, 1993 to March 31, 1996. The work done covers three separate areas, low energy nuclear reactions, intermediate energy physics, and nuclear structure studies. Although the various subjects are spread among different areas, they are all based on two techniques that they have developed in previous years. These techniques are: (a) a powerful method for continuum-random-phase-approximation (CRPA) calculations of the nuclear response; and, (b) the direct reaction approach to complete and incomplete fusion reactions, which enables them to describe on a single footing all the different types of nuclear reactions, i.e., complete fusion, incomplete fusion and direct reactions, in a systematic way based on a single theoretical framework. In this report, the authors first summarize their achievements in these three areas, and then present final remarks

  8. Department of Theoretical Physics - Overview

    International Nuclear Information System (INIS)

    Kwiecinski, J.

    2000-01-01

    Full text: Research activity of the Department of Theoretical Physics concerns theoretical high-energy and elementary particle physics, intermediate energy particle physics, theoretical nuclear physics, theory of nuclear matter, theory of quark-gluon plasma and of relativistic heavy-ion collisions, theoretical astrophysics and general physics. There is some emphasis on the phenomenological applications of the theoretical research, yet the more formal problems are also considered. The detailed summary of the research projects and of the results obtained in various fields is given in the abstracts. Our Department actively collaborates with other Departments of the Institute as well as with several scientific institutions both in Poland and abroad. In particular members of our Department participate in the EC network which allows mobility of researchers. Several members of our Department have also participated in the research projects funded by the Polish Committee for Scientific Research (KBN). The complete list of grants is listed separately. Besides pure research, members of our Department are also involved in graduate and undergraduate teaching activity both at our Institute as well as at other academic institutions in Cracow. At present five students are working for their Ph.D. or MSc degrees under supervision of the senior members from the Department. We continue our participation at the EC SOCRATES-ERASMUS educational programme which allows exchange of graduate students between our Department and the Department of Physics of the University of Durham in the UK. (author)

  9. An integrated organisation-wide data quality management and information governance framework: theoretical underpinnings.

    Science.gov (United States)

    Liaw, Siaw-Teng; Pearce, Christopher; Liyanage, Harshana; Liaw, Gladys S S; de Lusignan, Simon

    2014-01-01

    Increasing investment in eHealth aims to improve cost effectiveness and safety of care. Data extraction and aggregation can create new data products to improve professional practice and provide feedback to improve the quality of source data. A previous systematic review concluded that locally relevant clinical indicators and use of clinical record systems could support clinical governance. We aimed to extend and update the review with a theoretical framework. We searched PubMed, Medline, Web of Science, ABI Inform (Proquest) and Business Source Premier (EBSCO) using the terms curation, information ecosystem, data quality management (DQM), data governance, information governance (IG) and data stewardship. We focused on and analysed the scope of DQM and IG processes, theoretical frameworks, and determinants of the processing, quality assurance, presentation and sharing of data across the enterprise. There are good theoretical reasons for integrated governance, but there is variable alignment of DQM, IG and health system objectives across the health enterprise. Ethical constraints exist that require health information ecosystems to process data in ways that are aligned with improving health and system efficiency and ensuring patient safety. Despite an increasingly 'big-data' environment, DQM and IG in health services are still fragmented across the data production cycle. We extend current work on DQM and IG with a theoretical framework for integrated IG across the data cycle. The dimensions of this theory-based framework would require testing with qualitative and quantitative studies to examine the applicability and utility, along with an evaluation of its impact on data quality across the health enterprise.

  10. Internal Medicine residents use heuristics to estimate disease probability

    OpenAIRE

    Phang, Sen Han; Ravani, Pietro; Schaefer, Jeffrey; Wright, Bruce; McLaughlin, Kevin

    2015-01-01

    Background: Training in Bayesian reasoning may have limited impact on accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics then post-test probability estimates would be increased by non-discriminating clinical features or a high anchor for a target condition. Method: We randomized 55 In...
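    The Bayesian benchmark against which heuristic estimates are judged can be sketched as a likelihood-ratio update; the pretest probability and LR values below are illustrative, not taken from the study.

    ```python
    def post_test_probability(pretest_p, likelihood_ratio):
        """Convert probability -> odds, apply the likelihood ratio,
        convert back to a probability."""
        pretest_odds = pretest_p / (1 - pretest_p)
        post_odds = pretest_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # A non-discriminating finding (LR = 1) should leave the estimate
    # unchanged; heuristic reasoners tend to raise it anyway.
    print(post_test_probability(0.20, 1.0))            # -> 0.2
    print(round(post_test_probability(0.20, 4.0), 3))  # -> 0.5
    ```

    Anchoring shows up as post-test estimates that track the suggested anchor instead of this odds calculation.
    
    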

  11. Precision of a new bedside method for estimation of the circulating blood volume

    DEFF Research Database (Denmark)

    Christensen, P; Eriksen, B; Henneberg, S W

    1993-01-01

    The present study is a theoretical and experimental evaluation of a modification of the carbon monoxide method for estimation of the circulating blood volume (CBV) with respect to the precision of the method. The CBV was determined from measurements of the CO-saturation of hemoglobin before...... ventilation with the CO gas mixture. The amount of CO administered during each determination of CBV resulted in an increase in the CO saturation of hemoglobin of 2.1%-3.9%. A theoretical noise propagation analysis was performed by means of the Monte Carlo method. The analysis showed that a CO dose...... patients. The coefficients of variation were 6.2% and 4.7% in healthy and diseased subjects, respectively. Furthermore, the day-to-day variation of the method with respect to the total amount of circulating hemoglobin (nHb) and CBV was determined from duplicate estimates separated by 24-48 h. In conclusion...
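    A sketch of the CO-dilution estimate and a Monte Carlo noise-propagation check in the spirit of the study: the Hüfner constant (1.39 mL CO per g Hb) is standard, while all numeric inputs and noise levels below are illustrative assumptions, not the study's values.

    ```python
    import random, statistics

    HUEFNER = 1.39  # mL CO bound per g hemoglobin (Huefner constant)

    def cbv_ml(v_co_ml, delta_sco_frac, hb_g_per_ml):
        """CBV from the administered CO volume, the rise in CO-saturation
        of Hb (fraction, e.g. 0.03 for 3%) and blood Hb concentration."""
        total_hb_g = v_co_ml / (HUEFNER * delta_sco_frac)
        return total_hb_g / hb_g_per_ml

    def monte_carlo_cv(n=20000, seed=0):
        """Coefficient of variation of CBV under assumed measurement noise:
        1% on the CO dose, 0.1 percentage points on each saturation reading."""
        rng = random.Random(seed)
        out = []
        for _ in range(n):
            v_co = 30.0 * (1 + rng.gauss(0, 0.01))
            delta = 0.030 + rng.gauss(0, 0.001) - rng.gauss(0, 0.001)
            out.append(cbv_ml(v_co, delta, 0.15))
        return statistics.stdev(out) / statistics.mean(out)

    print(round(cbv_ml(30.0, 0.030, 0.15)))  # nominal CBV in mL
    print(round(100 * monte_carlo_cv(), 1))  # CV in percent
    ```

    Because the saturation rise enters in the denominator, the precision of the saturation measurement dominates the CV, which is why the dose chosen to produce a 2-4% rise matters.
    
    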

  12. Theoretical Mathematics

    Science.gov (United States)

    Stöltzner, Michael

    Answering to the double-faced influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a `theoretical' mathematics (alongside `theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I shall take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative towards informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of both approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th century mathematics in which new structures are introduced by axiomatisation and not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.

  13. Regional inversion of CO2 ecosystem fluxes from atmospheric measurements. Reliability of the uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell' Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)

    2013-07-01

The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5° resolution are applied for the western European domain where ≈50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38%. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than

  14. Accuracy of the Estimated Core Temperature (ECTemp) Algorithm in Estimating Circadian Rhythm Indicators

    Science.gov (United States)

    2017-04-12

measurement of CT outside of stringent laboratory environments. This study evaluated ECTemp™, a heart-rate-based extended Kalman Filter CT...were lower than heart-rate-based models analyzed in previous studies. As such, ECTemp™ demonstrates strong potential for estimating circadian CT...control of heat transfer from the core to the extremities [11]. As such, heart rate plays a pivotal role in thermoregulation as a primary

  15. Experimental and theoretical study of the energy loss of C and O in Zn

    Energy Technology Data Exchange (ETDEWEB)

    Cantero, E. D.; Lantschner, G. H.; Arista, N. R. [Centro Atomico Bariloche and Instituto Balseiro, Comision Nacional de Energia Atomica, 8400 San Carlos de Bariloche (Argentina); Montanari, C. C.; Miraglia, J. E. [Instituto de Astronomia y Fisica del Espacio (CONICET-UBA), Buenos Aires (Argentina); Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires (Argentina); Behar, M.; Fadanelli, R. C. [Instituto de Fisica, Universidade Federal do Rio Grande do Sul, Avenida Bento Goncalves 9500, Porto Alegre-RS (Brazil)

    2011-07-15

    We present a combined experimental-theoretical study of the energy loss of C and O ions in Zn in the energy range 50-1000 keV/amu. This contribution has a double purpose, experimental and theoretical. On the experimental side, we present stopping power measurements that fill a gap in the literature for these projectile-target combinations and cover an extended energy range, including the stopping maximum. On the theoretical side, we make a quantitative test on the applicability of various theoretical approaches to calculate the energy loss of heavy swift ions in solids. The description is performed using different models for valence and inner-shell electrons: a nonperturbative scattering calculation based on the transport cross section formalism to describe the Zn valence electron contribution, and two different models for the inner-shell contribution: the shellwise local plasma approximation (SLPA) and the convolution approximation for swift particles (CasP). The experimental results indicate that C is the limit for the applicability of the SLPA approach, which previously was successfully applied to projectiles from H to B. We find that this model clearly overestimates the stopping data for O ions. The origin of these discrepancies is related to the perturbative approximation involved in the SLPA. This shortcoming has been solved by using the nonperturbative CasP results to describe the inner-shell contribution, which yields a very good agreement with the experiments for both C and O ions.

  16. Representing general theoretical concepts in structural equation models: The role of composite variables

    Science.gov (United States)

    Grace, J.B.; Bollen, K.A.

    2008-01-01

Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables is often of interest. © Springer Science+Business Media, LLC 2007.

  17. Perspectives of experimental and theoretical studies of self-organized dust structures in complex plasmas under microgravity conditions

    International Nuclear Information System (INIS)

    Tsytovich, V N

    2015-01-01

We review research aimed at understanding the phenomena occurring in a complex plasma under microgravity conditions. Some aspects of the work already performed are considered that have not previously been given sufficient attention but which are potentially crucial for future work. These aspects, in particular, include the observation of compact dust structures that are estimated to be capable of confining all components of a dust plasma in a bounded spatial volume; experimental evidence of the nonlinear screening of dust particles; and experimental evidence of the excitation of collective electric fields. In theoretical terms, novel collective attraction processes between like-charged dust particles are discussed and all schemes of the shadowy attraction between dust particles used earlier, including in attempts to interpret observations, are reviewed and evaluated. Dust structures are considered from the standpoint of the current self-organization theory. It is emphasized that phase transitions between states of self-organized systems differ significantly from those in homogeneous states and that the phase diagrams should be constructed in terms of the parameters of a self-organized structure and cannot be constructed in terms of the temperature and density or similar parameters of homogeneous structures. Using the existing theoretical approaches to modeling self-organized structures in dust plasmas, the parameter distribution of a structure is recalculated for a simpler model that includes the quasineutrality condition and neglects diffusion. These calculations indicate that under microgravity conditions, any self-organized structure can contain a limited number of dust particles and is finite in size. The maximum possible number of particles in a structure determines the characteristic inter-grain distance in dust crystals that can be created under microgravity conditions. Crystallization criteria for the structures are examined and the quasispherical

  18. CLUSTER DEVELOPMENT OF ECONOMY OF REGION: THEORETICAL OPPORTUNITIES AND PRACTICAL EXPERIENCE

    Directory of Open Access Journals (Sweden)

    O.A. Romanova

    2007-12-01

Full Text Available Theoretical approaches to the formation of industrial clusters in regions of the Russian Federation are considered, and on this basis a methodological scheme for a cluster-creation project is offered. Using the example of the high-tech cluster “Titanium Valley”, created in Sverdlovsk oblast, the basic elements of its formation are revealed: a substantiation of the use of the cluster form of business organization, an assessment of the preconditions for its creation, a description of the cluster's purposes, problems and structure, the mechanism of management and the stages of realization of the cluster-creation project, and measures of state support.

  19. Theoretical particle physics. Progress report, June 1, 1981-April 30, 1982

    International Nuclear Information System (INIS)

    Hendry, A.W.; Lichtenberg, D.B.; Weingarten, D.H.

    1982-05-01

In the past year the group has worked on a substantial number of problems in elementary particle theory. Possible patterns of symmetry breaking in the SO(10) grand-unified theory were examined, pion-nucleon scattering was analyzed to reveal the existence of many high-spin resonances, the quark model was applied with relativistic kinematics to the study of mesons, baryons, and glueballs, the strength of the quark-gluon coupling constant was estimated, and the effects of wave-function mixing in baryons were calculated. On the more theoretical side, Monte Carlo calculations were done in lattice theories, including φ⁴ theories, gauge field theories, quantum gravity, and the Ising model

  20. Fixation of theoretical ambiguities in the improved fits to $xF_{3}$ CCFR data at the next-to-next-to-leading order and beyond

    CERN Document Server

    Kataev, A L; Sidorov, A V

    2003-01-01

Using new theoretical information on the NNLO and N$^3$LO perturbative QCD corrections to renormalization-group quantities of odd $xF_3$ Mellin moments, we perform the reanalysis of the CCFR'97 data for the $xF_3$ structure function. The fits were done without and with twist-4 power-suppressed terms. Theoretical questions of the applicability of the renormalon-inspired large-$\beta_0$ approximation for estimating NNLO and N$^3$LO terms in the coefficient functions of odd $xF_3$ moments and even non-singlet moments of $F_2$ are considered. The comparison with [1/1] Padé estimates is presented. The small-$x$ behaviour of the phenomenological model for $xF_3$ is compared with available theoretical predictions. The $x$-shape of the twist-4 contributions is determined. Indications of oscillating-type behaviour of $h(x)$ are obtained from more detailed NNLO fits when only statistical uncertainties are taken into account. The scale-dependent uncertainties of $\alpha_s(M_Z)$ are analyzed. The obtained NNLO and approximate ...

  1. Estimation of Snow Parameters from Dual-Wavelength Airborne Radar

    Science.gov (United States)

    Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew

    1997-01-01

Estimation of snow characteristics from airborne radar measurements would complement in-situ measurements. While in-situ data provide more detailed information than radar, they are limited in their space-time sampling. In the absence of significant cloud water contents, dual-wavelength radar data can be used to estimate 2 parameters of a drop size distribution if the snow density is assumed. To estimate, rather than assume, a snow density is difficult, however, and represents a major limitation in the radar retrieval. There are a number of ways that this problem can be investigated: direct comparisons with in-situ measurements, examination of the large scale characteristics of the retrievals and their comparison to cloud model outputs, use of LDR measurements, and comparisons to the theoretical results of Passarelli (1978) and others. In this paper we address the first approach and, in part, the second.

  2. Merging expert and empirical data for rare event frequency estimation: Pool homogenisation for empirical Bayes models

    International Nuclear Information System (INIS)

    Quigley, John; Hardman, Gavin; Bedford, Tim; Walls, Lesley

    2011-01-01

Empirical Bayes provides one approach to estimating the frequency of rare events as a weighted average of the frequencies of an event and a pool of events. The pool will draw upon, for example, events with similar precursors. The higher the degree of homogeneity of the pool, the more accurate the Empirical Bayes estimator. We propose and evaluate a new method using homogenisation factors under the assumption that events are generated from a Homogeneous Poisson Process. The homogenisation factors are scaling constants, which can be elicited through structured expert judgement and used to align the frequencies of different events, hence homogenising the pool. The estimation error relative to the homogeneity of the pool is examined theoretically, indicating that reduced error is associated with larger pool homogeneity. The effects of misspecified expert assessments of the homogenisation factors are examined theoretically and through simulation experiments. Our results show that the proposed Empirical Bayes method using homogenisation factors is robust under different degrees of misspecification.
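
    The shrinkage idea in this record can be sketched numerically. Everything below is illustrative: the counts, exposures, homogenisation factors and the shrinkage weight are invented, and the weight formula is a generic placeholder rather than the paper's estimator:

```python
import numpy as np

# Hypothetical pool of rare events: observed counts n_i over
# exposure times t_i, assumed to follow homogeneous Poisson processes.
counts = np.array([2, 0, 1, 3])
exposure = np.array([10.0, 8.0, 12.0, 9.0])

# Homogenisation factors h_i (elicited from experts in the paper's
# scheme): scaling constants that map each event's rate onto the
# scale of the target event, making the pool more homogeneous.
h = np.array([1.0, 0.5, 2.0, 1.5])

# Pool rate after homogenisation: treat h_i * t_i as effective exposure.
pool_rate = counts.sum() / (h * exposure).sum()

# Empirical Bayes estimate for event 0: weighted average of its own
# maximum likelihood rate and the homogenised pool rate.
own_rate = counts[0] / exposure[0]
w = exposure[0] / (exposure[0] + 20.0)  # assumed shrinkage weight
eb_rate = w * own_rate + (1 - w) * pool_rate
```

The estimate for the sparse target event is pulled toward the pool rate, which is the accuracy benefit the abstract attributes to a well-homogenised pool.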

  3. Theoretical physics division

    International Nuclear Information System (INIS)

    Anon.

    1980-01-01

Research activities of the theoretical physics division for 1979 are described. Short summaries are given of specific research work in the following fields: nuclear structure, nuclear reactions, intermediate energy physics, and elementary particles

  4. Ocean subsurface particulate backscatter estimation from CALIPSO spaceborne lidar measurements

    Science.gov (United States)

    Chen, Peng; Pan, Delu; Wang, Tianyu; Mao, Zhihua

    2017-10-01

    A method for ocean subsurface particulate backscatter estimation from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite was demonstrated. The effects of the CALIOP receiver's transient response on the attenuated backscatter profile were first removed. The two-way transmittance of the overlying atmosphere was then estimated as the ratio of the measured ocean surface attenuated backscatter to the theoretical value computed from wind driven wave slope variance. Finally, particulate backscatter was estimated from the depolarization ratio as the ratio of the column-integrated cross-polarized and co-polarized channels. Statistical results show that the derived particulate backscatter by the method based on CALIOP data agree reasonably well with chlorophyll-a concentration using MODIS data. It indicates a potential use of space-borne lidar to estimate global primary productivity and particulate carbon stock.

  5. A Theoretical Assessment of the Formation of IT clusters in Kazakhstan: Approaches and Positive Effects

    OpenAIRE

    Anel A. Kireyeva

    2016-01-01

Abstract The aim of this research is to develop new theoretical approaches to the formation of IT clusters in order to strengthen the trend of innovative industrialization and the competitiveness of the country. In keeping with the previous literature, this study is distinguished by the novelty of the problem concerning the formation of IT clusters, which can become a driving force of transformation through interaction, improved efficiency and the introduction of advanced technology. In this research,...

  6. Robust recognition via information theoretic learning

    CERN Document Server

    He, Ran; Yuan, Xiaotong; Wang, Liang

    2014-01-01

This Springer Brief represents a comprehensive review of information theoretic methods for robust recognition. A variety of information theoretic methods have been proffered in the past decade, in a large variety of computer vision applications; this work brings them together and attempts to impart the theory, optimization and usage of information entropy. The authors resort to a new information theoretic concept, correntropy, as a robust measure and apply it to solve robust face recognition and object recognition problems. For computational efficiency, the brief introduces the additive and multip

  7. Inverse design of an isotropic suspended Kirchhoff rod: theoretical and numerical results on the uniqueness of the natural shape

    Science.gov (United States)

    Bertails-Descoubes, Florence; Derouet-Jourdan, Alexandre; Romero, Victor; Lazarus, Arnaud

    2018-04-01

    Solving the equations for Kirchhoff elastic rods has been widely explored for decades in mathematics, physics and computer science, with significant applications in the modelling of thin flexible structures such as DNA, hair or climbing plants. As demonstrated in previous experimental and theoretical studies, the natural curvature plays an important role in the equilibrium shape of a Kirchhoff rod, even in the simple case where the rod is isotropic and suspended under gravity. In this paper, we investigate the reverse problem: can we characterize the natural curvature of a suspended isotropic rod, given an equilibrium curve? We prove that although there exists an infinite number of natural curvatures that are compatible with the prescribed equilibrium, they are all equivalent in the sense that they correspond to a unique natural shape for the rod. This natural shape can be computed efficiently by solving in sequence three linear initial value problems, starting from any framing of the input curve. We provide several numerical experiments to illustrate this uniqueness result, and finally discuss its potential impact on non-invasive parameter estimation and inverse design of thin elastic rods.

  8. Direction-of-Arrival Estimation with Coarray ESPRIT for Coprime Array.

    Science.gov (United States)

    Zhou, Chengwei; Zhou, Jinfang

    2017-08-03

A coprime array is capable of achieving more degrees-of-freedom for direction-of-arrival (DOA) estimation than a uniform linear array when utilizing the same number of sensors. However, existing algorithms exploiting coprime arrays usually adopt predefined spatial sampling grids for optimization problem design or include a spectrum peak search process for DOA estimation, resulting in a trade-off between estimation performance and computational complexity. To address this problem, we introduce the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) to the coprime coarray domain, and propose a novel coarray ESPRIT-based DOA estimation algorithm to efficiently retrieve the off-grid DOAs. Specifically, the coprime coarray statistics are derived according to the received signals from a coprime array to ensure the degrees-of-freedom (DOF) superiority, where a pair of shift-invariant uniform linear subarrays is extracted. The rotational invariance of the signal subspaces corresponding to the underlying subarrays is then investigated based on the coprime coarray covariance matrix, and the incorporation of ESPRIT in the coarray domain makes it feasible to formulate the closed-form solution for DOA estimation. Theoretical analyses and simulation results verify the efficiency and the effectiveness of the proposed DOA estimation algorithm.
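
    The rotational-invariance step that the record moves into the coarray domain can be illustrated on an ordinary uniform linear array. This is textbook ESPRIT only, not the paper's coprime-coarray construction; the array size, spacing, angles and noise level are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Uniform linear array: M sensors, half-wavelength spacing,
# two narrowband sources at assumed angles of -20 and 15 degrees.
M, snapshots = 8, 200
true_doas = np.deg2rad([-20.0, 15.0])
d = 0.5  # element spacing in wavelengths

# Steering matrix and noisy snapshots (high SNR for clarity).
A = np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(true_doas))
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))
X = A @ S + 0.01 * N

# Signal subspace: dominant eigenvectors of the sample covariance.
R = X @ X.conj().T / snapshots
eigvals, eigvecs = np.linalg.eigh(R)
Es = eigvecs[:, -2:]  # 2 sources -> 2 dominant eigenvectors

# Rotational invariance between the two shifted subarrays:
# Es[1:] = Es[:-1] @ Phi, and the eigenvalues of Phi carry the DOAs.
Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
phases = np.angle(np.linalg.eigvals(Phi))
est = np.rad2deg(np.arcsin(phases / (2 * np.pi * d)))
print(sorted(est))  # should be close to [-20, 15]
```

Because the angles come in closed form from the eigenvalues of `Phi`, no spectrum peak search over a grid is needed, which is the computational advantage the abstract highlights.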

  9. Estimation of bone mineral density by digital X-ray radiogrammetry: theoretical background and clinical testing

    DEFF Research Database (Denmark)

    Rosholm, A; Hyldstrup, L; Backsgaard, L

    2002-01-01

A new automated radiogrammetric method to estimate bone mineral density (BMD) from a single radiograph of the hand and forearm is described. Five regions of interest in radius, ulna and the three middle metacarpal bones are identified and approximately 1800 geometrical measurements from these bones......-ray absorptiometry (r = 0.86, p Relative to this age-related loss, the reported short...... sites and a precision that potentially allows for relatively short observation intervals.

  10. Fast evaluation of theoretical uncertainties with Sherpa and MCgrid

    Energy Technology Data Exchange (ETDEWEB)

    Bothmann, Enrico; Schumann, Steffen [II. Physikalisches Institut, Georg-August-Universitaet Goettingen (Germany); Schoenherr, Marek [Physik-Institut, Universitaet Zuerich (Switzerland)

    2016-07-01

The determination of theoretical error estimates and PDF/α_s fits requires fast evaluations of cross sections for varied QCD input parameters. These include PDFs, the strong coupling constant α_s and the renormalization and factorization scales. Beyond leading order QCD, a full dedicated calculation for each set of parameters is often too time-consuming, certainly when performing PDF fits. In this talk we discuss two methods to overcome this issue for any QCD NLO calculation: the novel event-reweighting feature in Sherpa and the automated generation of interpolation grids using the recently introduced MCgrid interface. For the Sherpa event-reweighting we present the newly added support for the all-order PDF dependencies of parton shower emissions. Building on that we discuss the sensitivity of high-precision observables to those dependencies.

  11. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo

    2016-01-01

A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo...... radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipsis......

  12. Demystifying Theoretical Sampling in Grounded Theory Research

    Directory of Open Access Journals (Sweden)

Jenna Breckenridge BSc (Hons), Ph.D. Candidate

    2009-06-01

    Full Text Available Theoretical sampling is a central tenet of classic grounded theory and is essential to the development and refinement of a theory that is ‘grounded’ in data. While many authors appear to share concurrent definitions of theoretical sampling, the ways in which the process is actually executed remain largely elusive and inconsistent. As such, employing and describing the theoretical sampling process can present a particular challenge to novice researchers embarking upon their first grounded theory study. This article has been written in response to the challenges faced by the first author whilst writing a grounded theory proposal. It is intended to clarify theoretical sampling for new grounded theory researchers, offering some insight into the practicalities of selecting and employing a theoretical sampling strategy. It demonstrates that the credibility of a theory cannot be dissociated from the process by which it has been generated and seeks to encourage and challenge researchers to approach theoretical sampling in a way that is apposite to the core principles of the classic grounded theory methodology.

  13. Theoretical developments in SUSY

    Energy Technology Data Exchange (ETDEWEB)

    Shifman, M. [University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2009-01-15

    I am proud that I was personally acquainted with Julius Wess. We first met in 1999 when I was working on the Yuri Golfand Memorial Volume (The Many Faces of the Superworld, World Scientific, Singapore, 2000). I invited him to contribute, and he accepted this invitation with enthusiasm. After that, we met many times, mostly at various conferences in Germany and elsewhere. I was lucky to discuss with Julius questions of theoretical physics, and hear his recollections on how supersymmetry was born. In physics Julius was a visionary, who paved the way to generations of followers. In everyday life he was a kind and modest person, always ready to extend a helping hand to people who were in need of his help. I remember him telling me how concerned he was about the fate of theoretical physicists in Eastern Europe after the demise of communism. His ties with Israeli physicists bore a special character. I am honored by the opportunity to contribute an article to the Julius Wess Memorial Volume. I review theoretical developments of the recent years in non-perturbative supersymmetry. (orig.)

  14. Theoretical Developments in SUSY

    Science.gov (United States)

    Shifman, M.

    2009-01-01

    I am proud that I was personally acquainted with Julius Wess. We first met in 1999 when I was working on the Yuri Golfand Memorial Volume (The Many Faces of the Superworld, World Scientific, Singapore, 2000). I invited him to contribute, and he accepted this invitation with enthusiasm. After that, we met many times, mostly at various conferences in Germany and elsewhere. I was lucky to discuss with Julius questions of theoretical physics, and hear his recollections on how supersymmetry was born. In physics Julius was a visionary, who paved the way to generations of followers. In everyday life he was a kind and modest person, always ready to extend a helping hand to people who were in need of his help. I remember him telling me how concerned he was about the fate of theoretical physicists in Eastern Europe after the demise of communism. His ties with Israeli physicists bore a special character. I am honored by the opportunity to contribute an article to the Julius Wess Memorial Volume. I will review theoretical developments of the recent years in non-perturbative supersymmetry.

  15. Theoretical developments in SUSY

    International Nuclear Information System (INIS)

    Shifman, M.

    2009-01-01

    I am proud that I was personally acquainted with Julius Wess. We first met in 1999 when I was working on the Yuri Golfand Memorial Volume (The Many Faces of the Superworld, World Scientific, Singapore, 2000). I invited him to contribute, and he accepted this invitation with enthusiasm. After that, we met many times, mostly at various conferences in Germany and elsewhere. I was lucky to discuss with Julius questions of theoretical physics, and hear his recollections on how supersymmetry was born. In physics Julius was a visionary, who paved the way to generations of followers. In everyday life he was a kind and modest person, always ready to extend a helping hand to people who were in need of his help. I remember him telling me how concerned he was about the fate of theoretical physicists in Eastern Europe after the demise of communism. His ties with Israeli physicists bore a special character. I am honored by the opportunity to contribute an article to the Julius Wess Memorial Volume. I review theoretical developments of the recent years in non-perturbative supersymmetry. (orig.)

  16. Estimations of parameters in Pareto reliability model in the presence of masked data

    International Nuclear Information System (INIS)

    Sarhan, Ammar M.

    2003-01-01

Estimations of parameters included in the individual distributions of the lifetimes of system components in a series system are considered in this paper based on masked system life test data. We consider a series system of two independent components, each having a Pareto-distributed lifetime. The maximum likelihood and Bayes estimators for the parameters and the values of the reliability of the system's components at a specific time are obtained. Symmetrical triangular prior distributions are assumed for the unknown parameters to be estimated in obtaining the Bayes estimators of these parameters. Large simulation studies are done in order to: (i) explain how one can utilize the theoretical results obtained; (ii) compare the maximum likelihood and Bayes estimates obtained of the underlying parameters; and (iii) study the influence of the masking level and the sample size on the accuracy of the estimates obtained
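
    The maximum likelihood side of this record can be illustrated for a single Pareto-lifetime component. The two-component series system with masked cause-of-failure data requires the paper's full likelihood; the sketch below only shows the basic Pareto MLE and reliability evaluation, with an assumed shape parameter and a known scale:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate Pareto(alpha, xm) lifetimes via inverse-CDF sampling:
# F(t) = 1 - (xm / t)**alpha  =>  t = xm * (1 - U)**(-1/alpha).
alpha_true, xm = 3.0, 1.0  # assumed true shape, known scale
t = xm * (1.0 - rng.random(5000)) ** (-1.0 / alpha_true)

# MLE of the shape with known scale: alpha_hat = n / sum(log(t / xm)).
alpha_hat = t.size / np.log(t / xm).sum()

# Component reliability at time t0: R(t0) = (xm / t0)**alpha_hat.
t0 = 2.0
r_hat = (xm / t0) ** alpha_hat
```

With masked data, each term of the likelihood would instead sum over the components that could have caused the observed system failure, which is the complication the paper's estimators address.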

  17. Supplement to final report for "Theoretical studies in tokamaks"

    International Nuclear Information System (INIS)

    McBride, J.B.

    1992-07-01

    In a previous report we summarized the results obtained for Task I of Contract Number AC03-88ER53270 for the two-year period of performance of the work supported by the contract. That report constituted the final report for Task I. Since then, the contract was extended and the funding for Task I was incremented with $35K of new funds. The purpose of incrementing the contract was to begin a collaboration with the PBX-M group at Princeton Plasma Physics Laboratory (PPPL) in the area of ion Bernstein wave (IBW) effects in the PBX-M experiment. This report summarizes the initial results of that collaboration obtained under the incremental continuation funding. In the intervening period, experimental and theoretical program directions changed, so no further funds were committed to Task I.

  18. Game theoretic approaches for spectrum redistribution

    CERN Document Server

    Wu, Fan

    2014-01-01

    This brief examines issues of spectrum allocation for the limited resources of radio spectrum. It uses a game-theoretic perspective, in which the nodes in the wireless network are rational and always pursue their own objectives. It provides a systematic study of the approaches that can guarantee the system's convergence at an equilibrium state, in which the system performance is optimal or sub-optimal. The author provides a short tutorial on game theory, explains game-theoretic channel allocation in clique and in multi-hop wireless networks and explores challenges in designing game-theoretic m

  19. Distributed Dynamic State Estimation with Extended Kalman Filter

    Energy Technology Data Exchange (ETDEWEB)

    Du, Pengwei; Huang, Zhenyu; Sun, Yannan; Diao, Ruisheng; Kalsi, Karanjit; Anderson, Kevin K.; Li, Yulan; Lee, Barry

    2011-08-04

    Increasing complexity associated with large-scale renewable resources and novel smart-grid technologies necessitates real-time monitoring and control. Our previous work applied the extended Kalman filter (EKF) with phasor measurement unit (PMU) data for dynamic state estimation. However, its high computational complexity creates significant challenges for real-time applications. In this paper, the problem of distributed dynamic state estimation is investigated. A domain decomposition method is proposed to utilize decentralized computing resources. The performance of distributed dynamic state estimation is tested on a 16-machine, 68-bus test system.
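
    One predict-update cycle of a generic extended Kalman filter can be sketched as follows (a textbook EKF on a toy scalar system, not the paper's PMU-driven power-system estimator):

    ```python
    import numpy as np

    def ekf_step(x, P, z, f, F, h, H, Q, R):
        # Predict: propagate the state estimate and covariance through the
        # process model f, using its Jacobian F evaluated at x.
        x_pred = f(x)
        F_k = F(x)
        P_pred = F_k @ P @ F_k.T + Q
        # Update: correct with the new measurement z via the Kalman gain.
        H_k = H(x_pred)
        S = H_k @ P_pred @ H_k.T + R
        K = P_pred @ H_k.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
        return x_new, P_new

    # Track a constant scalar state observed directly (linear special case,
    # so the Jacobians are just the 1x1 identity).
    x, P = np.array([0.0]), np.array([[1.0]])
    ident = lambda v: v
    J = lambda v: np.array([[1.0]])
    Q, R = np.array([[0.01]]), np.array([[1.0]])
    for _ in range(50):
        x, P = ekf_step(x, P, np.array([5.0]), ident, J, ident, J, Q, R)
    # x[0] has converged close to the measured value 5.0
    ```

    Distributed variants, as studied in the paper, run such filters on subsystems of a decomposed network and exchange boundary information.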

  20. A Theoretical Approach

    African Journals Online (AJOL)

    NICO

    L-rhamnose and L-fucose: A Theoretical Approach ... L-rhamnose and L-fucose, by means of the Monte Carlo conformational search method. The energy of the conformers ..... which indicates an increased probability for the occurrence of.

  1. Cost estimating relationships for nuclear power plant operation and maintenance

    International Nuclear Information System (INIS)

    Bowers, H.I.; Fuller, L.C.; Myers, M.L.

    1987-11-01

    Revised cost estimating relationships for 1987 are presented for estimating annual nonfuel operation and maintenance (O and M) costs for light-water reactor (LWR) nuclear power plants, updating guidelines published in 1982. These cost estimating relationships are intended for long-range planning and for evaluations of the economics of nuclear energy for electric power generation. A listing of a computer program, LWROM, implementing the cost estimating relationships and written in advanced BASIC for IBM personal computers, is included.

  2. Development of Mathematical Model and Analysis Code for Estimating Drop Behavior of the Control Rod Assembly in the Sodium Cooled Fast Reactor

    International Nuclear Information System (INIS)

    Oh, Se-Hong; Kang, SeungHoon; Choi, Choengryul; Yoon, Kyung Ho; Cheon, Jin Sik

    2016-01-01

    On receiving the scram signal, the control rod assemblies are released to fall into the reactor core under their own weight. Thus the drop time and falling velocity of the control rod assembly must be estimated for the safety evaluation. There are three typical ways to estimate the drop behavior of the control rod assembly in scram action: experimental, numerical and theoretical methods. But the experimental and numerical (CFD) methods require a lot of cost and time, and are therefore difficult to apply in the initial design process. In this study, a mathematical model and a theoretical analysis code have been developed in order to estimate the drop behavior of the control rod assembly and provide the underlying data for design optimization. A simplified control rod assembly model is considered to minimize the uncertainty in the development process, and the hydraulic circuit analysis technique is adopted to evaluate the internal/external flow distribution of the control rod assembly. Finally, the theoretical analysis code (named HEXCON) has been developed based on the mathematical model. To verify the reliability of the developed code, a CFD analysis was conducted, a calculation using the developed analysis code was carried out under the same conditions, and both results were compared.

  3. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and on the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimate of the time delay. The method has been validated in experiments and provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to a windowing effect in the frequency domain, which weights the significant frequencies.
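
    The underlying time-delay estimate is the lag that maximizes the cross-correlation of the two sensor signals. A plain (unwindowed) version can be sketched as below, with the paper's contribution being the maximum-likelihood weighting applied in the frequency domain; the sampling rate and signals here are synthetic stand-ins:

    ```python
    import numpy as np

    def estimate_delay(x, y, fs):
        # Lag (in seconds) at which y best aligns with x, taken from the
        # peak of the full cross-correlation sequence.
        corr = np.correlate(y, x, mode="full")
        lag_samples = int(np.argmax(corr)) - (len(x) - 1)
        return lag_samples / fs

    fs = 1000.0                     # assumed sampling rate, Hz
    rng = np.random.default_rng(0)
    x = rng.standard_normal(1000)   # broadband "leak noise" stand-in
    y = np.roll(x, 25)              # second sensor sees it 25 samples later
    delay = estimate_delay(x, y, fs)  # 0.025 s
    ```

    With two sensors straddling the leak, the difference of arrival times together with the wave speed then locates the leak along the pipe.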

  4. Research in theoretical radiobiology and radiological physics: Final technical report, May 1, 1986-December 31, 1987

    International Nuclear Information System (INIS)

    Rossi, H.H.

    1987-12-01

    The work carried out under this Grant included continuation of experimental research in radiation biophysics that was previously initiated, as well as extensive theoretical efforts primarily devoted to the further development of the Theory of Dual Radiation Action. Documents for international and national organizations were prepared or evaluated. Attendance at a number of international and national meetings and scientific correspondence were also in part supported by this grant. 14 refs

  5. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2015-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non...... to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super...... interpolation increases the estimation precision....

  6. Interactions Between Indirect DC-Voltage Estimation and Circulating Current Controllers of MMC-Based HVDC Transmission Systems

    DEFF Research Database (Denmark)

    Wickramasinghe, Harith R.; Konstantinou, Georgios; Pou, Josep

    2018-01-01

    Estimation-based indirect dc-voltage control in MMCs interacts with circulating current control methods. This paper proposes an estimation-based indirect dc-voltage control method for MMC-HVDC systems and analyzes its performance compared to alternative estimations. The interactions between......-state and transient performance is demonstrated using a benchmark MMC-HVDC transmission system, implemented in a real-time digital simulator. The results verify the theoretical evaluations and illustrate the operation and performance of the proposed indirect dc-voltage control method....

  7. The theoretical risk of non-melanoma skin cancer from environmental radon exposure

    International Nuclear Information System (INIS)

    Eatough, J.P.; Henshaw, D.L.

    1995-01-01

    The skin cancer risk theoretically attributable to radon associated alpha particle radiation is calculated on the basis of recent dosimetry, and published radiation risk factors. The results suggest that of the order of 2% (range 1%-10%) of non-melanoma skin cancers in the UK may be associated with radon exposure at the average UK radon concentration of 20 Bq m⁻³. The range quoted is due solely to uncertainties in the estimate of the radon dose to the basal layer of the skin, and additional sources of uncertainty are discussed. The estimate is dependent on the assumption that the target cells for radiation induced skin cancer lie in the basal layer of the epidermis, and that irradiation of the dermis is not necessary for skin cancer induction. Due to the effect of ultraviolet radiation on the risk factors for ionising radiation, ultraviolet radiation exposure must also be involved in the induction of the majority of any skin cancer cases linked to radon exposure. (author)

  8. The Channel Estimation and Modeling in High Altitude Platform Station Wireless Communication Dynamic Network

    Directory of Open Access Journals (Sweden)

    Xiaoyang Liu

    2017-01-01

    In order to analyze the channel estimation performance of a near-space high altitude platform station (HAPS) in a wireless communication system, the structure and formation of HAPS are studied in this paper. The traditional Least Squares (LS) channel estimation method and the Singular Value Decomposition-Linear Minimum Mean-Squared Error (SVD-LMMSE) channel estimation method are compared and investigated, and a novel channel estimation method and model are proposed. The channel estimation performance of HAPS is studied in depth. The simulation and theoretical analysis results show that the performance of the proposed method is better than that of the traditional methods: a lower Bit Error Rate (BER) and a higher Signal-to-Noise Ratio (SNR) can be obtained with the proposed method than with the LS and SVD-LMMSE methods.
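
    The baseline LS estimator referred to above is simple to state: with known pilot symbols, divide the received samples by the transmitted ones. A minimal sketch for a hypothetical flat-fading, single-coefficient channel (the SVD-LMMSE variant additionally weights the estimate by channel statistics):

    ```python
    import numpy as np

    def ls_channel_estimate(x_pilot, y_pilot):
        # Per-symbol least-squares estimate for y = h * x + n: h_hat = y / x.
        return y_pilot / x_pilot

    rng = np.random.default_rng(1)
    h_true = 0.8 - 0.3j                                        # hypothetical channel
    x = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=complex)  # BPSK pilots
    noise = 0.01 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
    y = h_true * x + noise
    h_hat = ls_channel_estimate(x, y).mean()  # averaging pilots reduces noise
    ```

    Averaging the per-pilot estimates shrinks the noise contribution by roughly the square root of the number of pilots.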

  9. Are previous episodes of bacterial vaginosis a predictor for vaginal symptoms in breast cancer patients treated with aromatase inhibitors?

    DEFF Research Database (Denmark)

    Gade, Malene R; Goukasian, Irina; Panduro, Nathalie

    2018-01-01

    Objective To estimate the prevalence of vaginal symptoms in postmenopausal women with breast cancer exposed to aromatase inhibitors, and to investigate if the risk of vaginal symptoms is associated with previous episodes of bacterial vaginosis. Methods Patients from Rigshospitalet and Herlev...... University Hospital, Denmark, were identified through the register of the Danish Breast Cancer Cooperative Group, and 78 patients participated in the study. Semiquantitative questionnaires and telephone interviews were used to assess the prevalence of vaginal symptoms and previous episode(s) of bacterial vaginosis....... Multivariable logistic regression models were used to assess the association between vaginal symptoms and previous episodes of bacterial vaginosis. Results Moderate to severe symptoms due to vaginal itching/irritation were experienced by 6.4% (95% CI: 2.8-14.1%), vaginal dryness by 28.4% (95% CI: 19...

  10. A theoretical assessment of antioxidant capacity of flavonoids by means of local hyper–softness

    Directory of Open Access Journals (Sweden)

    Claudia Sandoval-Yañez

    2018-05-01

    A theoretical reactivity descriptor to estimate local reactivity on molecules was tested to assess the antioxidant capability of some flavonoids. It was validated by comparison with experimental precedents already published by Firuzi et al. (2005). The aforementioned reactivity index is called local hyper-softness (LHS). This parameter was applied to the HO- substituent groups on the same set of flavonoids within each subclassification: flavones (apigenin and baicalein), flavonols (fisetin, galangin, 3-OH flavone, kaempferol, myricetin, and quercetin), flavanones (hesperetin, naringenin, taxifolin) and isoflavones (daidzein and genistein). Experimental values of both techniques, ferric reducing antioxidant power (FRAP) and anodic oxidation potential (Eap), were retrieved from Firuzi et al. (2005) with the purpose of validating the calculated LHS values. Except for myricetin, the LHS values of all these compounds matched the ordering obtained experimentally by means of Eap and FRAP in Firuzi et al. (2005). Our results revealed that LHS is a suitable theoretical parameter for gaining insight into the antioxidant capacity of these compounds; in particular, LHS explains the experimentally obtained FRAP and Eap values in terms of the reactivity of the HO- substituent groups belonging to these molecules, computed theoretically without including experimental parameters. Keywords: Flavonoids, Antioxidant capacity, Ferric reducing antioxidant power, Anodic oxidation potential, Local hypersoftness

  11. Revealing life-history traits by contrasting genetic estimations with predictions of effective population size.

    Science.gov (United States)

    Greenbaum, Gili; Renan, Sharon; Templeton, Alan R; Bouskila, Amos; Saltz, David; Rubenstein, Daniel I; Bar-David, Shirli

    2017-12-22

    Effective population size, a central concept in conservation biology, is now routinely estimated from genetic surveys and can also be theoretically predicted from demographic, life-history, and mating-system data. By evaluating the consistency of theoretical predictions with the empirically estimated effective size, insights can be gained regarding life-history characteristics and the relative impact of different life-history traits on genetic drift. These insights can be used to design and inform management strategies aimed at increasing effective population size. We demonstrated this approach by addressing the conservation of a reintroduced population of Asiatic wild ass (Equus hemionus). We estimated the variance effective size (Nev) from genetic data (Nev = 24.3) and formulated predictions for the impacts on Nev of demography, polygyny, female variance in lifetime reproductive success (RS), and heritability of female RS. By contrasting the genetic estimation with theoretical predictions, we found that polygyny was the strongest factor affecting genetic drift, because only when accounting for polygyny were predictions consistent with the genetically measured Nev. The comparison of effective-size estimation and predictions indicated that 10.6% of the males mated per generation when heritability of female RS was unaccounted for (polygyny responsible for an 81% decrease in Nev) and 19.5% mated when heritability of female RS was accounted for (polygyny responsible for a 67% decrease in Nev). Heritability of female RS also affected Nev; hf2 = 0.91 (heritability responsible for a 41% decrease in Nev). The low effective size is of concern, and we suggest that management actions focus on factors identified as strongly affecting Nev, namely, increasing the availability of artificial water sources to increase the number of dominant males contributing to the gene pool. This approach, evaluating life-history hypotheses in light of their impact on effective population size, and contrasting
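
    The qualitative effect of polygyny on effective size can be illustrated with Wright's classic unequal-sex-ratio formula (a textbook simplification with hypothetical numbers, not the variance-effective-size model of the study):

    ```python
    def effective_size(n_breeding_males, n_breeding_females):
        # Wright's formula for unequal numbers of breeding males and females:
        # Ne = 4 * Nm * Nf / (Nm + Nf).
        nm, nf = n_breeding_males, n_breeding_females
        return 4.0 * nm * nf / (nm + nf)

    # Hypothetical herd with 50 breeding females: compare all 50 males
    # breeding against only 5 dominant males breeding.
    ne_equal = effective_size(50, 50)       # 100.0
    ne_polygynous = effective_size(5, 50)   # about 18.2
    ```

    Even with the census size unchanged, concentrating matings in a few dominant males sharply depresses Ne, which is why the study's suggested management action targets the number of males contributing to the gene pool.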

  12. New, national bottom-up estimate for tree-based biological ...

    Science.gov (United States)

    Nitrogen is a limiting nutrient in many ecosystems, but is also a chief pollutant from human activity. Quantifying human impacts on the nitrogen cycle and investigating natural ecosystem nitrogen cycling both require an understanding of the magnitude of nitrogen inputs from biological nitrogen fixation (BNF). A bottom-up approach to estimating BNF—scaling rates up from measurements to broader scales—is attractive because it is rooted in actual BNF measurements. However, bottom-up approaches have been hindered by scaling difficulties, and a recent top-down approach suggested that the previous bottom-up estimate was much too large. Here, we used a bottom-up approach for tree-based BNF, overcoming scaling difficulties with the systematic, immense (>70,000 N-fixing trees) Forest Inventory and Analysis (FIA) database. We employed two approaches to estimate species-specific BNF rates: published ecosystem-scale rates (kg N ha-1 yr-1) and published estimates of the percent of N derived from the atmosphere (%Ndfa) combined with FIA-derived growth rates. Species-specific rates can vary for a variety of reasons, so for each approach we examined how different assumptions influenced our results. Specifically, we allowed BNF rates to vary with stand age, N-fixer density, and canopy position (since N-fixation is known to require substantial light). Our estimates from this bottom-up technique are several orders of magnitude lower than previous estimates indicating

  13. Parameter identifiability and redundancy: theoretical considerations.

    Directory of Open Access Journals (Sweden)

    Mark P Little

    BACKGROUND: Models for complex biological systems may involve a large number of parameters. It may well be that some of these parameters cannot be derived from observed data via regression techniques. Such parameters are said to be unidentifiable, the remaining parameters being identifiable. Closely related to this idea is that of redundancy, that a set of parameters can be expressed in terms of some smaller set. Before data is analysed it is critical to determine which model parameters are identifiable or redundant to avoid ill-defined and poorly convergent regression. METHODOLOGY/PRINCIPAL FINDINGS: In this paper we outline general considerations on parameter identifiability, and introduce the notions of weak local identifiability and gradient weak local identifiability. These are based on local properties of the likelihood, in particular the rank of the Hessian matrix. We relate these to the notions of parameter identifiability and redundancy previously introduced by Rothenberg (Econometrica 39 (1971) 577-591) and Catchpole and Morgan (Biometrika 84 (1997) 187-196). Within the widely used exponential family, parameter irredundancy, local identifiability, gradient weak local identifiability and weak local identifiability are shown to be largely equivalent. We consider applications to a recently developed class of cancer models of Little and Wright (Math Biosciences 183 (2003) 111-134) and Little et al. (J Theoret Biol 254 (2008) 229-238) that generalize a large number of other recently used quasi-biological cancer models. CONCLUSIONS/SIGNIFICANCE: We have shown that the previously developed concepts of parameter local identifiability and redundancy are closely related to the apparently weaker properties of weak local identifiability and gradient weak local identifiability -- within the widely used exponential family these concepts largely coincide.

  14. Theoretical behaviorism meets embodied cognition : Two theoretical analyses of behavior

    NARCIS (Netherlands)

    Keijzer, F.A.

    2005-01-01

    This paper aims to do three things: First, to provide a review of John Staddon's book Adaptive dynamics: The theoretical analysis of behavior. Second, to compare Staddon's behaviorist view with current ideas on embodied cognition. Third, to use this comparison to explicate some outlines for a

  15. Doppler Centroid Estimation for Airborne SAR Supported by POS and DEM

    Directory of Open Access Journals (Sweden)

    CHENG Chunquan

    2015-05-01

    It is difficult to estimate the Doppler frequency and modulation rate for airborne SAR by using the traditional vector method, due to unstable flight and complex terrain. In this paper, the impacts of POS, DEM and their errors on airborne SAR Doppler parameters are first analyzed qualitatively. An innovative vector method based on the range-coplanarity equation is then presented to estimate the Doppler centroid, taking the POS and DEM as auxiliary data. The effectiveness of the proposed method is validated and analyzed via simulation experiments. The theoretical analysis and experimental results show that the method can be used to estimate the Doppler centroid with high accuracy, even in the cases of high relief, unstable flight, and large squint angles.

  16. Root-MUSIC Based Angle Estimation for MIMO Radar with Unknown Mutual Coupling

    Directory of Open Access Journals (Sweden)

    Jianfeng Li

    2014-01-01

    The direction of arrival (DOA) estimation problem for multiple-input multiple-output (MIMO) radar with unknown mutual coupling is studied, and an algorithm for DOA estimation based on root multiple signal classification (root-MUSIC) is proposed. Firstly, according to the Toeplitz structure of the mutual coupling matrix, output data of some specified sensors are selected to eliminate the influence of the mutual coupling. Then a reduced-dimension transformation is applied to lower the computational burden as well as to obtain a Vandermonde structure of the direction matrix. Finally, root-MUSIC can be adopted for the angle estimation. The angle estimation performance of the proposed algorithm is better than that of the ESPRIT-like (estimation of signal parameters via rotational invariance techniques) and MUSIC-like algorithms, and the proposed algorithm has lower complexity than both. The simulation results verify the effectiveness of the algorithm, and the theoretical estimation error of the algorithm is also derived.
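
    A minimal root-MUSIC sketch for a plain uniform linear array conveys the core idea (it omits the paper's mutual-coupling elimination and reduced-dimension MIMO steps; the array geometry and source angles below are hypothetical):

    ```python
    import numpy as np

    def root_music_doa(X, n_sources, d=0.5):
        # X: snapshots (sensors x snapshots); d: element spacing in wavelengths.
        M = X.shape[0]
        R = X @ X.conj().T / X.shape[1]        # sample covariance
        _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
        En = vecs[:, : M - n_sources]          # noise subspace
        C = En @ En.conj().T
        # Root-MUSIC polynomial coefficients: sums along the diagonals of C,
        # ordered from the highest power of z down to the lowest.
        coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
        roots = np.roots(coeffs)
        roots = roots[np.abs(roots) < 1.0]                     # inside unit circle
        roots = roots[np.argsort(-np.abs(roots))][:n_sources]  # nearest the circle
        return np.degrees(np.sort(np.arcsin(np.angle(roots) / (2 * np.pi * d))))

    # Two uncorrelated sources at -10 and 20 degrees on an 8-element array.
    rng = np.random.default_rng(2)
    M, N = 8, 200
    doas = np.radians([-10.0, 20.0])
    A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
    S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
    X = A @ S + 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
    est = root_music_doa(X, 2)  # close to [-10, 20]
    ```

    Replacing the spectral search of classical MUSIC with polynomial rooting is what gives root-MUSIC its lower complexity, which the paper exploits after its dimension reduction.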

  17. Evaluating the Theoretic Adequacy and Applied Potential of Computational Models of the Spacing Effect.

    Science.gov (United States)

    Walsh, Matthew M; Gluck, Kevin A; Gunzelmann, Glenn; Jastrzembski, Tiffany; Krusmark, Michael

    2018-03-02

    The spacing effect is among the most widely replicated empirical phenomena in the learning sciences, and its relevance to education and training is readily apparent. Yet successful applications of spacing effect research to education and training are rare. Computational modeling can provide the crucial link between a century of accumulated experimental data on the spacing effect and the emerging interest in using that research to enable adaptive instruction. In this paper, we review the relevant literature and identify 10 criteria for rigorously evaluating computational models of the spacing effect. Five relate to evaluating the theoretic adequacy of a model, and five relate to evaluating its application potential. We use these criteria to evaluate a novel computational model of the spacing effect called the Predictive Performance Equation (PPE). PPE combines elements of earlier models of learning and memory, including the General Performance Equation, Adaptive Control of Thought-Rational, and the New Theory of Disuse, giving rise to a novel computational account of the spacing effect that performs favorably across the complete sets of theoretic and applied criteria. We implemented two other previously published computational models of the spacing effect and compared them to PPE using the theoretic and applied criteria as guides. © 2018 Cognitive Science Society, Inc.

  18. Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform.

    Science.gov (United States)

    Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong

    2018-02-13

    Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach to PPS parameter estimation based on the adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both the one-dimensional and multi-dimensional searches and the error propagation problems that are widespread in the PPS field are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by the S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by principal component analysis (PCA), which is robust to noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate the excellent statistical performance of the proposed algorithm.

  19. Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix

    International Nuclear Information System (INIS)

    Yamamoto, A.; Yasue, Y.; Endo, T.; Kodama, Y.; Ohoka, Y.; Tatsumi, M.

    2012-01-01

    An uncertainty estimation method is proposed for core safety parameters for which measurement values are not available. We empirically recognize correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the assembly relative power at the corresponding position. Correlations of uncertainties among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of the core parameters. The estimated correlations among core safety parameters are verified through the direct Monte-Carlo sampling method. Once the correlation of uncertainties among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which no measurement value is obtained. Furthermore, the correlations can also be used for the reduction of uncertainties of core safety parameters. (authors)
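
    The propagation step described above follows the standard "sandwich rule": if S holds the sensitivity coefficients of each core parameter with respect to each cross section and cov_xs is the cross-section covariance, the core-parameter covariance is V = S cov_xs Sᵀ. A small sketch with hypothetical numbers:

    ```python
    import numpy as np

    def core_parameter_covariance(S, cov_xs):
        # Sandwich rule: V = S @ cov_xs @ S.T. Off-diagonal entries of V give
        # the cross-correlations among core safety parameters.
        return S @ cov_xs @ S.T

    # Two core parameters, three cross sections (hypothetical sensitivities
    # and a diagonal cross-section covariance).
    S = np.array([[0.8, 0.1, 0.3],
                  [0.6, 0.2, 0.4]])
    cov_xs = np.diag([0.02, 0.01, 0.03])
    V = core_parameter_covariance(S, cov_xs)
    corr = V[0, 1] / np.sqrt(V[0, 0] * V[1, 1])  # strong positive correlation
    ```

    When two parameters share the same dominant cross-section sensitivities, their prediction errors are strongly correlated, which is exactly what lets a measured parameter constrain an unmeasured one.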

  20. Mortality estimate of Chinese mystery snail, Bellamya chinensis (Reeve, 1863) in a Nebraska reservoir

    Science.gov (United States)

    Haak, Danielle M.; Chaine, Noelle M.; Stephen, Bruce J.; Wong, Alec; Allen, Craig R.

    2013-01-01

    The Chinese mystery snail (Bellamya chinensis) is an aquatic invasive species found throughout the USA. Little is known about this species' life history or ecology, and only one population estimate has been published, for Wild Plum Lake in southeast Nebraska. A recent die-off event occurred at this same reservoir, and we present a mortality estimate for this B. chinensis population using a quadrat approach. Assuming uniform distribution throughout the newly exposed lake bed (20,900 m2), we estimate that 42,845 individuals died during this event, amounting to approximately 17% of the previously estimated population size of 253,570. Assuming uniform distribution throughout all previously reported available habitat (48,525 m2), we estimate that 99,476 individuals died, comprising 39% of the previously reported adult population. The die-off occurred during an extreme drought event coincident with abnormally hot weather; however, the exact cause of the die-off is still unclear. More monitoring of the population dynamics of B. chinensis is necessary to further our understanding of this species' ecology.
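
    The scaling arithmetic behind these figures is simple: a quadrat-based density of dead snails is multiplied by the area of interest, then compared with the published population estimate. This is a back-of-envelope check using the reported totals (the densities here are inferred from those totals, not the field measurements themselves):

    ```python
    def estimate_total_deaths(density_per_m2, area_m2):
        # Scale the mean dead-snail density in sampled quadrats up to the
        # full area of interest.
        return density_per_m2 * area_m2

    population = 253_570
    deaths_lakebed = estimate_total_deaths(42_845 / 20_900, 20_900)
    deaths_habitat = estimate_total_deaths(99_476 / 48_525, 48_525)
    pct_lakebed = 100 * deaths_lakebed / population  # about 17%
    pct_habitat = 100 * deaths_habitat / population  # about 39%
    ```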

  1. Theoretical and Experimental Investigation of Particle Trapping via Acoustic Bubbles

    Science.gov (United States)

    Chen, Yun; Fang, Zecong; Merritt, Brett; Saadat-Moghaddam, Darius; Strack, Dillon; Xu, Jie; Lee, Sungyon

    2014-11-01

    One important application of lab-on-a-chip devices is the trapping and sorting of micro-objects, with acoustic bubbles emerging as an effective, non-contact method. Acoustically actuated bubbles are known to exert a secondary radiation force on micro-particles and trap them, when this radiation force exceeds the drag force that acts to keep the particles in motion. In this study, we theoretically evaluate the magnitudes of these two forces for varying actuation frequencies and voltages. In particular, the secondary radiation force is calculated directly from bubble oscillation shapes that have been experimentally measured for varying acoustic parameters. Finally, based on the force estimates, we predict the threshold voltage and frequency for trapping and compare them to the experimental results.

  2. Event-triggered sensor data transmission policy for receding horizon recursive state estimation

    Directory of Open Access Journals (Sweden)

    Yunji Li

    2017-06-01

    We consider a sensor data transmission policy for receding horizon recursive state estimation in a networked linear system. A good tradeoff between estimation error and communication rate can be achieved via a transmission strategy that decides when a data packet is transferred. Here we derive this transmission policy by proving an upper bound on system performance; the lower bound on system performance is further analyzed in detail. A numerical example is given to verify the potential and effectiveness of the theoretical results.

  3. Theoretical Computer Science

    DEFF Research Database (Denmark)

    2002-01-01

    The proceedings contain 8 papers from the Conference on Theoretical Computer Science. Topics discussed include: query by committee, linear separation and random walks; hardness results for neural network approximation problems; a geometric approach to leveraging weak learners; mind change...

  4. Theoretical physics division

    International Nuclear Information System (INIS)

    Anon.

    The studies in 1977 are reviewed. In theoretical nuclear physics: nuclear structure, nuclear reactions, and intermediate energy physics; in elementary particle physics: field theory, strong interaction dynamics, nucleon-nucleon interactions, new particles, current algebra, symmetries and quarks are studied [fr]

  5. Schroedinger operators - geometric estimates in terms of the occupation time

    International Nuclear Information System (INIS)

    Demuth, M.; Kirsch, W.; McGillivray, I.

    1995-01-01

    The difference of Schroedinger and Dirichlet semigroups is expressed in terms of the Laplace transform of the Brownian motion occupation time. This implies quantitative upper and lower bounds for the operator norms of the corresponding resolvent differences. One spectral-theoretic consequence is an estimate for the eigenfunctions of a Schroedinger operator in a ball whose potential is given as a cone indicator function. 12 refs

  6. Signal recognition and parameter estimation of BPSK-LFM combined modulation

    Science.gov (United States)

    Long, Chao; Zhang, Lin; Liu, Yu

    2015-07-01

    Intra-pulse analysis plays an important role in electronic warfare. Intra-pulse feature abstraction focuses on primary parameters such as instantaneous frequency, modulation, and symbol rate. In this paper, automatic modulation recognition and feature extraction for combined BPSK-LFM modulation signals based on a decision-theoretic approach are studied. The simulation results show a good recognition effect and high estimation precision, and the system is easy to realize.

  7. Exploration of deep S-wave velocity structure using microtremor array technique to estimate long-period ground motion

    International Nuclear Information System (INIS)

    Sato, Hiroaki; Higashi, Sadanori; Sato, Kiyotaka

    2007-01-01

    In this study, microtremor array measurements were conducted at 9 sites in the Niigata plain to explore deep S-wave velocity structures for estimation of long-period earthquake ground motion. The 1D S-wave velocity profiles in the Niigata plain are characterized by 5 layers with S-wave velocities of 0.4, 0.8, 1.5, 2.1 and 3.0 km/s, respectively. The depth to the basement layer is greater in the Niigata port area, located on the Japan Sea side of the Niigata plain. In this area, the basement depth is about 4.8 km around the Seirou town and about 4.1 km around the Niigata city. These features of the basement depth in the Niigata plain are consistent with previous surveys. In order to verify the profiles derived from the microtremor array exploration, we estimated the group velocities of Love waves for four propagation paths of long-period earthquake ground motion during the Niigata-ken tyuetsu earthquake by the multiple filter technique, and compared them with the theoretical ones calculated from the derived profiles. As a result, it was confirmed that the group velocities from the derived profiles were in good agreement with those from the long-period earthquake ground motion records. Furthermore, we applied the estimation method of design basis earthquake input for seismically isolated nuclear power facilities, which uses the normal mode solution, to estimate long-period earthquake ground motion during the same earthquake. It was demonstrated that the applicability of this method to the estimation of long-period earthquake ground motion was improved by using the derived 1D S-wave velocity profile. (author)

  8. Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Yasue, Yoshihiro; Endo, Tomohiro; Kodama, Yasuhiro; Ohoka, Yasunori; Tatsumi, Masahiro

    2013-01-01

    An uncertainty reduction method for core safety parameters, for which measurement values are not obtained, is proposed. We empirically recognize that there exist some correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the assembly relative power at the corresponding position. Correlations of errors among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of core parameters. The estimated correlations of errors among core safety parameters are verified through the direct Monte Carlo sampling method. Once the correlation of errors among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which no measurement value is obtained. (author)
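
    The propagation step described above can be sketched in a few lines. The covariance matrix and sensitivity vectors below are toy stand-ins for illustration, not actual nuclear data:

```python
import numpy as np

# Toy cross-section covariance matrix V and sensitivity vectors of two
# hypothetical core parameters (illustrative numbers, not nuclear data).
V = np.array([[0.04, 0.01],
              [0.01, 0.09]])
s_a = np.array([1.0, 0.5])   # sensitivities of parameter a (e.g., rod worth)
s_b = np.array([0.2, 1.0])   # sensitivities of parameter b (e.g., power)

# Sandwich rule: covariance of prediction errors induced by the shared
# cross-section uncertainties, then normalised to a correlation.
cov_ab = s_a @ V @ s_b
var_a = s_a @ V @ s_a
var_b = s_b @ V @ s_b
corr_ab = cov_ab / np.sqrt(var_a * var_b)
print(corr_ab)  # ≈ 0.77: the two prediction errors are strongly correlated
```

    Once such a correlation is known, a measured error on one parameter can be used to shrink the uncertainty on the unmeasured one, which is the idea the abstract exploits.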

  9. Department of Theoretical Physics - Overview

    International Nuclear Information System (INIS)

    Kwiecinski, J.

    1999-01-01

    Full text: Research activity of the Department of Theoretical Physics concerns theoretical high-energy and elementary particle physics, intermediate energy particle physics, theoretical nuclear physics, theory of nuclear matter, theory of quark-gluon plasma and of relativistic heavy-ion collisions, theoretical astrophysics and general physics. There is some emphasis on the phenomenological applications of the theoretical research, yet more formal problems are also considered. A detailed summary of the research projects and of the results obtained in various fields is given in the abstracts. Our Department actively collaborates with other Departments of the Institute as well as with several scientific institutions both in Poland and abroad. In particular, members of our Department participate in the EC network which allows mobility of researchers. Several members of our Department have also participated in research projects funded by the Polish Committee for Scientific Research (KBN); the complete list of grants is given separately. Besides pure research, members of our Department are also involved in graduate and undergraduate teaching, both at our Institute and at other academic institutions in Cracow. At present five PhD students are working for their degrees under the supervision of senior members of the Department. In the last year we completed our active participation in the educational TEMPUS programme funded by the European Communities. This programme has in particular allowed the exchange of students between our Department and the Department of Physics of the University of Durham in the United Kingdom. In 1998 we joined the SOCRATES - ERASMUS project, which will make it possible to continue this exchange. (author)

  10. Theoretical analysis of the axial growth of nanowires starting with a binary eutectic droplet via vapor-liquid-solid mechanism

    Science.gov (United States)

    Liu, Qing; Li, Hejun; Zhang, Yulei; Zhao, Zhigang

    2018-06-01

    A series of theoretical analyses is carried out for the axial vapor-liquid-solid (VLS) growth of nanowires starting with a binary eutectic droplet. The growth model, which considers the entire process of axial VLS growth, is a development of approaches established in previous studies. In this model, both steady- and unsteady-state growth are considered. The amount of solute species in a variable liquid droplet, the nanowire length, radius, growth rate and all other parameters during the entire axial growth process are treated as functions of growth time. The model provides theoretical predictions for the formation of the nanowire shape and for the length-radius and growth rate-radius dependences. The model also suggests that the initial growth of a single nanowire is significantly affected by the Gibbs-Thomson effect due to the shape change. The model was applied to available experimental data for Si and Ge nanowires grown from Au-Si and Au-Ge systems, respectively, reported in other works. The calculations with the proposed model are in satisfactory agreement with the experimental results of those previous works.

  11. Degenerated-Inverse-Matrix-Based Channel Estimation for OFDM Systems

    Directory of Open Access Journals (Sweden)

    Makoto Yoshida

    2009-01-01

    This paper addresses time-domain channel estimation for pilot-symbol-aided orthogonal frequency division multiplexing (OFDM) systems. By using a cyclic sinc-function matrix uniquely determined by the Nc transmitted subcarriers, the performance of our proposed scheme approaches perfect channel state information (CSI), within a maximum of 0.4 dB degradation, regardless of the delay spread of the channel, Doppler frequency, and subcarrier modulation. Furthermore, reducing the matrix size by splitting the dispersive channel impulse response into clusters means that the degenerated inverse matrix estimator (DIME) is feasible for broadband, high-quality OFDM transmission systems. In addition to theoretical analysis of the normalized mean squared error (NMSE) performance of DIME, computer simulations over realistic non-sample-spaced channels showed that the DIME is robust to intersymbol interference (ISI) channels and fast time-variant channels where a minimum mean squared error (MMSE) estimator does not work well.
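
    DIME itself builds a cyclic sinc-function matrix in the time domain, but the pilot-aided setting it improves on can be illustrated with a generic least-squares estimate at pilot subcarriers followed by interpolation. All sizes, the pilot spacing and the 2-tap channel below are hypothetical:

```python
import numpy as np

N = 64                           # subcarriers (hypothetical)
pilot_idx = np.arange(0, N, 4)   # one pilot every 4 subcarriers
h = np.array([1.0, 0.5])         # toy 2-tap channel impulse response
H_true = np.fft.fft(h, N)        # true channel frequency response

X = np.ones(N, dtype=complex)    # pilots/data, all +1 for simplicity
Y = H_true * X                   # noiseless received symbols

# Least-squares estimate at pilot positions, then linear interpolation
H_ls = Y[pilot_idx] / X[pilot_idx]
H_hat = np.interp(np.arange(N), pilot_idx, H_ls.real) + \
        1j * np.interp(np.arange(N), pilot_idx, H_ls.imag)

nmse = np.mean(np.abs(H_hat - H_true)**2) / np.mean(np.abs(H_true)**2)
print(nmse)  # small interpolation error for a smooth 2-tap channel
```
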

  12. An integrated organisation-wide data quality management and information governance framework: theoretical underpinnings

    Directory of Open Access Journals (Sweden)

    Siaw-Teng Liaw

    2014-10-01

    Introduction: Increasing investment in eHealth aims to improve cost effectiveness and safety of care. Data extraction and aggregation can create new data products to improve professional practice and provide feedback to improve the quality of source data. A previous systematic review concluded that locally relevant clinical indicators and use of clinical record systems could support clinical governance. We aimed to extend and update the review with a theoretical framework. Methods: We searched PubMed, Medline, Web of Science, ABI Inform (Proquest) and Business Source Premier (EBSCO) using the terms curation, information ecosystem, data quality management (DQM), data governance, information governance (IG) and data stewardship. We focused on and analysed the scope of DQM and IG processes, theoretical frameworks, and determinants of the processing, quality assurance, presentation and sharing of data across the enterprise. Findings: There are good theoretical reasons for integrated governance, but there is variable alignment of DQM, IG and health system objectives across the health enterprise. Ethical constraints exist that require health information ecosystems to process data in ways that are aligned with improving health and system efficiency and ensuring patient safety. Despite an increasingly ‘big-data’ environment, DQM and IG in health services are still fragmented across the data production cycle. We extend current work on DQM and IG with a theoretical framework for integrated IG across the data cycle. Conclusions: The dimensions of this theory-based framework would require testing with qualitative and quantitative studies to examine their applicability and utility, along with an evaluation of the framework's impact on data quality across the health enterprise.

  13. Merged ontology for engineering design: Contrasting empirical and theoretical approaches to develop engineering ontologies

    DEFF Research Database (Denmark)

    Ahmed, Saeema; Storga, M

    2009-01-01

    This paper presents a comparison of two previous and separate efforts to develop an ontology in the engineering design domain, together with an ontology proposal from which ontologies for a specific application may be derived. The research contrasts an empirical, user-centered approach to developing the ontology engineering design integrated taxonomies (EDIT) with a theoretical approach in which concepts and relations are elicited from engineering design theories ontology (DO). The limitations and advantages of each approach are discussed. The research methodology adopted is to map...

  14. Estimation of petroleum assets: contribution of the options theory

    International Nuclear Information System (INIS)

    Chesney, M.

    1999-01-01

    With the development of work on real options, theoretical and practical research on project valuation has advanced considerably. The analogy between investment opportunities and options allows one to take into account the flexibility which is inherent in any project, as well as the choices which are available to investors. The advantages of this approach are shown in this paper, and an example of its application in the field of the petroleum industry is given. (O.M.)

  15. Estimating the elasticity of trade: the trade share approach

    OpenAIRE

    Mauro Lanati

    2013-01-01

    Recent theoretical work on international trade emphasizes the importance of the trade elasticity as the fundamental statistic needed to conduct welfare analysis. Eaton and Kortum (2002) proposed a two-step method to estimate this parameter, where exporter fixed effects are regressed on proxies for technology and wages. Within the same Ricardian model of trade, the trade share provides an alternative source of identification for the elasticity of trade. Following Santos Silva and Tenreyro (2006) bot...

  16. Estimation of High-Frequency Earth-Space Radio Wave Signals via Ground-Based Polarimetric Radar Observations

    Science.gov (United States)

    Bolen, Steve; Chandrasekar, V.

    2002-01-01

    Expanding human presence in space, and enabling the commercialization of this frontier, is part of the strategic goals for NASA's Human Exploration and Development of Space (HEDS) enterprise. Future near-Earth and planetary missions will support the use of high-frequency Earth-space communication systems. Additionally, increased commercial demand on low-frequency Earth-space links in the S- and C-band spectra has led to increased interest in the use of higher frequencies in regions like Ku- and Ka-band. Attenuation of high-frequency signals, due to a precipitating medium, can be quite severe and can cause considerable disruptions in a communications link that traverses such a medium. Previously, ground radar measurements were made along the Earth-space path and compared to satellite beacon data transmitted to a ground station. In this paper, quantitative estimation of the attenuation along the propagation path is made via inter-comparisons of radar data taken from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) and ground-based polarimetric radar observations. Theoretical relationships between the expected specific attenuation (k) of spaceborne measurements and ground-based measurements of reflectivity (Zh) and differential propagation phase shift (Kdp) are developed for various hydrometeors that could be present along the propagation path, and are used to estimate the two-way path-integrated attenuation (PIA) on the PR return echo. Resolution volume matching and alignment of the radar systems is performed, and a direct comparison of the PR return echo with ground radar attenuation estimates is made on a beam-by-beam basis. The technique is validated using data collected from the TExas and Florida UNderflights (TEFLUN-B) experiment and the TRMM Large-Scale Biosphere-Atmosphere experiment in Amazonia (LBA) campaign. Attenuation estimation derived from this method can be used for strategic planning of communication systems for

  17. Internal Medicine residents use heuristics to estimate disease probability

    Directory of Open Access Journals (Sweden)

    Sen Phang

    2015-12-01

    Conclusions: Our findings suggest that despite previous exposure to the use of Bayesian reasoning, residents use heuristics, such as the representativeness heuristic and anchoring with adjustment, to estimate probabilities. Potential reasons for attribute substitution include the relative cognitive ease of heuristics vs. Bayesian reasoning, or perhaps that residents in their clinical practice use gist traces rather than precise probability estimates when diagnosing.
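
    The Bayesian calculation that the heuristics substitute for is itself short. With hypothetical test characteristics (the numbers below are illustrative, not from the study), the posterior after a positive result is far lower than the test's sensitivity alone suggests, which is exactly what base-rate neglect misses:

```python
# Hypothetical disease/test numbers, chosen only to illustrate Bayes' theorem.
prevalence = 0.01   # prior probability of disease
sensitivity = 0.90  # P(test positive | disease)
specificity = 0.95  # P(test negative | no disease)

# Total probability of a positive test, then Bayes' theorem
p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
posterior = sensitivity * prevalence / p_pos
print(round(posterior, 3))  # 0.154: a positive test still leaves only ~15%
```
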

  18. A novel flux estimator based on SOGI with FLL for induction machine drives

    DEFF Research Database (Denmark)

    Zhao, Rende; Xin, Zhen; Loh, Poh Chiang

    2016-01-01

    It is very important to estimate flux accurately when implementing high-performance control of AC motors. A theoretical analysis is made to illustrate the performance of the pure-integration-based and Low-Pass Filter (LPF) based flux estimators. A novel flux estimator based on a Second-Order Generalized Integrator (SOGI) with a Frequency-Locked Loop (FLL) is investigated in this paper for induction machine drives. A single SOGI, instead of a pure integrator or LPF, is used to integrate the back electromotive force (EMF). It solves the problems of integration saturation and the dc drift caused by the initial conditions, with no need for magnitude and phase compensation. Because the dc and harmonic components in the estimated flux are inversely proportional to the speed, the performance of the single-SOGI-based estimator becomes worse at low speed. A multiple-SOGI-based flux estimator is therefore proposed...
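
    A minimal discrete-time sketch of the single-SOGI idea follows: a forward-Euler simulation at a fixed, known frequency (so no FLL), with illustrative gains and step size rather than the paper's design. The SOGI acts as a band-pass integrator producing in-phase and quadrature outputs whose transients from the initial conditions decay on their own:

```python
import numpy as np

# Hypothetical parameters: 50 Hz input, forward-Euler discretization.
w = 2 * np.pi * 50      # input frequency (rad/s), assumed known here (no FLL)
k = 1.41                # SOGI damping gain (illustrative)
dt = 1e-4               # time step
t = np.arange(0, 1.0, dt)
u = np.sin(w * t)       # input signal (e.g., one back-EMF component)

v = q = 0.0             # in-phase and quadrature states, zero initial conditions
amp = np.zeros_like(t)
for i, ui in enumerate(u):
    dv = k * w * (ui - v) - w * q   # SOGI state equations
    dq = w * v
    v += dv * dt
    q += dq * dt
    amp[i] = np.hypot(v, q)         # instantaneous amplitude estimate

print(amp[-1])  # close to 1.0 once the initial-condition transient decays
```
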

  19. KDE-Track: An Efficient Dynamic Density Estimator for Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali; Wang, Suojin; Zhang, Xiangliang

    2016-01-01

    Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.
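
    KDE-Track itself maintains an interpolated kernel model updated in linear time; a much simpler sliding-window Gaussian KDE (window size, bandwidth rule and test stream below are all hypothetical) shows the basic streaming density-estimation loop that such methods accelerate:

```python
import numpy as np
from collections import deque

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian KDE."""
    x = np.asarray(x)
    return 1.06 * x.std() * len(x) ** (-1 / 5)

class SlidingKDE:
    """Plain sliding-window Gaussian KDE: a simple baseline, not KDE-Track's
    interpolated kernel model."""
    def __init__(self, window=500):
        self.buf = deque(maxlen=window)   # only the newest samples are kept

    def update(self, x):
        self.buf.append(x)

    def density(self, grid):
        x = np.asarray(self.buf)
        h = silverman_bandwidth(x)
        # Sum of Gaussian kernels centred on the buffered samples
        z = (grid[:, None] - x[None, :]) / h
        return np.exp(-0.5 * z**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
kde = SlidingKDE(window=500)
for s in rng.normal(0.0, 1.0, 2000):   # stream drawn from N(0, 1)
    kde.update(s)
grid = np.linspace(-4, 4, 81)
p = kde.density(grid)
print(p[40])  # density near x = 0, in the vicinity of 1/sqrt(2*pi) ≈ 0.40
```

    The window makes the estimator forget old data, so the estimate adapts when the stream's distribution drifts; the cost per query is what KDE-Track reduces via interpolation.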

  20. KDE-Track: An Efficient Dynamic Density Estimator for Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali

    2016-11-08

    Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.

  1. Sibutramine characterization and solubility, a theoretical study

    Science.gov (United States)

    Aceves-Hernández, Juan M.; Nicolás Vázquez, Inés; Hinojosa-Torres, Jaime; Penieres Carrillo, Guillermo; Arroyo Razo, Gabriel; Miranda Ruvalcaba, René

    2013-04-01

    Solubility data for sibutramine (SBA) in a family of alcohols were obtained at different temperatures. Sibutramine was characterized by using thermal analysis and the X-ray diffraction technique. Solubility data were obtained by the saturation method. The van't Hoff equation was used to obtain the theoretical solubility values and the ideal solvent activity coefficient. No polymorphic phenomena were found in the X-ray diffraction analysis, even though this compound is a racemic mixture of (+) and (-) enantiomers. Theoretical calculations showed that the polarisable continuum model was able to reproduce the solubility and stability of the sibutramine molecule in the gas phase, water and a family of alcohols at the B3LYP/6-311++G(d,p) level of theory. Dielectric constant, dipole moment and solubility-in-water values were used as physical parameters in these theoretical calculations to explain that behavior. Experimental and theoretical results were compared and good agreement was obtained. Sibutramine solubility increased from methanol to 1-octanol in both theoretical and experimental results.

  2. Estimation of potential uranium resources

    International Nuclear Information System (INIS)

    Curry, D.L.

    1977-09-01

    Potential estimates, like reserves, are limited by the information on hand at the time and are not intended to indicate the ultimate resources. Potential estimates are based on geologic judgement, so their reliability is dependent on the quality and extent of geologic knowledge. Reliability differs for each of the three potential resource classes. It is greatest for probable potential resources because of the greater knowledge base resulting from the advanced stage of exploration and development in established producing districts where most of the resources in this class are located. Reliability is least for speculative potential resources because no significant deposits are known, and favorability is inferred from limited geologic data. Estimates of potential resources are revised as new geologic concepts are postulated, as new types of uranium ore bodies are discovered, and as improved geophysical and geochemical techniques are developed and applied. Advances in technology that permit the exploitation of deep or low-grade deposits, or the processing of ores of previously uneconomic metallurgical types, also will affect the estimates

  3. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
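
    A toy version of this procedure can be sketched as follows: generate simulated "experimental" data from a known model, then recover both the parameters and the model error by maximum likelihood. Assuming Gaussian errors (an assumption of this sketch, which makes the parameter estimates reduce to least squares):

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated "experimental" data: straight-line model plus noise whose size
# plays the role of the model's intrinsic error.
x = np.linspace(0, 10, 50)
true_slope, true_icept, true_sigma = 2.0, 1.0, 0.5
y = true_slope * x + true_icept + rng.normal(0, true_sigma, x.size)

# Maximum-likelihood estimates for a Gaussian error model: the parameters
# coincide with least squares, and the model error is the RMS residual
# (ML divides by N, not the unbiased N - 2).
A = np.vstack([x, np.ones_like(x)]).T
params, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ params
sigma_ml = np.sqrt(np.mean(residuals**2))
print(params, sigma_ml)  # slope/intercept near (2.0, 1.0), error near 0.5
```
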

  4. Effect size estimates: current use, calculations, and interpretation.

    Science.gov (United States)

    Fritz, Catherine O; Morris, Peter E; Richler, Jennifer J

    2012-02-01

    The Publication Manual of the American Psychological Association (American Psychological Association, 2001, American Psychological Association, 2010) calls for the reporting of effect sizes and their confidence intervals. Estimates of effect size are useful for determining the practical or theoretical importance of an effect, the relative contributions of factors, and the power of an analysis. We surveyed articles published in 2009 and 2010 in the Journal of Experimental Psychology: General, noting the statistical analyses reported and the associated reporting of effect size estimates. Effect sizes were reported for fewer than half of the analyses; no article reported a confidence interval for an effect size. The most often reported analysis was analysis of variance, and almost half of these reports were not accompanied by effect sizes. Partial η2 was the most commonly reported effect size estimate for analysis of variance. For t tests, 2/3 of the articles did not report an associated effect size estimate; Cohen's d was the most often reported. We provide a straightforward guide to understanding, selecting, calculating, and interpreting effect sizes for many types of data and to methods for calculating effect size confidence intervals and power analysis.
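
    For the two most-reported measures in the survey, the calculations are short enough to sketch directly (the group scores and sums of squares below are toy numbers for illustration):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled SD."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

def partial_eta_sq(ss_effect, ss_error):
    """Partial eta squared from ANOVA sums of squares."""
    return ss_effect / (ss_effect + ss_error)

group1 = [5, 6, 7, 8, 9]   # toy data
group2 = [3, 4, 5, 6, 7]
print(cohens_d(group1, group2))    # 2/sqrt(2.5) ≈ 1.26
print(partial_eta_sq(10.0, 40.0))  # 0.2
```
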

  5. Prediction of the theoretical capacity of non-aqueous lithium-air batteries

    International Nuclear Information System (INIS)

    Tan, Peng; Wei, Zhaohuan; Shyy, W.; Zhao, T.S.

    2013-01-01

    Highlights: • The theoretical capacity of non-aqueous lithium-air batteries is predicted. • Key battery design parameters are defined and considered. • The theoretical battery capacity is about 10% of the lithium capacity. • The battery mass and volume changes after discharge are also studied. - Abstract: In an attempt to realistically assess the high-capacity feature of emerging lithium-air batteries, a model is developed for predicting the theoretical capacity of non-aqueous lithium-air batteries. Unlike previous models that were formulated by assuming that the active materials and electrolyte are perfectly balanced according to the electrochemical reaction, the present model takes account of the fraction of the reaction products (Li2O2 and Li2O), the utilization of the onboard lithium metal, the utilization of the void volume of the porous cathode, and the onboard excess electrolyte. Results show that the gravimetric capacity increases from 1033 to 1334 mA h/g when the reaction product varies from pure Li2O2 to pure Li2O. It is further demonstrated that the capacity declines drastically from 1080 to 307 mA h/g when the case of full utilization of the onboard lithium is altered to one where only 10% of the metal is utilized. Similarly, the capacity declines from 1080 to 144 mA h/g when the case of full occupation of the cathode void volume by the reaction products is varied to one where only 10% of the void volume is occupied. In general, the theoretical gravimetric capacity of typical non-aqueous lithium-air batteries falls in the range of 380-450 mA h/g, which is about 10-12% of the gravimetric capacity calculated based on the energy density of the lithium metal. The present model also facilitates the study of the effects of different parameters on the mass and volume change of non-aqueous lithium-air batteries.
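
    The product-only part of such a capacity estimate is a one-line calculation. The figures below are per gram of discharge product alone (an upper bound, with two electrons transferred per formula unit), so they sit above the whole-cell numbers in the abstract, which further account for lithium utilization, void volume and excess electrolyte:

```python
# Ideal gravimetric capacity of the discharge product alone.
F = 96485.0          # Faraday constant, C/mol

def capacity_mAh_per_g(n_electrons, molar_mass):
    # Q = n*F coulombs per mole, converted to mA h (divide by 3.6)
    # and normalised by the molar mass in g/mol.
    return n_electrons * F / 3.6 / molar_mass

M_Li2O2 = 2 * 6.941 + 2 * 15.999   # g/mol
M_Li2O = 2 * 6.941 + 15.999        # g/mol

q_peroxide = capacity_mAh_per_g(2, M_Li2O2)
q_oxide = capacity_mAh_per_g(2, M_Li2O)
print(round(q_peroxide), round(q_oxide))  # 1168 1794 mA h/g
```
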

  6. Sinusoidal Order Estimation Using Angles between Subspaces

    Directory of Open Access Journals (Sweden)

    Søren Holdt Jensen

    2009-01-01

    We consider the problem of determining the order of a parametric model from a noisy signal based on the geometry of the space. More specifically, we do this using the nontrivial angles between the candidate signal subspace model and the noise subspace. The proposed principle is closely related to the subspace orthogonality property known from the MUSIC algorithm, and we study its properties and compare it to other related measures. For the problem of estimating the number of complex sinusoids in white noise, a computationally efficient implementation exists, and this problem is therefore considered in detail. In computer simulations, we compare the proposed method to various well-known methods for order estimation. These show that the proposed method outperforms the other previously published subspace methods and that it is more robust to the noise being colored than the previously published methods.
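
    The geometric principle can be illustrated with a simplified numpy sketch, a toy heuristic in the spirit of the abstract rather than the authors' estimator: compare the leading singular subspaces of two independent noisy snapshots of the same signal; the largest principal angle stays small while the k-th direction is signal, and jumps once it is noise:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spaces of A and B (via QR + SVD)."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.conj().T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, 0.0, 1.0))

def hankel_matrix(x, rows):
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

rng = np.random.default_rng(3)
n = np.arange(200)
# Two complex sinusoids (frequencies and amplitudes are illustrative)
clean = np.exp(2j * np.pi * 0.10 * n) + 0.8 * np.exp(2j * np.pi * 0.23 * n)
x1 = clean + 0.05 * (rng.normal(size=200) + 1j * rng.normal(size=200))
x2 = clean + 0.05 * (rng.normal(size=200) + 1j * rng.normal(size=200))

U1 = np.linalg.svd(hankel_matrix(x1, 20))[0]
U2 = np.linalg.svd(hankel_matrix(x2, 20))[0]

# Signal directions agree across snapshots; noise directions do not.
order = 0
for k in range(1, 6):
    theta = principal_angles(U1[:, :k], U2[:, :k]).max()
    if theta >= 0.3:   # threshold in radians, illustrative
        break
    order = k
print(order)  # 2: the number of complex sinusoids
```
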

  7. Estimating correlation and covariance matrices by weighting of market similarity

    OpenAIRE

    Michael C. M\\"unnix; Rudi Sch\\"afer; Oliver Grothe

    2010-01-01

    We discuss a weighted estimation of correlation and covariance matrices from historical financial data. To this end, we introduce a weighting scheme that accounts for similarity of previous market conditions to the present one. The resulting estimators are less biased and show lower variance than either unweighted or exponentially weighted estimators. The weighting scheme is based on a similarity measure which compares the current correlation structure of the market to the structures at past ...
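
    The weighted-estimator mechanics can be sketched directly. The weights here are a simple recency-based stand-in; the paper instead derives them from a similarity measure between past and present market states, and the return data below are purely synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
T, N = 500, 3
returns = rng.normal(size=(T, N))   # toy historical return matrix (T days, N assets)

# Stand-in weights: exponential recency weighting, normalised to sum to 1.
# (The cited work instead weights by similarity of past market conditions.)
w = np.exp(np.linspace(-3, 0, T))
w /= w.sum()

mu = w @ returns                    # weighted mean per asset
X = returns - mu
cov = X.T @ (w[:, None] * X)        # weighted covariance matrix
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)     # weighted correlation matrix
print(corr.shape)  # (3, 3), symmetric with unit diagonal
```
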

  8. Medication competency of nurses according to theoretical and drug calculation online exams: A descriptive correlational study.

    Science.gov (United States)

    Sneck, Sami; Saarnio, Reetta; Isola, Arja; Boigu, Risto

    2016-01-01

    Medication administration is an important task of registered nurses. According to previous studies, nurses lack theoretical knowledge and drug calculation skills, and knowledge-based mistakes do occur in clinical practice. Finnish health care organizations started to develop systematic verification processes for medication competence at the end of the last decade. No studies have yet been made of nurses' theoretical knowledge and drug calculation skills according to these online exams. The aim of this study was to describe the medication competence of Finnish nurses according to theoretical and drug calculation exams. A descriptive correlational design was adopted. Participants and settings: All nurses who participated in the online exam in three Finnish hospitals between 1.1.2009 and 31.05.2014 were selected for the study (n=2479). Quantitative methods such as Pearson's chi-squared tests, analysis of variance (ANOVA) with post hoc Tukey tests and Pearson's correlation coefficient were used to test the existence of relationships between dependent and independent variables. The majority of nurses mastered the theoretical knowledge needed in medication administration, but 5% of the nurses struggled with passing the drug calculation exam. Theoretical knowledge and drug calculation skills were better in acute care units than in the other units, and younger nurses achieved better results in both exams than their older colleagues. The differences found in this study were statistically significant, but not large. Nevertheless, even the tiniest deficiency in theoretical knowledge and drug calculation skills should be focused on. It is important to identify the nurses who struggle in the exams and to plan targeted educational interventions to support them. The next step is to study whether verification of medication competence has an effect on patient safety. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. [The contribution of persuasion social psychology to the retention of donors: the impact of labelling the previous donation].

    Science.gov (United States)

    Callé, N; Plainfossé, C; Georget, P; Sénémeaud, C; Rasonglès, P

    2011-12-01

    The supply of blood cell products requires the French National Blood Institute (Établissement Français du Sang - EFS) to rely upon regular blood donors. Contact with donors, tailored to individuals as much as possible, helps them to donate on a regular basis. Within the context of a research program conducted with the Psychology Department of the Université de Caen Basse-Normandie, persuasive theoretical models from social psychology have been tested. These models allow messages to be adapted according to the motivation of donors. The content is centred on the previous donation, labelled in one of two ways: functional labelling or social labelling. Functional labelling points out the efficiency of what "has been done" (the previous blood donation), whereas social labelling emphasizes the social value of the individual. Different types of mailing invitations were sent to 1917 donors from the Normandy database, invited to three different blood collections. Every experimental letter worked better than the standard EFS letter (used as the control) in terms of effective blood donation after reception of the letter. Some of the letters were more effective in motivating donors than others: the letters labelling the previous blood donation as functional (efficiency of the donation) appeared more effective than those with a social label (social value), whichever motivation was induced. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  10. Some experimental and theoretical aspects of the surface impedance in bulk high-temperature superconductors at 10 GHz

    Energy Technology Data Exchange (ETDEWEB)

    Deville, A.; Fawaz, H.; Gaillard, B. (Lab. d' Electronique des Milieux Condenses, Univ. de Provence, Centre Saint-Jerome, 13 - Marseille (France)); Noel, H.; Potel, M. (Lab. de Chimie Minerale, Univ. de Rennes, 35 (France)); Monnereau, O. (Lab. de Chimie des Materiaux, Univ. de Provence, Centre Saint-Charles, 13 - Marseille (France))

    1992-07-15

    After a presentation of the theoretical framework, and a short review of existing r.f. results in high-temperature superconductors, we present our own 10 GHz measurements on YBa[sub 2]Cu[sub 3]O[sub 7] single crystals and sintered samples. We use an electron spin resonance (ESR) heterodyne spectrometer and measurements of the reflection coefficient and resonant frequency of a cavity to get information not only on the surface resistance, R[sub S], as generally done, but also on the surface reactance, X[sub S]. The theoretical analysis of the frequency shift is made by adapting Slater's perturbation method to the present problem. In order to explain both the R[sub s] and X[sub S] results, and previous ESR observations, while keeping theoretical simplicity, we are led to suggest that in the normal state these materials show an unconventional skin effect, where the phenomenon is governed not by the d.c. conductivity, but by an effective (lower) conductivity, the other characteristics being unchanged. We briefly discuss the superconductor results and the validity of the two-fluid model. (orig.).

  11. 49 CFR 173.23 - Previously authorized packaging.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Previously authorized packaging. 173.23 Section... REQUIREMENTS FOR SHIPMENTS AND PACKAGINGS Preparation of Hazardous Materials for Transportation § 173.23 Previously authorized packaging. (a) When the regulations specify a packaging with a specification marking...

  12. Towards understanding international migration determinants today: Theoretical perspective

    Directory of Open Access Journals (Sweden)

    Predojević-Despić Jelena

    2010-01-01

    Full Text Available In times of global migration flows and ever-increasing mobility of the workforce, constant deepening of theoretical knowledge is necessary as a basis for understanding the main determinants of this phenomenon, and for directing migration research towards overcoming the challenges, and exploiting the advantages, that international migration can bring to origin, destination and transit countries alike. The main goal of this paper is to give a critical review of the development of economic migration theory, to state the main similarities and differences between the various approaches, and to point out the main drawbacks and problems that the theoretical perspective faces when studying the determinants of contemporary international labor migration. The study focuses on voluntary labor migration, with reference to migration of the highly educated, and the stress is on economic theories, although some of them are closely connected to sociological, geographical and anthropological theories. The development of the theory of international migration started with micro-theoretical models, that is, with theories that place at the focal point of research the individual, who weighs the positive and negative sides of moving from one location to another. Economic models at the micro-theoretical level cede ground to models of macro structure, which examine the social and economic structure within and between countries. Many theoretical models offer possible answers to the question of the main determinants of international migration at the macro-analytical level. Although every one of them tries to answer the same question, they use different concepts, assumptions and frameworks of research. The reasons which bring about the initiation of international migrations can be

  13. Theoretical atomic physics

    CERN Document Server

    Friedrich, Harald

    2017-01-01

    This expanded and updated well-established textbook contains an advanced presentation of quantum mechanics adapted to the requirements of modern atomic physics. It includes topics of current interest such as semiclassical theory, chaos, atom optics and Bose-Einstein condensation in atomic gases. In order to facilitate the consolidation of the material covered, various problems are included, together with complete solutions. The emphasis on theory enables the reader to appreciate the fundamental assumptions underlying standard theoretical constructs and to embark on independent research projects. The fourth edition of Theoretical Atomic Physics contains an updated treatment of the sections involving scattering theory and near-threshold phenomena manifest in the behaviour of cold atoms (and molecules). Special attention is given to the quantization of weakly bound states just below the continuum threshold and to low-energy scattering and quantum reflection just above. Particular emphasis is laid on the fundamen...

  14. Optimal information transfer in enzymatic networks: A field theoretic formulation

    Science.gov (United States)

    Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.

    2017-07-01

    Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in

  15. Experimental and theoretical studies of buoyant-thermo capillary flow

    International Nuclear Information System (INIS)

    Favre, E.; Blumenfeld, L.; Soubbaramayer

    1996-01-01

    In the AVLIS process, uranium metal is evaporated using a high-power electron gun. We have previously discussed the power balance in the electron beam evaporation process and pointed out, among the loss terms, the importance of the power lost to the convective flow in the molten pool driven by buoyancy and thermocapillarity. An empirical formula has been derived from model experiments with cerium to estimate the latter power loss, and that formula can be used in practical engineering calculations. In order to complement the empirical approach, a more fundamental research program of theoretical and experimental studies has been carried out at CEA (France), with the objective of understanding the basic phenomena (heat transport, flow instabilities, turbulence, etc.) occurring in a convective flow in a liquid layer locally heated at its free surface

  16. 22 CFR 40.91 - Certain aliens previously removed.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Certain aliens previously removed. 40.91... IMMIGRANTS UNDER THE IMMIGRATION AND NATIONALITY ACT, AS AMENDED Aliens Previously Removed § 40.91 Certain aliens previously removed. (a) 5-year bar. An alien who has been found inadmissible, whether as a result...

  17. MONITORED GEOLOGIC REPOSITORY LIFE CYCLE COST ESTIMATE ASSUMPTIONS DOCUMENT

    International Nuclear Information System (INIS)

    R.E. Sweeney

    2001-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost (LCC) estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  18. Monitored Geologic Repository Life Cycle Cost Estimate Assumptions Document

    International Nuclear Information System (INIS)

    Sweeney, R.

    2000-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  19. Rock mechanics site descriptive model-theoretical approach. Preliminary site description Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Fredriksson, Anders; Olofsson, Isabelle [Golder Associates AB, Uppsala (Sweden)

    2005-12-15

    The present report summarises the theoretical approach used to estimate the mechanical properties of the rock mass for the Preliminary Site Descriptive Modelling, version 1.2 Forsmark. The approach is based on a discrete fracture network (DFN) description of the fracture system in the rock mass and on the results of mechanical testing of intact rock and of rock fractures. To estimate the mechanical properties of the rock mass, a load test on a fractured rock block is simulated with the numerical code 3DEC. The locations and sizes of the fractures are given by DFN realisations. The rock block was loaded under plane-strain conditions. From the calculated relationship between stresses and deformations, the mechanical properties of the rock mass were determined. The influence of the geometrical properties of the fracture system on the mechanical properties of the rock mass was analysed by loading 20 blocks based on different DFN realisations, with the material properties of the intact rock and the fractures kept constant, set equal to the mean value of each measured material property. The influence of variations in the properties of the intact rock and in the mechanical properties of the fractures was estimated by analysing numerical load tests on one specific block (one DFN realisation) with combinations of properties for intact rock and fractures: each parameter was varied from its lowest to its highest value while the remaining parameters were held constant at their mean values. The resulting distribution was expressed as a variation around the value obtained with mean values for all parameters. To estimate the resulting distribution of the mechanical properties of the rock mass, a Monte-Carlo simulation was performed by generating values from the two distributions independently of each other; the two values were added and the statistical properties of the resulting distribution were determined.
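    The Monte-Carlo step described in this abstract (drawing one value from each of two independent variability distributions, adding them, and characterising the sum) can be sketched as follows. This is a minimal illustration, not the report's actual computation: the two distributions are assumed normal with purely illustrative means and spreads standing in for the geometry-driven and material-property-driven variability.

    ```python
    import random
    import statistics

    random.seed(1)

    N = 100_000
    # Assumed illustrative distributions (not the report's data):
    # variability from the fracture-system geometry (DFN realisations)
    # and from the intact-rock / fracture material properties.
    geometry_var = [random.gauss(0.0, 1.5) for _ in range(N)]
    material_var = [random.gauss(0.0, 2.0) for _ in range(N)]

    # Independent draws are added pairwise, as in the report's procedure.
    combined = [g + m for g, m in zip(geometry_var, material_var)]

    # Statistical properties of the resulting distribution.
    mean = statistics.fmean(combined)
    stdev = statistics.stdev(combined)
    ```

    For independent contributions the variances add, so the spread of the sum here is close to sqrt(1.5**2 + 2.0**2) = 2.5; the simulation simply makes that combination concrete without assuming normality in general.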

  20. Rock mechanics site descriptive model-theoretical approach. Preliminary site description Forsmark area - version 1.2

    International Nuclear Information System (INIS)

    Fredriksson, Anders; Olofsson, Isabelle

    2005-12-01

    The present report summarises the theoretical approach used to estimate the mechanical properties of the rock mass for the Preliminary Site Descriptive Modelling, version 1.2 Forsmark. The approach is based on a discrete fracture network (DFN) description of the fracture system in the rock mass and on the results of mechanical testing of intact rock and of rock fractures. To estimate the mechanical properties of the rock mass, a load test on a fractured rock block is simulated with the numerical code 3DEC. The locations and sizes of the fractures are given by DFN realisations. The rock block was loaded under plane-strain conditions. From the calculated relationship between stresses and deformations, the mechanical properties of the rock mass were determined. The influence of the geometrical properties of the fracture system on the mechanical properties of the rock mass was analysed by loading 20 blocks based on different DFN realisations, with the material properties of the intact rock and the fractures kept constant, set equal to the mean value of each measured material property. The influence of variations in the properties of the intact rock and in the mechanical properties of the fractures was estimated by analysing numerical load tests on one specific block (one DFN realisation) with combinations of properties for intact rock and fractures: each parameter was varied from its lowest to its highest value while the remaining parameters were held constant at their mean values. The resulting distribution was expressed as a variation around the value obtained with mean values for all parameters. To estimate the resulting distribution of the mechanical properties of the rock mass, a Monte-Carlo simulation was performed by generating values from the two distributions independently of each other; the two values were added and the statistical properties of the resulting distribution were determined.