WorldWideScience

Sample records for empirical lcao parameters

  1. Electronic structure of crystalline uranium nitrides UN, U2N3 and UN2: LCAO calculations with the basis set optimization

    International Nuclear Information System (INIS)

    Evarestov, R A; Panin, A I; Bandura, A V; Losev, M V

    2008-01-01

    The results of LCAO DFT calculations of the lattice parameters, cohesive energy and bulk modulus of the crystalline uranium nitrides UN, U2N3 and UN2 are presented and discussed. The LCAO computer codes Gaussian03 and Crystal06 are applied. The calculations use the relativistic small-core effective potential for the uranium atom from the Stuttgart-Cologne group (60 electrons in the core) and include optimization of the U atom basis set. Powell, Hooke-Jeeves, conjugate gradient and Box methods are implemented in the authors' optimization package, which is external to the codes for molecular and periodic calculations. The basis set optimization in LCAO calculations improves the agreement of the lattice parameter and bulk modulus of the UN crystal with the experimental data, while the change in the cohesive energy due to the optimization is small. Mixed metallic-covalent chemical bonding is found in the LCAO calculations of both the UN and U2N3 crystals; the UN2 crystal is semiconducting.
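The external-optimizer workflow described above can be illustrated on a toy problem: variationally optimizing the exponent of a single Gaussian for the hydrogen atom, where the energy is known in closed form. The golden-section minimizer below is a hypothetical stand-in for the Powell or Hooke-Jeeves drivers mentioned in the abstract; none of this code comes from the authors' package.

```python
import math

def hydrogen_1g_energy(alpha):
    # Variational energy (hartree) of a single Gaussian exp(-alpha*r^2)
    # for the hydrogen atom: E(alpha) = 3*alpha/2 - 2*sqrt(2*alpha/pi).
    return 1.5 * alpha - 2.0 * math.sqrt(2.0 * alpha / math.pi)

def golden_section_min(f, a, b, tol=1e-8):
    # Derivative-free 1D minimizer: the "external optimizer" only needs
    # energy evaluations from the electronic-structure code.
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

alpha_opt = golden_section_min(hydrogen_1g_energy, 0.01, 2.0)
# Analytic optimum for this model: alpha = 8/(9*pi), E = -4/(3*pi) hartree
```

The same pattern scales to real basis-set optimization: the objective becomes the total energy returned by a periodic LCAO run, and the variables are the Gaussian exponents and contraction coefficients.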

  2. Quantum Chemistry of Solids LCAO Treatment of Crystals and Nanostructures

    CERN Document Server

    Evarestov, Robert A

    2012-01-01

    Quantum Chemistry of Solids delivers a comprehensive account of the main features and possibilities of LCAO methods for first-principles calculations of the electronic structure of periodic systems. The first part describes the basic theory underlying the LCAO methods applied to periodic systems and the use of Hartree-Fock (HF), density functional theory (DFT) and hybrid Hamiltonians. Translation and site symmetry considerations are included to establish the connection between k-space solid-state physics and real-space quantum chemistry. The inclusion of electron correlation effects for periodic systems is considered on the basis of localized crystalline orbitals. The possibilities of LCAO methods for chemical bonding analysis in periodic systems are discussed. The second part deals with the applications of LCAO methods to calculations of bulk crystal properties, including magnetic ordering and crystal structure optimization. In the second edition two new chapters are added in the application part II of t...

  3. DFT LCAO and plane wave calculations of SrZrO3

    International Nuclear Information System (INIS)

    Evarestov, R.A.; Bandura, A.V.; Alexandrov, V.E.; Kotomin, E.A.

    2005-01-01

    The results of the density functional (DFT) LCAO and plane wave (PW) calculations of the electronic and structural properties of four known SrZrO3 phases (Pm3m, I4/mcm, Cmcm and Pbnm) are presented and discussed. The calculated unit cell energies and relative stability of these phases agree well with the experimental sequence of SrZrO3 phases as the temperature increases. The lattice structure parameters optimized in the PW calculations for all four phases are in good agreement with the experimental neutron diffraction data. The LCAO and PW results for the electronic structure, density of states and chemical bonding in the cubic phase (Pm3m) are discussed in detail and compared with the results of previous PW calculations. (copyright 2005 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  4. Quantum Chemistry of Solids The LCAO First Principles Treatment of Crystals

    CERN Document Server

    Evarestov, Robert A

    2007-01-01

    Quantum Chemistry of Solids delivers a comprehensive account of the main features and possibilities of LCAO methods for first-principles calculations of the electronic structure of periodic systems. The first part describes the basic theory underlying the LCAO methods applied to periodic systems and the use of wave-function-based (Hartree-Fock), density-based (DFT) and hybrid Hamiltonians. Translation and site symmetry considerations are included to establish the connection between k-space solid-state physics and real-space quantum chemistry methods in the framework of the cyclic model of an infinite crystal. The inclusion of electron correlation effects for periodic systems is considered on the basis of localized crystalline orbitals. The possibilities of LCAO methods for chemical bonding analysis in periodic systems are discussed. The second part deals with the applications of LCAO methods to calculations of bulk crystal properties, including magnetic ordering and crystal structure optimization. The discussion o...

  5. DFT LCAO and plane wave calculations of SrZrO3

    Energy Technology Data Exchange (ETDEWEB)

    Evarestov, R.A.; Bandura, A.V.; Alexandrov, V.E. [Department of Quantum Chemistry, St. Petersburg State University, 26 Universitetskiy Prospekt, Stary Peterhof 198504 (Russian Federation); Kotomin, E.A. [Max-Planck-Institut fuer Festkoerperforschung, Heisenbergstr. 1, 70569, Stuttgart (Germany)

    2005-02-01

    The results of the density functional (DFT) LCAO and plane wave (PW) calculations of the electronic and structural properties of four known SrZrO3 phases (Pm3m, I4/mcm, Cmcm and Pbnm) are presented and discussed. The calculated unit cell energies and relative stability of these phases agree well with the experimental sequence of SrZrO3 phases as the temperature increases. The lattice structure parameters optimized in the PW calculations for all four phases are in good agreement with the experimental neutron diffraction data. The LCAO and PW results for the electronic structure, density of states and chemical bonding in the cubic phase (Pm3m) are discussed in detail and compared with the results of previous PW calculations. (copyright 2005 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  6. Electronic structure of crystalline uranium nitrides UN, U2N3 and UN2: LCAO calculations with the basis set optimization

    Energy Technology Data Exchange (ETDEWEB)

    Evarestov, R A; Panin, A I; Bandura, A V; Losev, M V [Department of Quantum Chemistry, St. Petersburg State University, University Prospect 26, Stary Peterghof, St. Petersburg, 198504 (Russian Federation)], E-mail: re1973@re1973.spb.edu

    2008-06-01

    The results of LCAO DFT calculations of the lattice parameters, cohesive energy and bulk modulus of the crystalline uranium nitrides UN, U2N3 and UN2 are presented and discussed. The LCAO computer codes Gaussian03 and Crystal06 are applied. The calculations use the relativistic small-core effective potential for the uranium atom from the Stuttgart-Cologne group (60 electrons in the core) and include optimization of the U atom basis set. Powell, Hooke-Jeeves, conjugate gradient and Box methods are implemented in the authors' optimization package, which is external to the codes for molecular and periodic calculations. The basis set optimization in LCAO calculations improves the agreement of the lattice parameter and bulk modulus of the UN crystal with the experimental data, while the change in the cohesive energy due to the optimization is small. Mixed metallic-covalent chemical bonding is found in the LCAO calculations of both the UN and U2N3 crystals; the UN2 crystal is semiconducting.

  7. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.

    Science.gov (United States)

    Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads

    2018-06-27

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing with plane wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp2 carbon (graphene) and curved carbon (C60). In particular, using long-range Bessel functions as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals, or to adding non-atom-centered states to the basis.
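The idea of a confined long-range Bessel orbital can be sketched as a radial function whose first zero is placed at a chosen cutoff radius, so the orbital vanishes smoothly there. The construction below is illustrative only; the cutoff convention is an assumption, and this is not the basis generator used in the paper.

```python
import math

def spherical_j0(x):
    # Zeroth spherical Bessel function j0(x) = sin(x)/x,
    # with the x -> 0 limit handled explicitly.
    return 1.0 if abs(x) < 1e-12 else math.sin(x) / x

def longrange_radial(r, r_cut):
    # Illustrative long-range radial orbital: j0 scaled so that its
    # first zero (at x = pi) lands exactly at the cutoff radius r_cut.
    if r >= r_cut:
        return 0.0
    q = math.pi / r_cut
    return spherical_j0(q * r)
```

Such a function decays far more slowly than a typical pseudo-atomic orbital inside the cutoff, which is what lets it represent diffuse states without enlarging the basis much.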

  8. Extended Fenske-Hall LCAO MO Calculations for Mixed Methylene Dihalides

    Science.gov (United States)

    Ziemann, Hartmut; Paulun, Manfred

    1988-10-01

    The electronic structure of the mixed methylene dihalides CH2XY (X, Y = F, Cl, Br, I) has been studied using the extended Fenske-Hall LCAO MO method. The comparison with available photoelectron spectra confirms previous assignments of all bands with binding energies <100 eV. The electronic structure changes occurring upon varying the halogen substituents are discussed.

  9. LCAO fitting of positron 2D-ACAR momentum densities of non-metallic solids

    International Nuclear Information System (INIS)

    Chiba, T.

    2001-01-01

    We present a least-squares method to fit and analyze momentum densities obtained by 2D-ACAR. The method uses an LCAO-MO as a fitting basis and thus is applicable to non-metals. Here we illustrate the method by taking MgO as an example. (orig.)

  10. LCAO fitting of positron 2D-ACAR momentum densities of non-metallic solids

    Energy Technology Data Exchange (ETDEWEB)

    Chiba, T. [National Inst. for Research in Inorganic Materials, Tsukuba, Ibaraki (Japan)

    2001-07-01

    We present a least-squares method to fit and analyze momentum densities obtained by 2D-ACAR. The method uses an LCAO-MO as a fitting basis and thus is applicable to non-metals. Here we illustrate the method by taking MgO as an example. (orig.)

  11. Electronic properties of mixed molybdenum dichalcogenide MoTeSe: LCAO calculations and Compton spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Ahuja, Ushma [Department of Electrical Engineering, Veermata Jijabai Technological Institute, H. R. Mahajani Marg, Matunga (East), Mumbai 400019, Maharashtra (India); Kumar, Kishor; Joshi, Ritu [Department of Physics, University College of Science, M.L. Sukhadia University, Udaipur 313001, Rajasthan (India); Bhavsar, D.N. [Department of Physics, Bhavan' s Seth R.A. College of Science, Khanpur, Ahmedabad 380001, Gujarat (India); Heda, N.L., E-mail: nlheda@yahoo.co.in [Department of Pure and Applied Physics, University of Kota, Kota 324007, Rajasthan (India)

    2016-07-01

    We have employed the linear combination of atomic orbitals (LCAO) method to compute the Mulliken populations (MP), energy bands, density of states (DOS) and Compton profiles for hexagonal MoTeSe. Density functional theory (DFT) and the hybridization of Hartree-Fock with DFT (B3LYP) have been used within the LCAO approximation. The performance of the theoretical models has been tested by comparing the theoretical momentum densities with the experimental Compton profile of MoTeSe measured using a 137Cs Compton spectrometer. It is seen that the B3LYP prescription gives better agreement with the experimental data than the other DFT-based approximations. The energy bands and DOS depict an indirect band gap character in MoTeSe. In addition, the relative nature of bonding in MoTeSe and its isovalent MoTe2 is discussed in terms of equal-valence-electron-density (EVED) profiles. On the basis of the EVED profiles it is seen that MoTeSe is more covalent than MoTe2.

  12. An extension of the fenske-hall LCAO method for approximate calculations of inner-shell binding energies of molecules

    Science.gov (United States)

    Zwanziger, Ch.; Reinhold, J.

    1980-02-01

    The approximate LCAO MO method of Fenske and Hall has been extended to an all-electron method allowing the calculation of inner-shell binding energies of molecules and their chemical shifts. Preliminary results are given.

  13. Extended Fenske-Hall LCAO MO calculations of core-level shifts in solid P compounds

    Science.gov (United States)

    Franke, R.; Chassé, T.; Reinhold, J.; Streubel, P.; Szargan, R.

    1997-08-01

    Extended Fenske-Hall LCAO-MO ΔSCF calculations on solids modelled as H-pseudoatom saturated clusters are reported. The computational results verify the experimentally obtained initial-state (effective atomic charges, Madelung potential) and relaxation-energy contributions to the XPS phosphorus core-level binding energy shifts measured in Na3PO3S, Na3PO4, Na2PO3F and NH4PF6 in reference to red phosphorus. It is shown that the different initial-state contributions observed in the studied phosphates are determined by local and nonlocal terms, while the relaxation-energy contributions depend mainly on the nature of the nearest neighbors of the phosphorus atom.

  14. LCAO calculations of SrTiO3 nanotubes

    International Nuclear Information System (INIS)

    Evarestov, Robert; Bandura, Andrei

    2011-01-01

    The large-scale first-principles simulation of the structure and stability of SrTiO3 nanotubes is performed for the first time using the periodic PBE0 LCAO method. The initial structures of the nanotubes have been obtained by rolling up stoichiometric SrTiO3 slabs consisting of two or four alternating (001) SrO and TiO2 atomic planes. Nanotubes (NTs) with chiralities (n,0) and (n,n) have been studied. Two different NTs were constructed for each chirality: (I) with an SrO outer shell, and (II) with a TiO2 outer shell. The positions of all atoms have been optimized to obtain the most stable NT structure. In the majority of the considered cases the inner or outer TiO2 shells of the NT undergo a considerable reconstruction due to shrinkage or stretching of the interatomic distances of the initial cubic perovskite structure. Two types of surface reconstruction were found: (1) breaking of Ti-O bonds with creation of Ti=O titanyl groups on the outer surface; (2) inner surface folding due to Ti-O-Ti bending. Based on strain energy calculations, the largest stability was found for (n,0) NTs with a TiO2 outer shell.

  15. LCAO calculations of SrTiO3 nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Evarestov, Robert; Bandura, Andrei, E-mail: re1973@re1973.spb.edu [Department of Quantum Chemistry, St. Petersburg State University, 26 Universitetsky Ave., 198504, Petrodvorets (Russian Federation)

    2011-06-23

    The large-scale first-principles simulation of the structure and stability of SrTiO3 nanotubes is performed for the first time using the periodic PBE0 LCAO method. The initial structures of the nanotubes have been obtained by rolling up stoichiometric SrTiO3 slabs consisting of two or four alternating (001) SrO and TiO2 atomic planes. Nanotubes (NTs) with chiralities (n,0) and (n,n) have been studied. Two different NTs were constructed for each chirality: (I) with an SrO outer shell, and (II) with a TiO2 outer shell. The positions of all atoms have been optimized to obtain the most stable NT structure. In the majority of the considered cases the inner or outer TiO2 shells of the NT undergo a considerable reconstruction due to shrinkage or stretching of the interatomic distances of the initial cubic perovskite structure. Two types of surface reconstruction were found: (1) breaking of Ti-O bonds with creation of Ti=O titanyl groups on the outer surface; (2) inner surface folding due to Ti-O-Ti bending. Based on strain energy calculations, the largest stability was found for (n,0) NTs with a TiO2 outer shell.

  16. Relationships between moment magnitude and fault parameters: theoretical and semi-empirical relationships

    Science.gov (United States)

    Wang, Haiyun; Tao, Xiaxin

    2003-12-01

    Fault parameters are important in earthquake hazard analysis. In this paper, theoretical relationships between moment magnitude and fault parameters including subsurface rupture length, downdip rupture width, rupture area, and average slip over the fault surface are deduced based on seismological theory. These theoretical relationships are further simplified by applying similarity conditions, and a unique form is established. Then, combining the simplified theoretical relationships between moment magnitude and fault parameters with the seismic source data selected in this study, a practical semi-empirical relationship is established. The selected seismic source data are also used to derive empirical relationships between moment magnitude and fault parameters by the ordinary least squares regression method. Comparisons between the semi-empirical and empirical relationships show that the former depict the distribution trends of the data better than the latter. It is also observed that the downdip rupture widths of strike-slip faults saturate when the moment magnitude exceeds 7.0, but the downdip rupture widths of dip-slip faults do not saturate in the moment magnitude range of this study.
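Empirical relationships of this kind are typically log-linear, of the form Mw = a + b*log10(L), and the ordinary least squares fit has a closed form. The sketch below fits synthetic, noise-free data generated from assumed placeholder coefficients (5.08 and 1.16 are illustrative, not results from this paper):

```python
import math

def ols_fit(x, y):
    # Closed-form ordinary least squares for y = a + b*x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical rupture lengths (km) and magnitudes generated exactly
# from Mw = 5.08 + 1.16*log10(L), purely to exercise the fit; these
# are NOT the seismic source data used in the study.
lengths = [10.0, 25.0, 60.0, 150.0, 400.0]
logL = [math.log10(L) for L in lengths]
mags = [5.08 + 1.16 * x for x in logL]
a, b = ols_fit(logL, mags)
```

With real catalog data the scatter about the fitted line, not just the coefficients, is what distinguishes the empirical from the semi-empirical forms.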

  17. The softness of an atom in a molecule and a functional group softness definition; an LCAO scale

    International Nuclear Information System (INIS)

    Giambiagi, M.; Giambiagi, M.S. de; Pires, J.M.; Pitanga, P.

    1987-01-01

    We introduce a scale for the softness of an atom in different molecules and similarly define a functional group softness. These definitions, unlike previous ones, are tied neither to the finite difference approximation nor, hence, to valence-state ionization potentials and electron affinities; they result from the LCAO calculation itself. We conclude that (a) the softness of an atom in a molecule shows wide variations; (b) the geometric average of the softnesses of the atoms in the molecule gives the most consistent results for the molecular softness; (c) the functional group softness is transferable within a homologous series. (Author)
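The combination rule reported as most consistent, the geometric average of the atom-in-molecule softnesses, is a one-liner; the helper below computes it in log space for numerical stability (the input values are whatever atom-in-molecule softnesses the LCAO calculation yields):

```python
import math

def molecular_softness(atomic_softnesses):
    # Geometric mean of the atom-in-molecule softnesses: the
    # combination rule the abstract reports as most consistent.
    logs = sum(math.log(s) for s in atomic_softnesses)
    return math.exp(logs / len(atomic_softnesses))
```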

  18. RHFPPP, SCF-LCAO-MO Calculation for Closed Shell and Open Shell Organic Molecules

    International Nuclear Information System (INIS)

    Bieber, A.; Andre, J.J.

    1987-01-01

    1 - Nature of physical problem solved: The program performs SCF-LCAO-MO calculations for both closed- and open-shell organic pi-molecules. The Pariser-Parr-Pople approximations are used within the framework of the restricted Hartree-Fock method. The SCF calculation is followed, if desired, by a variational configuration interaction (CI) calculation including singly excited configurations. 2 - Method of solution: A standard procedure is used; at each step a real symmetric matrix has to be diagonalized. Self-consistency is checked by comparing the eigenvectors between two consecutive steps. 3 - Restrictions on the complexity of the problem: (i) The calculations are restricted to planar molecules. (ii) In order to avoid accumulation of round-off errors in the iterative procedure, double precision arithmetic is used. (iii) The program is restricted to systems of up to about 16 atoms; however, the size of the systems can easily be modified if required.
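The iterate-diagonalize-check cycle described under "Method of solution" can be sketched for a two-site pi system with PPP-style charge-dependent diagonal Fock elements. All numerical parameters below are illustrative, and the closed-form 2x2 eigensolver and linear mixing are simplifications assumed here, not features taken from RHFPPP:

```python
import math

def scf_two_site(alpha1, alpha2, beta, gamma, mix=0.5, max_iter=500, tol=1e-10):
    # Minimal closed-shell SCF loop for a two-site pi system: the
    # diagonal Fock elements shift with the self-consistent site
    # charges q_i, the symmetric 2x2 Fock matrix is rediagonalized
    # each cycle, and the lowest MO is doubly occupied (2 electrons).
    q = [1.0, 1.0]                        # initial guess: uniform charges
    for _ in range(max_iter):
        f11 = alpha1 + gamma * (q[0] - 1.0)
        f22 = alpha2 + gamma * (q[1] - 1.0)
        # Lowest eigenpair of [[f11, beta], [beta, f22]] in closed form.
        avg, half = 0.5 * (f11 + f22), 0.5 * (f11 - f22)
        lam = avg - math.sqrt(half * half + beta * beta)
        v1, v2 = beta, lam - f11          # unnormalized eigenvector
        norm = math.hypot(v1, v2)
        v1, v2 = v1 / norm, v2 / norm
        q_new = [2.0 * v1 * v1, 2.0 * v2 * v2]
        if max(abs(p - r) for p, r in zip(q, q_new)) < tol:
            return q_new, lam
        # Simple linear mixing to damp charge oscillations.
        q = [(1.0 - mix) * p + mix * r for p, r in zip(q, q_new)]
    raise RuntimeError("SCF did not converge")

charges, e_lowest = scf_two_site(alpha1=-11.0, alpha2=-12.0, beta=-2.4, gamma=2.0)
```

The more electronegative site (lower alpha) ends up with more than one electron, as expected for a heteroatomic pi bond.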

  19. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    Science.gov (United States)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on energy consumption has been found using analysis of variance. The validity of the developed empirical model is proved using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.

  20. Empirical flow parameters : a tool for hydraulic model validity

    Science.gov (United States)

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) to determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and empirical distributions of the various flow parameters, providing a methodology to "check if model results are way off!"; (2) to produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas, providing a secondary way to compare such values to a conventional hydraulic modeling approach; and (3) to present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.

  1. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

    A new methodology for empirical model validation has been proposed in the framework of Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, have been presented in the first part of the paper. In this part, they are applied to test modelling hypotheses in the framework of the thermal analysis of an actual building. Sensitivity analysis tools have first been used to identify the parts of the model that can really be tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for model behaviour improvement has finally been obtained by optimisation techniques. This example of application shows how model parameter space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)

  2. Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio

    Directory of Open Access Journals (Sweden)

    Yuejiao Fu

    2018-04-01

    The Sharpe ratio is a widely used risk-adjusted performance measure in economics and finance. Most of the known statistical inference methods devoted to the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic is asymptotically chi-square distributed. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to Jobson and Korkie's method (1981) and outperforms the empirical likelihood method when the data are from a symmetric distribution. In addition, when the data are from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to exemplify the application of the proposed method.

  3. An empirical multivariate log-normal distribution representing uncertainty of biokinetic parameters for 137Cs

    International Nuclear Information System (INIS)

    Miller, G.; Martz, H.; Bertelli, L.; Melo, D.

    2008-01-01

    A simplified biokinetic model for 137Cs has six parameters representing the transfer of material to and from various compartments. Using a Bayesian analysis, the joint probability distribution of these six parameters is determined empirically for two cases with substantial bioassay data. The distribution is found to be a multivariate log-normal, and correlations between different parameters are obtained. The method utilises a fairly large number of pre-determined forward biokinetic calculations, whose results are stored in interpolation tables. Four different methods to sample the multidimensional parameter space with a limited number of samples are investigated: random, stratified, Latin hypercube sampling with a uniform distribution of parameters, and importance sampling using a log-normal distribution that approximates the posterior distribution. The importance sampling method gives much smaller sampling uncertainty. No sampling-method-dependent differences are perceptible for the uniform distribution methods. (authors)
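Of the sampling schemes compared, Latin hypercube sampling is the easiest to sketch: each dimension is split into equal-probability strata, the strata are permuted independently per dimension, and exactly one point is drawn per stratum. This is a generic unit-cube implementation; to sample the biokinetic parameters the points would still have to be pushed through the log-normal quantile function.

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    # One sample per equal-probability stratum in each dimension:
    # permute the strata independently per dimension, then draw a
    # uniform point inside each selected stratum.
    rng = rng or random.Random(0)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples
    return samples
```

By construction every one-dimensional marginal is perfectly stratified, which is what reduces sampling variance relative to plain random sampling.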

  4. Comparison of nuisance parameters in pediatric versus adult randomized trials: a meta-epidemiologic empirical evaluation

    NARCIS (Netherlands)

    Vandermeer, Ben; van der Tweel, Ingeborg; Jansen-van der Weide, Marijke C.; Weinreich, Stephanie S.; Contopoulos-Ioannidis, Despina G.; Bassler, Dirk; Fernandes, Ricardo M.; Askie, Lisa; Saloojee, Haroon; Baiardi, Paola; Ellenberg, Susan S.; van der Lee, Johanna H.

    2018-01-01

    Background: We wished to compare the nuisance parameters of pediatric vs. adult randomized trials (RCTs) and determine whether the latter can be used in sample size computations of the former. Methods: In this meta-epidemiologic empirical evaluation we examined meta-analyses from the Cochrane Database of

  5. Robust fluence map optimization via alternating direction method of multipliers with empirical parameter optimization

    International Nuclear Information System (INIS)

    Gao, Hao

    2016-01-01

    For treatment planning in intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can first be optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints, to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work develops an efficient algorithm for FMO based on the alternating direction method of multipliers (ADMM). Here we consider FMO with a least-squares cost function and non-negative fluence constraints; the solution algorithm based on ADMM is efficient and simple to implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM-based FMO solver was benchmarked against the quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggest the ADMM solver yields a similar plan quality, with a slightly smaller total objective function value than IP. A simple-to-implement ADMM-based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT. (paper)
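The splitting used in such a solver can be illustrated on a toy non-negative least-squares problem: ADMM alternates a ridge-like x-update, a projection z-update that enforces non-negativity, and a dual update. The 3x2 system below is a made-up example (not the CORT data), and rho = 1 is an arbitrary penalty parameter, not the empirically optimized one from the paper.

```python
def solve2(M, v):
    # Solve a 2x2 linear system M x = v by Cramer's rule.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - v[1] * M[0][1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

def admm_nnls(A, b, rho=1.0, iters=500):
    # ADMM for min 0.5*||Ax - b||^2 subject to x >= 0, split as
    # x-update (regularized least-squares solve), z-update (projection
    # onto the non-negative orthant), and dual variable update.
    m = len(A)
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
    M = [[AtA[0][0] + rho, AtA[0][1]], [AtA[1][0], AtA[1][1] + rho]]
    z, u = [0.0, 0.0], [0.0, 0.0]
    for _ in range(iters):
        rhs = [Atb[i] + rho * (z[i] - u[i]) for i in range(2)]
        x = solve2(M, rhs)
        z = [max(0.0, x[i] + u[i]) for i in range(2)]
        u = [u[i] + x[i] - z[i] for i in range(2)]
    return z                      # z is feasible (non-negative) by construction

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, -1.0, 0.5]
x = admm_nnls(A, b)
```

In the FMO setting, A would be the dose-influence matrix and b the prescribed dose, with thousands of beamlets in place of the two variables here.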

  6. Empirical relations between instrumental and seismic parameters of some strong earthquakes of Colombia

    International Nuclear Information System (INIS)

    Marin Arias, Juan Pablo; Salcedo Hurtado, Elkin de Jesus; Castillo Gonzalez, Hardany

    2008-01-01

    In order to establish the relationships between macroseismic and instrumental parameters, the macroseismic fields of 28 historical earthquakes that produced great effects in the Colombian territory were studied. The integration of the parameters was made using the methodology of Kaussel and Ramirez (1992) for great Chilean earthquakes, and of Kanamori and Anderson (1975) and Wells and Coppersmith (1994) for worldwide earthquakes. Once the macroseismic and instrumental parameters were determined, the source model of each earthquake was established, completing the database of these parameters. For each earthquake, parameters related to the local and normal macroseismic epicenter were complemented: depth of the local and normal center, horizontal extension of both centers, vertical extension of the normal center, source model, and rupture area. The empirical relations obtained from linear equations show behavior very similar to that found by other authors for other regions of the world and at the worldwide level. The results of this work establish that a certain mutual incompatibility exists between the rupture area and rupture length determined by macroseismic methods and the parameters found from instrumental data, such as the seismic moment, Ms magnitude and Mw magnitude.

  7. Next generation of the self-consistent and environment-dependent Hamiltonian: Applications to various boron allotropes from zero- to three-dimensional structures

    Energy Technology Data Exchange (ETDEWEB)

    Tandy, P.; Yu, Ming; Leahy, C.; Jayanthi, C. S.; Wu, S. Y. [Department of Physics and Astronomy, University of Louisville, Louisville, Kentucky 40292 (United States)

    2015-03-28

    An upgrade of the previous self-consistent and environment-dependent linear combination of atomic orbitals Hamiltonian (referred to as SCED-LCAO) has been developed. This improved version of the semi-empirical SCED-LCAO Hamiltonian, in addition to including the self-consistent determination of charge redistribution, multi-center interactions, and modeling of electron-electron correlation, takes into account the effect of atomic aggregation on the orbitals. This important upgrade has been subjected to a stringent test: the construction of the SCED-LCAO Hamiltonian for boron. It was shown that the Hamiltonian for boron successfully characterizes the electron deficiency of boron and captures the complex chemical bonding in various boron allotropes, including the planar and quasi-planar, the convex, the ring, the icosahedral, and the fullerene-like clusters, the two-dimensional monolayer sheets, and bulk alpha boron, demonstrating its transferability, robustness, reliability, and predictive power. The molecular dynamics simulation scheme based on the Hamiltonian has been applied to explore the existence and the energetics of ∼230 compact boron clusters BN with N in the range from ∼100 to 768, including random, rhombohedral, and spherical icosahedral structures. It was found that, energetically, clusters containing whole icosahedral B12 units are more stable for boron clusters of larger size (N > 200). The ease with which the simulations both at 0 K and at finite temperatures were completed is a demonstration of the efficiency of the SCED-LCAO Hamiltonian.

  8. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators is biased, and that this bias has an adverse effect on goodness of fit and on estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the -style estimators.
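A simplified version of the stop-codon issue can be sketched as follows: build codon frequencies as products of position-specific nucleotide frequencies, drop the stop codons, and renormalize over the remaining 61 sense codons. This conveys the idea only; it is not the exact corrected estimator derived in the paper, which adjusts the underlying nucleotide frequencies themselves.

```python
from itertools import product

STOP = {"TAA", "TAG", "TGA"}   # universal genetic code stop codons

def codon_freqs_from_positional(nuc_freqs):
    # nuc_freqs: three dicts {A, C, G, T} -> frequency, one per codon
    # position. Codon frequency = product of positional nucleotide
    # frequencies, with stop codons excluded and the rest renormalized.
    raw = {}
    for n1, n2, n3 in product("ACGT", repeat=3):
        codon = n1 + n2 + n3
        if codon not in STOP:
            raw[codon] = nuc_freqs[0][n1] * nuc_freqs[1][n2] * nuc_freqs[2][n3]
    total = sum(raw.values())
    return {c: f / total for c, f in raw.items()}

uniform = {n: 0.25 for n in "ACGT"}
freqs = codon_freqs_from_positional([uniform, uniform, uniform])
```

With uniform inputs each of the 61 sense codons gets frequency 1/61; with skewed nucleotide composition the renormalization is exactly where the naive product estimator and a stop-codon-aware one diverge.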

  9. An Empirical Study of Parameter Estimation for Stated Preference Experimental Design

    Directory of Open Access Journals (Sweden)

    Fei Yang

    2014-01-01

    Full Text Available The stated preference experimental design can affect the reliability of parameter estimation in discrete choice models. Scholars have proposed new experimental designs, such as the D-efficient and Bayesian D-efficient designs, but insufficient empirical research has been conducted on their effectiveness, and there has been little comparative analysis of the new designs against traditional designs. In this paper, a new metro connecting Chengdu and its satellite cities is taken as the research subject to demonstrate the validity of the D-efficient and Bayesian D-efficient designs. Comparisons between these new designs and the orthogonal design were made in terms of model fit and the standard deviation of parameter estimates; the best model result was then used to analyze travel choice behavior. The results indicate that the Bayesian D-efficient design works better than the D-efficient design. Some of the variables, including waiting time and arrival time, significantly affect people's choice behavior. The D-efficient and Bayesian D-efficient designs constructed for the MNL model can yield reliable results in the ML model, but the ML model cannot exploit the theoretical advantages of these two designs. Finally, the metro will be able to handle over 40% of the passenger flow if it is operated in the future.
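The criterion such design comparisons rest on can be illustrated with a minimal sketch. The covariance matrices below are hypothetical two-parameter examples, not values from the study; a real design evaluation would derive them from the MNL information matrix:

```python
# Minimal sketch of the D-error criterion used to rank designs
# (matrices are hypothetical, not from the paper). A D-efficient design
# minimizes D-error = det(Omega)**(1/K), where Omega is the K x K
# covariance matrix of the estimated parameters.

def det2(m):
    """Determinant of a 2 x 2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def d_error(omega):
    k = len(omega)
    return det2(omega) ** (1.0 / k)

orthogonal = [[0.40, 0.05], [0.05, 0.50]]  # hypothetical orthogonal design
efficient = [[0.20, 0.02], [0.02, 0.25]]   # hypothetical D-efficient design
print(d_error(efficient) < d_error(orthogonal))  # → True: lower D-error
```

The geometric-mean form det(Omega)^(1/K) makes D-errors comparable across designs with the same number of parameters, which is why it is the usual yardstick for D-efficient and Bayesian D-efficient designs.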

  10. An empirical model for parameters affecting energy consumption in boron removal from boron-containing wastewaters by electrocoagulation.

    Science.gov (United States)

    Yilmaz, A Erdem; Boncukcuoğlu, Recep; Kocakerim, M Muhtar

    2007-06-01

    In this study, the parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters by the electrocoagulation method were investigated. The solution pH, initial boron concentration, dose of supporting electrolyte, current density and temperature of the solution were selected as the experimental parameters affecting energy consumption. The experimental results showed that boron removal efficiency reached up to 99% under optimum conditions, in which the solution pH was 8.0, current density 6.0 mA/cm{sup 2}, initial boron concentration 100 mg/L and solution temperature 293 K. The current density was also an important parameter affecting energy consumption: high current density applied to the electrocoagulation cell increased energy consumption. Increasing the solution temperature decreased energy consumption, because higher temperature lowered the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte raised the specific conductivity of the solution and thereby decreased energy consumption. As a result, it was seen that energy consumption for boron removal via the electrocoagulation method could be minimized at optimum conditions. An empirical model was derived statistically. Experimentally obtained values were fitted with values predicted from the empirical model as follows: [formula in text]. Unfortunately, the conditions obtained for optimum boron removal were not those obtained for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.

  11. Strategies for reducing basis set superposition error (BSSE) in O/Au and O/Ni

    KAUST Repository

    Shuttleworth, I.G.

    2015-01-01

    © 2015 Elsevier Ltd. All rights reserved. The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔEPAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. Binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane wave energies.

  13. An empirical model for parameters affecting energy consumption in boron removal from boron-containing wastewaters by electrocoagulation

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, A. Erdem [Atatuerk University, Faculty of Engineering, Department of Environmental Engineering, 25240 Erzurum (Turkey)]. E-mail: aerdemy@atauni.edu.tr; Boncukcuoglu, Recep [Atatuerk University, Faculty of Engineering, Department of Environmental Engineering, 25240 Erzurum (Turkey); Kocakerim, M. Muhtar [Atatuerk University, Faculty of Engineering, Department of Chemical Engineering, 25240 Erzurum (Turkey)

    2007-06-01

    In this study, the parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters by the electrocoagulation method were investigated. The solution pH, initial boron concentration, dose of supporting electrolyte, current density and temperature of the solution were selected as the experimental parameters affecting energy consumption. The experimental results showed that boron removal efficiency reached up to 99% under optimum conditions, in which the solution pH was 8.0, current density 6.0 mA/cm{sup 2}, initial boron concentration 100 mg/L and solution temperature 293 K. The current density was also an important parameter affecting energy consumption: high current density applied to the electrocoagulation cell increased energy consumption. Increasing the solution temperature decreased energy consumption, because higher temperature lowered the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte raised the specific conductivity of the solution and thereby decreased energy consumption. As a result, it was seen that energy consumption for boron removal via the electrocoagulation method could be minimized at optimum conditions. An empirical model was derived statistically. Experimentally obtained values were fitted with values predicted from the empirical model as follows: [ECB] = 7.6×10^6 × [OH]^0.11 × [CD]^0.62 × [IBC]^-0.57 × [DSE]^-0.04 × [T]^-2.98 × [t]. Unfortunately, the conditions obtained for optimum boron removal were not those obtained for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.

  14. An empirical model for parameters affecting energy consumption in boron removal from boron-containing wastewaters by electrocoagulation

    International Nuclear Information System (INIS)

    Yilmaz, A. Erdem; Boncukcuoglu, Recep; Kocakerim, M. Muhtar

    2007-01-01

    In this study, the parameters affecting energy consumption in boron removal from synthetically prepared boron-containing wastewaters by the electrocoagulation method were investigated. The solution pH, initial boron concentration, dose of supporting electrolyte, current density and temperature of the solution were selected as the experimental parameters affecting energy consumption. The experimental results showed that boron removal efficiency reached up to 99% under optimum conditions, in which the solution pH was 8.0, current density 6.0 mA/cm{sup 2}, initial boron concentration 100 mg/L and solution temperature 293 K. The current density was also an important parameter affecting energy consumption: high current density applied to the electrocoagulation cell increased energy consumption. Increasing the solution temperature decreased energy consumption, because higher temperature lowered the potential applied under constant current density. Increasing the initial boron concentration and the dose of supporting electrolyte raised the specific conductivity of the solution and thereby decreased energy consumption. As a result, it was seen that energy consumption for boron removal via the electrocoagulation method could be minimized at optimum conditions. An empirical model was derived statistically. Experimentally obtained values were fitted with values predicted from the empirical model as follows: [ECB] = 7.6×10^6 × [OH]^0.11 × [CD]^0.62 × [IBC]^-0.57 × [DSE]^-0.04 × [T]^-2.98 × [t]. Unfortunately, the conditions obtained for optimum boron removal were not those obtained for minimum energy consumption. It was determined that supporting electrolyte must be used to increase boron removal and decrease electrical energy consumption.
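The fitted power-law model quoted in the abstract can be evaluated directly. A minimal sketch, assuming the bracketed symbols stand for the numerical values of the pH-related term [OH], current density [CD], initial boron concentration [IBC], dose of supporting electrolyte [DSE], temperature [T] and time [t] in the paper's units (the function name and sample inputs are ours):

```python
# Direct evaluation of the fitted power law from the abstract.
# The signs of the exponents encode the reported trends: energy
# consumption rises with current density [CD] and falls with
# temperature [T], initial boron concentration [IBC] and dose of
# supporting electrolyte [DSE].

def energy_consumption(OH, CD, IBC, DSE, T, t):
    return (7.6e6 * OH**0.11 * CD**0.62 * IBC**-0.57
            * DSE**-0.04 * T**-2.98 * t)

base = energy_consumption(8.0, 6.0, 100.0, 100.0, 293.0, 1.0)
warm = energy_consumption(8.0, 6.0, 100.0, 100.0, 313.0, 1.0)
print(warm < base)  # → True: a warmer solution consumes less energy
```

Note the linear dependence on [t]: doubling the treatment time doubles the predicted consumption, consistent with a constant-power operating regime.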

  15. Empirically modelled Pc3 activity based on solar wind parameters

    Directory of Open Access Journals (Sweden)

    B. Heilig

    2010-09-01

    Full Text Available It is known that under certain solar wind (SW/interplanetary magnetic field (IMF conditions (e.g. high SW speed, low cone angle the occurrence of ground-level Pc3–4 pulsations is more likely. In this paper we demonstrate that in the event of anomalously low SW particle density, Pc3 activity is extremely low regardless of otherwise favourable SW speed and cone angle. We re-investigate the SW control of Pc3 pulsation activity through a statistical analysis and two empirical models with emphasis on the influence of SW density on Pc3 activity. We utilise SW and IMF measurements from the OMNI project and ground-based magnetometer measurements from the MM100 array to relate SW and IMF measurements to the occurrence of Pc3 activity. Multiple linear regression and artificial neural network models are used in iterative processes in order to identify sets of SW-based input parameters, which optimally reproduce a set of Pc3 activity data. The inclusion of SW density in the parameter set significantly improves the models. Not only the density itself, but other density related parameters, such as the dynamic pressure of the SW, or the standoff distance of the magnetopause work equally well in the model. The disappearance of Pc3s during low-density events can have at least four reasons according to the existing upstream wave theory: 1. Pausing the ion-cyclotron resonance that generates the upstream ultra low frequency waves in the absence of protons, 2. Weakening of the bow shock that implies less efficient reflection, 3. The SW becomes sub-Alfvénic and hence it is not able to sweep back the waves propagating upstream with the Alfvén-speed, and 4. The increase of the standoff distance of the magnetopause (and of the bow shock. Although the models cannot account for the lack of Pc3s during intervals when the SW density is extremely low, the resulting sets of optimal model inputs support the generation of mid latitude Pc3 activity predominantly through

  16. Calculation of electron spectra of stoichiometric and nitrogen-deficient zirconium nitrides

    International Nuclear Information System (INIS)

    Ivashchenko, V.I.; Lisenko, A.A.; Zhurakovskij, E.A.; Bekenev, V.L.

    1984-01-01

    Results of electron spectrum calculations for stoichiometric and nitrogen-deficient zirconium nitrides by the combined augmented plane wave - linear combination of atomic orbitals - coherent potential (APW-LCAO-CP) method are given. The calculation results for the ZrN electron spectrum indicate the presence of a Zr-N bonding band and a Zr-N antibonding band. The Fermi level lies in the antibonding metal band. On deviation from the stoichiometric composition the Fermi level shifts as the metal band fills. The variation of the main kinetic parameters with increasing nitrogen deficiency is explained by an increase in the number of antibonding collectivized electrons. Application of the combined APW-LCAO-CP method gives a rather realistic picture of interatomic interaction in ZrN{sub x}.

  17. X-ray spectrum analysis of multi-component samples by a method of fundamental parameters using empirical ratios

    International Nuclear Information System (INIS)

    Karmanov, V.I.

    1986-01-01

    A variant of the fundamental parameter method, based on empirical relations connecting the absorption and additional-excitation corrections with the absorbing characteristics of samples, is suggested. The method is used for X-ray fluorescence analysis of multi-component samples of welding-electrode charges. It is shown that application of the method is justified only for determination of the titanium, calcium and silicon content in the charges, taking into account only the absorption corrections. Iron and manganese content can be calculated by the simple external-standard method.

  18. The Use of Asymptotic Functions for Determining Empirical Values of CN Parameter in Selected Catchments of Variable Land Cover

    Directory of Open Access Journals (Sweden)

    Wałęga Andrzej

    2017-12-01

    Full Text Available The aim of the study was to assess the applicability of asymptotic functions for determining the value of the CN parameter as a function of precipitation depth in mountain and upland catchments. The analyses were carried out in two catchments: the Rudawa, a left tributary of the Vistula, and the Kamienica, a right tributary of the Dunajec. The input material included data on precipitation and flows for the multi-year period 1980–2012, obtained from IMGW PIB in Warsaw. Two models were used to determine empirical values of the CNobs parameter as a function of precipitation depth: the standard Hawkins model and the 2-CN model, which allows for the heterogeneous nature of a catchment area.
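The standard Hawkins model referred to above is commonly written as CN(P) = CN_inf + (100 - CN_inf)·exp(-kP), with CN declining asymptotically toward CN_inf as precipitation depth P grows. A minimal sketch, with illustrative (not fitted) parameter values:

```python
import math

# Standard asymptotic (Hawkins) model: the empirical CN computed from a
# rainfall-runoff pair declines toward a constant CN_inf as precipitation
# depth P grows. cn_inf and k below are illustrative values, not results
# from the Rudawa or Kamienica catchments.

def cn_standard(P, cn_inf, k):
    return cn_inf + (100.0 - cn_inf) * math.exp(-k * P)

print(cn_standard(0.0, 70.0, 0.08))              # → 100.0 at zero depth
print(round(cn_standard(200.0, 70.0, 0.08), 3))  # → 70.0 (the asymptote)
```

Fitting cn_inf and k to observed (P, CN) pairs is what yields the empirical CN value for design storms in such studies; the 2-CN variant splits the catchment response into two such components.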

  19. AN EMPIRICAL CALIBRATION TO ESTIMATE COOL DWARF FUNDAMENTAL PARAMETERS FROM H-BAND SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    Newton, Elisabeth R.; Charbonneau, David; Irwin, Jonathan [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Mann, Andrew W., E-mail: enewton@cfa.harvard.edu [Astronomy Department, University of Texas at Austin, Austin, TX 78712 (United States)

    2015-02-20

    Interferometric radius measurements provide a direct probe of the fundamental parameters of M dwarfs. However, interferometry is within reach for only a limited sample of nearby, bright stars. We use interferometrically measured radii, bolometric luminosities, and effective temperatures to develop new empirical calibrations based on low-resolution, near-infrared spectra. We find that H-band Mg and Al spectral features are good tracers of stellar properties, and derive functions that relate effective temperature, radius, and log luminosity to these features. The standard deviations in the residuals of our best fits are, respectively, 73 K, 0.027 R{sub ☉}, and 0.049 dex (an 11% error on luminosity). Our calibrations are valid from mid K to mid M dwarf stars, roughly corresponding to temperatures between 3100 and 4800 K. We apply our H-band relationships to M dwarfs targeted by the MEarth transiting planet survey and to the cool Kepler Objects of Interest (KOIs). We present spectral measurements and estimated stellar parameters for these stars. Parallaxes are also available for many of the MEarth targets, allowing us to independently validate our calibrations by demonstrating a clear relationship between our inferred parameters and the stars' absolute K magnitudes. We identify objects with magnitudes that are too bright for their inferred luminosities as candidate multiple systems. We also use our estimated luminosities to address the applicability of near-infrared metallicity calibrations to mid and late M dwarfs. The temperatures we infer for the KOIs agree remarkably well with those from the literature; however, our stellar radii are systematically larger than those presented in previous works that derive radii from model isochrones. This results in a mean planet radius that is 15% larger than one would infer using the stellar properties from recent catalogs. Our results confirm the derived parameters from previous in-depth studies of KOIs 961 (Kepler

  20. Empirical parameters for solvent acidity, basicity, dipolarity, and polarizability of the ionic liquids [BMIM][BF4] and [BMIM][PF6].

    Science.gov (United States)

    del Valle, J C; García Blanco, F; Catalán, J

    2015-04-02

    The empirical solvent scales for polarizability (SP), dipolarity (SdP), acidity (SA), and basicity (SB) have been successfully used to interpret the solvatochromism of compounds dissolved in organic solvents and their mixtures. Given that the published solvatochromic parameters for the ionic liquids 1-(1-butyl)-3-methylimidazolium tetrafluoroborate, [BMIM][BF4], and 1-(1-butyl)-3-methylimidazolium hexafluorophosphate, [BMIM][PF6], are excessively scattered, their SP, SdP, SA, and SB values are measured herein at temperatures from 293 to 353 K. Four key points are emphasized: (i) the origin of the solvatochromic solvent scales (the gas phase, that is, the absence of any medium perturbation); (ii) the separation of polarizability and dipolarity effects; (iii) the simplification of the probing process used to obtain the solvatochromic parameters; and (iv) the fact that the SP, SdP, SA, and SB scales can probe the polarizability, dipolarity, acidity, and basicity of ionic liquids as well as of organic solvents and water-organic solvent mixtures. From the multiparameter approach using the four pure solvent scales one can conclude (a) that the solvent influence of [BMIM][BF4] parallels that of formamide at 293 K, both being miscible with water; (b) that [BMIM][PF6] shows a set of solvatochromic parameters similar to that of chloroacetonitrile, both being water insoluble; and (c) that the solvent acidity and basicity of these ionic liquids can be explained to a great extent by the cation species, by comparing the empirical parameters of [BMIM](+) with those of the solvent 1-methylimidazole. The insolubility of [BMIM][PF6] in water compared to [BMIM][BF4] is tentatively connected, to some extent, to the larger molar volume of the anion [PF6](-) and to the difference in basicity of [PF6](-) and [BF4](-).

  1. Tests of Parameters Instability: Theoretical Study and Empirical Applications on Two Types of Models (ARMA Model and Market Model)

    Directory of Open Access Journals (Sweden)

    Sahbi FARHANI

    2012-01-01

    Full Text Available This paper considers tests of parameter instability and structural change with known, unknown or multiple breakpoints. The results apply to a wide class of parametric models and provide strong rules for detecting the number of breaks in a time series. For that, we use the Chow, CUSUM, CUSUM of squares, Wald, likelihood ratio and Lagrange multiplier tests. Each test implicitly uses an estimate of a change point. We conclude with an empirical analysis of two different models (an ARMA model and a simple linear regression model).
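The intuition behind the CUSUM-type tests in this family can be sketched as follows. This simplified running sum of standardized residuals is illustrative only, not any of the exact test statistics with their critical-value bands:

```python
# Under parameter stability, standardized one-step residuals have mean
# zero, so their cumulative sum wanders near zero; a sustained drift
# after a breakpoint pushes the path away from zero, which is what the
# CUSUM test detects against critical bounds.

def cusum_path(residuals, sigma=1.0):
    s, path = 0.0, []
    for r in residuals:
        s += r / sigma
        path.append(s)
    return path

stable = cusum_path([1.0, -1.0] * 10)  # alternating residuals: bounded
broken = cusum_path([1.0] * 20)        # one-sided residuals after a "break"
print(max(abs(x) for x in stable), broken[-1])  # → 1.0 20.0
```

The stable series keeps the path inside a narrow band, while the post-break series drifts linearly, which is the qualitative signature all the listed tests formalize in different ways.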

  2. Bias-dependent hybrid PKI empirical-neural model of microwave FETs

    Science.gov (United States)

    Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera

    2011-10-01

    Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extraction of the model parameters for each bias point. In order to make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise, developed for one bias point, and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing the bias dependency of scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid over the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron-mobility transistor device.

  3. Empirical Hamiltonians

    International Nuclear Information System (INIS)

    Peggs, S.; Talman, R.

    1987-01-01

    As proton accelerators get larger and include more magnets, the conventional tracking programs which simulate them run slower. The purpose of this paper is to describe a method, still under development, in which element-by-element tracking around one turn is replaced by a single map, which can be processed far faster. It is assumed for this method that a conventional program exists which can perform faithful tracking in the lattice under study for some hundreds of turns, with all lattice parameters held constant. An empirical map is then generated by comparison with the tracking program. A procedure has been outlined for determining an empirical Hamiltonian, which can represent motion through many nonlinear kicks, by taking data from a conventional tracking program. Though derived by an approximate method, this Hamiltonian is analytic in form and can be subjected to further analysis of varying degrees of mathematical rigor. Even though the empirical procedure has only been described in one transverse dimension, there is good reason to hope that it can be extended to include two transverse dimensions, so that it can become a more practical tool in realistic cases.

  4. Alternative Approaches to Evaluation in Empirical Microeconomics

    Science.gov (United States)

    Blundell, Richard; Dias, Monica Costa

    2009-01-01

    This paper reviews some of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching, instrumental variables, discontinuity design, and control functions. It discusses identification of traditionally used average parameters and more complex distributional parameters. The adequacy,…

  5. Evaluation of Empirical and Machine Learning Algorithms for Estimation of Coastal Water Quality Parameters

    Directory of Open Access Journals (Sweden)

    Majid Nazeer

    2017-11-01

    Full Text Available Coastal waters are one of the most vulnerable resources that require effective monitoring programs. One of the key factors for effective coastal monitoring is the use of remote sensing technologies that significantly capture the spatiotemporal variability of coastal waters. Optical properties of coastal waters are strongly linked to components, such as colored dissolved organic matter (CDOM), chlorophyll-a (Chl-a), and suspended solids (SS) concentrations, which are essential for the survival of a coastal ecosystem and usually independent of each other. Thus, developing effective remote sensing models to estimate these important water components based on optical properties of coastal waters is mandatory for a successful coastal monitoring program. This study attempted to evaluate the performance of empirical predictive models (EPM) and neural network (NN)-based algorithms to estimate Chl-a and SS concentrations in the coastal area of Hong Kong. Remotely-sensed data over a 13-year period were used to develop regional and local models to estimate Chl-a and SS over the entire Hong Kong waters and for each water class within the study area, respectively. The accuracy of regional models derived from EPM and NN in estimating Chl-a and SS was 83%, 93%, 78%, and 97%, respectively, whereas the accuracy of local models in estimating Chl-a and SS ranged from 60–94% and 81–94%, respectively. Both the regional and local NN models exhibited a higher performance than the models derived from empirical analysis. Thus, this study suggests using machine learning methods (i.e., NN) for the more accurate and efficient routine monitoring of coastal water quality parameters (i.e., Chl-a and SS concentrations) over the complex coastal area of Hong Kong and other similar coastal environments.

  6. An empirical formula for scattered neutron components in fast neutron radiography

    International Nuclear Information System (INIS)

    Dou Haifeng; Tang Bin

    2011-01-01

    Scattered neutrons are one of the key factors that may affect the images in fast neutron radiography. In this paper, a mathematical model for scattered neutrons is developed for a cylindrical sample, and an empirical formula for the scattered neutron component is obtained. The parameters in the empirical formula are obtained by curve fitting to results given by Monte Carlo methods, which confirms the soundness of the empirical formula. The curve-fitted parameters for common materials such as {sup 6}LiD are given. (authors)
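The curve-fitting step described above can be sketched for a simple exponential form. The model y = a·exp(-b·x) is an assumed illustration, not the paper's actual scattered-neutron formula, and the data below are synthetic:

```python
import math

# Fit y = a*exp(-b*x) to (x, y) data by linear least squares on log(y),
# mimicking the "parameters obtained by curve fitting to Monte Carlo
# results" step. The model form and data here are illustrative only.

def fit_exponential(xs, ys):
    n = len(xs)
    lys = [math.log(y) for y in ys]          # linearize: log y = log a - b*x
    mx = sum(xs) / n
    my = sum(lys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (ly - my) for x, ly in zip(xs, lys))
    slope = sxy / sxx                        # = -b
    return math.exp(my - slope * mx), -slope  # (a, b)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(-0.5 * x) for x in xs]  # synthetic "Monte Carlo" data
a, b = fit_exponential(xs, ys)
print(round(a, 6), round(b, 6))  # recovers a ≈ 2.0, b ≈ 0.5
```

On noisy Monte Carlo output the same least-squares machinery applies; the fitted (a, b) pairs per material are the analogue of the tabulated curve-fitted parameters the abstract mentions.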

  7. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Full Text Available Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining, as tablets, mobile applications and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions of which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained with all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide some data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. Also, a real-life example is carried out to illustrate the new methodology.

  8. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

    This paper presents a hierarchical, or multistage, empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions in turn are defined through a set of secondary parameters, knowledge of which is also described and updated via Bayes' formula. This two-stage Bayesian approach is an example in which modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered a specific model of the uncertainty about the Poisson intensity. Then, by the empirical Bayesian method, each individual model is assigned a posterior probability.
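The first stage of such an update has a closed form in the pure gamma-Poisson case. A minimal sketch (the contamination machinery and the second-stage model weighting of the paper are omitted; the numbers are illustrative):

```python
# Conjugate Bayes update for a homogeneous Poisson intensity lambda with
# a Gamma(alpha, beta) prior: after observing k events over exposure
# time t, the posterior is Gamma(alpha + k, beta + t), with posterior
# mean (alpha + k) / (beta + t).

def gamma_poisson_update(alpha, beta, events, exposure):
    return alpha + events, beta + exposure

a1, b1 = gamma_poisson_update(1.0, 2.0, events=3, exposure=10.0)
print(round(a1 / b1, 4))  # posterior mean intensity → 0.3333
```

In the hierarchical scheme of the paper, each candidate prior (a point in the secondary-parameter space) is updated this way and then weighted by its own posterior probability, giving the comprehensive treatment of modeling uncertainty the abstract describes.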

  9. Calculation of dynamic and electronic properties of perfect and defect crystals by semiempirical quantum mechanical methods

    International Nuclear Information System (INIS)

    Zunger, A.

    1975-07-01

    Semiempirical all-valence-electron LCAO methods, previously used to study the electronic structure of molecules, are applied to three problems in solid state physics: the electronic band structure of covalent crystals, point defect problems in solids, and the lattice dynamics of molecular crystals. Calculation methods for the electronic band structure of regular solids are introduced, and problems regarding the computation of the density matrix in solids are discussed. Three models for treating the electronic eigenvalue problem in the solid, within the proposed calculation schemes, are discussed, and the proposed models and calculation schemes are applied to the calculation of the electronic structure of several solids belonging to different crystal types. The calculation models also describe electronic properties of deep defects in covalent insulating crystals. The possible usefulness of the semiempirical LCAO methods in determining the first-order intermolecular interaction potential in solids is examined, and an improved model for treating the lattice dynamics and related thermodynamic properties of molecular solids is presented. The improved lattice dynamical model is used to compute phonon dispersion curves, phonon density of states, the stable unit cell structure, lattice heat capacity and thermal crystal parameters in α- and γ-N{sub 2} crystals, using the N{sub 2}-N{sub 2} intermolecular interaction potential computed from the semiempirical LCAO methods. (B.G.)

  10. Investigation of hydrodynamic parameters in a novel expanded bed configuration: local axial dispersion characterization and an empirical correlation study

    Directory of Open Access Journals (Sweden)

    E. S. Taheri

    2012-12-01

    Full Text Available Study of liquid behavior in an expanded bed adsorption (EBA) system is important for understanding, modeling and predicting nanobioproduct/biomolecule adsorption performance in such processes. In this work, in order to analyze the local axial dispersion parameters, simple custom NBG (Nano Biotechnology Group) expanded bed columns with 10 and 26 mm inner diameter were modified by insertion of sampling holes. With this configuration, particles and liquid can be withdrawn directly from various axial positions of the columns. Streamline DEAE particles were used as the solid phase in this work. The effects of factors such as liquid velocity, viscosity, settled bed height and column diameter on the hydrodynamic parameters were investigated. Local bed voidages at different axial bed positions were measured by a direct procedure within the column with 26 mm diameter. An increasing trend of voidage with velocity at a given position of the bed, and with bed height at a given degree of expansion, was observed. Residence time distribution (RTD) analysis at various bed points showed approximately uniform hydrodynamic behavior in the column with 10 mm diameter, while a decreasing trend of mixing/dispersion along the bed height at a given degree of expansion was seen in the column with 26 mm diameter. Lower mixing/dispersion also occurred in the smaller-diameter column. Finally, a combination of two empirical correlations proposed by Richardson-Zaki and Tong-Sun was successfully employed for identification of the bed voidage at various bed heights (RSSE = 99.9%). Among the empirical correlations presented in the literature for the variation of the axial dispersion coefficient, the Yun correlation gave good agreement with our experimental data (RSSE = 87%) in this column.
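The Richardson-Zaki correlation used in the voidage identification can be inverted directly. A minimal sketch with hypothetical parameter values (in practice the index n is obtained from the terminal Reynolds number and Ut from the particle properties):

```python
# Richardson-Zaki: superficial liquid velocity U = Ut * eps**n, where Ut
# is the particle terminal settling velocity and eps the bed voidage, so
# eps = (U / Ut)**(1/n). Values below are illustrative only, not data
# from the Streamline DEAE experiments.

def bed_voidage(U, Ut, n):
    return (U / Ut) ** (1.0 / n)

print(bed_voidage(1.0, 1.0, 4.8))  # → 1.0 (fully expanded limit U = Ut)
print(bed_voidage(0.3, 1.0, 4.8) < bed_voidage(0.6, 1.0, 4.8))  # → True
```

The monotone increase of voidage with velocity that this relation predicts is exactly the trend the abstract reports at a fixed axial position of the bed.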

  11. An empirical comparison of Item Response Theory and Classical Test Theory

    Directory of Open Access Journals (Sweden)

    Špela Progar

    2008-11-01

    Full Text Available Based on nonlinear models between the measured latent variable and the item response, item response theory (IRT) enables independent estimation of item and person parameters and local estimation of measurement error. These properties of IRT are also the main theoretical advantages of IRT over classical test theory (CTT). Empirical evidence, however, has often failed to discover consistent differences between IRT and CTT parameters and between invariance measures of CTT and IRT parameter estimates. In this empirical study a real data set from the Third International Mathematics and Science Study (TIMSS) 1995 was used to address the following questions: (1) How comparable are CTT and IRT based item and person parameters? (2) How invariant are CTT and IRT based item parameters across different participant groups? (3) How invariant are CTT and IRT based item and person parameters across different item sets? The findings indicate that the CTT and the IRT item/person parameters are very comparable, that the CTT and the IRT item parameters show similar invariance properties when estimated across different groups of participants, that the IRT person parameters are more invariant across different item sets, and that the CTT item parameters are at least as invariant across different item sets as the IRT item parameters. The results furthermore demonstrate that, with regard to the invariance property, IRT item/person parameters are in general empirically superior to CTT parameters, but only if the appropriate IRT model is used for modelling the data.

  12. Data on inelastic processes in low-energy potassium-hydrogen and rubidium-hydrogen collisions

    Science.gov (United States)

    Yakovleva, S. A.; Barklem, P. S.; Belyaev, A. K.

    2018-01-01

    Two sets of rate coefficients for low-energy inelastic potassium-hydrogen and rubidium-hydrogen collisions were computed for each collisional system based on two model electronic structure calculations, performed by the quantum asymptotic semi-empirical and the quantum asymptotic linear combinations of atomic orbitals (LCAO) approaches, followed by quantum multichannel calculations for the non-adiabatic nuclear dynamics. The rate coefficients for the charge transfer (mutual neutralization, ion-pair formation), excitation and de-excitation processes are calculated for all transitions between the five lowest-lying covalent states and the ionic states for each collisional system over the temperature range 1000-10 000 K. The processes involving higher-lying states have extremely low rate coefficients and, hence, are neglected. The two model calculations both single out the same partial processes as having large and moderate rate coefficients. The largest rate coefficients correspond to the mutual neutralization processes into the K(5s ²S) and Rb(4d ²D) final states and at a temperature of 6000 K have values exceeding 3 × 10⁻⁸ cm³ s⁻¹ and 4 × 10⁻⁸ cm³ s⁻¹, respectively. It is shown that both the semi-empirical and the LCAO approaches perform equally well on average and that both sets of atomic data have roughly the same accuracy. The processes with large and moderate rate coefficients are likely to be important for non-LTE modelling in atmospheres of F, G and K stars, especially metal-poor stars.

  13. PWR surveillance based on the correspondence between empirical models and physical models

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Upadhyaya, B.R.; Kerlin, T.W.

    1976-01-01

    An on-line surveillance method based on the correspondence between empirical models and physical models is proposed for pressurized water reactors. Two types of empirical models are considered, as well as the mathematical models defining the correspondence between the physical and empirical parameters. The efficiency of this method is illustrated for the surveillance of the Doppler coefficient for Oconee I (an 886 MWe PWR) [fr

  14. Empirical scholarship in contract law: possibilities and pitfalls

    Directory of Open Access Journals (Sweden)

    Russell Korobkin

    2015-01-01

    Full Text Available Professor Korobkin examines and analyzes empirical contract law scholarship over the last fifteen years in an attempt to guide scholars concerning how empiricism can be used in, and can enhance, the study of contract law. After defining the parameters of the study, Professor Korobkin categorizes empirical contract law scholarship by both the source of data and the main purpose of the investigation. He then describes and analyzes three types of criticisms that can be made of empirical scholarship, explains how these criticisms pertain to contract law scholarship, and considers what steps researchers can take to minimize the force of such criticisms.

  15. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, Scott R [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States); Aourag, Hafid [Department of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Rajan, Krishna [Department of Materials Science and Engineering and Institute for Combinatorial Discovery, Iowa State University, Ames, IA 50011 (United States)

    2011-05-15

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. By screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of changes in chemistry and crystal structure on the electronic structure. -- Research Highlights: → We developed an informatics-based methodology to minimize the necessary information. → We applied this methodology to descriptors from semi-empirical calculations. → We developed a validation approach for maintaining information from screening. → We classified intermetallics and identified patterns of composition and structure.
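The descriptor-reduction idea can be illustrated with principal component analysis on synthetic data in which 21 descriptors are generated from a few latent factors; the alloy matrix, factor count and 99.9% variance threshold below are assumptions for illustration, not the paper's actual method or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "alloy descriptor" matrix: 40 hypothetical Ti-Al systems described by
# 21 descriptors that are all linear mixtures of only 3 latent factors,
# mimicking redundancy among electronic-structure descriptors.
latent = rng.normal(size=(40, 3))
mixing = rng.normal(size=(3, 21))
X = latent @ mixing + 1e-6 * rng.normal(size=(40, 21))  # tiny noise

# Principal component analysis via SVD of the centered data.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = s**2 / np.sum(s**2)

# Number of components needed to capture 99.9% of the variance.
n_keep = int(np.searchsorted(np.cumsum(explained), 0.999)) + 1
print(f"{n_keep} of {X.shape[1]} descriptor dimensions suffice")
```

Here the analysis recovers the 3 underlying factors, analogous to the paper's finding that 5 of 21 descriptors carry essentially all the information.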

  16. Data mining of Ti-Al semi-empirical parameters for developing reduced order models

    International Nuclear Information System (INIS)

    Broderick, Scott R.; Aourag, Hafid; Rajan, Krishna

    2011-01-01

    A focus of materials design is determining the minimum amount of information necessary to fully describe a system, thus reducing the number of empirical results required and simplifying the data analysis. By screening descriptors calculated through a semi-empirical model, we demonstrate how an informatics-based analysis can be used to address this issue with no prior assumptions. We have developed a unique approach for identifying the minimum number of descriptors necessary to capture all the information of a system. Using Ti-Al alloys of varying compositions and crystal chemistries as the test bed, 5 of the 21 original descriptors from electronic structure calculations are found to capture all the information from the calculation, thereby reducing the structure-chemistry-property search space. Additionally, by combining electronic structure calculations with data mining, we classify the systems by chemistries and structures, based on the electronic structure inputs, and thereby rank the impact of changes in chemistry and crystal structure on the electronic structure. -- Research Highlights: → We developed an informatics-based methodology to minimize the necessary information. → We applied this methodology to descriptors from semi-empirical calculations. → We developed a validation approach for maintaining information from screening. → We classified intermetallics and identified patterns of composition and structure.

  17. Assessment of radiological parameters and patient dose audit using semi-empirical model

    International Nuclear Information System (INIS)

    Olowookere, C.J.; Onabiyi, B.; Ajumobi, S. A.; Obed, R.I.; Babalola, I. A.; Bamidele, L.

    2011-01-01

    Risk is associated with all human activities, and medical imaging is no exception. The risk in medical imaging is quantified using effective dose. However, measurement of effective dose is rather difficult and time consuming; therefore, energy imparted and entrance surface dose are obtained and converted into effective dose using the appropriate conversion factors. In this study, data on exposure parameters and patient characteristics were obtained during routine diagnostic examinations for four common types of X-ray procedures. A semi-empirical model involving the computer software Xcomp5 was used to determine the energy imparted per unit exposure-area product, entrance skin exposure (ESE) and incident air kerma, which are radiation dose indices. The value of energy imparted per unit exposure-area product ranges between 0.60 and 1.21 × 10⁻³ J R⁻¹ cm⁻², the entrance skin exposure ranges from 5.07 ± 1.25 to 36.62 ± 27.79 mR, and the incident air kerma ranges between 43.93 μGy and 265.5 μGy. The filtrations of two of the three machines investigated were lower than the standard requirement of the CEC for machines used in conventional radiography. The values of energy imparted and ESE obtained in the study were relatively low compared to the published data, indicating that patients irradiated during the routine examinations in this study are at lower health risk. The energy imparted per unit exposure-area product could be used to determine the energy delivered to the patient during diagnostic examinations, and it is an approximate indicator of patient risk.
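The exposure-to-kerma step in such dose audits can be sketched with the standard conversion (1 R = 2.58 × 10⁻⁴ C/kg of air; mean energy per ion pair W/e = 33.97 J/C, giving about 8.76 mGy of air kerma per roentgen). The ESE values are taken from the abstract; treating ESE as free-in-air exposure is a simplifying assumption, since ESE includes backscatter while incident air kerma does not:

```python
# Conversion: 1 R = 2.58e-4 C/kg; mean energy per ion pair in air W/e = 33.97 J/C,
# so air kerma per unit exposure = 2.58e-4 * 33.97 ≈ 8.76e-3 Gy per roentgen.
GY_PER_R = 2.58e-4 * 33.97

def ese_mR_to_air_kerma_uGy(ese_mR):
    """Entrance skin exposure in mR -> approximate air kerma in microgray."""
    return ese_mR * 1e-3 * GY_PER_R * 1e6

# ESE range reported in the study (mR):
for ese in (5.07, 36.62):
    print(f"{ese:6.2f} mR -> {ese_mR_to_air_kerma_uGy(ese):6.1f} uGy")
```

The low end of the converted range (about 44 μGy) is close to the reported lower incident air kerma of 43.93 μGy; the upper ends differ, as expected when backscatter and geometry factors enter.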

  18. Sensitivity of traffic input parameters on rutting performance of a flexible pavement using Mechanistic Empirical Pavement Design Guide

    Directory of Open Access Journals (Sweden)

    Nur Hossain

    2016-11-01

    Full Text Available The traffic input parameters in the Mechanistic Empirical Pavement Design Guide (MEPDG) are: (a) general traffic inputs, (b) traffic volume adjustment factors, and (c) axle load spectra (ALS). Of these three traffic inputs, the traffic volume adjustment factors, specifically the monthly adjustment factor (MAF), and the ALS are widely considered to be important and sensitive factors, which can significantly affect the design of and prediction of distress in flexible pavements. Therefore, the present study was undertaken to assess the sensitivity of the ALS and MAF traffic inputs on rutting distress of a flexible pavement. Traffic data of four years (from 2008 to 2012) were collected from an instrumented test section on I-35 in Oklahoma. Site-specific traffic input parameters were developed. It was observed that significant differences exist between the MEPDG default and the developed site-specific traffic input values. However, the differences in the yearly ALS and MAF data, developed for these four years, were not found to be as significant when compared to one another. In addition, quarterly field rut data were measured on the test section and compared with the MEPDG-predicted rut values using the default and developed traffic input values for different years. It was found that significant differences exist between the measured rut and the MEPDG (AASHTOWare-ME) predicted rut when default values were used. Keywords: MEPDG, Rut, Level 1 inputs, Axle load spectra, Traffic input parameters, Sensitivity

  19. Development of covariance capabilities in EMPIRE code

    Energy Technology Data Exchange (ETDEWEB)

    Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
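The stochastic (Monte Carlo) propagation of model parameter uncertainties mentioned in this record can be sketched on a toy cross-section model; the exponential model, parameter values and uncertainties below are invented stand-ins for illustration, not EMPIRE physics:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "model": a cross section over an energy grid depends on two parameters
# (scale and slope); this stands in for a full reaction-model calculation.
energies = np.linspace(1.0, 20.0, 8)  # MeV

def toy_cross_section(scale, slope):
    return scale * np.exp(-slope * energies)  # barns, illustrative only

# Central parameter values and assumed relative uncertainties.
p0 = np.array([2.0, 0.05])
rel_unc = np.array([0.05, 0.10])  # 5% on scale, 10% on slope

# Monte Carlo: sample parameters, evaluate the model for each sample,
# and build the energy-energy covariance matrix of the results.
samples = rng.normal(p0, p0 * rel_unc, size=(5000, 2))
xs = np.array([toy_cross_section(a, b) for a, b in samples])
cov = np.cov(xs, rowvar=False)
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))

print("relative uncertainty per energy point:",
      np.round(np.sqrt(np.diag(cov)) / xs.mean(axis=0), 3))
```

Because both energy points share the same underlying parameters, off-diagonal correlations appear automatically, which is exactly the information an ENDF-6 File 33 covariance is meant to carry.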

  20. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.; Qian, L.; Carroll, R. J.

    2010-01-01

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived.

  1. Electronic structure of the Y Ba2 Cu3 O7-x high temperature superconductor ceramic

    International Nuclear Information System (INIS)

    Lima, G.A.R.

    1990-01-01

    We investigate the electronic structure of the superconductor Y Ba2 Cu3 O7-x through a molecular cluster approach. The calculations are performed self-consistently through a semi-empirical LCAO technique, where different charge states are considered. The correlation effects are taken into account by a configuration interaction procedure (INDO/CI). The results for the larger cluster yield a density of states showing a strong p-d covalency, resulting in a width of around 8.0 eV for the valence band. The optical excitations are analyzed in detail and compared with the experimental data. (author)

  2. Empirical estimation of school siting parameter towards improving children's safety

    Science.gov (United States)

    Aziz, I. S.; Yusoff, Z. M.; Rasam, A. R. A.; Rahman, A. N. N. A.; Omar, D.

    2014-02-01

    Distance from school to home is a key determinant in ensuring the safety of children. School siting parameters are made to make sure that a particular school is located in a safe environment. School siting parameters are set by the Department of Town and Country Planning Malaysia (DTCP), and the latest review was in June 2012. These school siting parameters are crucially important as they can affect the safety and school reputation, not to mention the perception of the pupils and parents of the school. There have been many studies reviewing school siting parameters, since these change in conjunction with this ever-changing world. In this study, the focus is the impact of school siting parameters on people with low income who live in the urban area, specifically in Johor Bahru, Malaysia. To achieve that, this study uses two methods: on site and off site. The on-site method is to give questionnaires to people, and the off-site method is to use Geographic Information System (GIS) and Statistical Product and Service Solutions (SPSS) software to analyse the results obtained from the questionnaire. The output is a map of suitable safe distances from school to house. The results of this study will be useful to people with low income, as their children tend to walk to school rather than use transportation.

  3. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and

  4. Electronic response of rare-earth magnetic-refrigeration compounds GdX2 (X = Fe and Co)

    Science.gov (United States)

    Bhatt, Samir; Ahuja, Ushma; Kumar, Kishor; Heda, N. L.

    2018-05-01

    We present the Compton profiles (CPs) of the rare-earth-transition metal compounds GdX2 (X = Fe and Co) measured using a 740 GBq 137Cs Compton spectrometer. To compare with the experimental momentum densities, we have also computed the CPs, electronic band structure, density of states (DOS) and Mulliken population (MP) using the linear combination of atomic orbitals (LCAO) method. Local density and generalized gradient approximations within density functional theory (DFT), along with hybrids of Hartree-Fock and DFT (B3LYP and PBE0), have been considered within the framework of the LCAO scheme. It is seen that the LCAO-B3LYP based momentum densities give a better agreement with the experimental data for both compounds. The energy bands and DOS for both the spin-up and spin-down states show metallic-like character of the reported intermetallic compounds. The localization of the 3d electrons of Co and Fe is also discussed in terms of equally normalized CPs and MP data. A discussion of magnetization using the LCAO method is also included.

  5. Semi-empirical neutron tool calibration (one and two-group approximation)

    International Nuclear Information System (INIS)

    Czubek, J.A.

    1988-01-01

    The physical principles of a new method of calibration of neutron tools for rock porosity determination are given. A short description of the physics of neutron transport in matter is presented, together with some remarks on the elementary interactions of neutrons with nuclei (cross sections, group cross sections, etc.). Definitions of the main integral parameters characterizing neutron transport in rock media are given. The three main approaches to the calibration problem (empirical, theoretical and semi-empirical) are presented, with a more detailed description of the latter. The new semi-empirical approach is described. The method is based on the definition of the apparent slowing-down or migration length for neutrons sensed by the neutron tool situated in the real borehole-rock conditions. To calculate these apparent slowing-down or migration lengths, the ratio of the proper spatial moments of the neutron distribution along the borehole axis is used. Theoretical results are given for one- and two-group diffusion approximations in the rock-borehole geometrical conditions when the tool is in the sidewall position. The physical and chemical parameters are given for the calibration blocks of the Logging Company in Zielona Gora. Using these data, the neutron parameters of the calibration blocks have been calculated. An example is given of how to determine the calibration curve for the dual-detector tool by applying this new method, using the neutron parameters mentioned above together with the measurements performed in the calibration blocks. The most important advantage of the new semi-empirical method of calibration is the possibility of placing all experimental calibration data obtained for a given neutron tool, for different porosities, lithologies and borehole diameters, on a unique calibration curve. 52 refs., 21 figs., 21 tabs. (author)
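The moments-based definition of the apparent migration length can be illustrated in one-group diffusion: for an axial flux φ(z) ∝ exp(−z/L), the ratio of the second to the zeroth spatial moment equals 2L², so L is recoverable from measured moments. The flux shape and the value of L below are assumptions for illustration, not tool data:

```python
import math

# One-group flux along the borehole axis (z >= 0) with migration length
# L_true; purely illustrative of the spatial-moments method.
L_true = 12.0  # cm, assumed
dz = 0.01
zs = [i * dz for i in range(int(200.0 / dz))]   # integrate far enough out
flux = [math.exp(-z / L_true) for z in zs]

m0 = sum(f * dz for f in flux)                       # zeroth spatial moment
m2 = sum(z * z * f * dz for z, f in zip(zs, flux))   # second spatial moment

# For phi ~ exp(-z/L): m2/m0 = 2 L^2, so the apparent migration length is
L_apparent = math.sqrt(m2 / (2.0 * m0))
print(f"recovered L = {L_apparent:.2f} cm (true {L_true:.2f} cm)")
```

In practice the moments are formed from count rates at the two detector positions rather than from a known analytic flux, but the same ratio defines the apparent length.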

  6. Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set

    Science.gov (United States)

    Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; Sato, S. A.; Rehr, J. J.; Yabana, K.; Prendergast, David

    2018-05-01

    The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. Potential applications of the LCAO based scheme in the context of extreme ultra-violet and soft X-ray spectroscopies involving core-electronic excitations are discussed.

  7. Determination of empirical relations between geoelectrical data and ...

    African Journals Online (AJOL)

    In order to establish empirical equations that relate layer resistivity values with geotechnical parameters for engineering site characterization, geotechnical tests comprising Standard Penetration Test (SPT), Atterberg limit, Triaxial Compression and Oedometer consolidation tests were conducted on soil samples collected ...

  8. Empirical Scaling Laws of Neutral Beam Injection Power in HL-2A Tokamak

    International Nuclear Information System (INIS)

    Cao Jian-Yong; Wei Hui-Ling; Liu He; Yang Xian-Fu; Zou Gui-Qing; Yu Li-Ming; Li Qing; Luo Cui-Wen; Pan Yu-Dong; Jiang Shao-Feng; Lei Guang-Jiu; Li Bo; Rao Jun; Duan Xu-Ru

    2015-01-01

    We present an experimental method to obtain neutral beam injection (NBI) power scaling laws in terms of the operating parameters of the NBI system on HL-2A, including the beam divergence angle, the beam power transmission efficiency, the neutralization efficiency and so on. With the empirical scaling laws, an estimate of the injected power can be obtained for every shot of the experiment in real time; therefore important parameters such as the energy confinement time can be obtained precisely. The simulation results from the tokamak simulation code (TSC) show that the evolution of the plasma parameters is in good agreement with the experimental results when the NBI power from the empirical scaling law is used. (paper)
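Empirical power scaling laws of this kind are commonly fitted by log-linear least squares. The sketch below fits P = c·V^a·I^b to synthetic shot data with invented exponents and noise; the variables and exponents are assumptions for illustration, not actual HL-2A parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "shots": beam voltage (kV) and current (A) with an assumed true
# scaling P = c * V**a * I**b plus multiplicative measurement noise.
V = rng.uniform(30.0, 50.0, size=50)
I = rng.uniform(10.0, 30.0, size=50)
P = 0.8 * V**1.5 * I**1.0 * rng.lognormal(0.0, 0.02, size=50)

# Log-linear least squares: ln P = ln c + a ln V + b ln I.
A = np.column_stack([np.ones_like(V), np.log(V), np.log(I)])
coef, *_ = np.linalg.lstsq(A, np.log(P), rcond=None)
ln_c, a, b = coef
print(f"fitted: c = {np.exp(ln_c):.2f}, a = {a:.2f}, b = {b:.2f}")
```

Taking logarithms turns the power law into a linear regression, so the exponents come straight out of an ordinary least-squares solve.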

  9. Empirical microeconomics action functionals

    Science.gov (United States)

    Baaquie, Belal E.; Du, Xin; Tanputraman, Winson

    2015-06-01

    A statistical generalization of microeconomics has been made in Baaquie (2013), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is modeled by an action functional, and the focus of this paper is to empirically determine the action functionals for different commodities. The correlation functions of the model are defined using a Feynman path integral. The model is calibrated using the unequal time correlation of the market commodity prices as well as their cubic and quartic moments using a perturbation expansion. The consistency of the perturbation expansion is verified by a numerical evaluation of the path integral. Nine commodities drawn from the energy, metal and grain sectors are studied and their market behavior is described by the model to an accuracy of over 90% using only six parameters. The paper empirically establishes the existence of the action functional for commodity prices that was postulated to exist in Baaquie (2013).

  10. A New Empirical Model for Radar Scattering from Bare Soil Surfaces

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2016-11-01

    Full Text Available The objective of this paper is to propose a new semi-empirical radar backscattering model for bare soil surfaces based on the Dubois model. A wide dataset of backscattering coefficients extracted from synthetic aperture radar (SAR) images and in situ soil surface parameter measurements (moisture content and roughness) is used. The retrieval of soil parameters from SAR images remains challenging because the available backscattering models have limited performance. Existing models, whether physical, semi-empirical, or empirical, do not allow for a reliable estimate of soil surface geophysical parameters for all surface conditions. The proposed model, developed in HH, HV, and VV polarizations, uses a formulation of radar signals based on physical principles that have been validated in numerous studies. Never before has a backscattering model been built and validated on as large a dataset as the one proposed in this study. It contains a wide range of incidence angles (18°-57°) and radar wavelengths (L, C, X), well distributed geographically, for regions with different climate conditions (humid, semi-arid, and arid sites), and involves many SAR sensors. The results show that the new model performs very well for different radar wavelengths (L, C, X), incidence angles, and polarizations (RMSE of about 2 dB). This model is easy to invert and could provide a way to improve the retrieval of soil parameters.
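For reference, a sketch of the Dubois-type HH backscatter equation that the proposed model builds on. The coefficients follow the form commonly quoted from Dubois et al. (1995) and should be checked against the original paper before any quantitative use; the input values are arbitrary:

```python
import math

def dubois_hh(theta_deg, eps_r, s_cm, wavelength_cm):
    """Dubois-type HH backscatter (linear power units) for bare soil.
    theta: incidence angle (deg), eps_r: real part of the soil dielectric
    constant, s: rms surface roughness height (cm), wavelength in cm.
    Coefficients as commonly quoted from Dubois et al. (1995) -- treat as
    an assumption, not a verified transcription."""
    th = math.radians(theta_deg)
    k = 2.0 * math.pi / wavelength_cm   # radar wavenumber, 1/cm
    return (10 ** -2.75
            * math.cos(th) ** 1.5 / math.sin(th) ** 5
            * 10 ** (0.028 * eps_r * math.tan(th))
            * (k * s_cm * math.sin(th)) ** 1.4
            * wavelength_cm ** 0.7)

# C-band example (5.6 cm wavelength), 40 deg incidence, moderate roughness
# (s = 1 cm) and moderately moist soil (eps_r = 15), all hypothetical.
sigma0 = dubois_hh(40.0, 15.0, 1.0, 5.6)
print(f"sigma0_HH = {10.0 * math.log10(sigma0):.1f} dB")
```

The moisture term enters through the dielectric constant, so wetter soil yields a stronger return; inverting such closed forms for moisture and roughness is what makes these models attractive, as the abstract notes.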

  11. Assessment of empirical formulae for local response of concrete structures to hard projectile impact

    International Nuclear Information System (INIS)

    Buzaud, E.; Cazaubon, Ch.; Chauvel, D.

    2007-01-01

    The outcome of the impact of a hard projectile on a reinforced concrete structure is affected by different parameters such as the configuration of the interaction, the projectile geometry, mass and velocity and the target geometry, reinforcement, and concrete mechanical properties. Those parameters have been investigated experimentally during the last 30 years, hence providing a basis of simplified mathematical models like empirical formulae. The aim of the authors is to assess the relative performances of classical and more recent empirical formulae. (authors)

  12. Electronic structure of the Y Ba2 Cu3 O7-x high temperature superconductor ceramic

    Energy Technology Data Exchange (ETDEWEB)

    Lima, G A.R. [UNESP, Guaratingueta, SP (Brazil). Faculdade de Engenharia. Dept. de Fisica e Quimica

    1991-12-31

    We investigate the electronic structure of the superconductor Y Ba2 Cu3 O7-x through a molecular cluster approach. The calculations are performed self-consistently through a semi-empirical LCAO technique, where different charge states are considered. The correlation effects are taken into account by a configuration interaction procedure (INDO/CI). The results for the larger cluster yield a density of states showing a strong p-d covalency, resulting in a width of around 8.0 eV for the valence band. The optical excitations are analyzed in detail and compared with the experimental data. (author) 18 refs., 2 figs., 1 tab.

  13. An Empirical Bayes Approach to Mantel-Haenszel DIF Analysis.

    Science.gov (United States)

    Zwick, Rebecca; Thayer, Dorothy T.; Lewis, Charles

    1999-01-01

    Developed an empirical Bayes enhancement to Mantel-Haenszel (MH) analysis of differential item functioning (DIF) in which it is assumed that the MH statistics are normally distributed and that the prior distribution of underlying DIF parameters is also normal. (Author/SLD)

  14. Use of Multi-class Empirical Orthogonal Function for Identification of Hydrogeological Parameters and Spatiotemporal Pattern of Multiple Recharges in Groundwater Modeling

    Science.gov (United States)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.

    2017-12-01

    This study develops an innovative calibration method for regional groundwater modeling using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial pattern of multiple recharges. The initial guess of the hydrogeological parameters is also assigned according to an in-situ pumping experiment. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e. MODFLOW) with the initial guess / adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, the values of the recharges and parameters are adjusted and the iterative procedures repeated until the stopping criterion is reached. The established methodology was applied to the groundwater system of Ming-Chu Basin, Taiwan. The study period is from January 1st to December 2nd, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. It represents that the iterative approach that
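The EOF machinery used here (spatial EOFs with time-varying expansion coefficients) can be sketched with an SVD of a synthetic storage-hydrograph matrix; the well count, spatial patterns and noise level are invented for illustration, not Ming-Chu Basin data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy groundwater-storage hydrographs: 6 observation wells x 120 time steps,
# built from two spatial patterns modulated in time, plus noise.
t = np.linspace(0.0, 12.0, 120)
pattern1 = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.1])     # e.g. rainfall-driven
pattern2 = np.array([0.1, -0.2, 0.5, -0.4, 0.3, -0.1])  # e.g. boundary-driven
field = (np.outer(pattern1, np.sin(2 * np.pi * t / 12))
         + 0.4 * np.outer(pattern2, np.cos(2 * np.pi * t / 6))
         + 0.01 * rng.normal(size=(6, 120)))

# EOF decomposition: SVD of the anomaly field. Columns of U are spatial EOFs;
# rows of Vt (scaled by s) are the expansion coefficients in time.
anom = field - field.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
var_frac = s**2 / np.sum(s**2)
print("variance fraction of leading EOFs:", np.round(var_frac[:3], 3))

# Reconstruct the field with the two leading EOFs only.
recon = (U[:, :2] * s[:2]) @ Vt[:2] + field.mean(axis=1, keepdims=True)
rms_err = np.sqrt(np.mean((recon - field) ** 2))
```

Two EOFs capture essentially all the variance of the two-pattern field, which is the property the calibration method exploits when it searches for the best EOF combination of the error hydrographs.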

  15. Surface Passivation in Empirical Tight Binding

    OpenAIRE

    He, Yu; Tan, Yaohua; Jiang, Zhengping; Povolotskyi, Michael; Klimeck, Gerhard; Kubis, Tillmann

    2015-01-01

    Empirical Tight Binding (TB) methods are widely used in atomistic device simulations. Existing TB methods to passivate dangling bonds fall into two categories: 1) methods that explicitly include passivation atoms, which are limited to passivation with atoms and small molecules only; 2) methods that implicitly incorporate passivation, which do not distinguish passivation atom types. This work introduces an implicit passivation method that is applicable to any passivation scenario with appropriate parameter...

  16. Empirical correlation between mechanical and physical parameters of irradiated pressure vessel steels

    International Nuclear Information System (INIS)

    Tipping, P.; Solt, G.; Waeber, W.

    1991-02-01

    Neutron irradiation embrittlement of nuclear reactor pressure vessel (PV) steels is one of the best known ageing factors of nuclear power plants. If the safety limits set by the regulators for the PV steel are no longer satisfied, and other measures are too expensive for the economics of the plant, this embrittlement could lead to the closure of the plant. Despite this, the fundamental mechanisms of neutron embrittlement are not yet fully understood, and usually only empirical mathematical models exist to assess neutron fluence effects on embrittlement, as given by the Charpy test for example. In this report, results of a systematic study of a French forging (1.2 MD 07 B), irradiated to several fluences, will be reported. Mechanical property measurements (Charpy, tensile and Vickers microhardness) and physical property measurements (small angle neutron scattering, SANS) have been done on specimens having the same irradiation or irradiation-annealing-reirradiation treatment histories. Empirical correlations have been established between, on the one hand, the temperature shift and the decrease in the upper shelf energy as measured on Charpy specimens, and the tensile stress and hardness increases, and, on the other hand, the size of the copper-rich precipitates formed by the irradiation. The effect of copper (as an impurity element) in enhancing the degradation of mechanical properties has been demonstrated; the SANS measurements have shown that the size and amount of precipitates are important. The correlations represent the first step in an effort to develop a description of neutron irradiation induced embrittlement which is based on physical models. (author) 6 figs., 27 refs

  17. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  18. Gravitation theory - Empirical status from solar system experiments.

    Science.gov (United States)

    Nordtvedt, K. L., Jr.

    1972-01-01

    Review of historical and recent experiments which speak in favor of a post-Newtonian relativistic gravitational theory. The topics include the foundational experiments, metric theories of gravity, experiments designed to differentiate among the metric theories, and tests of Machian concepts of gravity. It is shown that the metric field for any metric theory can be specified by a series of potential terms with several parameters. It is pointed out that empirical results available to date yield values of the parameters which are consistent with the predictions of Einstein's general relativity.

  19. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.

  20. Semi-empirical correlation for binary interaction parameters of the Peng-Robinson equation of state with the van der Waals mixing rules for the prediction of high-pressure vapor-liquid equilibrium.

    Science.gov (United States)

    Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O

    2013-03-01

    Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor-liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
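The role of kij is easiest to see in the classical van der Waals one-fluid mixing rules themselves. A minimal sketch (the values below are invented for illustration, not from the paper):

```python
import math

# van der Waals one-fluid mixing rules used with the Peng-Robinson EOS:
#   a_mix = sum_ij x_i x_j sqrt(a_i a_j) (1 - k_ij),   b_mix = sum_i x_i b_i
# k_ij corrects the geometric-mean combining rule for the cross term.
def vdw_mix(x, a, b, k):
    n = len(x)
    a_mix = sum(x[i] * x[j] * math.sqrt(a[i] * a[j]) * (1.0 - k[i][j])
                for i in range(n) for j in range(n))
    b_mix = sum(xi * bi for xi, bi in zip(x, b))
    return a_mix, b_mix

# Hypothetical equimolar binary with k12 = 0.10
a_mix, b_mix = vdw_mix([0.5, 0.5], [1.0, 4.0], [0.1, 0.2],
                       [[0.0, 0.10], [0.10, 0.0]])
print(a_mix, b_mix)   # a_mix ~ 2.15, b_mix ~ 0.15
```

A correlation such as the one proposed in the paper supplies the kij entries instead of fitting them system by system.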

  1. Physico-empirical approach for mapping soil hydraulic behaviour

    Directory of Open Access Journals (Sweden)

    G. D'Urso

    1997-01-01

    Pedo-transfer functions are largely used in the soil hydraulic characterisation of large areas. The use of physico-empirical approaches for the derivation of soil hydraulic parameters from disturbed sample data can be greatly enhanced if a characterisation performed on undisturbed cores of the same type of soil is available. In this study, an experimental procedure for deriving maps of soil hydraulic behaviour is discussed with reference to its application in an irrigation district (30 km2) in southern Italy. The main steps of the proposed procedure are: (i) the precise identification of soil hydraulic functions from undisturbed sampling of main horizons in representative profiles for each soil map unit; (ii) the determination of pore-size distribution curves from larger disturbed sampling data sets within the same soil map unit; (iii) the calibration of physico-empirical methods for retrieving soil hydraulic parameters from particle-size data and undisturbed soil sample analysis; (iv) the definition of functional hydraulic properties from water balance output; and (v) the delimitation of soil hydraulic map units based on functional properties.

  2. Vibrational and Thermal Properties of Oxyanionic Crystals

    Science.gov (United States)

    Korabel'nikov, D. V.

    2018-03-01

    The vibrational and thermal properties of dolomite and alkali chlorates and perchlorates were studied in the gradient approximation of density functional theory using the method of a linear combination of atomic orbitals (LCAO). Long-wave vibration frequencies, IR and Raman spectra, and mode Grüneisen parameters were calculated. Equation-of-state parameters, thermodynamic potentials, entropy, heat capacity, and thermal expansion coefficient were also determined. The thermal expansion coefficient of dolomite was established to be much lower than for chlorates and perchlorates. The temperature dependence of the heat capacity at T > 200 K was shown to be generally governed by intramolecular vibrations.

  3. Empirical tight-binding parameters for solid C60

    International Nuclear Information System (INIS)

    Tit, N.; Kumar, V.

    1993-01-01

    We present a tight-binding model for the electronic structure of C60 using four orbitals (one s and three p) per carbon atom. The model has been developed by fitting the tight-binding parameters to the ab-initio pseudopotential calculation of Troullier and Martins (Phys. Rev. B 46, 1754 (1992)) in the face-centered cubic (Fm3-bar) phase. Following this, calculations of the energy bands and the density of electronic states have been carried out as a function of the lattice constant. Good agreement has been obtained with the observed lattice-constant dependence of Tc using McMillan's formula. Furthermore, calculations of the electronic structure are presented in the simple cubic (Pa3-bar) phase. (author). 43 refs, 3 figs, 1 tab
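The fitting idea behind empirical tight binding can be illustrated on a much simpler system than C60. A minimal sketch (a 1D single-orbital chain, not the sp3 model of the paper; the reference bandwidth is a hypothetical ab-initio value):

```python
import numpy as np

# Empirical tight binding in miniature: a 1D s-band
#   E(k) = eps - 2 t cos(k a)
# where the hopping parameter t is "fitted" so that the model bandwidth
# (4t for this band) reproduces a reference ab-initio bandwidth.
def band(k, eps, t, a=1.0):
    return eps - 2.0 * t * np.cos(k * a)

reference_bandwidth = 2.0           # hypothetical ab-initio value (eV)
t_fit = reference_bandwidth / 4.0   # the cosine band spans exactly 4 t

k = np.linspace(-np.pi, np.pi, 101)
E = band(k, eps=0.0, t=t_fit)
print(E.max() - E.min())            # reproduces the reference bandwidth
```

Fitting to the C60 pseudopotential bands proceeds in the same spirit, only with a 4-orbital basis per atom and many more hopping integrals.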

  4. Fission in Empire-II version 2.19 beta1, Lodi

    International Nuclear Information System (INIS)

    Sin, M.

    2003-01-01

    This is a description of the fission model implemented presently in EMPIRE-II. This package offers two ways to calculate the fission probability selected by parameters in the optional input. Fission barriers, fission transmission coefficients, fission cross sections and fission files are calculated

  5. Empirical data and moral theory. A plea for integrated empirical ethics.

    Science.gov (United States)

    Molewijk, Bert; Stiggelbout, Anne M; Otten, Wilma; Dupuis, Heleen M; Kievit, Job

    2004-01-01

    Ethicists differ considerably in their reasons for using empirical data. This paper presents a brief overview of four traditional approaches to the use of empirical data: "the prescriptive applied ethicists," "the theorists," "the critical applied ethicists," and "the particularists." The main aim of this paper is to introduce a fifth approach of more recent date (i.e. "integrated empirical ethics") and to offer some methodological directives for research in integrated empirical ethics. All five approaches are presented in a table for heuristic purposes. The table consists of eight columns: "view on distinction descriptive-prescriptive sciences," "location of moral authority," "central goal(s)," "types of normativity," "use of empirical data," "method," "interaction empirical data and moral theory," and "cooperation with descriptive sciences." Ethicists can use the table in order to identify their own approach. Reflection on these issues prior to starting research in empirical ethics should lead to harmonization of the different scientific disciplines and effective planning of the final research design. Integrated empirical ethics (IEE) refers to studies in which ethicists and descriptive scientists cooperate continuously and intensively. Both disciplines try to integrate moral theory and empirical data in order to reach a normative conclusion with respect to a specific social practice. IEE is not wholly prescriptive or wholly descriptive since IEE assumes an interdependence between facts and values and between the empirical and the normative. The paper ends with three suggestions for consideration on some of the future challenges of integrated empirical ethics.

  6. Semi-empirical correlation for binary interaction parameters of the Peng–Robinson equation of state with the van der Waals mixing rules for the prediction of high-pressure vapor–liquid equilibrium

    Directory of Open Access Journals (Sweden)

    Seif-Eddeen K. Fateen

    2013-03-01

    Peng–Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij partly based on the Huron–Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.

  7. On the validity of evolutionary models with site-specific parameters.

    Directory of Open Access Journals (Sweden)

    Konrad Scheffler

    Evolutionary models that make use of site-specific parameters have recently been criticized on the grounds that parameter estimates obtained under such models can be unreliable and lack theoretical guarantees of convergence. We present a simulation study providing empirical evidence that a simple version of the models in question does exhibit sensible convergence behavior and that additional taxa, despite not being independent of each other, lead to improved parameter estimates. Although it would be desirable to have theoretical guarantees of this, we argue that such guarantees would not be sufficient to justify the use of these models in practice. Instead, we emphasize the importance of taking the variance of parameter estimates into account rather than blindly trusting point estimates; this is standardly done by using the models to construct statistical hypothesis tests, which are then validated empirically via simulation studies.

  8. A comparison of the performance of a fundamental parameter method for analysis of total reflection X-ray fluorescence spectra and determination of trace elements, versus an empirical quantification procedure

    Science.gov (United States)

    Węgrzynek, Dariusz; Hołyńska, Barbara; Ostachowicz, Beata

    1998-01-01

    The performance of two different quantification methods, namely the commonly used empirical quantification procedure and a fundamental parameter approach, has been compared for the determination of the mass fractions of elements in particulate-like sample residues on a quartz reflector measured in the total reflection geometry. In the empirical quantification procedure, the spectrometer system needs to be calibrated with samples containing known concentrations of the elements. On the basis of the intensities of the X-ray peaks and the known concentration or mass fraction of an internal standard element, the concentrations or mass fractions of the elements are calculated using the relative sensitivities of the spectrometer system. The fundamental parameter approach does not require any calibration of the spectrometer system. However, in order to account for the unknown mass per unit area of a sample and for sample nonuniformity, an internal standard element is added; the concentrations/mass fractions of the elements to be determined are calculated while fitting a modelled X-ray spectrum to the measured one. The two quantification methods were applied to determine the mass fractions of elements in the cross-sections of a peat core and in biological standard reference materials, and to determine the concentrations of elements in samples prepared from an aqueous multi-element standard solution.
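The internal-standard step of the empirical quantification procedure reduces to a simple ratio. A minimal sketch (element names, sensitivities and counts are hypothetical, chosen only to illustrate the arithmetic):

```python
# Internal-standard quantification as used in empirical TXRF calibration:
#   C_i = C_IS * (I_i / S_i) / (I_IS / S_IS)
# where I are net peak intensities, S are relative sensitivities of the
# spectrometer, and C_IS is the known internal-standard concentration.
def txrf_concentration(intensity, sensitivity, i_is, s_is, c_is):
    return c_is * (intensity / sensitivity) / (i_is / s_is)

# Hypothetical numbers: an Fe peak of 1200 counts with sensitivity 2.0,
# a Ga internal standard of 500 counts with sensitivity 1.0, added at 10 mg/L.
print(txrf_concentration(1200.0, 2.0, 500.0, 1.0, 10.0))  # 12.0 mg/L
```

Because both the sample mass per unit area and any geometry factor cancel in the ratio, the method is insensitive to exactly those quantities the fundamental parameter approach must model.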

  9. Relationship between process parameters and properties of multifunctional needlepunched geotextiles

    CSIR Research Space (South Africa)

    Rawal, A

    2006-04-01

    …and filtration. In this study, the effect of process parameters, namely feed rate, stroke frequency, and depth of needle penetration, on various properties of needlepunched geotextiles has been investigated. These process parameters are then empirically related...

  10. Empirical estimation of school siting parameter towards improving children's safety

    International Nuclear Information System (INIS)

    Aziz, I S; Yusoff, Z M; Rasam, A R A; Rahman, A N N A; Omar, D

    2014-01-01

    Distance from school to home is a key determinant in ensuring the safety of children. School siting parameters are set to make sure that a particular school is located in a safe environment. School siting parameters are issued by the Department of Town and Country Planning Malaysia (DTCP), and the latest review was in June 2012. These school siting parameters are crucially important as they can affect the safety and reputation of the school, not to mention the perception of the pupils and parents. There have been many studies reviewing school siting parameters, since these change in conjunction with an ever-changing world. In this study, the focus is the impact of school siting parameters on people with low income who live in urban areas, specifically in Johor Bahru, Malaysia. To achieve this, the study uses two methods, on site and off site. The on-site method is to give questionnaires to people, and the off-site method is to use Geographic Information System (GIS) and Statistical Product and Service Solutions (SPSS) software to analyse the results obtained from the questionnaires. The output is a map of suitable safe distances from school to home. The results of this study will be useful to people with low income, as their children tend to walk to school rather than use transportation.

  11. A comparison between two powder compaction parameters of plasticity: the effective medium A parameter and the Heckel 1/K parameter.

    Science.gov (United States)

    Mahmoodi, Foad; Klevan, Ingvild; Nordström, Josefina; Alderborn, Göran; Frenning, Göran

    2013-09-10

    The purpose of the research was to introduce a procedure to derive a powder compression parameter (EM A) representing particle yield stress using an effective medium equation, and to compare the EM A parameter with the Heckel compression parameter (1/K). Sixteen pharmaceutical powders, including drugs and excipients, were compressed in a materials testing instrument and powder compression profiles were derived using the EM and Heckel equations. The compression profiles thus obtained could be sub-divided into regions, one of which was approximately linear; from this region, the compression parameters EM A and 1/K were calculated. A linear relationship between the EM A parameter and the 1/K parameter was obtained with a strong correlation. The slope of the plot was close to 1 (0.84) and the intercept of the plot was small in comparison to the range of parameter values obtained. The relationship between the theoretical EM A parameter and the 1/K parameter supports the interpretation of the empirical Heckel parameter as being a measure of yield stress. It is concluded that the combination of the Heckel and EM equations represents a suitable procedure to derive a value of particle plasticity from powder compression data. Copyright © 2013 Elsevier B.V. All rights reserved.
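The Heckel side of the comparison is a linear fit over the approximately linear region of the compression profile. A minimal sketch (the synthetic data below are generated, not the paper's measurements):

```python
import numpy as np

# Heckel analysis: ln(1/(1 - D)) = K*P + A over the linear region of the
# compression profile (P = pressure, D = relative density). The plasticity
# parameter reported above is 1/K, interpreted as ~3x the yield stress.
def heckel_inverse_k(pressure, relative_density):
    y = np.log(1.0 / (1.0 - relative_density))
    K, A = np.polyfit(pressure, y, 1)   # slope K, intercept A
    return 1.0 / K

# Synthetic linear-region data generated with K = 0.005 MPa^-1 (1/K = 200 MPa)
P = np.array([50.0, 100.0, 150.0, 200.0])
D = 1.0 - np.exp(-(0.005 * P + 0.3))
print(heckel_inverse_k(P, D))  # recovers ~200 MPa
```

The EM A parameter is extracted from the same linear region of the effective medium profile, which is what makes the two parameters directly comparable.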

  12. Constitutive Equation with Varying Parameters for Superplastic Flow Behavior

    Science.gov (United States)

    Guan, Zhiping; Ren, Mingwen; Jia, Hongjie; Zhao, Po; Ma, Pinkui

    2014-03-01

    In this study, constitutive equations for superplastic materials with extra large elongations were investigated through mechanical analysis. From the viewpoint of phenomenology, some traditional empirical constitutive relations were first standardized by restricting certain strain paths and parameter conditions, and the coefficients in these relations were given strict new mechanical definitions. Subsequently, a new general constitutive equation with varying parameters was theoretically deduced based on the general mechanical equation of state. The superplastic tension test data of Zn-5%Al alloy at 340 °C under various strain rates, velocities, and loads were employed to build the new constitutive equation and examine its validity. The analysis indicated that the constitutive equation with varying parameters could characterize superplastic flow behavior in practical superplastic forming with high prediction accuracy and without any restriction on strain path or deformation condition, making it of real industrial and scientific interest. In contrast, the empirical equations have low prediction capability due to their constant parameters, and poor applicability because they are limited to special strain paths or parameter conditions based on strict phenomenology.

  13. Compton scattering study of electron momentum distribution in lithium fluoride using 662 keV gamma radiations

    Science.gov (United States)

    Vijayakumar, R.; Shivaramu; Ramamurthy, N.; Ford, M. J.

    2008-12-01

    Here we report the first 137Cs Compton spectroscopy study of lithium fluoride. The spherical average Compton profiles of lithium fluoride are deduced from Compton scattering measurements on a polycrystalline sample at a gamma-ray energy of 662 keV. To compare with the experimental data, we have computed the spherical average Compton profiles using self-consistent Hartree-Fock wave functions in the linear combination of atomic orbitals (HF-LCAO) approximation. The directional Compton profiles and their anisotropies are also calculated using the same HF-LCAO approximation. The experimental spherical average profiles are found to be in good agreement with the corresponding HF-LCAO calculations and in qualitative agreement with Hartree-Fock free-atom values. The present experimental isotropic and calculated directional profiles are also compared with the available experimental isotropic and directional Compton profiles measured with 59.54 and 159 keV γ-rays.

  14. Spin-splitting calculation for zincblende semiconductors using an atomic bond-orbital model

    International Nuclear Information System (INIS)

    Kao, Hsiu-Fen; Lo, Ikai; Chiang, Jih-Chen; Wang, Wan-Tsang; Hsu, Yu-Chi; Wu, Chieh-Lung; Gau, Ming-Hong; Chen, Chun-Nan; Ren, Chung-Yuan; Lee, Meng-En

    2012-01-01

    We develop a 16-band atomic bond-orbital model (16ABOM) to compute the spin splitting induced by bulk inversion asymmetry in zincblende materials. This model is derived from the linear combination of atomic-orbital (LCAO) scheme such that the characteristics of the real atomic orbitals can be preserved to calculate the spin splitting. The Hamiltonian of 16ABOM is based on a similarity transformation performed on the nearest-neighbor LCAO Hamiltonian with a second-order Taylor expansion over k-vector at the Γ point. The spin-splitting energies in bulk zincblende semiconductors, GaAs and InSb, are calculated, and the results agree with the LCAO and first-principles calculations. However, we find that the spin-orbit coupling between bonding and antibonding p-like states, evaluated by the 16ABOM, dominates the spin splitting of the lowest conduction bands in the zincblende materials.

  15. Empirical psychology, common sense, and Kant's empirical markers for moral responsibility.

    Science.gov (United States)

    Frierson, Patrick

    2008-12-01

    This paper explains the empirical markers by which Kant thinks that one can identify moral responsibility. After explaining the problem of discerning such markers within a Kantian framework, I briefly explain Kant's empirical psychology. I then argue that Kant's empirical markers for moral responsibility, linked to higher faculties of cognition, are not sufficient conditions for moral responsibility, primarily because they are empirical characteristics subject to natural laws. Next, I argue that these markers are not necessary conditions of moral responsibility. Given Kant's transcendental idealism, even an entity that lacks these markers could be free and morally responsible, although as a matter of fact Kant thinks that none are. Given that they are neither necessary nor sufficient conditions, I discuss the status of Kant's claim that higher faculties are empirical markers of moral responsibility. Drawing on connections between Kant's ethical theory and 'common rational cognition' (4:393), I suggest that Kant's theory of empirical markers can be traced to ordinary common sense beliefs about responsibility. This suggestion helps explain both why empirical markers are important and what the limits of empirical psychology are within Kant's account of moral responsibility.

  16. A Socio-Cultural Model Based on Empirical Data of Cultural and Social Relationship

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2010-01-01

    The goal of this paper is to integrate culture and social relationship as computational terms in an embodied conversational agent system by employing empirical and theoretical approaches. We propose a parameter-based model that predicts nonverbal expressions appropriate for specific cultures in different social relationships. First, we introduce theories of social and cultural characteristics. Then, we carried out a corpus analysis of human interaction in two cultures in two different social situations and extracted empirical data. Finally, by integrating socio-cultural characteristics with the empirical data, we establish a parameterized network model that generates culture-specific non-verbal expressions in different social relationships.

  17. The Use of Asymptotic Functions for Determining Empirical Values of CN Parameter in Selected Catchments of Variable Land Cover

    Science.gov (United States)

    Wałęga, Andrzej; Młyński, Dariusz; Wachulec, Katarzyna

    2017-12-01

    The aim of the study was to assess the applicability of asymptotic functions for determining the value of the CN parameter as a function of precipitation depth in mountain and upland catchments. The analyses were carried out in two catchments: the Rudawa, a left tributary of the Vistula, and the Kamienica, a right tributary of the Dunajec. The input material included data on precipitation and flows for the multi-year period 1980-2012, obtained from IMGW PIB in Warsaw. Two models were used to determine empirical values of the CNobs parameter as a function of precipitation depth: the standard Hawkins model and the 2-CN model, which allows for the heterogeneous nature of a catchment area. The analyses confirmed that asymptotic functions properly described the P-CNobs relationship over the entire range of precipitation variability. For high rainfalls, CNobs remained above or below the commonly accepted average antecedent moisture condition AMCII. The calculations indicated that the runoff amount computed according to the original SCS-CN method might be underestimated, which could adversely affect the values of design flows required for the design of hydraulic engineering projects. In catchments with heterogeneous land cover, the results for CNobs were more accurate when the 2-CN model was used instead of the standard Hawkins model, as the 2-CN model more precisely accounts for differences in runoff formation depending on the retention capacity of the substrate. It was also demonstrated that the commonly accepted initial abstraction coefficient λ = 0.20 yielded too large an initial loss of precipitation in the analyzed catchments and, therefore, the computed direct runoff was underestimated. The best results were obtained for λ = 0.05.
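The sensitivity to the initial abstraction coefficient λ discussed above follows directly from the SCS-CN runoff equation. A minimal sketch of the standard method (depths in mm; the rainfall and CN values are illustrative):

```python
# SCS-CN direct runoff (depths in mm):
#   S  = 25400/CN - 254        (potential maximum retention)
#   Ia = lam * S               (initial abstraction)
#   Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0
def scs_runoff(P, CN, lam=0.20):
    S = 25400.0 / CN - 254.0
    Ia = lam * S
    return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

# Lowering lambda from 0.20 to 0.05 reduces the initial loss and increases
# the computed runoff for the same CN and rainfall, as the study found.
print(scs_runoff(50.0, 75.0, lam=0.20))   # smaller runoff
print(scs_runoff(50.0, 75.0, lam=0.05))   # larger runoff
```

With λ = 0.20 a 50 mm storm on CN = 75 loses ~17 mm before any runoff begins, which is exactly the overestimated initial loss the study identifies.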

  18. Sea Surface Height Determination In The Arctic Using Cryosat-2 SAR Data From Primary Peak Empirical Retrackers

    DEFF Research Database (Denmark)

    Jain, Maulik; Andersen, Ole Baltazar; Dall, Jørgen

    2015-01-01

    SAR waveforms from Cryosat-2 are processed using primary peak empirical retrackers to determine the sea surface height in the Arctic. The empirical retrackers investigated are based on the combination of the traditional OCOG (Offset Center of Gravity) and threshold methods with primary peak extraction. The primary peak retrackers involve the application of retracking algorithms on just the primary peak of the waveform instead of the complete reflected waveform. These primary peak empirical retrackers are developed for Cryosat-2 SAR data. This is the first time SAR data in the Arctic … and five-parameter beta retrackers. In the case of SAR-lead data, it is concluded that the proposed primary peak retrackers work better than the traditional retrackers (OCOG, threshold, five-parameter beta) as well as the ESA retracker.

  19. Development of an empirical correlation for combustion durations in spark ignition engines

    International Nuclear Information System (INIS)

    Bayraktar, Hakan; Durgun, Orhan

    2004-01-01

    Development of an empirical correlation for combustion duration is presented. For this purpose, the effects of variations in compression ratio, engine speed, fuel/air equivalence ratio and spark advance on combustion duration have been determined by means of a quasi-dimensional SI engine cycle model previously developed by the authors. Burn durations at several engine operating conditions were calculated from the turbulent combustion model. The variation of combustion duration with each operating parameter obtained from the theoretical results was expressed by a second-degree polynomial function. By using these functions, a general empirical correlation for burn duration has been developed, in which the effects of the engine operating parameters on combustion duration are taken into account. Combustion durations predicted by means of this correlation are in good agreement with those obtained from experimental studies and from a detailed combustion model.
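The structure of such a correlation (a reference duration scaled by one second-degree polynomial per operating parameter) can be sketched as follows. All coefficients and reference conditions below are invented for illustration and are not the paper's fitted values:

```python
# Hypothetical sketch: burn duration as a reference value multiplied by
# second-degree polynomials in each normalized operating parameter
# (compression ratio r, engine speed N, equivalence ratio phi,
# spark advance theta). Each polynomial equals 1 at its reference point.
def poly2(x, x0, a, b):
    d = x / x0
    return a * d * d + b * d + (1.0 - a - b)   # = 1 exactly at x = x0

def burn_duration(r, N, phi, theta, dtheta_ref=25.0):
    return (dtheta_ref
            * poly2(r, 8.5, 0.10, -0.40)       # illustrative coefficients
            * poly2(N, 3000.0, 0.20, -0.10)
            * poly2(phi, 1.0, 0.30, -0.50)
            * poly2(theta, 20.0, 0.05, 0.10))

# At the reference operating point every factor is 1, so the correlation
# returns the reference duration (25 crank-angle degrees here).
print(burn_duration(8.5, 3000.0, 1.0, 20.0))
```

The actual coefficients in the paper were fitted to burn durations generated by the quasi-dimensional cycle model.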

  20. WIPP Compliance Certification Application calculations parameters. Part 1: Parameter development

    International Nuclear Information System (INIS)

    Howarth, S.M.

    1997-01-01

    The Waste Isolation Pilot Plant (WIPP) in southeast New Mexico has been studied as a transuranic waste repository for the past 23 years. During this time, an extensive site characterization, design, construction, and experimental program was completed, which provided in-depth understanding of the dominant processes that are most likely to influence the containment of radionuclides for 10,000 years. Nearly 1,500 parameters were developed using information gathered from this program; the parameters were input to numerical models for WIPP Compliance Certification Application (CCA) Performance Assessment (PA) calculations. The CCA probabilistic codes frequently require input values that define a statistical distribution for each parameter. Developing parameter distributions begins with the assignment of an appropriate distribution type, which is dependent on the type, magnitude, and volume of data or information available. The development of the parameter distribution values may require interpretation or statistical analysis of raw data, combining raw data with literature values, scaling of lab or field data to fit code grid mesh sizes, or other transformation. Parameter development and documentation of the development process were very complicated, especially for those parameters based on empirical data; they required the integration of information from Sandia National Laboratories (SNL) code sponsors, parameter task leaders (PTLs), performance assessment analysts (PAAs), and experimental principal investigators (PIs). This paper, Part 1 of two parts, contains a discussion of the parameter development process, roles and responsibilities, and lessons learned. Part 2 will discuss parameter documentation, traceability and retrievability, and lessons learned from related audits and reviews

  1. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

    In the present research, three artificial intelligence methods including Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10, 12 and 26 equations being temperature-based, sunshine-based and meteorological parameters-based, respectively), were used to estimate daily solar radiation in Kerman, Iran over the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs to the mentioned intelligent methods. To compare the accuracy of the empirical equations and the intelligent models, root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R2) indices were used. The results showed that, in general, the sunshine-based and meteorological parameters-based scenarios in the ANN and ANFIS models presented higher accuracy than the mentioned empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R2 indices for the mentioned model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.
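The four comparison indices used above are standard and easy to compute. A minimal sketch (the series below are toy numbers; R2 is taken here as the coefficient-of-determination form, one common definition):

```python
import numpy as np

# RMSE, MAE, MARE (%) and R2 for observed (o) vs. estimated (e) series,
# as used to rank the empirical equations and AI models.
def indices(o, e):
    o, e = np.asarray(o, float), np.asarray(e, float)
    rmse = np.sqrt(np.mean((e - o) ** 2))
    mae = np.mean(np.abs(e - o))
    mare = np.mean(np.abs((e - o) / o)) * 100.0   # percent
    r2 = 1.0 - np.sum((o - e) ** 2) / np.sum((o - o.mean()) ** 2)
    return rmse, mae, mare, r2

rmse, mae, mare, r2 = indices([10.0, 20.0, 30.0], [11.0, 19.0, 30.0])
print(rmse, mae, mare, r2)
```

Ranking models on several indices at once, as the study does, guards against a method that optimizes one error measure at the expense of another.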

  2. Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters

    Science.gov (United States)

    Selyutina, N. S.; Petrov, Yu. V.

    2018-02-01

    The behavior of the yield strength of steel and a number of aluminum alloys is investigated over a wide range of strain rates, based on the incubation time criterion of yield and the empirical Johnson-Cook and Cowper-Symonds models. In this paper, expressions for the parameters of the empirical models are derived from the characteristics of the incubation time criterion, and satisfactory agreement between these data and experimental results is obtained. The parameters of the empirical models can depend on the strain rate. The independence of the characteristics of the incubation time criterion from the loading history, and their connection with the structural and temporal features of the plastic deformation process, give the incubation-time approach an advantage over the empirical models and yield an effective and convenient equation for determining the yield strength over a wider range of strain rates.
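For orientation, the rate-dependence factors of the two empirical models named above can be sketched as follows (the Cowper-Symonds constants used here are illustrative literature-style values for mild steel, not the paper's fitted parameters):

```python
import math

# Rate-hardening factors multiplying the static yield stress:
#   Johnson-Cook:    1 + C * ln(rate / rate0)
#   Cowper-Symonds:  1 + (rate / D) ** (1/p)
def johnson_cook_factor(rate, rate0, C):
    return 1.0 + C * math.log(rate / rate0)

def cowper_symonds_factor(rate, D, p):
    return 1.0 + (rate / D) ** (1.0 / p)

# Illustrative mild-steel Cowper-Symonds constants: D = 40.4 s^-1, p = 5.
# At 100 s^-1 the dynamic yield stress roughly doubles.
print(cowper_symonds_factor(100.0, 40.4, 5.0))
print(johnson_cook_factor(100.0, 1.0, 0.0))   # C = 0 recovers the static limit
```

The paper's contribution is to express C, D and p (and their possible rate dependence) through the incubation time characteristics instead of treating them as free fitting constants.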

  3. Inglorious Empire

    DEFF Research Database (Denmark)

    Khair, Tabish

    2017-01-01

    Review of 'Inglorious Empire: What the British did to India' by Shashi Tharoor, London, Hurst Publishers, 2017, 296 pp., £20.00.

  4. Empirical Hamiltonians

    International Nuclear Information System (INIS)

    Peggs, S.; Talman, R.

    1986-08-01

    As proton accelerators get larger, and include more magnets, the conventional tracking programs which simulate them run slower. At the same time, in order to more carefully optimize the increasingly costly accelerators, they must return more accurate results, even in the presence of a longer list of realistic effects, such as magnet errors and misalignments. For these reasons conventional tracking programs continue to be computationally bound, despite the continually increasing computing power available. This limitation is especially severe for a class of problems in which some lattice parameter is slowly varying, when a faithful description is only obtained by tracking for an exceedingly large number of turns. Examples are synchrotron oscillations in which the energy varies slowly with a period of, say, hundreds of turns, or magnet ripple or noise on a comparably slow time scale. In these cases one may wish to track for hundreds of periods of the slowly varying parameter. The purpose of this paper is to describe a method, still under development, in which element-by-element tracking around one turn is replaced by a single map, which can be processed far faster. Similar programs have already been written in which successive elements are ''concatenated'' with truncation to linear, sextupole, or octupole order, et cetera, using Lie algebraic techniques to preserve symplecticity. The method described here is rather more empirical than this but, in principle, contains information to all orders and is able to handle resonances in a more straightforward fashion.
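    The idea of replacing element-by-element tracking with a single one-turn map can be illustrated with a toy Hénon-like map: a linear rotation representing the one-turn linear optics, followed by a thin sextupole kick. This is a generic sketch under assumed values of the tune and sextupole strength, not the empirical map construction described in the paper:

    ```python
    import math

    def one_turn_map(x, px, mu=0.2, k2=1.0):
        """One turn as a single map: rotation by phase advance mu, then a
        thin-lens sextupole kick. Replaces element-by-element tracking."""
        c, s = math.cos(2 * math.pi * mu), math.sin(2 * math.pi * mu)
        x, px = c * x + s * px, -s * x + c * px   # linear one-turn rotation
        px += k2 * x * x                          # thin sextupole kick
        return x, px

    def track(x, px, turns):
        """Iterate the one-turn map for many turns."""
        for _ in range(turns):
            x, px = one_turn_map(x, px)
        return x, px
    ```

    Because one turn is a single function evaluation, tracking for hundreds of periods of a slowly varying parameter costs a few map applications per turn rather than a pass through every magnet.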

  5. Empirical Validation of a Thermal Model of a Complex Roof Including Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Stéphane Guichard

    2015-12-01

    This paper deals with the empirical validation of a building thermal model of a complex roof including a phase change material (PCM). A mathematical model dedicated to PCMs, based on the apparent heat capacity method, was implemented in a multi-zone building simulation code, the aim being to increase the understanding of the thermal behavior of a whole building with PCM technologies. The empirical validation methodology is based on both numerical and experimental studies. A parametric sensitivity analysis was performed and a set of parameters of the thermal model was identified for optimization. The generic optimization program GenOpt®, coupled to the building simulation code, was used to determine an adequate set of parameters. We first present the empirical validation methodology and the main results of previous work. We then give an overview of GenOpt® and its coupling with the building simulation code. Finally, once the optimization results are obtained, the thermal predictions are compared with measurements and the agreement is found to be acceptable.

  6. Corrosion-induced bond strength degradation in reinforced concrete-Analytical and empirical models

    International Nuclear Information System (INIS)

    Bhargava, Kapilesh; Ghosh, A.K.; Mori, Yasuhiro; Ramanujam, S.

    2007-01-01

    The present paper investigates the relationship between bond strength and reinforcement corrosion in reinforced concrete (RC). Analytical and empirical models are proposed for the bond strength of corroded reinforcing bars. The analytical model proposed by Cairns and Abdullah [Cairns, J., Abdullah, R.B., 1996. Bond strength of black and epoxy-coated reinforcement-a theoretical approach. ACI Mater. J. 93 (4), 362-369] for splitting bond failure, later modified by Coronelli [Coronelli, D. 2002. Corrosion cracking and bond strength modeling for corroded bars in reinforced concrete. ACI Struct. J. 99 (3), 267-276] to consider corroded bars, has been adopted. Estimation of the various parameters in this analytical model is proposed by the present authors. These parameters include the corrosion pressure due to the expansive action of corrosion products, the modeling of the tensile behaviour of cracked concrete, and the adhesion and friction coefficient between the corroded bar and cracked concrete. Simple empirical models are also proposed to evaluate the reduction in bond strength as a function of reinforcement corrosion in RC specimens. These empirical models are based on a wide range of published experimental investigations of bond degradation in RC specimens due to reinforcement corrosion. It has been found that the proposed analytical and empirical bond models provide estimates of the bond strength of corroded reinforcement that are in reasonably good agreement with the experimentally observed values and with other published analytical and empirical predictions. An attempt has also been made to evaluate the flexural strength of RC beams with corroded reinforcement failing in bond. The analytical predictions for the flexural strength of RC beams based on the proposed bond degradation models are also in agreement with the experimentally observed values.

  7. A nonparametric empirical Bayes framework for large-scale multiple testing.

    Science.gov (United States)

    Martin, Ryan; Tokdar, Surya T

    2012-07-01

    We propose a flexible and identifiable version of the 2-groups model, motivated by hierarchical Bayes considerations, that features an empirical null and a semiparametric mixture model for the nonnull cases. We use a computationally efficient predictive recursion (PR) marginal likelihood procedure to estimate the model parameters, even the nonparametric mixing distribution. This leads to a nonparametric empirical Bayes testing procedure, which we call PRtest, based on thresholding the estimated local false discovery rates. Simulations and real data examples demonstrate that, compared to existing approaches, PRtest's careful handling of the nonnull density can give a much better fit in the tails of the mixture distribution which, in turn, can lead to more realistic conclusions.
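    The two-groups machinery underlying local false discovery rates can be sketched as follows, with a standard Gaussian null and, purely for illustration, a single Gaussian non-null component in place of the paper's semiparametric mixture (the values of pi0, mu1 and sd1 here are assumed toy values, not estimates produced by predictive recursion):

    ```python
    import math

    def phi(z, mu=0.0, sd=1.0):
        """Gaussian density."""
        return math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    def local_fdr(z, pi0=0.9, mu1=3.0, sd1=1.0):
        """Two-groups local fdr: pi0*f0(z) / (pi0*f0(z) + (1-pi0)*f1(z))."""
        f = pi0 * phi(z) + (1.0 - pi0) * phi(z, mu1, sd1)
        return pi0 * phi(z) / f

    def discoveries(zs, cutoff=0.2):
        """Flag z-scores whose estimated local fdr falls below the cutoff."""
        return [z for z in zs if local_fdr(z) < cutoff]
    ```

    The PRtest procedure described above follows the same thresholding logic, but estimates pi0, the null parameters and the non-null mixing distribution from the data.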

  8. Extended Analysis of Empirical Citations with Skinner's "Verbal Behavior": 1984-2004

    Science.gov (United States)

    Dixon, Mark R.; Small, Stacey L.; Rosales, Rocio

    2007-01-01

    The present paper comments on and extends the citation analysis of verbal operant publications based on Skinner's "Verbal Behavior" (1957) by Dymond, O'Hora, Whelan, and O'Donovan (2006). Variations in population parameters were evaluated for only those studies that Dymond et al. categorized as empirical. Preliminary results indicate that the…

  9. Electronic structure and electron momentum density in TiSi

    Energy Technology Data Exchange (ETDEWEB)

    Ghaleb, A.M. [Department of Physics, College of Science, University of Kirkuk, Kirkuk (Iraq); Mohammad, F.M. [Department of Physics, College of Science, University of Tikreet, Tikreet (Iraq); Sahariya, Jagrati [Department of Physics, University College of Science, M.L. Sukhadia University, Udaipur 313001, Rajasthan (India); Sharma, Mukesh [Physics Division, Forensic Science Laboratory, Jaipur, Rajasthan (India); Ahuja, B.L., E-mail: blahuja@yahoo.com [Department of Physics, University College of Science, M.L. Sukhadia University, Udaipur 313001, Rajasthan (India)

    2013-03-01

    We report the electron momentum density in titanium monosilicide using {sup 241}Am Compton spectrometer. Experimental Compton profile has been compared with the theoretical profiles computed using linear combination of atomic orbitals (LCAO). The energy bands, density of states and Fermi surface structures of TiSi are reported using the LCAO and the full potential linearized augmented plane wave methods. Theoretical anisotropies in directional Compton profiles are interpreted in terms of energy bands. To confirm the conducting behavior, we also report the real space analysis of experimental Compton profile of TiSi.

  10. Generalized empirical equation for the extrapolated range of electrons in elemental and compound materials

    International Nuclear Information System (INIS)

    Lima, W. de; Poli CR, D. de

    1999-01-01

    The extrapolated range R_ex of electrons is useful for various purposes in research and in the application of electrons, for example in polymer modification, electron energy determination and the estimation of effects associated with deep penetration of electrons. A number of works have used empirical equations to express the extrapolated range for some elements. In this work a generalized empirical equation, very simple and accurate, is proposed for the energy region 0.3 keV - 50 MeV. The extrapolated range for elements, in organic or inorganic molecules and compound materials, can be well expressed as a function of the atomic number Z, or of two empirical parameters Z_m for molecules and Z_c for compound materials instead of Z. (author)

  11. Verification of supersonic and hypersonic semi-empirical predictions using CFD

    International Nuclear Information System (INIS)

    McIlwain, S.; Khalid, M.

    2004-01-01

    CFD was used to verify the accuracy of the axial force, normal force, and pitching moment predictions of two semi-empirical codes. This analysis considered the flow around the forebody of four different aerodynamic shapes. These included geometries with equal-volume straight or tapered bodies, with either standard or double-angle nose cones. The flow was tested at freestream Mach numbers of M = 1.5, 4.0, and 7.0. The CFD results gave the expected flow pressure contours for each geometry. The geometries with straight bodies produced larger axial forces, smaller normal forces, and larger pitching moments compared to the geometries with tapered bodies. The double-angle nose cones introduced a shock into the flow, but affected the straight-body geometries more than the tapered-body geometries. Both semi-empirical codes predicted axial forces that were consistent with the CFD data. The agreement between the normal forces and pitching moments was not as good, particularly for the straight-body geometries. But even though the semi-empirical results were not exactly the same as the CFD data, the semi-empirical codes provided rough estimates of the aerodynamic parameters in a fraction of the time required to perform a CFD analysis. (author)

  12. Reflective equilibrium and empirical data: third person moral experiences in empirical medical ethics.

    Science.gov (United States)

    De Vries, Martine; Van Leeuwen, Evert

    2010-11-01

    In ethics, the use of empirical data has become more and more popular, leading to a distinct form of applied ethics, namely empirical ethics. This 'empirical turn' is especially visible in bioethics. There are various ways of combining empirical research and ethical reflection. In this paper we discuss the use of empirical data in a special form of Reflective Equilibrium (RE), namely the Network Model with Third Person Moral Experiences. In this model, the empirical data consist of the moral experiences of people in a practice. Although inclusion of these moral experiences in this specific model of RE can be well defended, their use in the application of the model still raises important questions. What precisely are moral experiences? How to determine relevance of experiences, in other words: should there be a selection of the moral experiences that are eventually used in the RE? How much weight should the empirical data have in the RE? And the key question: can the use of RE by empirical ethicists really produce answers to practical moral questions? In this paper we start to answer the above questions by giving examples taken from our research project on understanding the norm of informed consent in the field of pediatric oncology. We especially emphasize that incorporation of empirical data in a network model can reduce the risk of self-justification and bias and can increase the credibility of the RE reached. © 2009 Blackwell Publishing Ltd.

  13. Empirical research in medical ethics: How conceptual accounts on normative-empirical collaboration may improve research practice

    Science.gov (United States)

    2012-01-01

    Background The methodology of medical ethics during the last few decades has shifted from a predominant use of normative-philosophical analyses to an increasing involvement of empirical methods. The articles which have been published in the course of this so-called 'empirical turn' can be divided into conceptual accounts of empirical-normative collaboration and studies which use socio-empirical methods to investigate ethically relevant issues in concrete social contexts. Discussion A considered reference to normative research questions can be expected from good quality empirical research in medical ethics. However, a significant proportion of empirical studies currently published in medical ethics lacks such linkage between the empirical research and the normative analysis. In the first part of this paper, we will outline two typical shortcomings of empirical studies in medical ethics with regard to a link between normative questions and empirical data: (1) The complete lack of normative analysis, and (2) cryptonormativity and a missing account with regard to the relationship between 'is' and 'ought' statements. Subsequently, two selected concepts of empirical-normative collaboration will be presented and how these concepts may contribute to improve the linkage between normative and empirical aspects of empirical research in medical ethics will be demonstrated. Based on our analysis, as well as our own practical experience with empirical research in medical ethics, we conclude with a sketch of concrete suggestions for the conduct of empirical research in medical ethics. Summary High quality empirical research in medical ethics is in need of a considered reference to normative analysis. In this paper, we demonstrate how conceptual approaches of empirical-normative collaboration can enhance empirical research in medical ethics with regard to the link between empirical research and normative analysis. PMID:22500496

  14. Probabilistic inference of ecohydrological parameters using observations from point to satellite scales

    Science.gov (United States)

    Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.

    2018-06-01

    Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
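    As a toy stand-in for inverting an analytical soil-saturation pdf, one can fit a two-parameter Beta density to saturation samples in (0, 1) by the method of moments. This is only a sketch of the general idea of estimating pdf parameters from soil moisture observations, not the stochastic soil water balance pdf or the Bayesian inference framework used in the paper:

    ```python
    def fit_beta_moments(samples):
        """Method-of-moments fit of a Beta(a, b) density to soil-saturation
        samples in (0, 1): match the sample mean and variance."""
        n = len(samples)
        m = sum(samples) / n
        v = sum((s - m) ** 2 for s in samples) / n
        common = m * (1.0 - m) / v - 1.0
        return m * common, (1.0 - m) * common
    ```

    Fitting the distribution of saturation values rather than the time series is what allows sparse, irregularly sampled records to constrain the parameters, as the abstract notes for records of as few as 100 daily observations.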

  15. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    Energy Technology Data Exchange (ETDEWEB)

    Roeshoff, Kennert; Lanaro, Flavio [Berg Bygg Konsult AB, Stockholm (Sweden); Lanru Jing [Royal Inst. of Techn., Stockholm (Sweden). Div. of Engineering Geology

    2002-05-01

    This report presents the results of one part of a wider project to determine a methodology for estimating the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: an empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All parts of the Project were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This report considers only the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKB's rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass, such as the deformation modulus, the friction angle and cohesion for a certain stress interval, and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among these, some were chosen because they are robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR), but the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were also applied for comparison with the two classical methods. The process considered was: i) sorting the geometrical/geological/rock mechanics data; ii) identifying homogeneous rock volumes; iii) determining the input parameters for the empirical ratings for rock mass characterisation; iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings. By comparing the methodologies involved

  16. Strategy for a Rock Mechanics Site Descriptive Model. Development and testing of the empirical approach

    International Nuclear Information System (INIS)

    Roeshoff, Kennert; Lanaro, Flavio; Lanru Jing

    2002-05-01

    This report presents the results of one part of a wider project to determine a methodology for estimating the rock mechanics properties of the rock mass for the so-called Aespoe Test Case. The Project consists of three major parts: an empirical part dealing with the characterisation of the rock mass by applying empirical methods, a part determining the rock mechanics properties of the rock mass through numerical modelling, and a third part carrying out numerical modelling for the determination of the stress state at Aespoe. All parts of the Project were performed based on a limited amount of data about the geology and mechanical tests on samples selected from the Aespoe Database. This report considers only the empirical approach. The purpose of the project is the development of a descriptive rock mechanics model for SKB's rock mass investigations for a final repository site. The empirical characterisation of the rock mass provides correlations with some of the rock mechanics properties of the rock mass, such as the deformation modulus, the friction angle and cohesion for a certain stress interval, and the uniaxial compressive strength. For the characterisation of the rock mass, several empirical methods were analysed and reviewed. Among these, some were chosen because they are robust, applicable and widespread in modern rock mechanics. Major weight was given to the well-known Tunnel Quality Index (Q) and Rock Mass Rating (RMR), but the Rock Mass Index (RMi), the Geological Strength Index (GSI) and Ramamurthy's Criterion were also applied for comparison with the two classical methods. The process considered was: i) sorting the geometrical/geological/rock mechanics data; ii) identifying homogeneous rock volumes; iii) determining the input parameters for the empirical ratings for rock mass characterisation; iv) evaluating the mechanical properties by using empirical relations with the rock mass ratings. By comparing the methodologies involved by the

  17. An Empirical Model for Energy Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rosewater, David Martin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scott, Paul [TransPower, Poway, CA (United States)

    2016-03-17

    Improved models of energy storage systems are needed to enable the electric grid’s adaptation to increasing penetration of renewables. This paper develops a generic empirical model of energy storage system performance agnostic of type, chemistry, design or scale. Parameters for this model are calculated using test procedures adapted from the US DOE Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage. We then assess the accuracy of this model for predicting the performance of the TransPower GridSaver – a 1 MW rated lithium-ion battery system that underwent laboratory experimentation and analysis. The developed model predicts a range of energy storage system performance based on the uncertainty of estimated model parameters. Finally, this model can be used to better understand the integration and coordination of energy storage on the electric grid.

  18. Compton profiles and Mulliken’s populations of cobalt, nickel and copper tungstates: Experiment and theory

    Energy Technology Data Exchange (ETDEWEB)

    Meena, B.S. [Department of Physics, M.L. Sukhadia University, Udaipur 313001, Rajasthan (India); Heda, N.L. [Department of Pure and Applied Physics, University of Kota, Kota 324010, Rajasthan (India); Kumar, Kishor; Bhatt, Samir; Mund, H.S. [Department of Physics, M.L. Sukhadia University, Udaipur 313001, Rajasthan (India); Ahuja, B.L., E-mail: blahuja@yahoo.com [Department of Physics, M.L. Sukhadia University, Udaipur 313001, Rajasthan (India)

    2016-03-01

    We present the first ever studies on Compton profiles of AWO{sub 4} (A=Co, Ni and Cu) using 661.65 keV γ-rays emitted by {sup 137}Cs source. The experimental momentum densities have been employed to validate exchange and correlation potentials within linear combination of atomic orbitals (LCAO) method. Density functional theory (DFT) with local density approximation and generalized gradient approximation and also the hybridization of Hartree-Fock and DFT (B3LYP and PBE0) have been considered under LCAO scheme. The LCAO-B3LYP scheme is found to be in better agreement with the experimental data than other approximations considered in this work, suggesting applicability of B3LYP approach in predicting the electronic properties of these tungstates. The Mulliken’s population (MP) data show charge transfer from Co/Ni/Cu and W to O atoms. The experimental profiles when normalized to same area show almost similar localization of 3d electrons (in real space) of Ni and Cu which is lower than that of Co in their AWO{sub 4} environment.

  19. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    Science.gov (United States)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

    Hydrological models can be classified as physical, mathematical or empirical. The latter class uses mathematical equations independent of the physical processes involved in the hydrological system; linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models. Conventional empirical models are still used as tools for hydrological analysis through probabilistic approaches. In many regions of the world, watersheds are ungauged. This is true even in developed countries, where the gauging network has continued to decline as a result of the lack of human and financial resources. The obvious lack of data in these watersheds makes it impossible to apply basic empirical models for daily forecasting, so a combination of rainfall-runoff models had to be found with which data could be generated and used to estimate flows. Estimated design floods illustrate the difficulties facing the hydrologist in constructing a standard empirical model in basins where hydrological information is scarce. A climate-hydrological model based on frequency analysis was established to estimate the design flood in the Anseghmir catchments, Morocco. This complex model was chosen for its ability to be applied in watersheds where hydrological information is insufficient. The method was found to be a powerful tool for estimating the design flood of the watershed as well as other hydrological elements (runoff, volumes of water...). The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes and design flood for different return periods.
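    A common frequency-analysis ingredient for design-flood estimation is a Gumbel (EV1) fit to annual maximum flows. The sketch below uses a method-of-moments fit and is illustrative only, not the authors' climate-hydrological model; the data values in the test are invented:

    ```python
    import math

    def gumbel_design_flood(annual_maxima, T):
        """Fit a Gumbel distribution to annual maxima by the method of
        moments and return the T-year design flood quantile."""
        n = len(annual_maxima)
        mean = sum(annual_maxima) / n
        sd = math.sqrt(sum((q - mean) ** 2 for q in annual_maxima) / (n - 1))
        beta = sd * math.sqrt(6.0) / math.pi          # scale parameter
        mu = mean - 0.5772 * beta                     # location (Euler-Mascheroni)
        # Invert the Gumbel CDF at non-exceedance probability 1 - 1/T.
        return mu - beta * math.log(-math.log(1.0 - 1.0 / T))
    ```

    The same fitted parameters give quantiles for any return period, which is how runoff volumes and design floods for several return periods can be reported from one record.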

  20. Seven-parameter statistical model for BRDF in the UV band.

    Science.gov (United States)

    Bai, Lu; Wu, Zhensen; Zou, Xiren; Cao, Yunhua

    2012-05-21

    A new semi-empirical seven-parameter BRDF model is developed in the UV band using experimentally measured data. The model is based on the five-parameter model of Wu and the fourteen-parameter model of Renhorn and Boreman. Surface scatter, bulk scatter and retro-reflection scatter are considered. An optimizing modeling method, the artificial immune network genetic algorithm, is used to fit the BRDF measurement data over a wide range of incident angles. The calculation time and accuracy of the five- and seven-parameter models are compared. After fixing the seven parameters, the model can well describe scattering data in the UV band.

  1. Prognostic value of pre-treatment DCE-MRI parameters in predicting disease free and overall survival for breast cancer patients undergoing neoadjuvant chemotherapy

    International Nuclear Information System (INIS)

    Pickles, Martin D.; Manton, David J.; Lowry, Martin; Turnbull, Lindsay W.

    2009-01-01

    The purpose of this study was to investigate whether dynamic contrast enhanced MRI (DCE-MRI) data, both pharmacokinetic and empirical, can predict, prior to neoadjuvant chemotherapy, which patients are likely to have a shorter disease free survival (DFS) and overall survival (OS) interval following surgery. Traditional prognostic parameters were also included in the survival analysis. Consequently, a comparison of the prognostic value could be made between all the parameters studied. MR examinations were conducted on a 1.5 T system in 68 patients prior to the initiation of neoadjuvant chemotherapy. DCE-MRI consisted of a fast spoiled gradient echo sequence acquired over 35 phases with a mean temporal resolution of 11.3 s. Both pharmacokinetic and empirical parameters were derived from the DCE-MRI data. Kaplan-Meier survival plots were generated for each parameter and group comparisons were made utilising logrank tests. The results from the 54 patients entered into the univariate survival analysis demonstrated that traditional prognostic parameters (tumour grade, hormonal status and size), empirical parameters (maximum enhancement index, enhancement index at 30 s, area under the curve and initial slope) and adjuvant therapies demonstrated significant differences in survival intervals. Further multivariate Cox regression survival analysis revealed that empirical enhancement parameters contributed the greatest prediction of both DFS and OS in the resulting models. In conclusion, this study has demonstrated that in patients who exhibit high levels of perfusion and vessel permeability pre-treatment, evidenced by elevated empirical DCE-MRI parameters, a significantly lower disease free survival and overall survival can be expected.
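    The Kaplan-Meier estimator used for the survival plots above can be computed by hand. The sketch below implements the standard product-limit formula S(t) = prod(1 - d_i/n_i) over distinct event times, with right-censoring; the data in the test are toy values, not the study's:

    ```python
    def kaplan_meier(times, events):
        """Kaplan-Meier survival estimate at each distinct event time.
        times: follow-up times; events: 1 = event observed, 0 = censored.
        Returns a list of (time, S(time)) pairs."""
        pairs = sorted(zip(times, events))
        n_at_risk = len(pairs)
        s, curve = 1.0, []
        i = 0
        while i < len(pairs):
            t = pairs[i][0]
            d = at = 0
            while i < len(pairs) and pairs[i][0] == t:
                d += pairs[i][1]   # events at time t
                at += 1            # subjects leaving the risk set at t
                i += 1
            if d:
                s *= 1.0 - d / n_at_risk
                curve.append((t, s))
            n_at_risk -= at
        return curve
    ```

    Group comparisons such as the logrank tests reported above then compare these curves between patient groups split on each DCE-MRI parameter.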

  2. Prognostic value of pre-treatment DCE-MRI parameters in predicting disease free and overall survival for breast cancer patients undergoing neoadjuvant chemotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Pickles, Martin D. [Centre for Magnetic Resonance Investigations, Division of Cancer, Postgraduate Medical School, University of Hull, Hull Royal Infirmary, Anlaby Road, Hull, HU3 2JZ (United Kingdom)], E-mail: m.pickles@hull.ac.uk; Manton, David J. [Centre for Magnetic Resonance Investigations, Division of Cancer, Postgraduate Medical School, University of Hull, Hull Royal Infirmary, Anlaby Road, Hull, HU3 2JZ (United Kingdom)], E-mail: d.j.manton@hull.ac.uk; Lowry, Martin [Centre for Magnetic Resonance Investigations, Division of Cancer, Postgraduate Medical School, University of Hull, Hull Royal Infirmary, Anlaby Road, Hull, HU3 2JZ (United Kingdom)], E-mail: m.lowry@hull.ac.uk; Turnbull, Lindsay W. [Centre for Magnetic Resonance Investigations, Division of Cancer, Postgraduate Medical School, University of Hull, Hull Royal Infirmary, Anlaby Road, Hull, HU3 2JZ (United Kingdom)], E-mail: l.w.turnbull@hull.ac.uk

    2009-09-15

    The purpose of this study was to investigate whether dynamic contrast enhanced MRI (DCE-MRI) data, both pharmacokinetic and empirical, can predict, prior to neoadjuvant chemotherapy, which patients are likely to have a shorter disease free survival (DFS) and overall survival (OS) interval following surgery. Traditional prognostic parameters were also included in the survival analysis. Consequently, a comparison of the prognostic value could be made between all the parameters studied. MR examinations were conducted on a 1.5 T system in 68 patients prior to the initiation of neoadjuvant chemotherapy. DCE-MRI consisted of a fast spoiled gradient echo sequence acquired over 35 phases with a mean temporal resolution of 11.3 s. Both pharmacokinetic and empirical parameters were derived from the DCE-MRI data. Kaplan-Meier survival plots were generated for each parameter and group comparisons were made utilising logrank tests. The results from the 54 patients entered into the univariate survival analysis demonstrated that traditional prognostic parameters (tumour grade, hormonal status and size), empirical parameters (maximum enhancement index, enhancement index at 30 s, area under the curve and initial slope) and adjuvant therapies demonstrated significant differences in survival intervals. Further multivariate Cox regression survival analysis revealed that empirical enhancement parameters contributed the greatest prediction of both DFS and OS in the resulting models. In conclusion, this study has demonstrated that in patients who exhibit high levels of perfusion and vessel permeability pre-treatment, evidenced by elevated empirical DCE-MRI parameters, a significantly lower disease free survival and overall survival can be expected.

  3. Empirical Philosophy of Science

    DEFF Research Database (Denmark)

    Mansnerus, Erika; Wagenknecht, Susann

    2015-01-01

    Empirical insights are proven fruitful for the advancement of Philosophy of Science, but the integration of philosophical concepts and empirical data poses considerable methodological challenges. Debates in Integrated History and Philosophy of Science suggest that the advancement of philosophical knowledge takes place through the integration of empirical or historical research into philosophical studies, as Chang, Nersessian, Thagard and Schickore argue in their work. Building upon their contributions we will develop a blueprint for an Empirical Philosophy of Science that draws upon qualitative methods from the social sciences in order to advance our philosophical understanding of science in practice. We will regard the relationship between philosophical conceptualization and empirical data as an iterative dialogue between theory and data, which is guided by a particular ‘feeling with...

  4. An Empirical Mass Function Distribution

    Science.gov (United States)

    Murray, S. G.; Robotham, A. S. G.; Power, C.

    2018-03-01

    The halo mass function, encoding the comoving number density of dark matter halos of a given mass, plays a key role in understanding the formation and evolution of galaxies. As such, it is a key goal of current and future deep optical surveys to constrain the mass function down to mass scales that typically host L* galaxies. Motivated by the proven accuracy of Press–Schechter-type mass functions, we introduce a related but purely empirical form consistent with standard formulae to better than 4% in the medium-mass regime, 10^10–10^13 h^-1 M_⊙. In particular, our form consists of four parameters, each of which has a simple interpretation, and can be directly related to parameters of the galaxy distribution, such as L*. Using this form within a hierarchical Bayesian likelihood model, we show how individual mass-measurement errors can be successfully included in a typical analysis, while accounting for Eddington bias. We apply our form to a question of survey design in the context of a semi-realistic data model, illustrating how it can be used to obtain optimal balance between survey depth and angular coverage for constraints on mass function parameters. Open-source Python and R codes to apply our new form are provided at http://mrpy.readthedocs.org and https://cran.r-project.org/web/packages/tggd/index.html respectively.
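Press–Schechter-type mass functions generally take the shape of a power law with an exponential cutoff. The sketch below shows that generic shape only; the parameter values are illustrative assumptions, not the fitted values of this paper (see the linked mrpy package for the actual form and fits):

```python
import numpy as np

def mrp_like(m, m_s=10**12.5, alpha=-1.9, beta=0.8, A=1.0):
    """Press-Schechter-like mass function shape:
    dn/dm = A * (m/m_s)^alpha * exp(-(m/m_s)^beta).
    Four parameters: normalisation A, turnover scale m_s,
    low-mass slope alpha, cutoff sharpness beta (all illustrative)."""
    x = np.asarray(m, float) / m_s
    return A * x**alpha * np.exp(-(x**beta))
```

The number density falls as a power law below the turnover scale and is exponentially suppressed above it, which is the qualitative behaviour any fit of this family reproduces.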

  5. What 'empirical turn in bioethics'?

    Science.gov (United States)

    Hurst, Samia

    2010-10-01

    Uncertainty as to how we should articulate empirical data and normative reasoning seems to underlie most difficulties regarding the 'empirical turn' in bioethics. This article examines three different ways in which we could understand 'empirical turn'. Using real facts in normative reasoning is trivial and would not represent a 'turn'. Becoming an empirical discipline through a shift to the social and neurosciences would be a turn away from normative thinking, which we should not take. Conducting empirical research to inform normative reasoning is the usual meaning given to the term 'empirical turn'. In this sense, however, the turn is incomplete. Bioethics has imported methodological tools from empirical disciplines, but too often it has not imported the standards to which researchers in these disciplines are held. Integrating empirical and normative approaches also represents true added difficulties. Addressing these issues from the standpoint of debates on the fact-value distinction can cloud very real methodological concerns by displacing the debate to a level of abstraction where they need not be apparent. Ideally, empirical research in bioethics should meet standards for empirical and normative validity similar to those used in the source disciplines for these methods, and articulate these aspects clearly and appropriately. More modestly, criteria to ensure that none of these standards are completely left aside would improve the quality of empirical bioethics research and partly clear the air of critiques addressing its theoretical justification, when its rigour in the particularly difficult context of interdisciplinarity is what should be at stake.

  6. Optimal design criteria - prediction vs. parameter estimation

    Science.gov (United States)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that the G-optimal design cannot really be found in practice with currently available computer equipment. We cannot always avoid this problem by using space-filling designs because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds with designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
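The D-criterion itself is cheap to evaluate: it is the (log-)determinant of the information matrix X'X. A toy one-dimensional sketch, unrelated to the MUMM data, shows why spread-out designs win for trend estimation:

```python
import numpy as np

def d_criterion(X):
    """D-optimality criterion: log det of the information matrix X'X
    (larger is better for trend-parameter estimation)."""
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else float("-inf")

def design_matrix(x):
    """Design matrix for a quadratic trend in one dimension: [1, x, x^2]."""
    x = np.asarray(x, float)
    return np.column_stack([np.ones_like(x), x, x**2])

# A spread-out design carries more information about the trend than a
# clumped one, so its D-criterion is larger.
spread = d_criterion(design_matrix([-1.0, -0.5, 0.0, 0.5, 1.0]))
clumped = d_criterion(design_matrix([-0.1, -0.05, 0.0, 0.05, 0.1]))
```

Maximizing this determinant over candidate designs is the D-optimal search described above; the expensive part in the kriging setting is the G-criterion, not this one.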

  7. Blast wave parameters at diminished ambient pressure

    Science.gov (United States)

    Silnikov, M. V.; Chernyshov, M. V.; Mikhaylin, A. I.

    2015-04-01

    The relation between blast wave parameters resulting from a condensed high explosive (HE) charge detonation and the surrounding gas (air) pressure has been studied. Blast wave pressure and impulse differences at the compression and rarefaction phases, which traditionally determine the damaging explosive effect, have been analyzed. The effect of initial pressure on the post-explosion quasi-static component of the blast load has been investigated. The analysis is based on empirical relations between blast parameters and non-dimensional similarity criteria. The results can be directly applied to flying vehicle (aircraft or spacecraft) blast safety analysis.
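The standard non-dimensional similarity criteria for blast parameters are the Hopkinson-Cranz and Sachs scalings; Sachs scaling is the one that brings ambient pressure into the length scale, which is the effect relevant at diminished ambient pressure. A minimal sketch (the numerical inputs are hypothetical):

```python
def hopkinson_scaled_distance(R_m, W_kg):
    """Hopkinson-Cranz scaled distance Z = R / W^(1/3), in m/kg^(1/3).
    Charges of different mass at the same Z produce similar overpressures
    at a fixed ambient pressure."""
    return R_m / W_kg ** (1.0 / 3.0)

def sachs_scaled_distance(R_m, E_J, p0_Pa):
    """Sachs scaled distance R * (p0 / E)^(1/3): ambient pressure p0 enters
    the length scale, so blast parameters measured at different altitudes
    collapse onto a single curve."""
    return R_m * (p0_Pa / E_J) ** (1.0 / 3.0)
```

At reduced ambient pressure (e.g. at altitude) the Sachs length scale grows, so the same physical standoff corresponds to a smaller scaled distance, changing the predicted overpressure and impulse.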

  8. An empirical algorithm to estimate spectral average cosine of underwater light field from remote sensing data in coastal oceanic waters.

    Digital Repository Service at National Institute of Oceanography (India)

    Talaulika, M.; Suresh, T.; Desa, E.S.; Inamdar, A.

    parameters from the coastal waters off Goa, India, and eastern Arabian Sea and the optical parameters derived using the radiative transfer code using these measured data. The algorithm was compared with two earlier reported empirical algorithms of Haltrin...

  9. An Empirical Method for Particle Damping Design

    Directory of Open Access Journals (Sweden)

    Zhi Wei Xu

    2004-01-01

    Full Text Available Particle damping is an effective vibration suppression method. The purpose of this paper is to develop an empirical method for particle damping design based on extensive experiments on three structural objects – steel beam, bond arm and bond head stand. The relationships among several key parameters of structure/particles are obtained. Then the procedures with the use of particle damping are proposed to provide guidelines for practical applications. It is believed that the results presented in this paper would be helpful to effectively implement the particle damping for various structural systems for the purpose of vibration suppression.

  10. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Background: A common challenge in systems biology is to infer mechanistic descriptions of biological processes given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers to implementing an empirical Bayesian approach. The objective of this study was to apply an adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results: As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion: In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
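The Gelman-Rubin potential scale reduction factor used as the convergence criterion compares within-chain and between-chain variance across parallel MCMC chains. A generic numpy implementation (not the authors' code):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for an (m, n) array of
    m parallel chains of length n for a single scalar parameter.
    Values near 1 indicate the chains have mixed; values well above 1
    indicate non-convergence."""
    chains = np.asarray(chains, float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    B = n * chain_means.var(ddof=1)            # between-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled posterior variance estimate
    return np.sqrt(var_hat / W)
```

In practice the statistic is computed per parameter (or, as in this study, per model prediction) and sampling continues until all values fall below a threshold such as 1.1.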

  11. Empirical Equation Based Chirality (n, m) Assignment of Semiconducting Single Wall Carbon Nanotubes from Resonant Raman Scattering Data

    Directory of Open Access Journals (Sweden)

    Md Shamsul Arefin

    2012-12-01

    This work presents a technique for the chirality (n, m) assignment of semiconducting single wall carbon nanotubes by solving a set of empirical equations of the tight binding model parameters. The empirical equations of the nearest neighbor hopping parameters, relating the term (2n − m) with the first and second optical transition energies of the semiconducting single wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower and higher diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike the existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plot.

  12. Empirical Equation Based Chirality (n, m) Assignment of Semiconducting Single Wall Carbon Nanotubes from Resonant Raman Scattering Data

    Science.gov (United States)

    Arefin, Md Shamsul

    2012-01-01

    This work presents a technique for the chirality (n, m) assignment of semiconducting single wall carbon nanotubes by solving a set of empirical equations of the tight binding model parameters. The empirical equations of the nearest neighbor hopping parameters, relating the term (2n − m) with the first and second optical transition energies of the semiconducting single wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower and higher diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike the existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plot. PMID:28348319
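The first step of such an assignment, mapping a measured radial breathing mode (RBM) frequency to candidate (n, m) pairs through the tube diameter, can be sketched as follows. The relation ω_RBM ≈ A/d + B is standard; the constants used here (A = 223.5 cm⁻¹·nm, B = 12.5 cm⁻¹) are commonly quoted literature values, not necessarily those fitted in this work, and the optical-transition-energy equations that resolve remaining ambiguities are omitted:

```python
import math

A_RBM, B_RBM = 223.5, 12.5   # cm^-1*nm and cm^-1; assumed literature values

def diameter_nm(n, m):
    """Nanotube diameter d = a*sqrt(n^2 + n*m + m^2)/pi,
    with graphene lattice constant a = 0.246 nm."""
    return 0.246 * math.sqrt(n * n + n * m + m * m) / math.pi

def candidate_chiralities(omega_rbm, tol=3.0, nmax=20):
    """Semiconducting (n, m) pairs whose predicted RBM frequency
    lies within tol cm^-1 of the measured value."""
    out = []
    for n in range(1, nmax + 1):
        for m in range(0, n + 1):
            if (n - m) % 3 == 0:      # (n - m) divisible by 3 -> metallic, skip
                continue
            w = A_RBM / diameter_nm(n, m) + B_RBM
            if abs(w - omega_rbm) <= tol:
                out.append((n, m))
    return out
```

A measured RBM frequency typically matches several (n, m) candidates; the paper's empirical transition-energy equations then select among them.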

  13. An update on the "empirical turn" in bioethics: analysis of empirical research in nine bioethics journals.

    Science.gov (United States)

    Wangmo, Tenzin; Hauri, Sirin; Gennet, Eloise; Anane-Sarpong, Evelyn; Provoost, Veerle; Elger, Bernice S

    2018-02-07

    A review of literature published a decade ago noted a significant increase in empirical papers across nine bioethics journals. This study provides an update on the presence of empirical papers in the same nine journals. It first evaluates whether the empirical trend is continuing as noted in the previous study, and second, how it is changing, that is, what are the characteristics of the empirical works published in these nine bioethics journals. A review of the same nine journals (Bioethics; Journal of Medical Ethics; Journal of Clinical Ethics; Nursing Ethics; Cambridge Quarterly of Healthcare Ethics; Hastings Center Report; Theoretical Medicine and Bioethics; Christian Bioethics; and Kennedy Institute of Ethics Journal) was conducted for a 12-year period from 2004 to 2015. The data obtained were analysed descriptively and using a non-parametric Chi-square test. Of the total number of original papers (N = 5567) published in the nine bioethics journals, 18.1% (n = 1007) collected and analysed empirical data. Journal of Medical Ethics and Nursing Ethics led the empirical publications, accounting for 89.4% of all empirical papers. The former published significantly more quantitative papers than qualitative, whereas the latter published more qualitative papers. Our analysis reveals no significant difference (χ2 = 2.857; p = 0.091) between the proportion of empirical papers published in 2004-2009 and 2010-2015. However, the increasing empirical trend has continued in these journals with the proportion of empirical papers increasing from 14.9% in 2004 to 17.8% in 2015. This study presents the current state of affairs regarding empirical research published in nine bioethics journals. In the quarter century of data that is available about the nine bioethics journals studied in two reviews, the proportion of empirical publications continues to increase, signifying a trend towards empirical research in bioethics. The growing volume is mainly attributable to two

  14. Hardrock Elastic Physical Properties: Birch's Seismic Parameter Revisited

    Science.gov (United States)

    Wu, M.; Milkereit, B.

    2014-12-01

    Identifying rock composition and properties is imperative in a variety of fields including geotechnical engineering, mining, and petroleum exploration, in order to accurately make any petrophysical calculations. Density is, in particular, an important parameter that allows us to differentiate between lithologies and estimate or calculate other petrophysical properties. It is well established that compressional and shear wave velocities of common crystalline rocks increase with increasing densities (i.e. the Birch and Nafe-Drake relationships). Conventional empirical relations do not take into account S-wave velocity. Physical properties of Fe-oxides and massive sulfides, however, differ significantly from the empirical velocity-density relationships. Currently, acquiring in-situ density data is challenging and problematic, and therefore, developing an approximation for density based on seismic wave velocity and elastic moduli would be beneficial. With the goal of finding other possible or better relationships between density and the elastic moduli, a database of density, P-wave velocity, S-wave velocity, bulk modulus, shear modulus, Young's modulus, and Poisson's ratio was compiled based on a multitude of lab samples. The database is comprised of isotropic, non-porous metamorphic rock. Multi-parameter cross plots of the various elastic parameters have been analyzed in order to find a suitable parameter combination that reduces high density outliers. As expected, the P-wave velocity to S-wave velocity ratios show no correlation with density. However, Birch's seismic parameter, along with the bulk modulus, shows promise in providing a link between observed compressional and shear wave velocities and rock densities, including massive sulfides and Fe-oxides.
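Birch's seismic parameter referenced in the title is Φ = Vp² − (4/3)Vs² = K/ρ, which is what links the observed velocities and the bulk modulus to density. A minimal sketch of that identity (the numerical values below are hypothetical, not from the compiled database):

```python
def seismic_parameter(vp, vs):
    """Birch's seismic parameter Phi = Vp^2 - (4/3)*Vs^2, equal to K/rho
    for an isotropic, non-porous medium (velocities in m/s -> Phi in m^2/s^2)."""
    return vp ** 2 - (4.0 / 3.0) * vs ** 2

def density_from_bulk_modulus(K, vp, vs):
    """Invert Phi = K/rho for density: rho = K / Phi
    (K in Pa, velocities in m/s -> rho in kg/m^3)."""
    return K / seismic_parameter(vp, vs)
```

Because Φ uses both Vp and Vs, it can separate lithologies (such as Fe-oxides and massive sulfides) that fall off the single-velocity Birch and Nafe-Drake trends.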

  15. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...

  16. A Non-standard Empirical Likelihood for Time Series

    DEFF Research Database (Denmark)

    Nordman, Daniel J.; Bunzel, Helle; Lahiri, Soumendra N.

    Standard blockwise empirical likelihood (BEL) for stationary, weakly dependent time series requires specifying a fixed block length as a tuning parameter for setting confidence regions. This aspect can be difficult and impacts coverage accuracy. As an alternative, this paper proposes a new version of BEL based on a simple, though non-standard, data-blocking rule which uses a data block of every possible length. Consequently, the method involves no block selection and is also anticipated to exhibit better coverage performance. Its non-standard blocking scheme, however, induces non-standard asymptotics and requires a significantly different development compared to standard BEL. We establish the large-sample distribution of log-ratio statistics from the new BEL method for calibrating confidence regions for mean or smooth function parameters of time series. This limit law is not the usual chi…

  17. HEDL empirical correlation of fuel pin top failure thresholds, status 1976

    International Nuclear Information System (INIS)

    Baars, R.E.

    1976-01-01

    The Damage Parameter (DP) empirical correlation of fuel pin cladding failure thresholds for TOP events has been revised and recorrelated to the results of twelve TREAT tests. The revised correlation, called the Failure Potential (FP) correlation, predicts failure times for the tests in the data base with an average error of 35 ms for $3/s tests and of 150 ms for 50 cents/s tests

  18. Empirical Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes the empirical specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. Two approaches are involved in this procedure: the comparative approach and the empirical approach. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases.

  19. Estimation of Aboveground Biomass in Alpine Forests: A Semi-Empirical Approach Considering Canopy Transparency Derived from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Martin Rutzinger

    2010-12-01

    In this study, a semi-empirical model that was originally developed for stem volume estimation is used for aboveground biomass (AGB) estimation of a spruce-dominated alpine forest. The reference AGB of the available sample plots is calculated from forest inventory data by means of biomass expansion factors. Furthermore, the semi-empirical model is extended by three different canopy transparency parameters derived from airborne LiDAR data. These parameters have not been considered for stem volume estimation until now and are introduced in order to investigate the behavior of the model concerning AGB estimation. The developed additional input parameters are based on the assumption that the transparency of vegetation can be measured by determining the penetration of the laser beams through the canopy. These parameters are calculated for every single point within the 3D point cloud in order to consider the varying properties of the vegetation in an appropriate way. Exploratory Data Analysis (EDA) is performed to evaluate the influence of the additional LiDAR-derived canopy transparency parameters for AGB estimation. The study is carried out in a 560 km² alpine area in Austria, where reference forest inventory data and LiDAR data are available. The investigations show that the introduction of the canopy transparency parameters does not change the results significantly according to R² (R² = 0.70 to R² = 0.71) in comparison to the results derived from the semi-empirical model, which was originally developed for stem volume estimation.

  20. Optimal parameters for the FFA-Beddoes dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerck, A; Mert, M [FFA, The Aeronautical Research Institute of Sweden, Bromma (Sweden); Madsen, H A [Risoe National Lab., Roskilde (Denmark)

    1999-03-01

    Unsteady aerodynamic effects, like dynamic stall, must be considered in the calculation of dynamic forces for wind turbines. Models incorporated in aero-elastic programs are of semi-empirical nature. Resulting aerodynamic forces therefore depend on the values used for the semi-empirical parameters. In this paper a study of finding appropriate parameters to use with the Beddoes-Leishman model is discussed. Minimisation of the 'tracking error' between results from 2D wind tunnel tests and simulation with the model is used to find optimum values for the parameters. The resulting optimum parameters show a large variation from case to case. Using these different sets of optimum parameters in the calculation of blade vibrations gives rise to quite different predictions of aerodynamic damping, which is discussed. (au)

  1. Spillover effects in epidemiology: parameters, study designs and methodological considerations

    Science.gov (United States)

    Benjamin-Chung, Jade; Arnold, Benjamin F; Berger, David; Luby, Stephen P; Miguel, Edward; Colford Jr, John M; Hubbard, Alan E

    2018-01-01

    Abstract Many public health interventions provide benefits that extend beyond their direct recipients and impact people in close physical or social proximity who did not directly receive the intervention themselves. A classic example of this phenomenon is the herd protection provided by many vaccines. If these ‘spillover effects’ (i.e. ‘herd effects’) are present in the same direction as the effects on the intended recipients, studies that only estimate direct effects on recipients will likely underestimate the full public health benefits of the intervention. Causal inference assumptions for spillover parameters have been articulated in the vaccine literature, but many studies measuring spillovers of other types of public health interventions have not drawn upon that literature. In conjunction with a systematic review we conducted of spillovers of public health interventions delivered in low- and middle-income countries, we classified the most widely used spillover parameters reported in the empirical literature into a standard notation. General classes of spillover parameters include: cluster-level spillovers; spillovers conditional on treatment or outcome density, distance or the number of treated social network links; and vaccine efficacy parameters related to spillovers. We draw on high quality empirical examples to illustrate each of these parameters. We describe study designs to estimate spillovers and assumptions required to make causal inferences about spillovers. We aim to advance and encourage methods for spillover estimation and reporting by standardizing spillover parameter nomenclature and articulating the causal inference assumptions required to estimate spillovers. PMID:29106568

  2. An Empirical Fitting Method to Type Ia Supernova Light Curves. III. A Three-parameter Relationship: Peak Magnitude, Rise Time, and Photospheric Velocity

    Science.gov (United States)

    Zheng, WeiKang; Kelly, Patrick L.; Filippenko, Alexei V.

    2018-05-01

    We examine the relationship between three parameters of Type Ia supernovae (SNe Ia): peak magnitude, rise time, and photospheric velocity at the time of peak brightness. The peak magnitude is corrected for extinction using an estimate determined from MLCS2k2 fitting. The rise time is measured from the well-observed B-band light curve with the first detection at least 1 mag fainter than the peak magnitude, and the photospheric velocity is measured from the strong absorption feature of Si II λ6355 at the time of peak brightness. We model the relationship among these three parameters using an expanding fireball with two assumptions: (a) the optical emission is approximately that of a blackbody, and (b) the photospheric temperatures of all SNe Ia are the same at the time of peak brightness. We compare the precision of the distance residuals inferred using this physically motivated model against those from the empirical Phillips relation and the MLCS2k2 method for 47 low-redshift SNe Ia (z > 0.005). SNe Ia in our sample with higher velocities are inferred to be intrinsically fainter. Eliminating the high-velocity SNe and applying a more stringent extinction cut to obtain a “low-v golden sample” of 22 SNe, we obtain significantly reduced scatter of 0.108 ± 0.018 mag in the new relation, better than those of the Phillips relation and the MLCS2k2 method. For 250 km s‑1 of residual peculiar motions, we find 68% and 95% upper limits on the intrinsic scatter of 0.07 and 0.10 mag, respectively.
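Under the paper's two assumptions (blackbody emission, common photospheric temperature at peak), the fireball model gives a peak luminosity L = 4π(v·t_rise)²σT⁴, so peak magnitude depends on velocity and rise time through 5·log10(v·t_rise). A sketch of that scaling with purely illustrative input values:

```python
import math

SIGMA_SB = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def peak_luminosity(v_ph, t_rise, T):
    """Blackbody fireball at peak: photosphere radius R = v_ph * t_rise,
    luminosity L = 4*pi*R^2 * sigma * T^4 (SI units)."""
    R = v_ph * t_rise
    return 4.0 * math.pi * R * R * SIGMA_SB * T ** 4

def peak_abs_magnitude(v_ph, t_rise, T, M_sun=4.74, L_sun=3.828e26):
    """Absolute bolometric magnitude of the fireball at peak."""
    return M_sun - 2.5 * math.log10(peak_luminosity(v_ph, t_rise, T) / L_sun)
```

Because L ∝ (v·t_rise)², doubling the photospheric velocity at fixed rise time and temperature brightens the peak by 5·log10(2) ≈ 1.5 mag, which is the lever arm the three-parameter relation exploits.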

  3. Empirical Support for Perceptual Conceptualism

    Directory of Open Access Journals (Sweden)

    Nicolás Alejandro Serrano

    2018-03-01

    The main objective of this paper is to show that perceptual conceptualism can be understood as an empirically meaningful position and, furthermore, that there is some degree of empirical support for its main theses. In order to do this, I will start by offering an empirical reading of the conceptualist position, and making three predictions from it. Then, I will consider recent experimental results from cognitive sciences that seem to point towards those predictions. I will conclude that, while the evidence offered by those experiments is far from decisive, it is enough not only to show that conceptualism is an empirically meaningful position but also that there is empirical support for it.

  4. Electron momentum distribution and electronic response of ceramic borides

    Energy Technology Data Exchange (ETDEWEB)

    Heda, N.L. [Department of Pure and Applied Physics, University of Kota, Kota 324005 (India); Meena, B.S.; Mund, H.S. [Department of Physics, Mohanlal Sukhadia University, Udaipur 313001 (India); Sahariya, Jagrati [Department of Physics, Manipal University, Jaipur 303007 (India); Kumar, Kishor [Department of Physics, Mohanlal Sukhadia University, Udaipur 313001 (India); Ahuja, B.L., E-mail: blahuja@yahoo.com [Department of Physics, Mohanlal Sukhadia University, Udaipur 313001 (India)

    2017-03-15

    Isotropic Compton profiles of transition metal based ceramics TaB and VB have been measured using {sup 137}Cs (661.65 keV) γ-ray Compton spectrometer. The experimental momentum densities are compared with those deduced using linear combination of atomic orbitals (LCAO) with Hartree-Fock (HF), density functional theory (DFT) with Wu-Cohen generalized gradient approximation (WCGGA) and also the hybridization of HF and DFT (namely B3PW and PBE0) schemes. It is found that LCAO-DFT-WCGGA scheme based profiles give an overall better agreement with the experimental data, for both the borides. In addition, we have computed the Mulliken's population (MP) charge transfer data, energy bands, density of states and Fermi surface topology of both the borides using full potential-linearized augmented plane wave (FP-LAPW) and LCAO methods with DFT-WCGGA scheme. Cross-overs of Fermi level by the energy bands corresponding to B-2p and valence d-states of transition metals lead to metallic character in both the compounds. Equal-valence-electron-density profiles and MP analysis suggest more ionic character of VB than that of TaB.

  5. A semi-empirical analysis of strong-motion peaks in terms of seismic source, propagation path, and local site conditions

    Science.gov (United States)

    Kamiyama, M.; O'Rourke, M. J.; Flores-Berrones, R.

    1992-09-01

    A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on the peak velocity which is a key ground motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions obtained in Japan. In the derivation, statistical considerations are used in the selection of the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables and the dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for the peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.

  6. UAV-based multi-angular measurements for improved crop parameter retrieval

    NARCIS (Netherlands)

    Roosjen, Peter P.J.

    2017-01-01

    Optical remote sensing enables the estimation of crop parameters based on reflected light through empirical-statistical methods or inversion of radiative transfer models. Natural surfaces, however, reflect light anisotropically, which means that the intensity of reflected light depends on the

  7. Ocean Wave Parameters Retrieval from Sentinel-1 SAR Imagery

    Directory of Open Access Journals (Sweden)

    Weizeng Shao

    2016-08-01

    In this paper, a semi-empirical algorithm for significant wave height (Hs) and mean wave period (Tmw) retrieval from C-band VV-polarization Sentinel-1 synthetic aperture radar (SAR) imagery is presented. We develop a semi-empirical function for Hs retrieval, which describes the relation between Hs and cutoff wavelength, radar incidence angle, and wave propagation direction relative to radar look direction. Additionally, Tmw can also be calculated from Hs and the cutoff wavelength by using another empirical function. We collected 106 C-band stripmap mode Sentinel-1 SAR images in VV-polarization and wave measurements from in situ buoys. There are a total of 150 matchup points. We used 93 matchups to tune the coefficients of the semi-empirical algorithm and the remaining 57 matchups for validation. The comparison shows a 0.69 m root mean square error (RMSE) of Hs with an 18.6% scatter index (SI) and a 1.98 s RMSE of Tmw with a 24.8% SI. Results indicate that the algorithm is suitable for wave parameter retrieval from Sentinel-1 SAR data.

  8. Transformation of an empirical distribution to normal distribution by the use of Johnson system of translation and symmetrical quantile method

    OpenAIRE

    Ludvík Friebel; Jana Friebelová

    2006-01-01

    This article deals with the approximation of an empirical distribution to the standard normal distribution using the Johnson transformation. This transformation enables us to approximate a wide spectrum of continuous distributions with a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions are derived for the random variable obtained by the backward transformation of the standard normal ...
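
    As an illustration of the translation idea, the Johnson SU member of the system maps a variate to standard normal through an asinh transform. A minimal sketch; the parameters γ, δ, ξ, λ would in practice be estimated from empirical percentiles as the abstract describes, and the values used below are purely illustrative:

```python
import math

def su_to_normal(x, gamma, delta, xi, lam):
    """Johnson SU forward transform: z = gamma + delta * asinh((x - xi) / lam)."""
    return gamma + delta * math.asinh((x - xi) / lam)

def normal_to_su(z, gamma, delta, xi, lam):
    """Backward transform, recovering the original scale from a standard normal variate."""
    return xi + lam * math.sinh((z - gamma) / delta)
```

    The backward transform is the one used to derive the distribution of the back-transformed variable.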

  9. Optical dielectric function of intrinsic amorphous silicon

    International Nuclear Information System (INIS)

    Ching, W.Y.; Lin, C.C.

    1978-01-01

    The imaginary part of the optical dielectric function ε₂(ω) has been calculated using a continuous-random-tetrahedral network as the structural model for the atomic positions. Here the electronic energies and wave functions are determined by first-principles calculations with the method of linear combinations of atomic orbitals (LCAO), and the momentum matrix elements are evaluated directly from the LCAO wave functions. The calculated dielectric function is in good overall agreement with experiment. At energies within 1 eV above the threshold, the ε₂ curve shows some structures that are due to interband transitions between the localized states near the band gap.

  10. Electronic structure of Ni{sub 2}TiAl: Theoretical aspects and Compton scattering measurement

    Energy Technology Data Exchange (ETDEWEB)

    Sahariya, Jagrati [Department of Physics, University College of Science, M.L. Sukhadia University, Durga Nursery Road, Udaipur 313001, Rajasthan (India); Ahuja, B.L., E-mail: blahuja@yahoo.com [Department of Physics, University College of Science, M.L. Sukhadia University, Durga Nursery Road, Udaipur 313001, Rajasthan (India)

    2012-11-01

    In this paper, we report the electron momentum density of Ni{sub 2}TiAl alloy using an in-house 20 Ci {sup 137}Cs (661.65 keV) Compton spectrometer. The experimental data have been analyzed in terms of energy bands and density of states computed using the linear combination of atomic orbitals (LCAO) method. In the LCAO computations, we have considered the local density approximation, the generalized gradient approximation and the recently developed second-order generalized gradient approximation within the framework of density functional theory. Anisotropies in theoretical Compton profiles along the [1 0 0], [1 1 0] and [1 1 1] directions are also explained in terms of energy bands.

  11. Research on filter’s parameter selection based on PROMETHEE method

    Science.gov (United States)

    Zhu, Hui-min; Wang, Hang-yu; Sun, Shi-yan

    2018-03-01

    The selection of filter parameters in target recognition was studied in this paper. The PROMETHEE method was applied to the optimization problem of Gabor filter parameter selection, and a correspondence model of the relation between the two methods was established. Taking the identification of a military target as an example, the filter parameter decision problem was simulated and calculated with PROMETHEE. The results showed that using the PROMETHEE method for the selection of filter parameters is more scientific: the human bias introduced by expert judgment and purely empirical methods is avoided. The method can provide a reference for parameter configuration decisions for the filter.
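
    The core of PROMETHEE II is a pairwise outranking comparison aggregated into net flows. A minimal sketch using the simple "usual" preference function (preference 1 if an alternative is strictly better on a criterion, else 0); the abstract does not say which preference function the authors used, so this choice is an assumption:

```python
def promethee_net_flows(scores, weights):
    """PROMETHEE II net flows for alternatives scored on several criteria.

    scores: list of per-alternative criterion-score lists (higher is better).
    weights: criterion weights summing to 1.
    """
    n = len(scores)
    flows = []
    for i in range(n):
        phi = 0.0
        for j in range(n):
            if i == j:
                continue
            pairs = list(zip(weights, zip(scores[i], scores[j])))
            pref_ij = sum(w for w, (a, b) in pairs if a > b)  # i preferred to j
            pref_ji = sum(w for w, (a, b) in pairs if b > a)  # j preferred to i
            phi += pref_ij - pref_ji
        flows.append(phi / (n - 1))
    return flows
```

    Ranking the alternatives by descending net flow gives the PROMETHEE II complete preorder.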

  12. Manifestation of interplanetary medium parameters in development of a geomagnetic storm initial phase

    International Nuclear Information System (INIS)

    Chkhetiya, A.M.

    1988-01-01

    The role of solar wind plasma parameters in the formation of the initial phase of a geomagnetic storm is refined. On the basis of statistical analysis, an empirical formula is proposed relating the interplanetary medium parameters (components of the interplanetary magnetic field, proton velocity and concentration) and the Dst index during the initial phase of a geomagnetic storm.

  13. Estimating Finite Rate of Population Increase for Sharks Based on Vital Parameters.

    Directory of Open Access Journals (Sweden)

    Kwang-Ming Liu

    Full Text Available The vital parameter data for 62 stocks, covering 38 species, collected from the literature, including parameters of age, growth, and reproduction, were log-transformed and analyzed using multivariate analyses. Three groups were identified and empirical equations were developed for each to describe the relationships between the predicted finite rates of population increase (λ′) and the vital parameters: maximum age (Tmax), age at maturity (Tm), annual fecundity (f/Rc), size at birth (Lb), size at maturity (Lm), and asymptotic length (L∞). Group (1) included species with slow growth rates (0.034 yr⁻¹ < k < 0.103 yr⁻¹) and extended longevity (26 yr < Tmax < 81 yr), e.g., shortfin mako Isurus oxyrinchus, dusky shark Carcharhinus obscurus, etc.; Group (2) included species with fast growth rates (0.103 yr⁻¹ < k < 0.358 yr⁻¹) and short longevity (9 yr < Tmax < 26 yr), e.g., starspotted smoothhound Mustelus manazo, gray smoothhound M. californicus, etc.; Group (3) included late-maturing species (Lm/L∞ ≥ 0.75) with moderate longevity (Tmax < 29 yr), e.g., pelagic thresher Alopias pelagicus, sevengill shark Notorynchus cepedianus. An empirical equation for all data pooled was also developed. The λ′ values estimated by these empirical equations showed good agreement with those calculated using conventional demographic analysis. The predictability was further validated by an independent data set of three species. The empirical equations developed in this study not only reduce the uncertainties in estimation but also account for the differences in life history among groups. This method therefore provides an efficient and effective approach to the implementation of precautionary shark management measures.

  14. Technical Note: A comparison of model and empirical measures of catchment-scale effective energy and mass transfer

    Directory of Open Access Journals (Sweden)

    C. Rasmussen

    2013-09-01

    Full Text Available Recent work suggests that a coupled effective energy and mass transfer (EEMT) term, which includes the energy associated with effective precipitation and primary production, may serve as a robust prediction parameter of critical zone structure and function. However, the models used to estimate EEMT have been based solely on long-term climatological data, with little validation using direct empirical measures of energy, water, and carbon balances. Here we compare catchment-scale EEMT estimates generated using two distinct approaches: (1) EEMT modeled using the established methodology based on estimates of monthly effective precipitation and net primary production derived from climatological data, and (2) empirical catchment-scale EEMT estimated using data from 86 catchments of the Model Parameter Estimation Experiment (MOPEX) and the MOD17A3 annual net primary production (NPP) product derived from the Moderate Resolution Imaging Spectroradiometer (MODIS). Results indicated positive and significant linear correspondence (R² = 0.75; P −2 yr−1. Modeled EEMT values were consistently greater than empirical measures of EEMT. Empirical catchment estimates of the energy associated with effective precipitation (EPPT) were calculated using a mass-balance approach that accounts for water losses to quick surface runoff not accounted for in the climatologically modeled EPPT. Similarly, local controls on primary production such as solar radiation and nutrient limitation were not explicitly included in the climatologically based estimates of the energy associated with primary production (EBIO), whereas these were captured in the remotely sensed MODIS NPP data. These differences likely explain the greater estimate of modeled EEMT relative to the empirical measures. There was a significant positive correlation between catchment aridity and the fraction of EEMT partitioned into EBIO (FBIO), with an increase in FBIO as a fraction of the total as aridity increases and percentage of

  15. Life Writing After Empire

    DEFF Research Database (Denmark)

    A watershed moment of the twentieth century, the end of empire saw upheavals to global power structures and national identities. However, decolonisation profoundly affected individual subjectivities too. Life Writing After Empire examines how people around the globe have made sense of the post...... in order to understand how individual life writing reflects broader societal changes. From far-flung corners of the former British Empire, people have turned to life writing to manage painful or nostalgic memories, as well as to think about the past and future of the nation anew through the personal...

  16. Empirical models for the estimation of global solar radiation with sunshine hours on horizontal surface in various cities of Pakistan

    International Nuclear Information System (INIS)

    Gadiwala, M.S.; Usman, A.; Akhtar, M.; Jamil, K.

    2013-01-01

    In developing countries like Pakistan, global solar radiation and its components are not available for all locations, so different models that use the climatological parameters of a location must be employed to estimate global solar radiation. Long-period solar radiation data are available for only five locations in Pakistan (Karachi, Quetta, Lahore, Multan and Peshawar), which almost encompass the country's different geographical features. For this reason, in this study the mean monthly global solar radiation has been estimated using the empirical models of Angstrom, FAO, Glover–McCulloch, and Sangeeta & Tiwari for diversity of approach and use of climatic and geographical parameters. Empirical constants for these models have been estimated and the results obtained by these models have been tested statistically. The results show encouraging agreement between estimated and measured values. The outcome of these empirical models will assist researchers working on solar energy estimation at locations having similar conditions.
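
    Angstrom-type models of the kind compared here are linear regressions of the clearness index H/H0 on the sunshine fraction S/S0, i.e. H/H0 = a + b·(S/S0). A minimal ordinary-least-squares sketch for estimating the empirical constants a and b (variable names are illustrative):

```python
def fit_angstrom(clearness, sunshine_fraction):
    """Least-squares fit of H/H0 = a + b * (S/S0); returns (a, b)."""
    n = len(clearness)
    mean_x = sum(sunshine_fraction) / n
    mean_y = sum(clearness) / n
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(sunshine_fraction, clearness))
    sxx = sum((x - mean_x) ** 2 for x in sunshine_fraction)
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b
```

    The fitted constants are site-specific, which is why each location needs its own regression.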

  17. Water coning. An empirical formula for the critical oil-production rate

    Energy Technology Data Exchange (ETDEWEB)

    Schols, R S

    1972-01-01

    The production of oil through a well that partly penetrates an oil layer underlain by water causes the oil/water interface to deform into a bell shape, usually referred to as water coning. To prevent water breakthrough as a result of water coning, a knowledge of critical rates is necessary. Experiments are described in which critical rates were measured as a function of the relevant parameters. The experiments were conducted in Hele-Shaw models suitable for radial flow. From the experimental data, an empirical formula for critical rates was derived in dimensionless form. Approximate theoretical solutions for the critical rate appear in the literature. A comparison of critical rates calculated according to these solutions with those from the empirical formula shows that the literature data give either too high or too low values for the critical rates.

  18. Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Yasue, Yoshihiro; Endo, Tomohiro; Kodama, Yasuhiro; Ohoka, Yasunori; Tatsumi, Masahiro

    2013-01-01

    An uncertainty reduction method for core safety parameters, for which measurement values are not obtained, is proposed. We empirically recognize that there exist some correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the assembly relative power at the corresponding position. Correlations of errors among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of core parameters. The estimated correlations of errors among core safety parameters are verified through the direct Monte Carlo sampling method. Once the correlation of errors among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which a measurement value is not obtained. (author)

  19. Empirical ethics, context-sensitivity, and contextualism.

    Science.gov (United States)

    Musschenga, Albert W

    2005-10-01

    In medical ethics, business ethics, and some branches of political philosophy (multi-culturalism, issues of just allocation, and equitable distribution) the literature increasingly combines insights from ethics and the social sciences. Some authors in medical ethics even speak of a new phase in the history of ethics, hailing "empirical ethics" as a logical next step in the development of practical ethics after the turn to "applied ethics." The name empirical ethics is ill-chosen because of its associations with "descriptive ethics." Unlike descriptive ethics, however, empirical ethics aims to be both descriptive and normative. The first question on which I focus is what kind of empirical research is used by empirical ethics and for which purposes. I argue that the ultimate aim of all empirical ethics is to improve the context-sensitivity of ethics. The second question is whether empirical ethics is essentially connected with specific positions in meta-ethics. I show that in some kinds of meta-ethical theories, which I categorize as broad contextualist theories, there is an intrinsic need for connecting normative ethics with empirical social research. But context-sensitivity is a goal that can be aimed for from any meta-ethical position.

  20. Consistent constitutive modeling of metallic target penetration using empirical, analytical, and numerical penetration models

    Directory of Open Access Journals (Sweden)

    John (Jack) P. Riegel III

    2016-04-01

    Full Text Available Historically, there has been little correlation between the material properties used in (1) empirical formulae, (2) analytical formulations, and (3) numerical models. The various regressions and models may each provide excellent agreement for the depth of penetration into semi-infinite targets. But the input parameters for the empirically based procedures may have little in common with either the analytical model or the numerical model. This paper builds on previous work by Riegel and Anderson (2014) to show how the Effective Flow Stress (EFS) strength model, based on empirical data, can be used as the average flow stress in the analytical Walker–Anderson Penetration model (WAPEN) (Anderson and Walker, 1991) and how the same value may be utilized as an effective von Mises yield strength in numerical hydrocode simulations to predict the depth of penetration for eroding projectiles at impact velocities in the mechanical response regime of the materials. The method has the benefit of allowing the three techniques (empirical, analytical, and numerical) to work in tandem. The empirical method can be used for many shot-line calculations, but more advanced analytical or numerical models can be employed when necessary to address specific geometries such as edge effects or layering that are not treated by the simpler methods. Developing complete constitutive relationships for a material can be costly. If the only concern is depth of penetration, such a level of detail may not be required. The effective flow stress can be determined from a small set of depth-of-penetration experiments in many cases, especially for long penetrators such as the L/D = 10 ones considered here, making it a very practical approach. In the process of performing this effort, the authors considered numerical simulations by other researchers based on the same set of experimental data that the authors used for their empirical and analytical assessment. The goals were to establish a

  1. Empire as a Geopolitical Figure

    DEFF Research Database (Denmark)

    Parker, Noel

    2010-01-01

    This article analyses the ingredients of empire as a pattern of order with geopolitical effects. Noting the imperial form's proclivity for expansion from a critical reading of historical sociology, the article argues that the principal manifestation of earlier geopolitics lay not in the nation...... but in empire. That in turn has been driven by a view of the world as disorderly and open to the ordering will of empires (emanating, at the time of geopolitics' inception, from Europe). One implication is that empires are likely to figure in the geopolitical ordering of the globe at all times, in particular...... after all that has happened in the late twentieth century to undermine nationalism and the national state. Empire is indeed a probable, even for some an attractive form of regime for extending order over the disorder produced by globalisation. Geopolitics articulated in imperial expansion is likely...

  2. Application of Generalized Student’s T-Distribution In Modeling The Distribution of Empirical Return Rates on Selected Stock Exchange Indexes

    Directory of Open Access Journals (Sweden)

    Purczyński Jan

    2014-07-01

    Full Text Available This paper examines the application of the so-called generalized Student's t-distribution in modeling the distribution of empirical return rates on selected Warsaw Stock Exchange indexes. Distribution parameters are estimated by means of the method of logarithmic moments, the maximum likelihood method and the method of moments. The generalized Student's t-distribution ensures a better fit to empirical data than the classical Student's t-distribution.
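
    For the classical Student's t-distribution the method of moments mentioned above reduces to a closed form: with ν > 4 the excess kurtosis is 6/(ν − 4), so ν can be estimated directly from the sample kurtosis. A sketch of this simple special case (the estimating equations for the generalized distribution are more involved and are not reproduced here):

```python
def t_dof_from_excess_kurtosis(kappa):
    """Method-of-moments estimate of the Student's t degrees of freedom.

    For nu > 4 the excess kurtosis is 6 / (nu - 4), hence nu = 4 + 6 / kappa.
    """
    if kappa <= 0:
        raise ValueError("excess kurtosis must be positive for a finite nu > 4")
    return 4.0 + 6.0 / kappa
```

    Heavy-tailed return series (large excess kurtosis) thus map to small ν, which is why t-type distributions fit stock returns better than the normal.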

  3. AI-guided parameter optimization in inverse treatment planning

    International Nuclear Information System (INIS)

    Yan Hui; Yin Fangfang; Guan Huaiqun; Kim, Jae Ho

    2003-01-01

    An artificial intelligence (AI)-guided inverse planning system was developed to optimize the combination of parameters in the objective function for intensity-modulated radiation therapy (IMRT). In this system, the empirical knowledge of inverse planning was formulated with fuzzy if-then rules, which then guide parameter modification based on the on-line calculated dose. Three kinds of parameters (weighting factor, dose specification, and dose prescription) were automatically modified using the fuzzy inference system (FIS). The performance of the AI-guided inverse planning system (AIGIPS) was examined using simulated and clinical examples. Preliminary results indicate that the expected dose distribution was automatically achieved using the AI-guided inverse planning system, with the complicated compromise between different parameters accomplished by the fuzzy inference technique. The AIGIPS provides a highly promising method to replace the current trial-and-error approach.

  4. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, Watts–Strogatz small world model, Albert–Barabási preferential attachment model, Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality. We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
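
    The entropy comparison the abstract proposes can be sketched for the simplest centrality, degree: estimate the degree distribution of a graph and take its Shannon entropy, then compare the entropies of the empirical graph and of model realizations. A minimal sketch (function name and the use of base-2 logarithms are choices made here, not taken from the paper):

```python
import math
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy (bits) of the empirical degree distribution."""
    counts = Counter(degrees)
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

    Two graphs of the same generative type should yield similar entropies even when their individual degree histograms differ realization by realization.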

  5. Surface Passivation in Empirical Tight Binding

    Science.gov (United States)

    He, Yu; Tan, Yaohua; Jiang, Zhengping; Povolotskyi, Michael; Klimeck, Gerhard; Kubis, Tillmann

    2016-03-01

    Empirical Tight Binding (TB) methods are widely used in atomistic device simulations. Existing TB methods to passivate dangling bonds fall into two categories: 1) methods that explicitly include passivation atoms, which are limited to passivation with atoms and small molecules only; and 2) methods that implicitly incorporate passivation, which do not distinguish passivation atom types. This work introduces an implicit passivation method that is applicable to any passivation scenario with appropriate parameters. The method is applied to a Si quantum well and a Si ultra-thin-body transistor oxidized with SiO2 in several oxidation configurations. Comparison with ab initio results and experiments verifies the presented method. Oxidation configurations that severely hamper the transistor performance are identified. It is also shown that the commonly used implicit H-atom passivation overestimates the transistor performance.

  6. Identifying mechanisms that structure ecological communities by snapping model parameters to empirically observed tradeoffs.

    Science.gov (United States)

    Thomas Clark, Adam; Lehman, Clarence; Tilman, David

    2018-04-01

    Theory predicts that interspecific tradeoffs are primary determinants of coexistence and community composition. Using information from empirically observed tradeoffs to augment the parametrisation of mechanism-based models should therefore improve model predictions, provided that tradeoffs and mechanisms are chosen correctly. We developed and tested such a model for 35 grassland plant species using monoculture measurements of three species characteristics related to nitrogen uptake and retention, which previous experiments indicate as important at our site. Matching classical theoretical expectations, these characteristics defined a distinct tradeoff surface, and models parameterised with these characteristics closely matched observations from experimental multi-species mixtures. Importantly, predictions improved significantly when we incorporated information from tradeoffs by 'snapping' characteristics to the nearest location on the tradeoff surface, suggesting that the tradeoffs and mechanisms we identify are important determinants of local community structure. This 'snapping' method could therefore constitute a broadly applicable test for identifying influential tradeoffs and mechanisms. © 2018 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
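
    The 'snapping' step described above can be illustrated as a nearest-neighbour projection of a species' measured characteristics onto a sampled tradeoff surface. Euclidean distance and a discretely sampled surface are assumptions made for this sketch; the paper's surface representation and metric may differ:

```python
def snap_to_surface(point, surface_points):
    """Snap a trait vector to the nearest sampled point on the tradeoff surface.

    point: measured characteristics of one species.
    surface_points: list of trait vectors sampled from the fitted tradeoff surface.
    """
    return min(surface_points,
               key=lambda s: sum((a - b) ** 2 for a, b in zip(point, s)))
```

    Replacing each species' raw monoculture measurements with its snapped counterpart is what injects the tradeoff information into the model parametrisation.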

  7. Theological reflections on empire

    Directory of Open Access Journals (Sweden)

    Allan A. Boesak

    2009-11-01

    Full Text Available Since the meeting of the World Alliance of Reformed Churches in Accra, Ghana (2004), and the adoption of the Accra Declaration, a debate has been raging in the churches about globalisation, socio-economic justice, ecological responsibility, political and cultural domination and globalised war. Central to this debate is the concept of empire and the way the United States is increasingly becoming its embodiment. Is the United States a global empire? This article argues that the United States has indeed become the expression of a modern empire and that this reality has considerable consequences, not just for global economics and politics but for theological reflection as well.

  8. Current State of History of Psychology Teaching and Education in Argentina: An Empirical Bibliometric Investigation

    Science.gov (United States)

    Fierro, Catriel; Ostrovsky, Ana Elisa; Di Doménico, María Cristina

    2018-01-01

    This study is an empirical analysis of the field's current state in Argentinian universities. Bibliometric parameters were used to retrieve the total listed texts (N = 797) of eight undergraduate history courses' syllabi from Argentina's most populated public university psychology programs. Then, professors in charge of the selected courses (N =…

  9. Statistical microeconomics and commodity prices: theory and empirical results.

    Science.gov (United States)

    Baaquie, Belal E

    2016-01-13

    A review is made of the statistical generalization of microeconomics by Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is given by the unequal-time correlation function and is modelled by the Feynman path integral based on an action functional. The correlation functions of the model are defined using the path integral. The existence of the action functional for commodity prices that was postulated in Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)) has been empirically ascertained in Baaquie et al. (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). The model's action functionals for different commodities have been empirically determined and calibrated using the unequal-time correlation functions of the market commodity prices via a perturbation expansion (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). Nine commodities drawn from the energy, metal and grain sectors are empirically studied and their auto-correlation for up to 300 days is described by the model to an accuracy of R² > 0.90 using only six parameters. © 2015 The Author(s).

  10. DIF Testing with an Empirical-Histogram Approximation of the Latent Density for Each Group

    Science.gov (United States)

    Woods, Carol M.

    2011-01-01

    This research introduces, illustrates, and tests a variation of IRT-LR-DIF, called EH-DIF-2, in which the latent density for each group is estimated simultaneously with the item parameters as an empirical histogram (EH). IRT-LR-DIF is used to evaluate the degree to which items have different measurement properties for one group of people versus…

  11. Accounting of inter-electron correlations in the model of mobile electron shells

    International Nuclear Information System (INIS)

    Panov, Yu.D.; Moskvin, A.S.

    2000-01-01

    The basic features of the model of mobile electron shells for a multielectron atom or cluster are studied. A variational technique is offered to take account of electron correlations, in which the coordinates of the centres of the single-particle atomic orbitals serve as variational parameters. This makes it possible to interpret, within a limited initial basis, the dramatic variation of the electron density distribution under an anisotropic external perturbation. Specific correlated states that may make a correlation contribution to the orbital current are studied. The paper presents a generalization of the typical MO-LCAO scheme with a limited set of single-particle functions, enabling additional multipole-multipole interactions in the cluster to be taken into account.

  12. Dithiolato complexes of molybdenum and tungsten

    International Nuclear Information System (INIS)

    Nieuwpoort, A.

    1975-01-01

    The synthesis of eight-coordinated and six-coordinated tungsten and molybdenum complexes with dithioligands is described. Molecular and crystal structures are determined, and bond angles, bond lengths and structural parameters are tabulated. Infrared spectra of dithiocarbamato complexes are discussed more extensively. Redox reactions are studied by voltammetry and electron transfer properties derived. Properties of the d electrons of the metal ion are interpreted in the ligand field model with data from electronic and e.s.r. spectra and magnetic susceptibilities. The results of molecular orbital calculations with the extended Hückel-LCAO method are presented for eight-coordinated d¹ and d² systems, the six-coordinated complexes, and the free ligands.

  13. Organizational Flexibility for Hypercompetitive Markets : Empirical Evidence of the Composition and Context Specificity of Dynamic Capabilities and Organization Design Parameters

    NARCIS (Netherlands)

    N.P. van der Weerdt (Niels)

    2009-01-01

    This research project, which builds on the conceptual work of Henk Volberda on the flexible firm, empirically investigates four aspects of organizational flexibility. Our analysis of data on over 1900 firms and over 3000 respondents shows (1) that several increasing levels of

  14. VLBI-derived troposphere parameters during CONT08

    Science.gov (United States)

    Heinkelmann, R.; Böhm, J.; Bolotin, S.; Engelhardt, G.; Haas, R.; Lanotte, R.; MacMillan, D. S.; Negusini, M.; Skurikhina, E.; Titov, O.; Schuh, H.

    2011-07-01

    Time-series of zenith wet and total troposphere delays as well as north and east gradients are compared, and zenith total delays (ZTD) are combined on the level of parameter estimates. Input data sets are provided by ten Analysis Centers (ACs) of the International VLBI Service for Geodesy and Astrometry (IVS) for the CONT08 campaign (12-26 August 2008). The inconsistent usage of meteorological data and models, such as mapping functions, causes systematics among the ACs, and differing parameterizations and constraints add noise to the troposphere parameter estimates. The empirical standard deviation of ZTD among the ACs with regard to an unweighted mean is 4.6 mm. The ratio of the analysis noise to the observation noise assessed by the operator/software impact (OSI) model is about 2.5. These and other effects have to be accounted for to improve the intra-technique combination of VLBI-derived troposphere parameters. While the largest systematics caused by inconsistent usage of meteorological data can be avoided and the application of different mapping functions can be considered by applying empirical corrections, the noise has to be modeled in the stochastic model of intra-technique combination. The application of different stochastic models shows no significant effects on the combined parameters but results in different mean formal errors: the mean formal errors of the combined ZTD are 2.3 mm (unweighted), 4.4 mm (diagonal), 8.6 mm (variance component (VC) estimation), and 8.6 mm (operator/software impact, OSI). On the one hand, the OSI model, i.e. the inclusion of off-diagonal elements in the cofactor matrix, considers the reapplication of observations, yielding a factor of about two for mean formal errors as compared to the diagonal approach. On the other hand, the combination based on VC estimation shows large differences among the VCs and exhibits a comparable scaling of formal errors. Thus, for the combination of troposphere parameters a combination of the two
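
    A combination on the level of parameter estimates, as performed here for ZTD, can be sketched in its simplest form as an inverse-variance weighted mean; the stochastic models discussed in the record (diagonal, VC estimation, OSI) generalize this by changing the weight matrix. A minimal sketch with illustrative names:

```python
def combine_estimates(values, sigmas):
    """Inverse-variance weighted mean of independent estimates.

    values: per-AC estimates of the same parameter (e.g. ZTD in mm).
    sigmas: their formal errors.
    Returns the combined value and its formal error.
    """
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, (1.0 / wsum) ** 0.5
```

    Note this treats the AC solutions as uncorrelated, which the record shows is not strictly true (the ACs reuse the same observations); that is exactly what the off-diagonal OSI model corrects for.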

  15. Stellar atmospheric parameter estimation using Gaussian process regression

    Science.gov (United States)

    Bu, Yude; Pan, Jingchang

    2015-02-01

    As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters and more efficient and accurate to extract atmospheric parameters.

  16. Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix

    International Nuclear Information System (INIS)

    Yamamoto, A.; Yasue, Y.; Endo, T.; Kodama, Y.; Ohoka, Y.; Tatsumi, M.

    2012-01-01

    An uncertainty estimation method is proposed for core safety parameters for which no measured values are available. Correlations are empirically recognized among the prediction errors of core safety parameters, e.g., between the control rod worth and the relative power of the assembly at the corresponding position. Correlations of uncertainties among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of the core parameters. The estimated correlations among core safety parameters are verified through direct Monte-Carlo sampling. Once the correlation of uncertainties among core safety parameters is known, the uncertainty of a safety parameter for which no measured value is available can be estimated. Furthermore, the correlations can also be used to reduce the uncertainties of core safety parameters. (authors)
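
    The propagation step described here, cross-section covariance pushed through sensitivity coefficients (the "sandwich rule") and the resulting correlation used to bound the uncertainty of an unmeasured parameter, can be sketched as follows. The sensitivity matrix and covariance values are invented for illustration only:

```python
import numpy as np

# Hypothetical sensitivity coefficients of two core parameters (rows) to
# three cross sections (columns); values are illustrative only.
S = np.array([[0.8, 0.3, 0.1],
              [0.6, 0.4, 0.2]])
# Hypothetical (diagonal) relative covariance matrix of the cross sections.
Sigma = np.diag([0.02, 0.01, 0.03])**2

# Sandwich rule: covariance of the core-parameter uncertainties.
V = S @ Sigma @ S.T
sd = np.sqrt(np.diag(V))
corr = V / np.outer(sd, sd)
rho = corr[0, 1]   # correlation between the two core parameters

# If parameter 0 is measured, the conditional uncertainty of the unmeasured
# parameter 1 shrinks by the usual bivariate-normal factor sqrt(1 - rho^2).
sd1_reduced = sd[1] * np.sqrt(1 - rho**2)
```

    The strong correlation arises because both parameters share sensitivity to the same dominant cross section, which is exactly the mechanism the abstract exploits.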

  17. Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs

    Directory of Open Access Journals (Sweden)

    Jieliang Shen

    2018-01-01

    Full Text Available The establishment of the Aircraft Dynamic Model (ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model cannot be readily obtained, especially for small-scaled fixed-wing UAVs. In this paper, a procedure for computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are first calculated through a semi-empirical method based on aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Second, the residuals of each derivative are identified or estimated further via an Extended Kalman Filter (EKF), with observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. For a small-scaled fixed-wing aircraft driven by a propeller, the airborne sensors are chosen and models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the soundness of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters.

  18. Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs.

    Science.gov (United States)

    Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua

    2018-01-13

    The establishment of the Aircraft Dynamic Model (ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model cannot be readily obtained, especially for small-scaled fixed-wing UAVs. In this paper, a procedure for computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are first calculated through a semi-empirical method based on aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Second, the residuals of each derivative are identified or estimated further via an Extended Kalman Filter (EKF), with observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. For a small-scaled fixed-wing aircraft driven by a propeller, the airborne sensors are chosen and models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the soundness of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters.
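
    The residual-identification step can be sketched minimally: an unknown coefficient is appended to the state vector as a random-walk parameter and estimated jointly with the state by an EKF. The scalar model below (a damped velocity channel with a sinusoidal excitation maneuver) is an assumed stand-in, not the paper's full six-degree-of-freedom ADM:

```python
import numpy as np

# Toy longitudinal channel: v' = -c*v + u. The damping coefficient c stands
# in for an aerodynamic-derivative residual; it is appended to the state as
# a random-walk parameter and estimated jointly with v from noisy velocity
# observations.
dt, c_true = 0.05, 0.7
rng = np.random.default_rng(1)

x = np.array([5.0, 0.2])        # state estimate [v, c]; c deliberately wrong
P = np.diag([1.0, 1.0])         # state covariance
Q = np.diag([1e-4, 1e-5])       # process noise (slow random walk on c)
R = np.array([[0.05**2]])       # measurement noise variance
H = np.array([[1.0, 0.0]])      # velocity is observed, c is not

v = 5.0
for k in range(400):
    u = np.sin(0.1 * k)         # excitation maneuver improves observability
    # simulate the true system and a noisy measurement
    v = v + dt * (-c_true * v + u)
    z = v + 0.05 * rng.normal()
    # EKF predict: Jacobian of the dynamics at the estimate, then propagate
    F = np.array([[1.0 - dt * x[1], -dt * x[0]],
                  [0.0, 1.0]])
    x = np.array([x[0] + dt * (-x[1] * x[0] + u), x[1]])
    P = F @ P @ F.T + Q
    # EKF update with the velocity measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
```

    The sinusoidal input plays the role of the "multiple maneuvers" in the abstract: without persistent excitation the parameter component of the state is weakly observable and the covariance on c barely shrinks.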

  19. Evaluation of different signal propagation models for a mixed indoor-outdoor scenario using empirical data

    Directory of Open Access Journals (Sweden)

    Oleksandr Artemenko

    2016-06-01

    Full Text Available In this paper, we choose a suitable indoor-outdoor propagation model from the existing models by considering path loss and distance as parameters. Path loss is calculated empirically by placing emitter nodes inside a building. A receiver placed outdoors is represented by a Quadrocopter (QC) that receives beacon messages from the indoor nodes. Based on our analysis, the International Telecommunication Union (ITU) model, Stanford University Interim (SUI) model, COST-231 Hata model, Green-Obaidat model, Free Space model, Log-Distance Path Loss model and Electronic Communication Committee 33 (ECC-33) model are chosen and evaluated using empirical data collected in a real environment. The aim is to determine whether the analytically chosen models fit our scenario by estimating the minimal standard deviation from the empirical data.
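
    Among the candidate models, the Log-Distance Path Loss model is the simplest to reproduce: PL(d) = PL(d0) + 10·n·log10(d/d0). The sketch below fits the exponent n by linear least squares and scores the fit by the residual standard deviation, the selection criterion described in the abstract; the distance/loss samples are made-up stand-ins for the measured beacon data:

```python
import numpy as np

# Log-distance model: PL(d) = PL(d0) + 10 * n * log10(d / d0).
d0 = 1.0
dist = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])    # metres (invented)
loss = np.array([46.1, 58.0, 66.2, 75.9, 84.8, 94.1])  # dB   (invented)

# Linear least squares for [PL(d0), n].
A = np.column_stack([np.ones_like(dist), 10 * np.log10(dist / d0)])
coef, *_ = np.linalg.lstsq(A, loss, rcond=None)
pl0, n = coef
# Residual standard deviation: the model-selection score.
resid_std = np.std(loss - A @ coef)
```

    In a model comparison, the same residual score would be computed for each candidate (ITU, SUI, COST-231 Hata, ...) against the same measurements, and the model with the minimal deviation retained.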

  20. Birds of the Mongol Empire

    OpenAIRE

    Eugene N. Anderson

    2016-01-01

    The Mongol Empire, the largest contiguous empire the world has ever known, had, among other things, a goodly number of falconers, poultry raisers, birdcatchers, cooks, and other experts on various aspects of birding. We have records of this, largely in the Yinshan Zhengyao, the court nutrition manual of the Mongol empire in China (the Yuan Dynasty). It discusses in some detail 22 bird taxa, from swans to chickens. The Huihui Yaofang, a medical encyclopedia, lists ten taxa used medicinally. Ma...

  1. Wireless and empire geopolitics radio industry and ionosphere in the British Empire 1918-1939

    CERN Document Server

    Anduaga, Aitor

    2009-01-01

    Although the product of consensus politics, the British Empire was based on communications supremacy and the knowledge of the atmosphere. Focusing on science, industry, government, the military, and education, this book studies the relationship between wireless and Empire throughout the interwar period.

  2. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
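
    The general-form Tikhonov problem min_x ||Ax - b||^2 + lambda^2 ||Lx||^2 and the idea of choosing the parameter against training data can be sketched in numpy. The blurring operator, first-derivative regularizer and single training pair below are illustrative assumptions, and the simple grid search stands in for the paper's empirical Bayes risk minimization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-posed toy problem: a Gaussian blurring operator A and a smooth signal.
n = 50
t = np.linspace(0, 1, n)
A = np.exp(-30 * (t[:, None] - t[None, :])**2)
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 0.01 * rng.normal(size=n)

# General-form regularizer: first-difference operator L of shape (n-1, n).
L = np.diff(np.eye(n), axis=0)

def tikhonov(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam**2 * L.T @ L, A.T @ b)

# "Learning" the parameter: choose lam minimizing the error against a
# training pair (x_true, b), in the spirit of empirical Bayes risk
# minimization over training data.
lams = np.logspace(-4, 1, 30)
errs = [np.linalg.norm(tikhonov(A, b, L, lam) - x_true) for lam in lams]
lam_best = lams[int(np.argmin(errs))]
```

    Too little regularization amplifies the noise through the ill-conditioned operator, too much over-smooths the solution; the learned lam sits between the two extremes.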

  3. Empirical Music Aesthetics

    DEFF Research Database (Denmark)

    Grund, Cynthia M.

    The toolbox for empirically exploring the ways that artistic endeavors convey and activate meaning on the part of performers and audiences continues to expand. Current work employing methods at the intersection of performance studies, philosophy, motion capture and neuroscience to better understand...... musical performance and reception is inspired by traditional approaches within aesthetics, but it also challenges some of the presuppositions inherent in them. As an example of such work I present a research project in empirical music aesthetics begun last year and of which I am a team member....

  4. Empirical philosophy of science

    DEFF Research Database (Denmark)

    Wagenknecht, Susann; Nersessian, Nancy J.; Andersen, Hanne

    2015-01-01

    A growing number of philosophers of science make use of qualitative empirical data, a development that may reconfigure the relations between philosophy and sociology of science and that is reminiscent of efforts to integrate history and philosophy of science. Therefore, the first part...... of this introduction to the volume Empirical Philosophy of Science outlines the history of relations between philosophy and sociology of science on the one hand, and philosophy and history of science on the other. The second part of this introduction offers an overview of the papers in the volume, each of which...... is giving its own answer to questions such as: Why does the use of qualitative empirical methods benefit philosophical accounts of science? And how should these methods be used by the philosopher?...

  5. Time-varying volatility in Malaysian stock exchange: An empirical study using multiple-volatility-shift fractionally integrated model

    Science.gov (United States)

    Cheong, Chin Wen

    2008-02-01

    This article investigated the influences of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets which included the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed substantial reduction in fractional differencing parameters after the inclusion of structural change during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden change in volatility performed better in the estimation and specification evaluations.

  6. Dielectric response of molecules in empirical tight-binding theory

    Science.gov (United States)

    Boykin, Timothy B.; Vogl, P.

    2002-01-01

    In this paper we generalize our previous approach to electromagnetic interactions within empirical tight-binding theory to encompass molecular solids and isolated molecules. In order to guarantee physically meaningful results, we rederive the expressions for relevant observables using commutation relations appropriate to the finite tight-binding Hilbert space. In carrying out this generalization, we examine in detail the consequences of various prescriptions for the position and momentum operators in tight binding. We show that attempting to fit parameters of the momentum matrix directly generally results in a momentum operator which is incompatible with the underlying tight-binding model, while adding extra position parameters results in numerous difficulties, including the loss of gauge invariance. We have applied our scheme, which we term the Peierls-coupling tight-binding method, to the optical dielectric function of the molecular solid PPP, showing that this approach successfully predicts its known optical properties even in the limit of isolated molecules.

  7. Empirical complexities in the genetic foundations of lethal mutagenesis.

    Science.gov (United States)

    Bull, James J; Joyce, Paul; Gladstone, Eric; Molineux, Ian J

    2013-10-01

    From population genetics theory, elevating the mutation rate of a large population should progressively reduce average fitness. If the fitness decline is large enough, the population will go extinct in a process known as lethal mutagenesis. Lethal mutagenesis has been endorsed in the virology literature as a promising approach to viral treatment, and several in vitro studies have forced viral extinction with high doses of mutagenic drugs. Yet only one empirical study has tested the genetic models underlying lethal mutagenesis, and the theory failed on even a qualitative level. Here we provide a new level of analysis of lethal mutagenesis by developing and evaluating models specifically tailored to empirical systems that may be used to test the theory. We first quantify a bias in the estimation of a critical parameter and consider whether that bias underlies the previously observed lack of concordance between theory and experiment. We then consider a seemingly ideal protocol that avoids this bias-mutagenesis of virions-but find that it is hampered by other problems. Finally, results that reveal difficulties in the mere interpretation of mutations assayed from double-strand genomes are derived. Our analyses expose unanticipated complexities in testing the theory. Nevertheless, the previous failure of the theory to predict experimental outcomes appears to reside in evolutionary mechanisms neglected by the theory (e.g., beneficial mutations) rather than from a mismatch between the empirical setup and model assumptions. This interpretation raises the specter that naive attempts at lethal mutagenesis may augment adaptation rather than retard it.

  8. Land of Addicts? An Empirical Investigation of Habit-Based Asset Pricing Behavior

    OpenAIRE

    Xiaohong Chen; Sydney C. Ludvigson

    2004-01-01

    This paper studies the ability of a general class of habit-based asset pricing models to match the conditional moment restrictions implied by asset pricing theory. We treat the functional form of the habit as unknown and estimate it along with the rest of the model's finite-dimensional parameters. Using quarterly data on consumption growth, asset returns and instruments, our empirical results indicate that the estimated habit function is nonlinear, the habit formation is better described...

  9. Final Empirical Test Case Specification

    DEFF Research Database (Denmark)

    Kalyanova, Olena; Heiselberg, Per

    This document includes the empirical specification on the IEA task of evaluation building energy simulation computer programs for the Double Skin Facades (DSF) constructions. There are two approaches involved into this procedure, one is the comparative approach and another is the empirical one....

  10. Bayes Empirical Bayes Inference of Amino Acid Sites Under Positive Selection

    DEFF Research Database (Denmark)

    Yang, Ziheng; Wong, Wendy Shuk Wan; Nielsen, Rasmus

    2005-01-01

    , with ω > 1 indicating positive selection. Statistical distributions are used to model the variation in ω among sites, allowing a subset of sites to have ω > 1 while the rest of the sequence may be under purifying selection with ω < 1 ... probabilities that a site comes from the site class with ω > 1. Current implementations, however, use the naive EB (NEB) approach and fail to account for sampling errors in maximum likelihood estimates of model parameters, such as the proportions and ω ratios for the site classes. In small data sets lacking...... information, this approach may lead to unreliable posterior probability calculations. In this paper, we develop a Bayes empirical Bayes (BEB) approach to the problem, which assigns a prior to the model parameters and integrates over their uncertainties. We compare the new and old methods on real and simulated...

  11. Empire vs. Federation

    DEFF Research Database (Denmark)

    Gravier, Magali

    2011-01-01

    The article discusses the concepts of federation and empire in the context of the European Union (EU). Even if these two concepts are not usually contrasted with one another, the article shows that they refer to related types of polities. Furthermore, they can be used in tandem because they shed light...... on different and complementary aspects of the European integration process. The article concludes that the EU is at the crossroads between federation and empire and may remain an ‘imperial federation’ for several decades. This could mean that the EU is on the verge of transforming itself to another type...

  12. An empirical relationship for homogenization in single-phase binary alloy systems

    Science.gov (United States)

    Unnam, J.; Tenney, D. R.; Stein, B. A.

    1979-01-01

    A semiempirical formula is developed for describing the extent of interaction between constituents in single-phase binary alloy systems with planar, cylindrical, or spherical interfaces. The formula contains two parameters that are functions of mean concentration and interface geometry of the couple. The empirical solution is simple, easy to use, and does not involve sequential calculations, thereby allowing quick estimation of the extent of interactions without lengthy calculations. Results obtained with this formula are in good agreement with those from a finite-difference analysis.

  13. Development of empirical relationships for prediction of mechanical and wear properties of AA6082 aluminum matrix composites produced using friction stir processing

    Directory of Open Access Journals (Sweden)

    I. Dinaharan

    2016-09-01

    Full Text Available Friction Stir Processing (FSP) has been established as a potential solid-state production method to prepare aluminum matrix composites (AMCs). In this work, FSP was effectively applied to produce AA6082 AMCs reinforced with various ceramic particles such as SiC, Al2O3, TiC, B4C and WC. Empirical relationships were developed to predict the influence of FSP process parameters on properties such as the area of the stir zone, microhardness and wear rate of the AMCs. FSP experiments were executed using a central composite rotatable design consisting of four factors and five levels. The FSP parameters analyzed were tool rotational speed, traverse speed, groove width and type of ceramic particle. The effect of these parameters on the properties of the AMCs was deduced using the developed empirical relationships. The predicted trends were explained with the aid of the observed macro- and microstructures.

  14. Sensitivity of viscosity Arrhenius parameters to polarity of liquids

    Science.gov (United States)

    Kacem, R. B. H.; Alzamel, N. O.; Ouerfelli, N.

    2017-09-01

    Several empirical and semi-empirical equations have been proposed in the literature to estimate the temperature dependence of liquid viscosity. In this context, this paper aims to study the effect of the polarity of liquids on the modeling of the viscosity-temperature dependence, considering particularly the Arrhenius-type equations. To this end, the solvents are classified into three groups: nonpolar, borderline polar and polar solvents. Based on adequate statistical tests, we found strong evidence that the polarity of solvents significantly affects the distribution of the Arrhenius-type equation parameters and consequently the modeling of the viscosity-temperature dependence. Thus, specific estimated parameter values for each group of liquids are proposed in this paper. In addition, a comparison of the accuracy of approximation with and without classification of liquids, using the Wilcoxon signed-rank test, shows a significant discrepancy for the borderline polar solvents. For these, we suggest new specific coefficient values of the simplified Arrhenius-type equation for better estimation accuracy. This result is important given that the accuracy of the estimated viscosity-temperature dependence may considerably affect the design and optimization of several industrial processes.
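
    Fitting an Arrhenius-type viscosity equation eta(T) = A * exp(Ea / (R * T)) reduces to linear least squares after taking logarithms. The sketch below recovers the pre-exponential factor A and the activation energy Ea from hypothetical viscosity data; the values are invented for illustration and are not taken from the paper:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical viscosity data (mPa s) for one liquid versus temperature (K),
# constructed to follow eta = A * exp(Ea / (R * T)).
T = np.array([283.15, 293.15, 303.15, 313.15, 323.15])
eta = np.array([1.52, 1.18, 0.94, 0.77, 0.64])

# Linearize: ln(eta) = ln(A) + (Ea / R) * (1 / T), then least squares.
X = np.column_stack([np.ones_like(T), 1.0 / T])
coef, *_ = np.linalg.lstsq(X, np.log(eta), rcond=None)
ln_A, Ea_over_R = coef
Ea = Ea_over_R * R   # activation energy, J/mol
A = np.exp(ln_A)     # pre-exponential factor, mPa s
```

    Grouping solvents by polarity, as the paper proposes, would amount to repeating this fit per group and comparing the resulting (A, Ea) distributions.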

  15. The Apocalyptic Empire of America L’Empire apocalyptique américain

    Directory of Open Access Journals (Sweden)

    Akça Ataç

    2009-10-01

    Full Text Available Studies of the American “Empire” generally tend to understand it in concrete terms such as the frontier, military intervention, or international trade. Empires, however, are above all the product of deep, intangible intellectual traditions that encourage and justify the actions undertaken in the course of imperial policies. In the American case, the intellectual foundations of the new imperial ideal are rooted in the apocalyptic vision carried in the baggage of the first Puritan colonists. Without taking this apocalyptic grounding into account, the founding principles of the American “Empire” cannot be grasped in full. Terms that resonate with imperial discourse, such as “mission” and “destiny”, as well as the explicit commitment in presidential rhetoric to “improving” the world at any cost, should be examined in light of this enduring apocalyptic belief. This article attempts to elucidate the origin and essence of the American apocalyptic vision, paying particular attention to its influence on the genesis of the concept of an American Empire.

  16. Estimation of the value-at-risk parameter: Econometric analysis and the extreme value theory approach

    Directory of Open Access Journals (Sweden)

    Mladenović Zorica

    2006-01-01

    Full Text Available In this paper different aspects of value-at-risk estimation are considered. Daily returns of the CISCO, INTEL and NASDAQ stock indices are analyzed for the period September 1996 - September 2006. Methods that incorporate the time-varying variability and heavy tails of the empirical distributions of returns are implemented. The main finding of the paper is that standard econometric methods underestimate the value-at-risk parameter if the heavy tails of the empirical distribution are not explicitly taken into account.
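
    The underestimation effect can be reproduced with a small simulation: for heavy-tailed returns, a Gaussian quantile understates the empirical 99% value-at-risk. The Student-t sample below is an assumed stand-in for the return series studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Heavy-tailed daily returns (Student-t, 5 degrees of freedom) as a
# stand-in for the empirical return series analyzed in the paper.
returns = 0.01 * rng.standard_t(df=5, size=20_000)

z99 = 2.3263  # 99% standard-normal quantile
# "Standard econometric" VaR: a Gaussian fitted to the sample.
var_normal = z99 * returns.std() - returns.mean()
# Empirical VaR: the 1% quantile of the observed distribution itself.
var_empirical = -np.quantile(returns, 0.01)
```

    Because the t-distribution puts more mass in the tails than a Gaussian with the same variance, var_empirical exceeds var_normal, which is the paper's central point.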

  17. U-tube steam generator empirical model development and validation using neural networks

    International Nuclear Information System (INIS)

    Parlos, A.G.; Chong, K.T.; Atiya, A.

    1992-01-01

    Empirical modeling techniques that use model structures motivated by neural networks research have proven effective in identifying complex process dynamics. A recurrent multilayer perceptron (RMLP) network was developed as a nonlinear state-space model structure, along with a static learning algorithm for estimating the parameters associated with it. The methods developed were demonstrated by identifying two submodels of a U-tube steam generator (UTSG), each valid around an operating power level. A significant drawback of this approach is the long off-line training times required for the development of even a simplified model of a UTSG. Subsequently, a dynamic gradient descent-based learning algorithm was developed as an accelerated alternative for training an RMLP network for use in empirical modeling of power plants. The two main advantages of this learning algorithm are its ability to retain past error gradient information for future use and the two forward passes associated with its implementation. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm were demonstrated via the case study of a simple steam boiler power plant. In this paper, the dynamic gradient descent-based learning algorithm is used for the development and validation of a complete UTSG empirical model.

  18. Rock models at Zielona Gora, Poland applied to the semi-empirical neutron tool calibration

    International Nuclear Information System (INIS)

    Czubek, J.A.; Ossowski, A.; Zorski, T.; Massalski, T.

    1995-01-01

    The semi-empirical calibration method applied to the neutron porosity tool is presented in this paper. It was used with the ODSN-102 tool, 70 mm in diameter and equipped with an Am-Be neutron source, at the calibration facility of Zielona Gora, Poland, inside natural and artificial rocks: four sandstone, four limestone and one dolomite block with borehole diameters of 143 and 216 mm, and three artificial ceramic blocks with borehole diameters of 90 and 180 mm. All blocks were saturated with fresh water, and fresh water was also inside all boreholes. In five blocks, mineralized water (200,000 ppm NaCl) was introduced inside the boreholes. All neutron characteristics of the calibration blocks are given in this paper. The semi-empirical method of calibration correlates the experimentally observed tool readings with the general neutron parameter (GNP). This results in a general calibration curve, where the tool readings (TR) vs GNP lie on a single curve irrespective of their origin, i.e. of the formation lithology, borehole diameter, tool stand-off, brine salinity, etc. The n and m power coefficients are obtained experimentally during the calibration procedure. The apparent neutron parameters are defined as those sensed by a neutron tool situated inside the borehole in real environmental conditions. When they are known, the GNP can be computed analytically over the whole porosity range for any borehole diameter, formation lithology (including variable rock matrix absorption cross-section and density), borehole and formation salinity, tool stand-off and drilling fluid physical parameters. By this approach, all porosity corrections with respect to the standard (e.g. limestone) calibration curve can be generated. (author)

  19. Review essay: empires, ancient and modern.

    Science.gov (United States)

    Hall, John A

    2011-09-01

    This essay draws attention to two books on empires by historians which deserve the attention of sociologists. Bang's model of the workings of the Roman economy powerfully demonstrates the tributary nature of pre-industrial empires. Darwin's analysis concentrates on modern overseas empires, wholly different in character as they involved the transportation of consumption items for the many rather than luxury goods for the few. Darwin is especially good at describing the conditions of existence of late nineteenth-century empires, noting that their demise was caused most of all by the failure of balance-of-power politics in Europe. Concluding thoughts are offered about the USA. © London School of Economics and Political Science 2011.

  20. Plasma parameter estimations for the Large Helical Device based on the gyro-reduced Bohm scaling

    International Nuclear Information System (INIS)

    Okamoto, Masao; Nakajima, Noriyoshi; Sugama, Hideo.

    1991-10-01

    A model of gyro-reduced Bohm scaling law is incorporated into a one-dimensional transport code to predict plasma parameters for the Large Helical Device (LHD). The transport code calculations reproduce well the LHD empirical scaling law and basic parameters and profiles of the LHD plasma are calculated. The amounts of toroidal currents (bootstrap current and beam-driven current) are also estimated. (author)

  1. PROBLEMS WITH WIREDU'S EMPIRICALISM Martin Odei Ajei1 ...

    African Journals Online (AJOL)

    In his “Empiricalism: The Empirical Character of an African Philosophy”, Kwasi Wiredu sets out ... others, that an empirical metaphysical system contains both empirical ..... realms which multiple categories of existents inhabit and conduct their being in .... to a mode of reasoning that conceives categories polarized by formal.

  2. Systematics of spallation yields with a four-parameter formula

    International Nuclear Information System (INIS)

    Foshina, M.; Martins, J.B.; Tavares, O.A.P.; Di Napoli, V.

    1982-01-01

    A semi-empirical four-parameter formula is proposed in order to systematize intermediate- and high-energy proton-induced spallation yields of target nuclei covering the 50-100 mass number interval. The measured yields are reproduced by the formula with a degree of accuracy which is comparable with or better than those obtained in previous proton-spallation systematics. The formula predicts reliable values for the most probable mass number of isotopic distributions. For a number of irradiation conditions which may be encountered in practical and physical applications, estimates of proton spallation yields can be obtained by the proposed four-parameter formula with no need of high-speed machines. (M.A.F.) [pt

  3. Determination of beam characteristic parameters for a linear accelerator

    International Nuclear Information System (INIS)

    Lima, D.A. de.

    1978-01-01

    An apparatus to determine the electron beam characteristic parameters of a linear accelerator was constructed. It consists of an electro-calorimeter and an accurate optical densitometer. The following parameters were measured while operating the 2 MeV linear accelerator of CBPF (Brazilian Center for Physics Research): mean power, mean current, mean energy per particle, pulse width, pulse amplitude dispersion, and pulse frequency. Optical isodensity curves of irradiated glass lamellae were obtained, providing information about focus degradation, the penetration direction in the material and the particle range. The point-to-point dose distribution in the material was obtained from the optical density curves using a semi-empirical approximate model. (M.C.K.) [pt

  4. Activation method for measurement of neutron spectrum parameters

    International Nuclear Information System (INIS)

    Efimov, B.V.; Demidov, A.M.; Ionov, V.S.; Konjaev, S.I.; Marin, S.V.; Bryzgalov, V.I.

    2007-01-01

    Experimental studies of neutron spectrum parameters at nuclear installations of RRC KI are presented. The installations have different core designs, reflectors, parameters and types of fuel elements. Measurements were carried out using the technique developed at RRC KI for the irradiation of UKD resonance detectors. The arrangement of the detectors in the cores made it possible to measure neutron spectra with distinct parameter values. The spectrum parameters are introduced through a parametric representation of the neutron spectrum in the form corresponding to the Westcott formalism. From the experimental data, the absolute values of the density of neutron flux (DNF) in the thermal and epithermal regions of the spectrum (F_t, f_epi), the empirical dependence of the neutron gas temperature (Tn) on the spectral hardness parameter (z), and the DNF in the transition energy region of the spectrum were determined. Dependences of the spectral indexes of the nuclides included in UKD (UDy/UX) on the hardness z and/or the neutron gas temperature Tn are obtained. Tools of mathematical processing were used for the activation data and for the estimation of the spectrum parameters (F_t, f_epi, z, Tn, UDy/UX). The paper presents some results of the studies of neutron spectrum parameters of these nuclear installations (Authors)

  5. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    Science.gov (United States)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  6. Hybrid empirical--theoretical approach to modeling uranium adsorption

    International Nuclear Information System (INIS)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W.

    2004-01-01

    An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated to sediment surface area (r² = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
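
    The Freundlich fitting step described above reduces to a log-linear least-squares problem. The sketch below shows the idea with hypothetical concentrations and constants, not the paper's measured interbed data.

```python
import numpy as np

# Illustrative sketch of fitting a Freundlich isotherm q = Kf * C**n by
# log-linear regression. All values are hypothetical placeholders.
C = np.array([0.1, 0.5, 1.0, 5.0, 10.0])   # aqueous U concentration (mg/L)
Kf_true, n_true = 12.0, 0.8                # assumed "true" Freundlich constants
q = Kf_true * C**n_true                    # sorbed concentration (mg/kg)

# log q = log Kf + n log C  ->  ordinary least squares on the logs
n_fit, logKf_fit = np.polyfit(np.log(C), np.log(q), 1)
Kf_fit = np.exp(logKf_fit)
print(Kf_fit, n_fit)  # recovers the assumed Kf = 12.0 and n = 0.8
```

    With noisy laboratory data the same regression yields the best-fit K_f and n, and K_f can then be regressed against surface area across samples.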

  7. The Role of Empirical Research in Bioethics

    Science.gov (United States)

    Kon, Alexander A.

    2010-01-01

    There has long been tension between bioethicists whose work focuses on classical philosophical inquiry and those who perform empirical studies on bioethical issues. While many have argued that empirical research merely illuminates current practices and cannot inform normative ethics, others assert that research-based work has significant implications for refining our ethical norms. In this essay, I present a novel construct for classifying empirical research in bioethics into four hierarchical categories: Lay of the Land, Ideal Versus Reality, Improving Care, and Changing Ethical Norms. Through explaining these four categories and providing examples of publications in each stratum, I define how empirical research informs normative ethics. I conclude by demonstrating how philosophical inquiry and empirical research can work cooperatively to further normative ethics. PMID:19998120

  8. An empirical model to predict infield thin layer drying rate of cut switchgrass

    International Nuclear Information System (INIS)

    Khanchi, A.; Jones, C.L.; Sharma, B.; Huhnke, R.L.; Weckler, P.; Maness, N.O.

    2013-01-01

    A series of 62 thin-layer drying experiments were conducted to evaluate the effect of solar radiation, vapor pressure deficit and wind speed on the drying rate of switchgrass. An environmental chamber was fabricated that can simulate field drying conditions. An empirical drying model based on the maturity stage of switchgrass was also developed during the study. It was observed that solar radiation was the most significant factor in improving the drying rate of switchgrass at the seed shattering and seed shattered maturity stages. Therefore, drying switchgrass in a wide swath to intercept the maximum amount of radiation at these stages of maturity is recommended. Moreover, it was observed that under low radiation intensity conditions, wind speed helps to improve the drying rate of switchgrass. Field operations such as raking or turning of the windrows are recommended to improve air circulation within a swath on cloudy days. Additionally, it was found that the effect of individual weather parameters on the drying rate of switchgrass was dependent on maturity stage. Vapor pressure deficit was strongly correlated with the drying rate during the seed development stage, whereas vapor pressure deficit was weakly correlated during the seed shattering and seed shattered stages. These findings suggest the importance of using separate drying rate models for each maturity stage of switchgrass. The empirical models developed in this study can predict the drying time of switchgrass based on forecasted weather conditions so that appropriate decisions can be made. -- Highlights: • An environmental chamber was developed in the present study to simulate field drying conditions. • An empirical model was developed that can estimate the drying rate of switchgrass based on forecasted weather conditions. • Separate equations were developed based on the maturity stage of switchgrass. • The designed environmental chamber can be used to evaluate the effect of other parameters that affect the drying of crops

  9. Empirical Percentile Growth Curves with Z-scores Considering Seasonal Compensatory Growths for Japanese Thoroughbred Horses

    Science.gov (United States)

    ONODA, Tomoaki; YAMAMOTO, Ryuta; SAWAMURA, Kyohei; MURASE, Harutaka; NAMBO, Yasuo; INOUE, Yoshinobu; MATSUI, Akira; MIYAKE, Takeshi; HIRAI, Nobuhiro

    2013-01-01

    Percentile growth curves are often used as a clinical indicator to evaluate variations of children’s growth status. In this study, we propose empirical percentile growth curves using Z-scores adapted for Japanese Thoroughbred horses, with considerations of the seasonal compensatory growth that is a typical characteristic of seasonal breeding animals. We previously developed new growth curve equations for Japanese Thoroughbreds adjusting for compensatory growth. Individual horses and residual effects were included as random effects in the growth curve equation model and their variance components were estimated. Based on the Z-scores of the estimated variance components, empirical percentile growth curves were constructed. A total of 5,594 and 5,680 body weight and age measurements of male and female Thoroughbreds, respectively, and 3,770 withers height and age measurements were used in the analyses. The developed empirical percentile growth curves using Z-scores are computationally feasible and useful for monitoring individual growth parameters of body weight and withers height of young Thoroughbred horses, especially during compensatory growth periods. PMID:24834004
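
    The construction of percentile curves from Z-scores can be sketched as follows, assuming a fitted mean growth curve and a total standard deviation built from the estimated variance components; the numbers are illustrative, not the fitted Thoroughbred parameters.

```python
import numpy as np

# Sketch: given the growth model's predicted mean mu(t) and a total standard
# deviation sd(t) (individual + residual variance components), the p-th
# percentile curve is mu(t) + z_p * sd(t). All numbers are illustrative.
ages = np.array([30, 90, 180, 365])          # age in days
mu = np.array([75.0, 140.0, 230.0, 380.0])   # predicted mean body weight (kg)
sd = 0.08 * mu                               # assumed 8% coefficient of variation

z = {"P5": -1.645, "P50": 0.0, "P95": 1.645} # standard normal quantiles
curves = {name: mu + zp * sd for name, zp in z.items()}
print(curves["P5"], curves["P50"], curves["P95"])
```

    Any percentile can be added by substituting the corresponding standard normal quantile, which is what makes the Z-score construction computationally light.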

  10. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    Science.gov (United States)

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency is tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in gas phase and protic solvent. © 2011 American Institute of Physics

  11. Medium-resolution Isaac Newton Telescope library of empirical spectra - II. The stellar atmospheric parameters

    NARCIS (Netherlands)

    Cenarro, A. J.; Peletier, R. F.; Sanchez-Blazquez, P.; Selam, S. O.; Toloba, E.; Cardiel, N.; Falcon-Barroso, J.; Gorgas, J.; Jimenez-Vicente, J.; Vazdekis, A.

    2007-01-01

    We present a homogeneous set of stellar atmospheric parameters (Teff, log g, [Fe/H]) for MILES, a new spectral stellar library covering the range λλ 3525-7500 Å at 2.3 Å (FWHM) spectral resolution. The library consists of 985 stars spanning a large range in atmospheric parameters.

  12. Dynamics of bloggers’ communities: Bipartite networks from empirical data and agent-based modeling

    Science.gov (United States)

    Mitrović, Marija; Tadić, Bosiljka

    2012-11-01

    We present an analysis of the empirical data and the agent-based modeling of the emotional behavior of users on Web portals where user interaction is mediated by posted comments, like Blogs and Diggs. We consider the dataset of discussion-driven popular Diggs, in which all comments are screened by machine-learning emotion detection in the text, to determine the positive and negative valence (attractiveness and aversiveness) of each comment. By mapping the data onto a suitable bipartite network, we perform an analysis of the network topology and the related time-series of the emotional comments. The agent-based model is then introduced to simulate the dynamics and to capture the emergence of the emotional behaviors and communities. The agents are linked to posts on a bipartite network, whose structure evolves through their actions on the posts. The emotional states (arousal and valence) of each agent fluctuate in time, subject to the current contents of the posts to which the agent is exposed. Through an agent’s action on a post, its current emotions are transferred to the post. The model rules and the key parameters are inferred from the considered empirical data to ensure their realistic values and mutual consistency. The model assumes that the emotional arousal over posts drives the agent’s action. The simulations are performed for the case of a constant flux of agents and the results are analyzed in full analogy with the empirical data. The main conclusions are that the emotion-driven dynamics leads to long-range temporal correlations and emergent networks with community structure that are comparable with the ones in the empirical system of popular posts. In view of purely emotion-driven agent actions, such comparisons provide a quantitative measure of the role of emotions in the dynamics on real blogs. Furthermore, the model reveals the underlying mechanisms which relate the post popularity with the emotion dynamics and the prevalence of negative

  13. Birds of the Mongol Empire

    Directory of Open Access Journals (Sweden)

    Eugene N. Anderson

    2016-09-01

    The Mongol Empire, the largest contiguous empire the world has ever known, had, among other things, a goodly number of falconers, poultry raisers, birdcatchers, cooks, and other experts on various aspects of birding. We have records of this, largely in the Yinshan Zhengyao, the court nutrition manual of the Mongol empire in China (the Yuan Dynasty). It discusses in some detail 22 bird taxa, from swans to chickens. The Huihui Yaofang, a medical encyclopedia, lists ten taxa used medicinally. Marco Polo also made notes on Mongol bird use. There are a few other records. This allows us to draw conclusions about Mongol ornithology, which apparently was sophisticated and detailed.

  14. Forecast of the key parameters of the 24-th solar cycle

    International Nuclear Information System (INIS)

    Chumak, Oleg Vasilievich; Matveychuk, Tatiana Viktorovna

    2010-01-01

    To predict the key parameters of the solar cycle, a new method is proposed based on the empirical law describing the correlation between the maximum height of the preceding solar cycle and the entropy of the forthcoming one. The entropy of the forthcoming cycle may be estimated using this empirical law, if the maximum height of the current cycle is known. The cycle entropy is shown to correlate well with the cycle's maximum height and, as a consequence, the height of the forthcoming maximum can be estimated. In turn, the correlation found between the height of the maximum and the duration of the ascending branch (the Waldmeier rule) allows the epoch of the maximum, Tmax, to be estimated, if the date of the minimum is known. Moreover, using the law discovered, one can find out the analogous cycles which are similar to the cycle being forecasted, and hence, obtain the synoptic forecast of all main features of the forthcoming cycle. The estimates have shown the accuracy level of this technique to be 86%. The new regularities discovered are also interesting because they are fundamental in the theory of solar cycles and may provide new empirical data. The main parameters of the future solar cycle 24 are as follows: the height of the maximum is Wmax = 95 ± 20, the duration of the ascending branch is Ta = 4.5 ± 0.5yr, the total cycle duration according to the synoptic forecast is 11.3 yr. (research papers)

  15. Semi-empirical master curve concept describing the rate capability of lithium insertion electrodes

    Science.gov (United States)

    Heubner, C.; Seeba, J.; Liebmann, T.; Nickol, A.; Börner, S.; Fritsch, M.; Nikolowski, K.; Wolter, M.; Schneider, M.; Michaelis, A.

    2018-03-01

    A simple semi-empirical master curve concept, describing the rate capability of porous insertion electrodes for lithium-ion batteries, is proposed. The model is based on the evaluation of the time constants of lithium diffusion in the liquid electrolyte and the solid active material. This theoretical approach is successfully verified by comprehensive experimental investigations of the rate capability of a large number of porous insertion electrodes with various active materials and design parameters. It turns out, that the rate capability of all investigated electrodes follows a simple master curve governed by the time constant of the rate limiting process. We demonstrate that the master curve concept can be used to determine optimum design criteria meeting specific requirements in terms of maximum gravimetric capacity for a desired rate capability. The model further reveals practical limits of the electrode design, attesting the empirically well-known and inevitable tradeoff between energy and power density.
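
    The time-constant comparison at the heart of the master-curve idea can be sketched as below; the diffusivities and dimensions are typical literature orders of magnitude, not the paper's fitted values.

```python
# Sketch: lithium transport in each phase is characterized by tau = L**2 / D,
# and the larger (slower) time constant governs the rate capability.
# All numbers are assumed, typical orders of magnitude only.
r_particle = 5e-6       # active-material particle radius (m)
D_solid = 1e-14         # solid-state Li diffusivity (m^2/s)
L_electrode = 70e-6     # electrode coating thickness (m)
D_liquid = 1e-10        # effective electrolyte diffusivity in the pores (m^2/s)

tau_solid = r_particle**2 / D_solid      # ~2500 s
tau_liquid = L_electrode**2 / D_liquid   # ~49 s
limiting = max(tau_solid, tau_liquid)    # solid-state diffusion limits here
print(tau_solid, tau_liquid, limiting)
```

    Sweeping particle radius and coating thickness over such estimates is one way to explore the energy-power tradeoff the abstract mentions.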

  16. A comparative empirical analysis of statistical models for evaluating highway segment crash frequency

    Directory of Open Access Journals (Sweden)

    Bismark R.D.K. Agbelie

    2016-08-01

    The present study conducted an empirical highway segment crash frequency analysis on the basis of fixed-parameters negative binomial and random-parameters negative binomial models. Using 4 years of data from a total of 158 highway segments, with a total of 11,168 crashes, the results from both models were presented, discussed, and compared. About 58% of the selected variables produced normally distributed parameters across highway segments, while the remaining produced fixed parameters. The presence of a noise barrier along a highway segment would increase mean annual crash frequency by 0.492 for 88.21% of the highway segments, and would decrease crash frequency for the remaining 11.79% of the highway segments. Besides, the number of vertical curves per mile along a segment would increase mean annual crash frequency by 0.006 for 84.13% of the highway segments, and would decrease crash frequency for the remaining 15.87% of the highway segments. Thus, constraining the parameters to be fixed across all highway segments would lead to an inaccurate conclusion. Although the estimated parameters from both models showed consistency in direction, the magnitudes were significantly different. Of the two models, the random-parameters negative binomial model was found to be statistically superior in evaluating highway segment crashes compared with the fixed-parameters negative binomial model. On average, the marginal effects from the fixed-parameters negative binomial model were observed to be significantly overestimated compared with those from the random-parameters negative binomial model.

  17. Evaluation of selected parameters on exposure rates in Westinghouse designed nuclear power plants

    International Nuclear Information System (INIS)

    Bergmann, C.A.

    1989-01-01

    During the past ten years, Westinghouse, under EPRI contract and independently, has performed research and evaluation of plant data to define the trends of ex-core component exposure rates and the effects of various parameters on those exposure rates. The effects of the parameters were evaluated using comparative analyses or empirical techniques. This paper updates the information presented at the Fourth Bournemouth Conference and the conclusions obtained on the effects of selected parameters, namely coolant chemistry, physical changes, use of enriched boric acid, and cobalt input, on plant exposure rates. The trends of exposure rates and their relationship to doses are also presented. (author)

  18. Prediction of compressibility parameters of the soils using artificial neural network.

    Science.gov (United States)

    Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan

    2016-01-01

    The compression index and the recompression index are important compressibility parameters for settlement calculations in fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer tests on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is suggested for the prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. As a result of the study, the proposed ANN model is successful for the prediction of the compression index; however, the predicted recompression index values are not satisfactory compared with those for the compression index.
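
    The combined network structure described (four index properties in, both compressibility indices out through a shared hidden layer) can be sketched with plain NumPy. The hidden-layer size and the weights below are placeholders, since the trained network is not reproduced in the abstract.

```python
import numpy as np

# Architecture sketch only: random placeholder weights, not the trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # 4 inputs -> 8 hidden units (assumed)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)  # hidden layer -> (Cc, Cr)

def predict(x):
    h = np.tanh(x @ W1 + b1)  # hidden-layer activation
    return h @ W2 + b2        # combined two-output layer

# Illustrative inputs: water content, void ratio, liquid limit, plasticity index
x = np.array([0.35, 0.9, 45.0, 20.0])
Cc, Cr = predict(x)
print(Cc, Cr)
```

    Training such a network on oedometer data amounts to fitting W1, b1, W2, b2 by minimizing the prediction error for both outputs jointly.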

  19. Matrix effect studies with empirical formulations in maize saplings

    International Nuclear Information System (INIS)

    Bansal, Meenakshi; Deep, Kanan; Mittal, Raj

    2012-01-01

    In X-ray fluorescence, matrix-effect terms derived earlier from fundamental relations between analyte/matrix element intensities and basic atomic and experimental setup parameters, and tested on synthetic samples of known composition, were found to be empirically related to the analyte/matrix elemental amounts. The present study applies these relations to the potassium and calcium macronutrients of maize saplings treated with different fertilizers. The novelty of this work lies in determining an element in the presence of its secondary excitation rather than avoiding the secondary fluorescence. The process is therefore potentially useful for studying absorption in intermediate samples within a large batch of samples containing close-Z interfering constituents (such as Ca and K). Once the absorption and enhancement terms are fitted to the elemental amounts and the fitted coefficients are determined, then, with the absorption terms from the fit and the enhancer element amount known from its selective excitation, the next iterative elemental amount can be evaluated directly from the relations. - Highlights: ► Empirical formulation of matrix corrections in terms of the amounts of the analyte and matrix elements. ► The study is applied to K and Ca nutrients in maize, rice and potato organic materials. ► The formulation provides matrix terms from the amounts of analyte/matrix elements and vice versa.

  20. A LCAO-LDF study of Brønsted acids chemisorption on ZnO(0001)

    Science.gov (United States)

    Casarin, Maurizio; Maccato, Chiara; Tabacchi, Gloria; Vittadini, Andrea

    1996-05-01

    The local density functional theory coupled to the molecular cluster approach has been used to study the chemisorption of Brønsted acids (H₂O, H₂S, HCN, CH₃OH and CH₃SH) on the ZnO(0001) polar surface. Geometrical parameters and vibrational frequencies for selected molecularly and dissociatively chemisorbed species have been computed. The agreement with literature experimental data, when available, has been found to be good. The nature of the interaction between the conjugate base of the examined Brønsted acids and the Lewis acid site available on the surface has been elucidated, confirming its leading role in determining the actual relative acidity scale obtained by titration displacement reactions. The strength of this interaction follows the order OH⁻ ≈ CN⁻ > CH₃O⁻ > SH⁻ > CH₃S⁻.

  1. Stellar Parameters for Trappist-1

    Science.gov (United States)

    Van Grootel, Valérie; Fernandes, Catarina S.; Gillon, Michael; Jehin, Emmanuel; Manfroid, Jean; Scuflaire, Richard; Burgasser, Adam J.; Barkaoui, Khalid; Benkhaldoun, Zouhair; Burdanov, Artem; Delrez, Laetitia; Demory, Brice-Olivier; de Wit, Julien; Queloz, Didier; Triaud, Amaury H. M. J.

    2018-01-01

    TRAPPIST-1 is an ultracool dwarf star transited by seven Earth-sized planets, for which thorough characterization of atmospheric properties, surface conditions encompassing habitability, and internal compositions is possible with current and next-generation telescopes. Accurate modeling of the star is essential to achieve this goal. We aim to obtain updated stellar parameters for TRAPPIST-1 based on new measurements and evolutionary models, compared to those used in the discovery studies. We present a new measurement for the parallax of TRAPPIST-1, 82.4 ± 0.8 mas, based on 188 epochs of observations with the TRAPPIST and Liverpool Telescopes from 2013 to 2016. This revised parallax yields an updated luminosity of L* = (5.22 ± 0.19) × 10⁻⁴ L⊙, which is very close to the previous estimate but almost two times more precise. We next present an updated estimate for the TRAPPIST-1 stellar mass, based on two approaches: mass from stellar evolution modeling, and empirical mass derived from dynamical masses of equivalently classified ultracool dwarfs in astrometric binaries. We combine them using a Monte Carlo approach to derive a semi-empirical estimate for the mass of TRAPPIST-1. We also derive an estimate for the radius by combining this mass with the stellar density inferred from transits, as well as an estimate for the effective temperature from our revised luminosity and radius. Our final results are M* = 0.089 ± 0.006 M⊙, R* = 0.121 ± 0.003 R⊙, and Teff = 2516 ± 41 K. Considering the degree to which the TRAPPIST-1 system will be scrutinized in coming years, these revised and more precise stellar parameters should be considered when assessing the properties of the TRAPPIST-1 planets.
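
    The quoted luminosity, radius, and effective temperature can be cross-checked with the Stefan-Boltzmann law in solar units, Teff = T⊙ (L/L⊙)^(1/4) (R/R⊙)^(-1/2), taking the IAU nominal T⊙ = 5772 K:

```python
import math

# Consistency check of the reported TRAPPIST-1 values via L = 4*pi*R^2*sigma*T^4.
L = 5.22e-4                  # luminosity in L_sun (from the revised parallax)
R = 0.121                    # radius in R_sun
Teff = 5772.0 * L**0.25 / math.sqrt(R)
print(round(Teff))           # falls within the reported 2516 +/- 41 K
```

    The check recovers an effective temperature consistent with the published value, as expected since Teff was derived from the revised luminosity and radius.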

  2. Empirical Ground Motion Characterization of Induced Seismicity in Alberta and Oklahoma

    Science.gov (United States)

    Novakovic, M.; Atkinson, G. M.; Assatourians, K.

    2017-12-01

    We develop empirical ground-motion prediction equations (GMPEs) for ground motions from induced earthquakes in Alberta and Oklahoma following the stochastic-model-based method of Atkinson et al. (2015 BSSA). The Oklahoma ground-motion database is compiled from over 13,000 small to moderate seismic events (M 1 to 5.8) recorded at 1600 seismic stations, at distances from 1 to 750 km. The Alberta database is compiled from over 200 small to moderate seismic events (M 1 to 4.2) recorded at 50 regional stations, at distances from 30 to 500 km. A generalized inversion is used to solve for regional source, attenuation and site parameters. The obtained parameters describe the regional attenuation, stress parameter and site amplification. Resolving these parameters allows for the derivation of regionally calibrated GMPEs that can be used to compare ground-motion observations between wastewater-injection-induced events (Oklahoma) and hydraulic-fracture-induced events (Alberta), and further to compare induced observations with ground motions from natural sources (California, NGA-West2). The derived GMPEs have applications for the evaluation of hazards from induced seismicity and can be used to track amplitudes across the regions in real time, which is useful for ground-motion-based alerting systems and traffic-light protocols.

  3. Atomic mass prediction from the mass formula with empirical shell terms

    International Nuclear Information System (INIS)

    Uno, Masahiro; Yamada, Masami

    1982-08-01

    The mass-excess prediction of about 8000 nuclides was calculated from two types of atomic mass formula with empirical shell terms by Uno and Yamada. The theoretical errors that accompany the calculated mass excesses are also presented. These errors have been obtained by a new statistical method. The mass-excess prediction includes a term for the gross features of the nuclear mass surface, the shell terms and a small correction term for odd-odd nuclei. Two functional forms for the shell terms were used: the first is the constant form, and the second is the linear form. In determining the values of the shell parameters, only the data for even-even and odd-A nuclei were used. A new statistical method was applied, in which the error inherent in the mass formula was taken into account. The obtained shell parameters and the values of the mass excess are shown in tables. (Kato, T.)
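
    As an illustration of the "gross feature" term to which shell corrections are added, the classic liquid-drop (Bethe-Weizsäcker) binding energy can be evaluated; the coefficients below are standard textbook values, not the Uno-Yamada fit.

```python
import math

# Liquid-drop (Bethe-Weizsacker) binding energy in MeV, with textbook
# coefficients: volume, surface, Coulomb, asymmetry, and pairing terms.
def binding_energy(Z, A):
    av, as_, ac, aa, ap = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    B = (av * A
         - as_ * A ** (2 / 3)
         - ac * Z * (Z - 1) / A ** (1 / 3)
         - aa * (A - 2 * Z) ** 2 / A)
    if Z % 2 == 0 and N % 2 == 0:       # even-even: pairing bonus
        B += ap / math.sqrt(A)
    elif Z % 2 == 1 and N % 2 == 1:     # odd-odd: pairing penalty
        B -= ap / math.sqrt(A)
    return B

print(binding_energy(26, 56) / 56)  # ~8.8 MeV/nucleon for 56Fe
```

    Shell terms of the kind described in the abstract are then fitted to the residuals that this smooth surface leaves against measured masses.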

  4. An empirical-statistical model for laser cladding of Ti-6Al-4V powder on Ti-6Al-4V substrate

    Science.gov (United States)

    Nabhani, Mohammad; Razavi, Reza Shoja; Barekat, Masoud

    2018-03-01

    In this article, Ti-6Al-4V powder alloy was directly deposited on a Ti-6Al-4V substrate using the laser cladding process. In this process, key parameters such as laser power (P), laser scanning rate (V) and powder feeding rate (F) play important roles. Using linear regression analysis, this paper develops empirical-statistical relations between these key parameters and the geometrical characteristics of single clad tracks (i.e. clad height, clad width, penetration depth, wetting angle, and dilution) through a combined parameter of the form P^α V^β F^γ. The results indicated that the clad width depended linearly on P V^(-1/3) and that the powder feeding rate had no effect on it. The dilution was controlled by a combined parameter V F^(-1/2), with laser power a dispensable factor. However, laser power was the dominant factor for the clad height, penetration depth, and wetting angle, which were proportional to P V^(-1) F^(1/4), P V F^(-1/8), and P^(3/4) V^(-1) F^(-1/4), respectively. Based on the correlation coefficients (R > 0.9) and an analysis of residuals, it was confirmed that these empirical-statistical relations are in good agreement with the measured values of the single clad tracks. Finally, these relations led to the design of a processing map that can predict the geometrical characteristics of single clad tracks from the key parameters.
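
    Determining the exponents of a combined parameter P^α V^β F^γ reduces to linear regression after taking logarithms: log y = c + α log P + β log V + γ log F. The sketch below recovers assumed exponents from synthetic, noise-free data; it is not the paper's dataset.

```python
import numpy as np

# Synthetic process data over plausible ranges (assumed units, not measured data).
rng = np.random.default_rng(1)
P = rng.uniform(200, 600, 40)     # laser power (W)
V = rng.uniform(2, 10, 40)        # scanning rate (mm/s)
F = rng.uniform(5, 20, 40)        # powder feeding rate (g/min)
y = 0.5 * P * V**-1.0 * F**0.25   # e.g. a response ~ P * V^(-1) * F^(1/4)

# Ordinary least squares on the log-transformed model.
X = np.column_stack([np.ones_like(P), np.log(P), np.log(V), np.log(F)])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
c, alpha, beta, gamma = coef
print(alpha, beta, gamma)  # recovers 1, -1, 0.25
```

    With real measurements the same regression yields the best-fit exponents plus residuals, from which R and the residual diagnostics mentioned in the abstract follow.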

  5. A reactive empirical bond order (REBO) potential for hydrocarbon-oxygen interactions

    International Nuclear Information System (INIS)

    Ni, Boris; Lee, Ki-Ho; Sinnott, Susan B

    2004-01-01

    The expansion of the second-generation reactive empirical bond order (REBO) potential for hydrocarbons, as parametrized by Brenner and co-workers, to include oxygen is presented. This involves the explicit inclusion of C-O, H-O, and O-O interactions to the existing C-C, C-H, and H-H interactions in the REBO potential. The details of the expansion, including all parameters, are given. The new, expanded potential is then applied to the study of the structure and chemical stability of several molecules and polymer chains, and to modelling chemical reactions among a series of molecules, within classical molecular dynamics simulations

  6. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    Directory of Open Access Journals (Sweden)

    Yoon Soo Park

    2016-02-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effect on item parameters and examinee ability.

  7. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    Science.gov (United States)

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD under a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability.
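
    A minimal sketch of the unidimensional 2PL item response function used in such calibrations, and of what item parameter drift does to a prediction, with illustrative parameter values:

```python
import math

# 2PL item response function: P(correct | theta) = 1 / (1 + exp(-a*(theta - b))),
# where a is item discrimination and b is item difficulty.
def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

theta = 0.5                                 # examinee ability (illustrative)
p_first = p_correct(theta, a=1.2, b=0.0)    # original calibration of an anchor item
p_later = p_correct(theta, a=1.2, b=0.4)    # same item after difficulty drift
print(p_first, p_later)                     # drift shifts the predicted probability
```

    Treating the drifted item as invariant (using the original a, b) therefore biases the ability estimates recovered from responses to it.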

  8. Essays on empirical likelihood in economics

    NARCIS (Netherlands)

    Gao, Z.

    2012-01-01

    This thesis intends to exploit the roots of empirical likelihood and its related methods in mathematical programming and computation. The roots will be connected and the connections will induce new solutions for the problems of estimation, computation, and generalization of empirical likelihood.

  9. Pluvials, Droughts, Energetics, and the Mongol Empire

    Science.gov (United States)

    Hessl, A. E.; Pederson, N.; Baatarbileg, N.

    2012-12-01

    The success of the Mongol Empire, the largest contiguous land empire the world has ever known, is a historical enigma. At its peak in the late 13th century, the empire influenced areas from Hungary to southern Asia and Persia. Powered by domesticated herbivores, the Mongol Empire grew at the expense of agriculturalists in Eastern Europe, Persia, and China. What environmental factors contributed to the rise of the Mongols? What factors influenced the disintegration of the empire by 1300 CE? Until now, little high-resolution environmental data have been available to address these questions. We use tree-ring records of past temperature and water to illuminate the role of energy and water in the evolution of the Mongol Empire. The study of energetics has long been applied to biological and ecological systems but has only recently become a theme in understanding modern coupled natural and human systems (CNH). Because water and energy are tightly linked in human and natural systems, studying their synergies and interactions makes it possible to integrate knowledge across disciplines and human history, yielding important lessons for modern societies. We focus on the role of energy and water in the trajectory of an empire, including its rise, development, and demise. Our research is focused on the Orkhon Valley, seat of the Mongol Empire, where recent paleoenvironmental and archeological discoveries allow high-resolution reconstructions of past human and environmental conditions for the first time. Our preliminary records indicate that the period 1210-1230 CE, the height of Chinggis Khan's reign, is one of the longest and most consistent pluvials in our tree-ring reconstruction of interannual drought. Reconstructed temperatures derived from five millennium-long records from subalpine forests in Mongolia document warm temperatures beginning in the early 1200s and ending with a plunge into cold temperatures in 1260. Abrupt cooling in central Mongolia at this time is

  10. Zener Diode Compact Model Parameter Extraction Using Xyce-Dakota Optimization.

    Energy Technology Data Exchange (ETDEWEB)

    Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilcox, Ian Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sandoval, Andrew J [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reza, Shahed [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    This report presents a detailed process for compact model parameter extraction for DC circuit Zener diodes. Following the traditional approach to Zener diode parameter extraction, a circuit model representation is defined and then used to capture the different operational regions of a real diode's electrical behavior. The circuit model contains 9 parameters represented by resistors and characteristic diodes as circuit model elements. The process of initial parameter extraction, the identification of parameter values for the circuit model elements, is presented in a way that isolates the dependencies between certain electrical parameters and highlights both the empirical nature of the extraction and the portions of the real diode's physical behavior which the parameters are intended to represent. Optimization of the parameters, a necessary part of a robust parameter extraction process, is demonstrated using a 'Xyce-Dakota' workflow, discussed in more detail in the report. Among other realizations during this systematic approach to electrical model parameter extraction, non-physical solutions are possible and can be difficult to avoid because of the interdependencies between the different parameters. The process steps described are fairly general and can be leveraged for other types of semiconductor device model extractions. Also included in the report are recommendations for experiment setups for generating an optimum dataset for model extraction and the Parameter Identification and Ranking Table (PIRT) for Zener diodes.
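
    The report's 9-parameter Xyce model is not reproduced in the abstract; as a much-reduced stand-in, the flavor of I-V parameter extraction can be sketched with the ideal diode equation and a coarse grid search. All parameter values and data below are synthetic and hypothetical.

```python
import math

VT = 0.02585  # thermal voltage at ~300 K, in volts

def diode_current(v, i_s, n):
    """Ideal diode equation: i_s = saturation current, n = ideality factor."""
    return i_s * (math.exp(v / (n * VT)) - 1.0)

# Synthetic forward-bias I-V data generated from known "true" parameters.
true_is, true_n = 1e-12, 1.8
volts = [0.40 + 0.02 * k for k in range(11)]
amps = [diode_current(v, true_is, true_n) for v in volts]

# Coarse grid search minimizing squared error on log-current, a common trick
# to keep the many-decade dynamic range of the current from dominating the fit.
best = None
for exp10 in range(-14, -9):
    for n in [1.0 + 0.05 * k for k in range(41)]:  # n in [1.0, 3.0]
        i_s = 10.0 ** exp10
        err = sum((math.log(diode_current(v, i_s, n)) - math.log(i)) ** 2
                  for v, i in zip(volts, amps))
        if best is None or err < best[0]:
            best = (err, i_s, n)

_, fit_is, fit_n = best
```

    A real extraction, as the report notes, must also contend with interdependent parameters and non-physical local optima, which is why a dedicated optimization workflow is used.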

  11. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    Science.gov (United States)

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed, by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus, if computable, scatterplots of the conditionally independent empirical Bayes

  12. Integrated empirical ethics: loss of normativity?

    Science.gov (United States)

    van der Scheer, Lieke; Widdershoven, Guy

    2004-01-01

    An important discussion in contemporary ethics concerns the relevance of empirical research for ethics. Specifically, two crucial questions pertain, respectively, to the possibility of inferring normative statements from descriptive statements, and to the danger of a loss of normativity if normative statements should be based on empirical research. Here we take part in the debate and defend integrated empirical ethical research: research in which normative guidelines are established on the basis of empirical research and in which the guidelines are empirically evaluated by focusing on observable consequences. We argue that in our concrete example normative statements are not derived from descriptive statements, but are developed within a process of reflection and dialogue that goes on within a specific praxis. Moreover, we show that the distinction in experience between the desirable and the undesirable precludes relativism. The normative guidelines so developed are both critical and normative: they help in choosing the right action and in evaluating that action. Finally, following Aristotle, we plead for a return to the view that morality and ethics are inherently related to one another, and for an acknowledgment of the fact that moral judgments have their origin in experience which is always related to historical and cultural circumstances.

  13. Empirical agent-based modelling challenges and solutions

    CERN Document Server

    Barreteau, Olivier

    2014-01-01

    This instructional book showcases techniques to parameterise human agents in empirical agent-based models (ABM). In doing so, it provides a timely overview of key ABM methodologies and the most innovative approaches through a variety of empirical applications.  It features cutting-edge research from leading academics and practitioners, and will provide a guide for characterising and parameterising human agents in empirical ABM.  In order to facilitate learning, this text shares the valuable experiences of other modellers in particular modelling situations. Very little has been published in the area of empirical ABM, and this contributed volume will appeal to graduate-level students and researchers studying simulation modeling in economics, sociology, ecology, and trans-disciplinary studies, such as topics related to sustainability. In a similar vein to the instruction found in a cookbook, this text provides the empirical modeller with a set of 'recipes'  ready to be implemented. Agent-based modeling (AB...

  14. Empirical Legality and Effective Reality

    Directory of Open Access Journals (Sweden)

    Hernán Pringe

    2015-08-01

    The conditions that Kant's doctrine establishes for the predication of the effective reality of certain empirical objects are examined. It is maintained that (a) for such a predication, it is necessary to have not only perception but also a certain homogeneity of sensible data, and (b) the knowledge of the existence of certain empirical objects depends on the application of regulative principles of experience.

  15. Empire-3.2 Malta. Modular System for Nuclear Reaction Calculations and Nuclear Data Evaluation. User's Manual

    International Nuclear Information System (INIS)

    Herman, M.; Capote, R.; Sin, M.

    2013-08-01

    EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. The system can be used for theoretical investigations of nuclear reactions as well as for nuclear data evaluation work. Photons, nucleons, deuterons, tritons, helions (³He), α particles, and light or heavy ions can be selected as projectiles. The energy range starts just above the resonance region in the case of a neutron projectile, and extends up to a few hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction models, such as the optical model, Coupled Channels and DWBA (ECIS06 and OPTMAN), Multi-step Direct (ORION + TRISTAN), NVWY Multi-step Compound, the exciton model (PCROSS), hybrid Monte Carlo simulation (DDHMS), and the full-featured Hauser-Feshbach model including width fluctuations and the optical model for fission. Heavy-ion fusion cross sections can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters based on the RIPL-3 library covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, and γ-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations (BARFIT, MOMFIT). The results can be converted into the ENDF-6 format using the accompanying EMPEND code. Modules of the ENDF Utility Codes and the ENDF Pre-Processing codes are applied for ENDF file verification. The package contains the full EXFOR library of experimental data in computational format C4, which are automatically retrieved during the calculations. 
EMPIRE contains the resonance module that retrieves data from the electronic version of the Atlas of Neutron Resonances by Mughabghab (not provided with the EMPIRE distribution), to produce resonance section and related covariances for the

  16. Derivation of a semi-empirical formula for the quantum efficiency of forward secondary electron emission from γ-irradiated metals. 2

    International Nuclear Information System (INIS)

    Nakamura, Masamoto; Katoh, Yoh

    1994-01-01

    An empirical formula for the quantum efficiency of electrons irradiated with ⁶⁰Co γ-rays was reported in a previous paper, but its physical meaning was not made clear. A simple model was therefore assumed, from which a formula for calculating the efficiency was theoretically derived. Some parameters in the formula were determined so that the calculated results fit the experimental data. The earlier empirical formula was shown to be the same as the formula derived physically here. Results from the semi-empirical formula and experimental data for Al and Pb samples agreed within 5%. (author)

  17. Empirical comparison of theories

    International Nuclear Information System (INIS)

    Opp, K.D.; Wippler, R.

    1990-01-01

    The book represents the first comprehensive attempt to take an empirical approach to the comparative assessment of theories in sociology. The aims, problems, and advantages of the empirical approach are discussed in detail, and the three theories selected for the purpose of this work are explained. Their comparative assessment is performed within the framework of several research projects, which among other subjects also investigate the social aspects of the protest against nuclear power plants. The theories analysed in this context are the theory of mental incongruities and the theory of benefit, whose efficiency in explaining protest behaviour is compared. (orig./HSCH) [de

  18. Teaching Empirical Software Engineering Using Expert Teams

    DEFF Research Database (Denmark)

    Kuhrmann, Marco

    2017-01-01

    Empirical software engineering aims at making software engineering claims measurable, i.e., to analyze and understand phenomena in software engineering and to evaluate software engineering approaches and solutions. Due to the involvement of humans and the multitude of fields for which software...... is crucial, software engineering is considered hard to teach. Yet, empirical software engineering increases this difficulty by adding the scientific method as an extra dimension. In this paper, we present a Master-level course on empirical software engineering in which different empirical instruments...... an extra specific expertise that they offer as service to other teams, thus fostering cross-team collaboration. The paper outlines the general course setup, topics addressed, and it provides initial lessons learned....

  19. Meta-Analysis and Cost Comparison of Empirical versus Pre-Emptive Antifungal Strategies in Hematologic Malignancy Patients with High-Risk Febrile Neutropenia.

    Directory of Open Access Journals (Sweden)

    Monica Fung

    Invasive fungal disease (IFD) causes significant morbidity and mortality in hematologic malignancy patients with high-risk febrile neutropenia (FN). These patients therefore often receive empirical antifungal therapy. Diagnostic test-guided pre-emptive antifungal therapy has been evaluated as an alternative treatment strategy in these patients. We conducted an electronic search for literature comparing empirical versus pre-emptive antifungal strategies in FN among adult hematologic malignancy patients. We systematically reviewed 9 studies, including randomized-controlled trials, cohort studies, and feasibility studies. Random and fixed-effect models were used to generate pooled relative risk estimates of IFD detection, IFD-related mortality, overall mortality, and rates and duration of antifungal therapy. Heterogeneity was measured via Cochran's Q test, I² statistic, and between-study τ². Incorporating these parameters and direct costs of drugs and diagnostic testing, we constructed a comparative costing model for the two strategies. We conducted probabilistic sensitivity analysis on pooled estimates and one-way sensitivity analyses on other key parameters with uncertain estimates. Nine published studies met inclusion criteria. Compared to empirical antifungal therapy, pre-emptive strategies were associated with significantly lower antifungal exposure (RR 0.48, 95% CI 0.27-0.85) and duration, without an increase in IFD-related mortality (RR 0.82, 95% CI 0.36-1.87) or overall mortality (RR 0.95, 95% CI 0.46-1.99). The pre-emptive strategy cost $324 less (95% credible interval -$291.88 to $418.65, pre-emptive compared to empirical) than the empirical approach per FN episode. 
However, the cost difference was influenced by relatively small changes in costs of antifungal therapy and diagnostic testing. Compared to empirical antifungal therapy, pre-emptive antifungal therapy in patients with high-risk FN may decrease antifungal use without increasing mortality

  20. Meta-Analysis and Cost Comparison of Empirical versus Pre-Emptive Antifungal Strategies in Hematologic Malignancy Patients with High-Risk Febrile Neutropenia.

    Science.gov (United States)

    Fung, Monica; Kim, Jane; Marty, Francisco M; Schwarzinger, Michaël; Koo, Sophia

    2015-01-01

    Invasive fungal disease (IFD) causes significant morbidity and mortality in hematologic malignancy patients with high-risk febrile neutropenia (FN). These patients therefore often receive empirical antifungal therapy. Diagnostic test-guided pre-emptive antifungal therapy has been evaluated as an alternative treatment strategy in these patients. We conducted an electronic search for literature comparing empirical versus pre-emptive antifungal strategies in FN among adult hematologic malignancy patients. We systematically reviewed 9 studies, including randomized-controlled trials, cohort studies, and feasibility studies. Random and fixed-effect models were used to generate pooled relative risk estimates of IFD detection, IFD-related mortality, overall mortality, and rates and duration of antifungal therapy. Heterogeneity was measured via Cochran's Q test, I² statistic, and between-study τ². Incorporating these parameters and direct costs of drugs and diagnostic testing, we constructed a comparative costing model for the two strategies. We conducted probabilistic sensitivity analysis on pooled estimates and one-way sensitivity analyses on other key parameters with uncertain estimates. Nine published studies met inclusion criteria. Compared to empirical antifungal therapy, pre-emptive strategies were associated with significantly lower antifungal exposure (RR 0.48, 95% CI 0.27-0.85) and duration, without an increase in IFD-related mortality (RR 0.82, 95% CI 0.36-1.87) or overall mortality (RR 0.95, 95% CI 0.46-1.99). The pre-emptive strategy cost $324 less (95% credible interval -$291.88 to $418.65, pre-emptive compared to empirical) than the empirical approach per FN episode. However, the cost difference was influenced by relatively small changes in costs of antifungal therapy and diagnostic testing. Compared to empirical antifungal therapy, pre-emptive antifungal therapy in patients with high-risk FN may decrease antifungal use without increasing mortality. We

  1. Empirical modeling of drying kinetics and microwave assisted extraction of bioactive compounds from Adathoda vasica

    Directory of Open Access Journals (Sweden)

    Prithvi Simha

    2016-03-01

    To highlight the shortcomings in conventional methods of extraction, this study investigates the efficacy of Microwave Assisted Extraction (MAE) toward bioactive compound recovery from the pharmaceutically significant medicinal plants Adathoda vasica and Cymbopogon citratus. Initially, the microwave (MW) drying behavior of the plant leaves was investigated at different sample loadings, MW power and drying time. Kinetics was analyzed through empirical modeling of drying data against 10 conventional thin-layer drying equations that were further improvised through the incorporation of Arrhenius, exponential and linear-type expressions. 81 semi-empirical Midilli equations were derived and subjected to non-linear regression to arrive at the characteristic drying equations. Bioactive compound recovery from the leaves was examined under various parameters through a comparative approach that studied MAE against Soxhlet extraction. MAE of A. vasica gave similar yields despite a drastic reduction in extraction time (210 s, as against the average time of 10 h in the Soxhlet apparatus). Extract yield for MAE of C. citratus was higher than for the conventional process, with optimal parameters determined to be 20 g sample load, 1:20 sample/solvent ratio, extraction time of 150 s and 300 W output power. Scanning Electron Microscopy and Fourier Transform Infrared Spectroscopy were performed to depict changes in internal leaf morphology.
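
    The Midilli family of thin-layer drying models referred to above has the base form MR(t) = a·exp(-k·tⁿ) + b·t for the moisture ratio. A minimal sketch with illustrative coefficients (not the paper's fitted values, which come from non-linear regression against the MW drying data):

```python
import math

def midilli(t, a, k, n, b):
    """Midilli thin-layer drying model: moisture ratio MR(t) = a*exp(-k*t^n) + b*t."""
    return a * math.exp(-k * t ** n) + b * t

# Hypothetical coefficients chosen only to produce a plausible drying curve.
a, k, n, b = 1.0, 0.05, 1.2, -1e-4

times = list(range(0, 46, 5))             # drying time, e.g. minutes
mr = [midilli(t, a, k, n, b) for t in times]
# mr starts at a (=1.0, fully moist) and decays monotonically toward zero.
```

    Fitting in the paper's spirit would minimize the squared residuals between this curve and measured moisture ratios over (a, k, n, b).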

  2. Umayyad Relations with Byzantium Empire

    Directory of Open Access Journals (Sweden)

    Mansoor Haidari

    2017-06-01

    This research investigates the political and military relations between the Umayyad caliphate and the Byzantine Empire. The aim of this research is to clarify the Umayyad caliphate's relations with the Byzantine Empire, which were mostly a matter of war and conflict. Because there were always intense conflicts between Muslims and the Byzantine Empire, the two powers had to maintain an active, continuous diplomacy to call truces and settle disputes. At the same time, based on the general policy of the Umayyad caliphs, Christians were severely ignored and segregated within Islamic territories, and this segregation of the Christians was highly affected by the political relationship. It is worth mentioning that the Umayyad caliphs brought the governing style of the Sassanid kings and the Roman Caesars into the Islamic caliphate system, but they did not establish civil institutions and administrative organizations.

  3. Empirical training for conditional random fields

    NARCIS (Netherlands)

    Zhu, Zhemin; Hiemstra, Djoerd; Apers, Peter M.G.; Wombacher, Andreas

    2013-01-01

    In this paper (Zhu et al., 2013), we present a practically scalable training method for CRFs called Empirical Training (EP). We show that the standard training with unregularized log likelihood can have many maximum likelihood estimations (MLEs). Empirical training has a unique closed form MLE

  4. Estimating the empirical probability of submarine landslide occurrence

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.; Mosher, David C.; Shipp, Craig; Moscardelli, Lorena; Chaytor, Jason D.; Baxter, Christopher D. P.; Lee, Homa J.; Urgeles, Roger

    2010-01-01

    The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model, specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. We confirm that, given the uncertainties of age dating, landslide complexes can be treated as single events by performing a statistical test on age dates representing the main failure episode of the Holocene Storegga landslide complex.
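
    The Poisson-Gamma conjugate model named above admits a closed-form posterior for the rate λ and a closed-form probability of at least one future event. The sketch below uses purely illustrative prior values and event counts, not the Santa Barbara Channel or Port Valdez data:

```python
# Poisson-Gamma conjugate update for an event-rate parameter λ.
alpha0, beta0 = 1.0, 1000.0   # hypothetical Gamma(shape, rate-years) prior
n_events = 4                  # landslides imaged in the geophysical record
t_years = 10000.0             # length of the observed record

# Conjugacy: with n events in time T, the posterior is Gamma(alpha0 + n, beta0 + T).
alpha_post = alpha0 + n_events
beta_post = beta0 + t_years

lam_mean = alpha_post / beta_post       # posterior mean rate (events per year)
lam_var = alpha_post / beta_post ** 2   # posterior variance, i.e. the
                                        # uncertainty range the method emphasizes

# Posterior predictive probability of at least one event in the next t_fwd
# years, integrating over λ: 1 - (beta/(beta + t))^alpha.
t_fwd = 100.0
p_event = 1.0 - (beta_post / (beta_post + t_fwd)) ** alpha_post
```

    The open intervals before and after the observed sequence enter, in the full method, through how T and the event count are assembled from the dated horizons.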

  5. Empirical Phenomenology: A Qualitative Research Approach (The ...

    African Journals Online (AJOL)

    Empirical Phenomenology: A Qualitative Research Approach (The Cologne Seminars) ... and practical application of empirical phenomenology in social research. ... and considers its implications for qualitative methods such as interviewing ...

  6. Empirical logic and quantum mechanics

    International Nuclear Information System (INIS)

    Foulis, D.J.; Randall, C.H.

    1976-01-01

    This article discusses some of the basic notions of quantum physics within the more general framework of operational statistics and empirical logic (as developed in Foulis and Randall, 1972, and Randall and Foulis, 1973). Empirical logic is a formal mathematical system in which the notion of an operation is primitive and undefined; all other concepts are rigorously defined in terms of such operations (which are presumed to correspond to actual physical procedures). (Auth.)

  7. An Approach to Determine the Weibull Parameters and Wind Power Analysis of Saint Martin’s Island, Bangladesh

    Directory of Open Access Journals (Sweden)

    Islam Khandaker Dahirul

    2016-01-01

    This paper explores wind speed distribution using the Weibull probability distribution and Rayleigh distribution methods, which are proven to provide accurate and efficient estimation of energy output in terms of wind energy conversion systems. The two parameters of the Weibull distribution (shape parameter k and scale parameter c) and the scale parameter of the Rayleigh distribution have been determined based on hourly time-series wind speed data recorded from October 2014 to October 2015 at Saint Martin's island, Bangladesh. This research examines three numerical methods, namely the Graphical Method (GM), the Empirical Method (EM) and the Energy Pattern Factor method (EPF), to estimate the Weibull parameters. The Rayleigh distribution method has also been analyzed throughout the study. The results revealed that the Graphical method, followed by the Empirical method and the Energy Pattern Factor method, was the most accurate and efficient way to determine the values of k and c for approximating the wind speed distribution in terms of estimated power error; the Rayleigh distribution gives the largest power error. The potential for wind energy development in Saint Martin's island, Bangladesh, as found from the data analysis, is explained in this paper.
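
    Of the estimation methods compared, the Empirical Method has a particularly compact form: k from the sample coefficient of variation, c from the mean and the gamma function. A sketch on synthetic wind speeds (the paper itself uses the measured Saint Martin's island series):

```python
import math
import random

def weibull_empirical(speeds):
    """Empirical Method (EM): estimate Weibull shape k and scale c from the
    sample mean and standard deviation of wind speeds:
        k = (sigma / mu) ** -1.086,   c = mu / Gamma(1 + 1/k)
    """
    n = len(speeds)
    mu = sum(speeds) / n
    sigma = (sum((v - mu) ** 2 for v in speeds) / (n - 1)) ** 0.5
    k = (sigma / mu) ** -1.086
    c = mu / math.gamma(1.0 + 1.0 / k)
    return k, c

# Synthetic "hourly" wind speeds drawn from a known Weibull(k=2, c=6 m/s),
# so the estimates can be checked against the generating parameters.
random.seed(1)
sample = [random.weibullvariate(6.0, 2.0) for _ in range(20000)]
k_hat, c_hat = weibull_empirical(sample)
```

    The Graphical Method would instead fit a straight line to ln(-ln(1-F(v))) versus ln v, and the EPF method derives k from the ratio of the mean cubed speed to the cube of the mean speed.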

  8. Empirical Moral Philosophy and Teacher Education

    Science.gov (United States)

    Schjetne, Espen; Afdal, Hilde Wågsås; Anker, Trine; Johannesen, Nina; Afdal, Geir

    2016-01-01

    In this paper, we explore the possible contributions of empirical moral philosophy to professional ethics in teacher education. We argue that it is both possible and desirable to connect knowledge of how teachers empirically do and understand professional ethics with normative theories of teachers' professional ethics. Our argument is made in…

  9. Empirical fit to inelastic electron-deuteron and electron-neutron resonance region transverse cross sections

    International Nuclear Information System (INIS)

    Bosted, P. E.; Christy, M. E.

    2008-01-01

    An empirical fit is described to measurements of inclusive inelastic electron-deuteron cross sections in the kinematic range of four-momentum transfer 0 ≤ Q² and final-state invariant mass above 1.1, using a fit of the ratio R_p of longitudinal to transverse cross sections for the proton and the assumption R_p = R_n. The underlying fit parameters describe the average cross section for a free proton and a free neutron, with a plane-wave impulse approximation used to fit the deuteron data. Additional fit parameters are used to fill in the dip between the quasi-elastic peak and the Δ(1232) resonance. The mean deviation of data from the fit is 3%, with less than 4% of the data points deviating from the fit by more than 10%.

  10. Autobiography After Empire

    DEFF Research Database (Denmark)

    Rasch, Astrid

    Decolonisation was a major event of the twentieth century, redrawing maps and impacting on identity narratives around the globe. As new nations defined their place in the world, the national and imperial past was retold in new cultural memories. These developments have been studied at the level of the collective, but insufficient attention has been paid to how individuals respond to such narrative changes. This dissertation examines the relationship between individual and collective memory at the end of empire through analysis of 13 end of empire autobiographies by public intellectuals from Australia, the Anglophone Caribbean and Zimbabwe. I conceive of memory as reconstructive and social, with individual memory striving to make sense of the past in the present in dialogue with surrounding narratives. By examining recurring tropes in the autobiographies, like colonial education, journeys to the imperial

  11. On the effect of response transformations in sequential parameter optimization.

    Science.gov (United States)

    Wagner, Tobias; Wessing, Simon

    2012-01-01

    Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that, in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicate that the rank and the Box-Cox transformations are able to improve the properties of the resultant distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
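
    The rank transformation singled out above can be sketched as a simple pre-processing step applied to the responses before the surrogate model is fitted; ties receive their average rank. The data are illustrative only:

```python
def rank_transform(values):
    """Replace responses by their 1-based ranks, assigning the average rank
    to tied values. Ranking compresses heavy-tailed response scales, which
    is the effect credited with improving residual symmetry/normality."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the group of values tied with values[order[i]].
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0   # average 1-based rank of the tie group
        for idx in order[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    return ranks

# A heavy-tailed set of raw responses (e.g. noisy EA fitness values)
# becomes a bounded, uniform-like scale after ranking.
raw = [0.9, 1500.0, 3.2, 3.2, 0.1]
print(rank_transform(raw))  # → [2.0, 5.0, 3.5, 3.5, 1.0]
```

    In an SPO-style loop the surrogate model would then be fitted to these ranks instead of the raw aggregated responses.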

  12. Empirical modeling of high-intensity electron beam interaction with materials

    Science.gov (United States)

    Koleva, E.; Tsonevska, Ts; Mladenov, G.

    2018-03-01

    The paper proposes an empirical modeling approach to the prediction, followed by optimization, of the exact shape of the cross-section of a welded seam obtained by electron beam welding. The approach takes into account the electron beam welding process parameters, namely electron beam power, welding speed, and the distances from the magnetic lens of the electron gun to the focus position of the beam and to the surface of the samples treated. The results are verified by comparison with experimental results for type 1H18NT stainless steel samples. The ranges considered for the beam power and the welding speed are 4.2 – 8.4 kW and 3.333 – 13.333 mm/s, respectively.
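
    The abstract does not give the paper's actual regression model; as a stand-in, empirical modeling of this kind can be sketched as a least-squares fit of one seam dimension against one process parameter over the quoted power range. The depth data below are invented for illustration:

```python
# Hypothetical one-factor sketch: linear least-squares fit of weld depth vs
# beam power over the 4.2-8.4 kW range; the paper's model also involves
# welding speed and the two focusing distances.
powers = [4.2, 5.0, 6.0, 7.2, 8.4]   # kW
depths = [2.1, 2.6, 3.3, 4.0, 4.8]   # mm, invented response values

n = len(powers)
mean_p = sum(powers) / n
mean_d = sum(depths) / n

# Ordinary least squares for a single predictor: slope = Sxy / Sxx.
slope = sum((p - mean_p) * (d - mean_d) for p, d in zip(powers, depths)) \
        / sum((p - mean_p) ** 2 for p in powers)
intercept = mean_d - slope * mean_p

def predict(p):
    """Predicted seam depth (mm) at beam power p (kW)."""
    return intercept + slope * p
```

    The "optimization" step of the approach would then search this fitted response surface for process settings producing a target cross-section shape.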

  13. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, in which each comparison set of that cardinality occurs in expectation the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report an empirical evaluation of some claims and key parameters revealed by the theory using both synthetic and real-world input data from popular sport competitions and online labor platforms.
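
    For the Luce choice model mentioned above, a standard way to obtain maximum likelihood strengths from top-1 lists is a minorization-maximization (MM) iteration. This is a generic textbook-style sketch on invented observations, not the authors' estimator or data:

```python
def luce_mm(observations, n_items, iters=200):
    """MM iteration for Luce-model strengths from top-1 observations, each a
    pair (chosen_item, comparison_set). Update: w_i = wins_i / sum over sets
    A containing i of 1 / (sum of w_j for j in A)."""
    w = [1.0] * n_items
    for _ in range(iters):
        wins = [0.0] * n_items
        denom = [0.0] * n_items
        for chosen, comp_set in observations:
            wins[chosen] += 1.0
            total = sum(w[j] for j in comp_set)
            for j in comp_set:
                denom[j] += 1.0 / total
        w = [wins[i] / denom[i] if denom[i] > 0 else w[i]
             for i in range(n_items)]
        s = sum(w)
        w = [x / s for x in w]  # normalize: strengths are identifiable only up to scale
    return w

# Three items; item 0 is chosen most often, from both pairs and a triple,
# so its estimated strength should dominate.
obs = [(0, (0, 1)), (0, (0, 1)), (1, (0, 1)),
       (0, (0, 2)), (0, (0, 2)),
       (2, (1, 2)), (0, (0, 1, 2)), (0, (0, 1, 2))]
w = luce_mm(obs, 3)
```

    Rank-breaking, by contrast, would first expand each multi-item set into pair comparisons and then run the same kind of estimator on the pairs.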

  14. A computer program for lattice-dynamical evaluation of Debye-Waller factors and thermodynamic functions for minerals, starting from empirical force fields

    International Nuclear Information System (INIS)

    Pilati, T.; Dermartin, F.; Gramaccioli, C.M.

    1993-01-01

A wide-purpose computer program has been written (Fortran) for the lattice-dynamical evaluation of crystallographic and thermodynamic properties of solids, especially minerals and inorganic substances. The program essentially consists of a routine affording first and second derivatives of the energy with respect to mass-weighted coordinates, properly modulated by a wave-vector algorithm, so that diagonalization can immediately follow and arrive at frequencies and densities of states, and eventually at thermodynamic functions and Debye-Waller parameters through an automatic Brillouin-zone sampling procedure. The input consists of crystallographic data (unit-cell parameters, space-group symmetry operations, atomic coordinates), plus atomic charges and empirical parameters, such as force constants or non-bonded atom-atom interaction energy functions in almost any form. It is also possible to obtain the structure corresponding to the energy minimum, or to work with partial rigid bodies in order to reduce the order of the dynamical matrices. The program provides automatic symmetry labelling of the vibrational modes, in order to compare them with experimental data; the empirical functions can be improved through a minimization routine. Examples of the application and transferability of force fields to a series of minerals are provided. (author)
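The pipeline the abstract describes (second derivatives of the energy with respect to mass-weighted coordinates, modulated by a wave vector, then diagonalized to give frequencies) can be sketched on the simplest possible system, a monatomic 1-D chain with nearest-neighbour springs; the spring constant, mass and lattice constant below are arbitrary illustrative values.

```python
import numpy as np

def dyn_matrix_1d(k_spring, mass, q, a=1.0):
    """Mass-weighted dynamical matrix of a monatomic 1-D chain with
    nearest-neighbour springs, modulated by wave vector q. With one atom
    per cell it reduces to a 1x1 (complex Hermitian) matrix."""
    d = (k_spring / mass) * (2.0 - np.exp(1j * q * a) - np.exp(-1j * q * a))
    return np.array([[d]])

def phonon_freq(k_spring, mass, q, a=1.0):
    """Diagonalize the dynamical matrix; eigenvalues are squared frequencies."""
    w2 = np.linalg.eigvalsh(dyn_matrix_1d(k_spring, mass, q, a))
    return np.sqrt(np.abs(w2))

# At q = pi/2 (with a = 1, k = m = 1) this matches the analytic dispersion
# omega(q) = 2*sqrt(k/m)*|sin(q*a/2)| = sqrt(2).
w = phonon_freq(1.0, 1.0, q=np.pi / 2)
```

A real mineral replaces the 1x1 matrix by a 3N x 3N one per wave vector, with the Brillouin-zone sampling loop the abstract mentions wrapped around exactly this diagonalization step.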

  15. Empirical scoring functions for advanced protein-ligand docking with PLANTS.

    Science.gov (United States)

    Korb, Oliver; Stützle, Thomas; Exner, Thomas E

    2009-01-01

In this paper we present two empirical scoring functions, PLANTS(CHEMPLP) and PLANTS(PLP), designed for our docking algorithm PLANTS (Protein-Ligand ANT System), which is based on ant colony optimization (ACO). They are related, regarding their functional form, to parts of already published scoring functions and force fields. The parametrization procedure described here was able to identify several parameter settings showing an excellent performance for the task of pose prediction on two test sets comprising 298 complexes in total. Up to 87% of the complexes of the Astex diverse set and 77% of the CCDC/Astex clean listnc (noncovalently bound complexes of the clean list) could be reproduced with root-mean-square deviations of less than 2 Å with respect to the experimentally determined structures. A comparison with the state-of-the-art docking tool GOLD clearly shows that this is, especially for the druglike Astex diverse set, an improvement in pose prediction performance. Additionally, optimized parameter settings for the search algorithm were identified, which can be used to balance pose prediction reliability and search speed.

  16. Damping parameter study of a perforated plate with bias flow

    Science.gov (United States)

    Mazdeh, Alireza

role of LES for research studies concerned with the damping properties of liners has been limited to the validation of other empirical or theoretical approaches. This research has shown that LES can go beyond that: it can be used for parametric studies characterizing the sensitivity of the acoustic properties of multi-perforated liners to changes in geometry and flow conditions, and can serve as a tool for designing acoustic liners. The conducted research provides insight into the contribution of different flow and geometry parameters such as perforated plate thickness, aperture radius, porosity factor and bias flow velocity. While the study agrees with previous observations obtained by analytical or experimental methods, it also quantifies the impact of these parameters on the acoustic impedance of the perforated plate, a key parameter determining the acoustic performance of any system. The study has also explored the limitations and capabilities of commercial tools when applied to simulation studies of the damping properties of liners. The overall agreement between the LES results and previous studies shows that commercial tools can be used effectively for these applications under certain conditions.

  17. Effect of Small Numbers of Test Results on Accuracy of Hoek-Brown Strength Parameter Estimations: A Statistical Simulation Study

    Science.gov (United States)

    Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.

    2017-12-01

The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimations of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimations. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that, since the minimum number of required samples depends on rock type, it should be chosen to achieve some acceptable level of uncertainty in the estimations. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimations using small samples may be very low. We further discuss the impact of this on ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
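The simulation idea can be sketched as follows: generate small synthetic triaxial data sets under the intact-rock Hoek-Brown criterion (s = 1) and refit the two parameters each time; the true parameter values, noise level and sample sizes below are illustrative assumptions, not those used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def hoek_brown(sig3, m, sigc):
    """Intact-rock Hoek-Brown criterion (s = 1):
    sig1 = sig3 + sqrt(m * sigc * sig3 + sigc**2)."""
    return sig3 + np.sqrt(m * sigc * sig3 + sigc**2)

rng = np.random.default_rng(0)
m_true, sigc_true = 15.0, 100.0   # hypothetical "rock type" (sigc in MPa)
n_rep, n_samples = 200, 5         # many small triaxial data sets

estimates = []
for _ in range(n_rep):
    sig3 = rng.uniform(0.0, 30.0, n_samples)
    sig1 = hoek_brown(sig3, m_true, sigc_true) * rng.normal(1.0, 0.05, n_samples)
    try:
        popt, _ = curve_fit(hoek_brown, sig3, sig1, p0=(10.0, 80.0),
                            bounds=(1e-3, np.inf))
        estimates.append(popt)
    except RuntimeError:
        pass  # a tiny sample can fail to produce a fit at all

est = np.array(estimates)
# The spread of est around (m_true, sigc_true) quantifies the estimation
# uncertainty caused by the small sample size.
```

Repeating this over a grid of (m, σc) combinations and sample sizes reproduces the kind of uncertainty analysis the paper reports.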

  18. Empirical Research In Engineering Design

    DEFF Research Database (Denmark)

    Ahmed, Saeema

    2007-01-01

    Increasingly engineering design research involves the use of empirical studies that are conducted within an industrial environment [Ahmed, 2001; Court 1995; Hales 1987]. Research into the use of information by designers or understanding how engineers build up experience are examples of research...... of research issues. This paper describes case studies of empirical research carried out within industry in engineering design focusing upon information, knowledge and experience in engineering design. The paper describes the research methods employed, their suitability for the particular research aims...

  19. Optimization of control parameters of a hot/cold controller by means of Simplex-type methods

    Science.gov (United States)

    Porte, C.; Caron-Poussin, M.; Carot, S.; Couriol, C.; Moreno, M. Martin; Delacroix, A.

    1997-01-01

This paper describes a hot/cold controller for regulating crystallization operations. The system was identified with a common method (the Broida method) and the parameters were obtained by the Ziegler-Nichols method. The paper shows that this empirical method will only allow a qualitative approach to regulation and that, in some instances, the parameters obtained are unreliable and therefore cannot be used to cancel variations between the set point and the actual values. Optimization methods were used to determine the regulation parameters and solve this identification problem. It was found that the weighted centroid method was the best one. PMID:18924791
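For context, the open-loop Ziegler-Nichols PI settings that follow from a Broida-type first-order-plus-dead-time identification reduce to two closed-form expressions; the plant values below are hypothetical, not those of the crystallizer in the paper.

```python
def zn_pi(K, T, tau):
    """Ziegler-Nichols open-loop (reaction-curve) PI settings for a
    first-order-plus-dead-time model: static gain K, time constant T,
    dead time tau."""
    Kp = 0.9 * T / (K * tau)  # proportional gain
    Ti = tau / 0.3            # integral time
    return Kp, Ti

# Illustrative plant: K = 2, T = 50 s, tau = 5 s -> Kp = 4.5, Ti = 16.7 s
Kp, Ti = zn_pi(K=2.0, T=50.0, tau=5.0)
```

The paper's point is precisely that such tabulated settings are only a qualitative starting point, which the Simplex-type optimization then refines.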

  20. On the Sophistication of Naïve Empirical Reasoning: Factors Influencing Mathematicians' Persuasion Ratings of Empirical Arguments

    Science.gov (United States)

    Weber, Keith

    2013-01-01

    This paper presents the results of an experiment in which mathematicians were asked to rate how persuasive they found two empirical arguments. There were three key results from this study: (a) Participants judged an empirical argument as more persuasive if it verified that integers possessed an infrequent property than if it verified that integers…

  1. Two concepts of empirical ethics.

    Science.gov (United States)

    Parker, Malcolm

    2009-05-01

    The turn to empirical ethics answers two calls. The first is for a richer account of morality than that afforded by bioethical principlism, which is cast as excessively abstract and thin on the facts. The second is for the facts in question to be those of human experience and not some other, unworldly realm. Empirical ethics therefore promises a richer naturalistic ethics, but in fulfilling the second call it often fails to heed the metaethical requirements related to the first. Empirical ethics risks losing the normative edge which necessarily characterizes the ethical, by failing to account for the nature and the logic of moral norms. I sketch a naturalistic theory, teleological expressivism (TE), which negotiates the naturalistic fallacy by providing a more satisfactory means of taking into account facts and research data with ethical implications. The examples of informed consent and the euthanasia debate are used to illustrate the superiority of this approach, and the problems consequent on including the facts in the wrong kind of way.

  2. Influence of Cutting Parameters on the Surface Roughness and Hole Diameter of Drilling Making Parts of Aluminium Alloy

    Directory of Open Access Journals (Sweden)

    Andrius Stasiūnas

    2013-02-01

Full Text Available The article researches the drilling process of an aluminium alloy. The paper is aimed at analyzing the influence of cutting speed, feed and hole depth on the hole diameter and hole surface roughness of aluminium alloy 6082 in the dry drilling process, and at deriving empirical formulas for the cutting parameters. The article also describes the experimental techniques and equipment, tools and measuring devices. Experimental studies have been carried out using different cutting parameters, and the obtained results have been analyzed using computer software. Surface roughness and hole diameters have been measured according to the existing measurement techniques, empirical models have been created, and the results of the conducted experiments have been inspected. The findings and recommendations are presented at the end of the work. Article in Lithuanian.
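Empirical formulas of this kind are typically power laws fitted by linear least squares in log space; the sketch below uses hypothetical exponents and noise-free synthetic data, not the study's measurements.

```python
import numpy as np

def fit_power_law(X, y):
    """Fit y = C * x1^a1 * x2^a2 * ... by linear least squares on
    log(y) = log(C) + a1*log(x1) + a2*log(x2) + ..., the usual way such
    empirical cutting-parameter models are built."""
    A = np.column_stack([np.ones(len(y))] + [np.log(x) for x in X])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return np.exp(coef[0]), coef[1:]

# Synthetic roughness data: Ra = 0.8 * v^-0.3 * f^0.6 (hypothetical
# constant and exponents, chosen only for illustration).
rng = np.random.default_rng(0)
v = rng.uniform(40.0, 120.0, 30)    # cutting speed
f = rng.uniform(0.05, 0.30, 30)     # feed
ra = 0.8 * v ** -0.3 * f ** 0.6
C, (a, b) = fit_power_law([v, f], ra)
```

With measured (noisy) data the recovered exponents carry confidence intervals, which is where the inspection of the empirical models mentioned above comes in.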

  3. A Parameter-based Model for Generating Culturally Adaptive Nonverbal Behaviors in Embodied Conversational Agents

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2009-01-01

    The goal of this paper is to integrate culture as a computational term in embodied conversational agents by employing an empirical data-driven approach as well as a theoretical model-driven approach. We propose a parameter-based model that predicts nonverbal expressions appropriate for specific...... cultures. First, we introduce the Hofstede theory to describe socio-cultural characteristics of each country. Then, based on the previous studies in cultural differences of nonverbal behaviors, we propose expressive parameters to characterize nonverbal behaviors. Finally, by integrating socio-cultural...

  4. Empirical study on social groups in pedestrian evacuation dynamics

    Science.gov (United States)

    von Krüchten, Cornelia; Schadschneider, Andreas

    2017-06-01

Pedestrian crowds often include social groups, i.e. pedestrians that walk together because of social relationships. They show characteristic configurations and influence the dynamics of the entire crowd. In order to investigate the impact of social groups on evacuations we performed an empirical study with pupils. Several evacuation runs with groups of different sizes and different interactions were performed. New group parameters are introduced which allow the dynamics of the groups and the configuration of the group members to be described quantitatively. The analysis shows a possible decrease of evacuation times for large groups due to self-ordering effects. Social groups can be approximated as ellipses that orientate along their direction of motion. Furthermore, explicitly cooperative behaviour among group members leads to a stronger aggregation of group members and an intermittent way of evacuation.

  5. A Time-dependent Heliospheric Model Driven by Empirical Boundary Conditions

    Science.gov (United States)

    Kim, T. K.; Arge, C. N.; Pogorelov, N. V.

    2017-12-01

    Consisting of charged particles originating from the Sun, the solar wind carries the Sun's energy and magnetic field outward through interplanetary space. The solar wind is the predominant source of space weather events, and modeling the solar wind propagation to Earth is a critical component of space weather research. Solar wind models are typically separated into coronal and heliospheric parts to account for the different physical processes and scales characterizing each region. Coronal models are often coupled with heliospheric models to propagate the solar wind out to Earth's orbit and beyond. The Wang-Sheeley-Arge (WSA) model is a semi-empirical coronal model consisting of a potential field source surface model and a current sheet model that takes synoptic magnetograms as input to estimate the magnetic field and solar wind speed at any distance above the coronal region. The current version of the WSA model takes the Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model as input to provide improved time-varying solutions for the ambient solar wind structure. When heliospheric MHD models are coupled with the WSA model, density and temperature at the inner boundary are treated as free parameters that are tuned to optimal values. For example, the WSA-ENLIL model prescribes density and temperature assuming momentum flux and thermal pressure balance across the inner boundary of the ENLIL heliospheric MHD model. We consider an alternative approach of prescribing density and temperature using empirical correlations derived from Ulysses and OMNI data. We use our own modeling software (Multi-scale Fluid-kinetic Simulation Suite) to drive a heliospheric MHD model with ADAPT-WSA input. The modeling results using the two different approaches of density and temperature prescription suggest that the use of empirical correlations may be a more straightforward, consistent method.

  6. Tuning a space-time scalable PI controller using thermal parameters

    Energy Technology Data Exchange (ETDEWEB)

    Riverol, C. [University of West Indies, Chemical Engineering Department, St. Augustine, Trinidad (Trinidad and Tobago); Pilipovik, M.V. [Armach Engineers, Urb. Los Palos Grandes, Project Engineering Department, Caracas (Venezuela)

    2005-03-01

    The paper outlines the successful empirical design and validation of a space-time PI controller based on study of the controlled variable output as function of time and space. The developed control was implemented on two heat exchanger systems (falling film evaporator and milk pasteurizer). The strategy required adding a new term over the classical PI controller, such that a new parameter should be tuned. Measurements made on commercial installations have confirmed the validity of the new controller. (orig.)

  7. Trade and Empire

    DEFF Research Database (Denmark)

    Bang, Peter Fibiger

    2007-01-01

    This articles seeks to establish a new set of organizing concepts for the analysis of the Roman imperial economy from Republic to late antiquity: tributary empire, port-folio capitalism and protection costs. Together these concepts explain better economic developments in the Roman world than the...

  8. Quantitative analyses of empirical fitness landscapes

    International Nuclear Information System (INIS)

    Szendro, Ivan G; Franke, Jasper; Krug, Joachim; Schenk, Martijn F; De Visser, J Arjan G M

    2013-01-01

    The concept of a fitness landscape is a powerful metaphor that offers insight into various aspects of evolutionary processes and guidance for the study of evolution. Until recently, empirical evidence on the ruggedness of these landscapes was lacking, but since it became feasible to construct all possible genotypes containing combinations of a limited set of mutations, the number of studies has grown to a point where a classification of landscapes becomes possible. The aim of this review is to identify measures of epistasis that allow a meaningful comparison of fitness landscapes and then apply them to the empirical landscapes in order to discern factors that affect ruggedness. The various measures of epistasis that have been proposed in the literature appear to be equivalent. Our comparison shows that the ruggedness of the empirical landscape is affected by whether the included mutations are beneficial or deleterious and by whether intragenic or intergenic epistasis is involved. Finally, the empirical landscapes are compared to landscapes generated with the rough Mt Fuji model. Despite the simplicity of this model, it captures the features of the experimental landscapes remarkably well. (paper)

  9. Empirical pseudo-potential studies on electronic structure

    Indian Academy of Sciences (India)

    Theoretical investigations of electronic structure of quantum dots is of current interest in nanophase materials. Empirical theories such as effective mass approximation, tight binding methods and empirical pseudo-potential method are capable of explaining the experimentally observed optical properties. We employ the ...

  10. An Empirical Taxonomy of Crowdfunding Intermediaries

    OpenAIRE

    Haas, Philipp; Blohm, Ivo; Leimeister, Jan Marco

    2014-01-01

Due to the recent popularity of crowdfunding, a broad range of crowdfunding intermediaries has emerged, while research on crowdfunding intermediaries has been largely neglected. As a consequence, existing classifications of crowdfunding intermediaries are conceptual, lack theoretical grounding, and are not empirically validated. Thus, we develop an empirical taxonomy of crowdfunding intermediaries, which is grounded in the theories of two-sided markets and financial intermediation. Integr...

  11. X-ray spectral study of the Th6p,5f electron states in ThO2 and ThF4

    International Nuclear Information System (INIS)

    Teterin, Y.A.; Nikitin, A.S.; Teterin, A.Y.; Ivanov, K.E.; Utkin, I.O.; Nerehov, V.A.; Ryzhkov, M.V.; Vukchevich, I.J.

    2002-01-01

The study of the Th6p,5f electron states in Th, ThO 2 and ThF 4 was carried out on the basis of the X-ray photoelectron fine spectral structure parameters in the binding energy range of 0-∼ 1000 eV, X-ray O 4,5 (Th) emission spectra of the shallow (0-∼50 eV) electrons and results of theoretical calculations. As a result, despite the absence of Th5f electrons in the thorium atom, the Th5f atomic orbitals were found to participate in the formation of molecular orbitals in thorium dioxide and tetrafluoride. In the MO LCAO approximation, this suggests the possible existence of filled Th5f electronic states in thorium compounds. On the basis of the X-ray O 4,5 (Th) emission spectral structure parameters, the effective formation of the inner valence molecular orbitals in the studied compounds was confirmed. (authors)

  12. A sensitivity analysis of centrifugal compressors' empirical models

    International Nuclear Information System (INIS)

    Yoon, Sung Ho; Baek, Je Hyun

    2001-01-01

The mean-line method using empirical models is the most practical method of predicting off-design performance. To gain insight into the empirical models, the influence of empirical models on the performance prediction results is investigated. We found that, in the two-zone model, the secondary flow mass fraction has a considerable effect at high mass flow-rates on the performance prediction curves. In the TEIS model, the first element changes the slope of the performance curves as well as the stable operating range. The second element makes the performance curves move up and down as it increases or decreases. It is also discovered that the slip factor affects the pressure ratio, but it has little effect on efficiency. Finally, this study reveals that the skin friction coefficient has a significant effect on both the pressure ratio curve and the efficiency curve. These results show the limitations of the present empirical models, and more reasonable empirical models are needed.

  13. User's manual for DWNWND: an interactive Gaussian plume atmospheric transport model with eight dispersion parameter options

    International Nuclear Information System (INIS)

    Fields, D.E.; Miller, C.W.

    1980-05-01

The most commonly used approach for estimating the atmospheric concentration and deposition of material downwind from its point of release is the Gaussian plume atmospheric dispersion model. Two of the critical parameters in this model are sigma/sub y/ and sigma/sub z/, the horizontal and vertical dispersion parameters, respectively. A number of different sets of values for sigma/sub y/ and sigma/sub z/ have been determined empirically for different release heights and meteorological and terrain conditions. The computer code DWNWND, described in this report, is an interactive implementation of the Gaussian plume model. This code allows the user to specify any one of eight different sets of the empirically determined dispersion parameters. Using the selected dispersion parameters, ground-level normalized exposure estimates are made at any specified downwind distance. Computed values may be corrected for plume depletion due to deposition and for plume settling due to gravitational fall. With this interactive code, the user chooses values for ten parameters which define the source, the dispersion and deposition process, and the sampling point. DWNWND is written in FORTRAN for execution on a PDP-10 computer, requiring less than one second of central processor unit time for each simulation.
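As a minimal sketch of the calculation DWNWND performs, the ground-level centerline form of the Gaussian plume equation is shown below with one published set of empirical dispersion coefficients (the Briggs rural formulas for Pasquill stability class D); whether this particular set is among DWNWND's eight options is not stated in the abstract, so treat that pairing as an assumption, and note that no depletion or settling corrections are applied here.

```python
import math

def briggs_sigma_d(x):
    """Briggs rural dispersion coefficients (m) for Pasquill stability
    class D at downwind distance x (m): one example of the empirically
    determined sigma_y / sigma_z sets."""
    sigma_y = 0.08 * x / math.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.06 * x / math.sqrt(1.0 + 0.0015 * x)
    return sigma_y, sigma_z

def chi_over_q(x, u, h):
    """Ground-level centerline normalized exposure chi/Q (s/m^3) for an
    elevated release of effective height h (m) in wind speed u (m/s)."""
    sy, sz = briggs_sigma_d(x)
    return math.exp(-h ** 2 / (2.0 * sz ** 2)) / (math.pi * sy * sz * u)

# Illustrative case: 1 km downwind, 5 m/s wind, 50 m effective stack height.
c = chi_over_q(x=1000.0, u=5.0, h=50.0)
```

Multiplying chi/Q by the actual release rate Q gives the downwind concentration, which is the quantity the code reports.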

  14. Assessing different parameters estimation methods of Weibull distribution to compute wind power density

    International Nuclear Information System (INIS)

    Mohammadi, Kasra; Alavi, Omid; Mostafaeipour, Ali; Goudarzi, Navid; Jalilvand, Mahdi

    2016-01-01

Highlights: • Effectiveness of six numerical methods is evaluated to determine wind power density. • The more appropriate method for computing the daily wind power density is identified. • Four windy stations located in the southern part of Alberta, Canada are investigated. • The more appropriate parameter estimation method was not identical among all examined stations. - Abstract: In this study, the effectiveness of six numerical methods is evaluated to determine the shape (k) and scale (c) parameters of the Weibull distribution function for the purpose of calculating the wind power density. The selected methods are the graphical method (GP), empirical method of Justus (EMJ), empirical method of Lysen (EML), energy pattern factor method (EPF), maximum likelihood method (ML) and modified maximum likelihood method (MML). The purpose of this study is to identify the more appropriate method for computing the wind power density at four stations distributed in the Alberta province of Canada, namely Edmonton City Center Awos, Grande Prairie A, Lethbridge A and Waterton Park Gate. To provide a complete analysis, the evaluations are performed on both daily and monthly scales. The results indicate that the precision of computed wind power density values changes when different parameter estimation methods are used to determine the k and c parameters. The four methods EMJ, EML, EPF and ML perform very favorably, while the GP method shows weak ability for all stations. However, it is found that the more effective method is not the same among stations, owing to differences in the wind characteristics.
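One of the six methods, the empirical method of Justus (EMJ), reduces to two closed-form moment expressions, which makes it easy to sketch end to end; the synthetic wind speeds and air density below are illustrative assumptions, not the Alberta data.

```python
import math
import numpy as np

def weibull_justus(v):
    """Empirical method of Justus (EMJ): shape k from the coefficient of
    variation of the speeds, scale c from the mean and the gamma function."""
    mean, std = float(np.mean(v)), float(np.std(v, ddof=1))
    k = (std / mean) ** -1.086
    c = mean / math.gamma(1.0 + 1.0 / k)
    return k, c

def wind_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2) implied by a Weibull(k, c) fit,
    using a standard air density of 1.225 kg/m^3."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)

# Synthetic speeds drawn from a Weibull with k = 2, c = 7 m/s; the fit
# should recover roughly these values.
rng = np.random.default_rng(1)
v = rng.weibull(2.0, 10000) * 7.0
k_hat, c_hat = weibull_justus(v)
p = wind_power_density(k_hat, c_hat)
```

Swapping in ML or EPF changes only the (k, c) estimation step; the power density formula is the same for all six methods, which is why the fitted parameters drive the comparison.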

  15. Sensor Data Acquisition and Processing Parameters for Human Activity Classification

    Directory of Open Access Journals (Sweden)

    Sebastian D. Bersch

    2014-03-01

Full Text Available It is known that parameter selection for data sampling frequency and segmentation techniques (including different methods and window sizes) has an impact on the classification accuracy. For Ambient Assisted Living (AAL), no clear guidance for selecting these parameters exists, hence a wide variety and inconsistency across today's literature is observed. This paper presents an empirical investigation of different data sampling rates, segmentation techniques and segmentation window sizes and their effect on the accuracy of Activity of Daily Living (ADL) event classification and on computational load, for two different accelerometer sensor datasets. The study is conducted using an ANalysis Of VAriance (ANOVA) based on 32 different window sizes, three different segmentation algorithms (with and without overlap, totaling six different settings) and six sampling frequencies, for nine common classification algorithms. The classification accuracy is based on a feature vector consisting of Root Mean Square (RMS), Mean, Signal Magnitude Area (SMA), Signal Vector Magnitude (here SMV), Energy, Entropy, FFTPeak, and Standard Deviation (STD). The results are presented alongside recommendations for parameter selection on the basis of the best-performing parameter combinations, identified by means of the corresponding Pareto curve.
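The segmentation step the study varies can be sketched as a sliding window with configurable overlap, computing a few of the listed features per window; the sampling rate, window size and synthetic signal below are arbitrary examples, not the study's datasets.

```python
import numpy as np

def window_features(signal, fs, win_s, overlap=0.5):
    """Segment a 1-D accelerometer signal into (possibly overlapping)
    windows of win_s seconds at fs Hz, and compute three of the features
    named in the abstract per window: RMS, Mean, STD."""
    win = int(win_s * fs)
    step = max(1, int(win * (1.0 - overlap)))
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([np.sqrt(np.mean(w ** 2)),  # RMS
                      np.mean(w),                # Mean
                      np.std(w)])                # STD
    return np.array(feats)

# 5 s of synthetic data at 50 Hz, 1 s windows with 50% overlap -> 9 windows.
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, 5 * 50)
f = window_features(sig, fs=50, win_s=1.0, overlap=0.5)
```

Varying fs, win_s and overlap over a grid and feeding the resulting feature matrices to the classifiers is exactly the kind of parameter sweep the ANOVA in the study analyzes.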

  16. Social opportunity cost of capital: empirical estimates

    Energy Technology Data Exchange (ETDEWEB)

    Townsend, S.

    1978-02-01

    This report develops estimates of the social-opportunity cost of public capital. The private and social costs of capital are found to diverge primarily because of the effects of corporate and personal income taxes. Following Harberger, the social-opportunity cost of capital is approximated by a weighted average of the returns to different classes of savers and investors where the weights are the flows of savings or investments in each class multiplied by the relevant elasticity. Estimates of these parameters are obtained and the social-opportunity cost of capital is determined to be in the range of 6.2 to 10.8%, depending upon the parameter values used. Uncertainty is found to affect the social-opportunity cost of capital in two ways. First, some allowance must be made for the chance of failure or at least of not realizing claims of a project's proponents. Second, a particular government project will change the expected variability of the returns to the government's entire portfolio of projects. In the absence of specific information about each project, the use of the economy-wide average default and risk adjustments is suggested. These are included in the empirical estimates reported. International capital markets make available private capital, the price of which is not distorted by the U.S. tax system. The inclusion of foreign sources slightly reduces the social-opportunity cost of capital. 21 references.
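The Harberger-style aggregation described above is just a weighted average in which each saver or investor class is weighted by its flow times its elasticity; the class returns, flows and elasticities below are made-up numbers for illustration, not the report's estimates.

```python
def social_opportunity_cost(classes):
    """Harberger weighted average of returns across saver/investor classes.
    classes: list of (return, flow, elasticity) tuples; the weight of each
    class is flow * elasticity."""
    wsum = sum(f * e for _, f, e in classes)
    return sum(r * f * e for r, f, e in classes) / wsum

# Two hypothetical classes: low-return savers with low elasticity, and
# higher-return corporate investment with unit elasticity.
classes = [(0.04, 100.0, 0.4), (0.12, 60.0, 1.0)]
r = social_opportunity_cost(classes)  # 0.088, i.e. 8.8%
```

The 8.8% result of these invented inputs happens to fall inside the 6.2-10.8% range the report derives, which is coincidental; the report's range comes from its own estimated flows and elasticities.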

  17. Gazprom: the new empire

    International Nuclear Information System (INIS)

    Guillemoles, A.; Lazareva, A.

    2008-01-01

Gazprom is conquering the world. The Russian industrial giant owns the largest gas reserves and enjoys considerable power. Gazprom publishes journals, owns hospitals and airplanes, and has even built cities where most of the inhabitants work for it. With 400000 workers, Gazprom represents 8% of Russia's GDP. This inquiry describes the history and operation of this empire and shows how it has become a centerpiece of the government's strategy to rebuild Russian influence worldwide. Is it going to be a winning game? Are the corruption affairs and the expected depletion of resources going to weaken the empire? The authors shed light on the political and diplomatic strategies that are played around the crucial issue of energy supply. (J.S.)

  18. Inference of directional selection and mutation parameters assuming equilibrium.

    Science.gov (United States)

    Vogl, Claus; Bergman, Juraj

    2015-12-01

    In a classical study, Wright (1931) proposed a model for the evolution of a biallelic locus under the influence of mutation, directional selection and drift. He derived the equilibrium distribution of the allelic proportion conditional on the scaled mutation rate, the mutation bias and the scaled strength of directional selection. The equilibrium distribution can be used for inference of these parameters with genome-wide datasets of "site frequency spectra" (SFS). Assuming that the scaled mutation rate is low, Wright's model can be approximated by a boundary-mutation model, where mutations are introduced into the population exclusively from sites fixed for the preferred or unpreferred allelic states. With the boundary-mutation model, inference can be partitioned: (i) the shape of the SFS distribution within the polymorphic region is determined by random drift and directional selection, but not by the mutation parameters, such that inference of the selection parameter relies exclusively on the polymorphic sites in the SFS; (ii) the mutation parameters can be inferred from the amount of polymorphic and monomorphic preferred and unpreferred alleles, conditional on the selection parameter. Herein, we derive maximum likelihood estimators for the mutation and selection parameters in equilibrium and apply the method to simulated SFS data as well as empirical data from a Madagascar population of Drosophila simulans. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Semi-empirical model for the calculation of flow friction factors in wire-wrapped rod bundles

    International Nuclear Information System (INIS)

    Carajilescov, P.; Fernandez y Fernandez, E.

    1981-08-01

LMFBR fuel elements consist of wire-wrapped rod bundles, with triangular array, with the fluid flowing parallel to the rods. A semi-empirical model is developed in order to obtain the average bundle friction factor, as well as the friction factor for each subchannel. The model also calculates the flow distribution factors. The results are compared to experimental data for geometrical parameters in the range: P/D = 1.063 - 1.417, H/D = 4 - 50, and are considered satisfactory. (Author) [pt

  20. Intermodal connectivity in Europe, an empirical exploration

    NARCIS (Netherlands)

    de Langen, P.W.; Lases Figueroa, D.M.; van Donselaar, K.H.; Bozuwa, J.

    2017-01-01

    In this paper we analyse the intermodal connectivity in Europe. The empirical analysis is to our knowledge the first empirical analysis of intermodal connections, and is based on a comprehensive database of intermodal connections in Europe. The paper focuses on rail and barge services, as they are

  1. The emerging empirics of evolutionary economic geography

    NARCIS (Netherlands)

    Boschma, R.A.; Frenken, K.

    2011-01-01

Following last decade’s programmatic papers on Evolutionary Economic Geography, we report on recent empirical advances and how this empirical work can be positioned vis-à-vis other strands of research in economic geography. First, we review studies on the path dependent nature of clustering, and

  2. The emerging empirics of evolutionary economic geography

    NARCIS (Netherlands)

    Boschma, R.A.; Frenken, K.

    2010-01-01

    Following last decade’s programmatic papers on Evolutionary Economic Geography, we report on recent empirical advances and how this empirical work can be positioned vis-à-vis other strands of research in economic geography. First, we review studies on the path dependent nature of clustering, and how

  3. The emerging empirics of evolutionary economic geography.

    NARCIS (Netherlands)

    Boschma, R.A.; Frenken, K.

    2011-01-01

    Following last decade’s programmatic papers on Evolutionary Economic Geography, we report on recent empirical advances and how this empirical work can be positioned vis-à-vis other strands of research in economic geography. First, we review studies on the path dependent nature of clustering, and

  4. Estimation of CN Parameter for Small Agricultural Watersheds Using Asymptotic Functions

    OpenAIRE

    Tomasz Kowalik; Andrzej Walega

    2015-01-01

    This paper investigates the possibility of using asymptotic functions to determine the value of the curve number (CN) parameter as a function of rainfall in small agricultural watersheds. It also compares the actually calculated CN with the values provided in the Soil Conservation Service (SCS) National Engineering Handbook Section 4: Hydrology (NEH-4) and Technical Release 20 (TR-20). The analysis showed that empirical CN values presented in the National Engineering Handbook tables differed from t...
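
    The truncated abstract does not give the paper's exact functional form, but a common asymptotic CN-rainfall relation (the Hawkins-style "standard" asymptote, assumed here for illustration) can be fitted with a few lines; the data below are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def cn_standard(p_mm, cn_inf, k):
    """Hawkins-style standard asymptotic form: observed CN declines with
    rainfall depth P (mm) toward a constant asymptote cn_inf."""
    return cn_inf + (100.0 - cn_inf) * np.exp(-k * p_mm)

rng = np.random.default_rng(0)
p = np.linspace(5, 120, 25)                              # rainfall depths (mm)
cn_obs = cn_standard(p, 72.0, 0.04) + rng.normal(0, 0.5, p.size)  # synthetic data

(cn_inf_hat, k_hat), _ = curve_fit(cn_standard, p, cn_obs, p0=(80.0, 0.01))
print(round(cn_inf_hat, 1), round(k_hat, 3))
```

    The fitted asymptote cn_inf is the value one would compare against the tabulated NEH-4 CN; the parameter names and numbers here are assumptions, not the authors'.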

  5. High-resolution empirical geomagnetic field model TS07D: Investigating run-on-request and forecasting modes of operation

    Science.gov (United States)

    Stephens, G. K.; Sitnov, M. I.; Ukhorskiy, A. Y.; Vandegriff, J. D.; Tsyganenko, N. A.

    2010-12-01

    The dramatic increase in the volume of geomagnetic field data available from many recent missions, including GOES, Polar, Geotail, Cluster, and THEMIS, eventually required a qualitative transition in empirical modeling tools. Classical empirical models, such as T96 and T02, used a few custom-tailored modules to represent the major magnetospheric current systems, together with simple data binning or loading-unloading inputs for fitting to data and subsequent applications. They have been replaced by more systematic expansions of the equatorial and field-aligned current contributions, and by advanced data-mining algorithms that search for events whose global activity parameters, such as the Sym-H index, are similar to those at the time of interest, as is done in the TS07D model (Tsyganenko and Sitnov, 2007; Sitnov et al., 2008). The necessity to mine and fit data dynamically, with an individual subset of the database used to reproduce the geomagnetic field pattern at every new moment in time, requires a corresponding transition in how the new empirical geomagnetic field models are used: operation becomes more similar to the runs-on-request offered by the Community Coordinated Modeling Center for many first-principles MHD and kinetic codes. To provide this mode of operation for the TS07D model, a new web-based modeling tool has been created and tested at JHU/APL (http://geomag_field.jhuapl.edu/model/), and we discuss the first results of its performance testing and validation, including in-sample and out-of-sample modeling of a number of CME- and CIR-driven magnetic storms. We also report on the first tests of the forecasting version of the TS07D model, in which the magnetospheric macro-parameters involved in the data-binning process (the Sym-H index and its trend parameter) are replaced by solar wind-based analogs obtained using the Burton-McPherron-Russell approach.

  6. An Improved Semi-Empirical Model for Radar Backscattering from Rough Sea Surfaces at X-Band

    Directory of Open Access Journals (Sweden)

    Taekyeong Jin

    2018-04-01

    We propose an improved semi-empirical scattering model for X-band radar backscattering from rough sea surfaces. The new model has a wider range of validity in wind speed than the existing semi-empirical sea spectrum (SESS) model. First, we retrieved the small-roughness parameters from sea surfaces that were numerically generated using the Pierson-Moskowitz spectrum and measurement datasets for various wind speeds. Then, we computed the backscattering coefficients of the small-roughness surfaces for various wind speeds using the integral equation method model. Finally, the large-roughness characteristics were taken into account by integrating the small-roughness backscattering coefficients, weighted by the surface slope probability density function, over all possible surface slopes. The new model covers wind speeds below 3.46 m/s, a range not covered by the existing SESS model. The accuracy of the new model was verified against two measurement datasets for wind speeds from 0.5 m/s to 14 m/s.

  7. FARIMA MODELING OF SOLAR FLARE ACTIVITY FROM EMPIRICAL TIME SERIES OF SOFT X-RAY SOLAR EMISSION

    International Nuclear Information System (INIS)

    Stanislavsky, A. A.; Burnecki, K.; Magdziarz, M.; Weron, A.; Weron, K.

    2009-01-01

    A time series of soft X-ray emission observed by the Geostationary Operational Environmental Satellites from 1974 to 2007 is analyzed. We show that in solar-maximum periods the energy distribution of soft X-ray solar flares of classes C, M, and X is well described by a fractional autoregressive integrated moving average (FARIMA) model with Pareto noise. The model incorporates two effects detected in our empirical studies: long-term dependence (long-term memory) and heavy-tailed distributions. The parameters of the model (self-similarity exponent H, tail index α, and memory parameter d) are statistically stable during the periods 1977-1981, 1988-1992, and 1999-2003. However, when the solar activity tends toward its minimum, the parameters vary. We discuss the possible causes of this evolution and suggest a statistically justified model for predicting solar flare activity.
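
    The long-memory backbone of such a model, FARIMA(0, d, 0), can be simulated by truncating its MA(∞) representation. This is a generic sketch with Gaussian innovations for simplicity (the paper fits Pareto-distributed noise), not the authors' code:

```python
import numpy as np

def farima_0d0(n, d, rng, burn=500):
    """Simulate FARIMA(0, d, 0): x_t = (1 - B)^(-d) eps_t, via the MA(inf)
    weights psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k (truncated)."""
    m = n + burn
    psi = np.empty(m)
    psi[0] = 1.0
    for k in range(1, m):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(m)
    x = np.convolve(eps, psi)[:m]      # causal filter of the innovations
    return x[burn:]                    # drop warm-up samples

rng = np.random.default_rng(1)
x = farima_0d0(4000, d=0.3, rng=rng)
# Long memory shows up as slowly decaying, persistently positive autocorrelation:
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(acf1, 2))
```

    For 0 < d < 0.5 the theoretical lag-1 autocorrelation is d/(1-d), so d = 0.3 gives roughly 0.43, which the sample estimate should approach.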

  8. Methods for Calculating Empires in Quasicrystals

    Directory of Open Access Journals (Sweden)

    Fang Fang

    2017-10-01

    This paper reviews the empire problem for quasiperiodic tilings and the existing methods for generating the empires of the vertex configurations in quasicrystals, and introduces a new, more efficient method based on the cut-and-project technique. Using the Penrose tiling as an example, this method finds the forced tiles via restrictions in the higher-dimensional lattice (the mother lattice) that can be cut-and-projected into the lower-dimensional quasicrystal. We compare our method to the two existing methods: one that uses the algorithm of the Fibonacci chain to force the Ammann bars and thereby find the forced tiles of an empire, and one that follows the work of N.G. de Bruijn on constructing a Penrose tiling as the dual of a pentagrid. The new method is not only conceptually simple and clear, but it also allows us to calculate the empires of the vertex configurations in a defected quasicrystal by mapping the configuration of the quasicrystal back to its higher-dimensional lattice, where we then apply the restrictions. These advantages may provide a key guiding principle for phason dynamics and an important tool for self error-correction in quasicrystal growth.
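
    The cut-and-project technique is easiest to see in one dimension: projecting the lattice points of Z² that fall inside a strip onto a line of irrational slope 1/φ yields the Fibonacci quasicrystal, with exactly two tile lengths in the golden ratio. This is a generic illustration of the construction, not the authors' algorithm:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
v = np.array([phi, 1.0]) / np.hypot(phi, 1.0)    # physical (parallel) direction
w = np.array([-1.0, phi]) / np.hypot(phi, 1.0)   # internal (perpendicular) direction

# Acceptance window = projection of the unit square onto the perpendicular axis.
corners = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
lo, hi = (corners @ w).min(), (corners @ w).max()

# Accept lattice points whose perpendicular coordinate lies in the (half-open)
# window, then project the accepted points onto the physical line.
i, j = np.meshgrid(np.arange(-30, 31), np.arange(-30, 31))
pts = np.stack([i.ravel(), j.ravel()], axis=1).astype(float)
sel = pts[(pts @ w >= lo) & (pts @ w < hi)]
positions = np.sort(sel @ v)

gaps = np.round(np.diff(positions), 9)
lengths = sorted(set(gaps))
print(len(lengths), round(lengths[1] / lengths[0], 6))  # two tiles, ratio phi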

  9. An empirical study on the utility of BRDF model parameters and topographic parameters for mapping vegetation in a semi-arid region with MISR imagery

    Science.gov (United States)

    Multi-angle remote sensing has been proved useful for mapping vegetation community types in desert regions. Based on Multi-angle Imaging Spectro-Radiometer (MISR) multi-angular images, this study compares roles played by Bidirectional Reflectance Distribution Function (BRDF) model parameters with th...

  10. Biomass viability: An experimental study and the development of an empirical mathematical model for submerged membrane bioreactor.

    Science.gov (United States)

    Zuthi, M F R; Ngo, H H; Guo, W S; Nghiem, L D; Hai, F I; Xia, S Q; Zhang, Z Q; Li, J X

    2015-08-01

    This study investigates the influence of key biomass parameters on specific oxygen uptake rate (SOUR) in a sponge submerged membrane bioreactor (SSMBR) to develop mathematical models of biomass viability. Extra-cellular polymeric substances (EPS) were considered as a lumped parameter of bound EPS (bEPS) and soluble microbial products (SMP). Statistical analyses of experimental results indicate that the bEPS, SMP, mixed liquor suspended solids and volatile suspended solids (MLSS and MLVSS) have functional relationships with SOUR and their relative influence on SOUR was in the order of EPS>bEPS>SMP>MLVSS/MLSS. Based on correlations among biomass parameters and SOUR, two independent empirical models of biomass viability were developed. The models were validated using results of the SSMBR. However, further validation of the models for different operating conditions is suggested. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Automated parameter estimation for biological models using Bayesian statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Langmead, Christopher J; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram; Jha, Sumit K

    2015-01-01

    Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. We have developed a new algorithmic technique for discovering parameters in complex stochastic models of biological systems given behavioral specifications written in a formal mathematical logic. Our algorithm uses Bayesian model checking, sequential hypothesis testing, and stochastic optimization to automatically synthesize parameters of probabilistic biological models.
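
    The sequential hypothesis testing step can be sketched with a classical Wald SPRT checking a probabilistic property P(outcome) ≥ θ; this is a simplified stand-in for the paper's Bayesian model checking, and the dose-response model, thresholds, and error bounds below are invented for illustration:

```python
import math
import random

def sprt(sample, theta, delta=0.05, alpha=0.01, beta=0.01, max_n=100000):
    """Wald's sequential probability ratio test for P(outcome) >= theta,
    with indifference region [theta - delta, theta + delta] and error
    bounds alpha/beta. `sample()` draws one Boolean model outcome."""
    p0, p1 = theta + delta, theta - delta          # H0: p = p0, H1: p = p1
    lo = math.log(beta / (1 - alpha))              # accept-H0 boundary
    hi = math.log((1 - beta) / alpha)              # accept-H1 boundary
    llr = 0.0                                      # log likelihood ratio H1/H0
    for _ in range(max_n):
        x = sample()
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr <= lo:
            return True                            # property accepted
        if llr >= hi:
            return False                           # property rejected
    return llr < 0                                 # undecided: lean on the sign

# Toy stochastic model: probability of a clinical outcome as a function of dose.
model = lambda dose: random.random() < 1 - math.exp(-0.5 * dose)

random.seed(2)
# Stochastic-search flavour: smallest dose whose outcome probability passes the test.
dose = next(d / 10 for d in range(1, 100) if sprt(lambda: model(d / 10), 0.9))
print(dose)
```

    In the paper this kind of test is embedded in a stochastic optimization loop over twenty-eight parameters rather than a one-dimensional scan.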

  12. An empirical analysis of Diaspora bonds

    OpenAIRE

    AKKOYUNLU, Şule; STERN, Max

    2018-01-01

    Abstract. This study is the first to investigate theoretically and empirically the determinants of Diaspora bonds for eight developing countries (Bangladesh, Ethiopia, Ghana, India, Lebanon, Pakistan, the Philippines, and Sri Lanka) and one developed country, Israel, for the period 1951-2008. Empirical results are consistent with the predictions of the theoretical model. The most robust variables are the closeness indicator and the sovereign rating, both on the demand side. The spread is ...

  13. How rational should bioethics be? The value of empirical approaches.

    Science.gov (United States)

    Alvarez, A A

    2001-10-01

    Rational justification of claims with empirical content calls for empirical, and not only normative philosophical, investigation. Empirical approaches to bioethics are epistemically valuable, i.e., such methods may be necessary in providing and verifying basic knowledge about cultural values and norms. Our assumptions in moral reasoning can be verified or corrected using these methods. Moral arguments can be initiated or adjudicated by data drawn from empirical investigation. One may argue that individualistic informed consent, for example, is not compatible with the Asian communitarian orientation. But this normative claim uses an empirical assumption that may be contrary to the fact that some Asians do value and argue for informed consent. Is it necessary and factual to neatly characterize some cultures as individualistic and some as communitarian? Empirical investigation can provide a reasonable way to inform such generalizations. In a multi-cultural context, such as in the Philippines, there is a need to investigate the nature of the local ethos before making any appeal to authenticity. Otherwise we may succumb to the same ethical imperialism we are trying hard to resist. Normative claims that involve empirical premises cannot be reasonably verified or evaluated without utilizing empirical methods along with philosophical reflection. The integration of empirical methods into the standard normative approach to moral reasoning should be reasonably guided by the epistemic demands of claims arising from cross-cultural discourse in bioethics.

  14. Merging expert and empirical data for rare event frequency estimation: Pool homogenisation for empirical Bayes models

    International Nuclear Information System (INIS)

    Quigley, John; Hardman, Gavin; Bedford, Tim; Walls, Lesley

    2011-01-01

    Empirical Bayes provides one approach to estimating the frequency of rare events as a weighted average of the frequency of an event and that of a pool of events. The pool will draw upon, for example, events with similar precursors. The higher the degree of homogeneity of the pool, the more accurate the Empirical Bayes estimator will be. We propose and evaluate a new method using homogenisation factors under the assumption that events are generated from a Homogeneous Poisson Process. The homogenisation factors are scaling constants, which can be elicited through structured expert judgement and used to align the frequencies of different events, hence homogenising the pool. The estimation error relative to the homogeneity of the pool is examined theoretically, indicating that reduced error is associated with greater pool homogeneity. The effects of misspecified expert assessments of the homogenisation factors are examined theoretically and through simulation experiments. Our results show that the proposed Empirical Bayes method using homogenisation factors is robust under different degrees of misspecification.
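
    A minimal sketch of the idea, assuming a standard gamma-Poisson parametric Empirical Bayes setup (the paper's exact estimator is not reproduced here): elicited factors rescale each pooled event rate onto a common scale, a gamma prior is fitted to the homogenised pool by moment matching, and the target event's posterior mean is a shrinkage estimate. Counts, exposures, and factors are hypothetical:

```python
import numpy as np

def eb_rate(counts, exposures, factors, target=0):
    """Gamma-Poisson empirical Bayes for Homogeneous-Poisson-Process rates.
    Each pooled event class i is rescaled by an elicited homogenisation factor
    f_i so that f_i * lambda_i is comparable across the pool; a gamma prior is
    fitted to the homogenised rates by moment matching, and the posterior mean
    rate of the target event is returned."""
    counts, exposures, factors = map(np.asarray, (counts, exposures, factors))
    rates = factors * counts / exposures        # homogenised rate estimates
    m, v = rates.mean(), rates.var(ddof=1)
    b = m / v                                   # gamma prior rate parameter
    a = m * b                                   # gamma prior shape parameter
    t, x, f = exposures[target], counts[target], factors[target]
    # f * lambda ~ Gamma(a, b)  =>  lambda ~ Gamma(a, b * f); update with (x, t):
    return (a + x) / (b * f + t)

counts = [2, 5, 1, 0, 3]                        # hypothetical event counts
exposures = [100.0, 80.0, 120.0, 60.0, 90.0]    # observation times
factors = [1.0, 0.5, 2.0, 1.5, 1.0]             # elicited homogenisation factors
print(round(eb_rate(counts, exposures, factors), 4))
```

    The result is the target's raw rate (2/100 = 0.02) shrunk toward the homogenised pool mean, with the degree of shrinkage controlled by how concentrated the fitted prior is.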

  15. Sensitivities of surface wave velocities to the medium parameters in a radially anisotropic spherical Earth and inversion strategies

    Directory of Open Access Journals (Sweden)

    Sankar N. Bhattacharya

    2015-11-01

    Sensitivity kernels, or partial derivatives of phase velocity (c) and group velocity (U) with respect to medium parameters, are useful for interpreting a given set of observed surface wave velocity data. In addition to phase velocities, group velocities are also being observed to find the radial anisotropy of the crust and mantle. However, sensitivities of group velocity for a radially anisotropic Earth have rarely been studied. Here we show sensitivities of group velocity, along with those of phase velocity, to the medium parameters VSV, VSH, VPV, VPH, η and density in a radially anisotropic spherical Earth. The peak sensitivities for U are generally twice those for c; thus U is more efficient than c for exploring the anisotropic nature of the medium. Love waves depend mainly on VSH, while Rayleigh waves are nearly independent of it. The sensitivities show that there are trade-offs among these parameters during inversion, and there is a need to reduce the number of parameters to be evaluated independently. It is suggested to use a nonlinear inversion jointly for Rayleigh and Love waves; in such a nonlinear inversion, the best solutions are obtained among the model parameters within prescribed limits for each parameter. We first choose VSH, VSV and VPH within their corresponding limits; VPV and η can then be evaluated from empirical relations among the parameters. Density has a small effect on surface wave velocities and can be taken from other studies or from an empirical relation between density and average P-wave velocity.

  16. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    Science.gov (United States)

    Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.

    2017-12-01

    We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.

  17. A semi-empirical formula for total cross sections of electron scattering from diatomic molecules

    International Nuclear Information System (INIS)

    Liu Yufang; Sun Jinfeng; Henan Normal Univ., Xinxiang

    1996-01-01

    A fitting formula based on the Born approximation is used to fit the total cross sections for electron scattering by diatomic molecules (CO, N 2 , NO, O 2 and HCl) in the intermediate- and high-energy range. By analyzing the fitted parameters and the total cross sections, we found that the internuclear distance of the constituent atoms plays an important role in the e-diatomic molecule collision process. Thus a new semi-empirical formula has been obtained. There is no free parameter in the formula, and the dependence of the total cross sections on the internuclear distance has been reflected clearly. The total cross sections for electron scattering by CO, N 2 , NO, O 2 and HCl have been calculated over an incident energy range of 10-4000 eV. The results agree well with other available experimental and calculation data. (orig.)

  18. Penicillin as empirical therapy for patients hospitalised with community acquired pneumonia at a Danish hospital

    DEFF Research Database (Denmark)

    Kirk, O; Glenthøj, Jonathan Peter; Dragsted, Ulrik Bak

    2001-01-01

    and outcome parameters were collected. Three groups were established according to the initial choice of antibiotic(s): penicillin only (n = 160); non-allergic patients starting broader spectrum therapy (n = 54); and patients with suspected penicillin allergy (n = 29). RESULTS: The overall mortality within...... treated with penicillin monotherapy. No differences in clinical outcomes were documented between patients treated empirically with broad-spectrum therapy and penicillin monotherapy. Therefore, penicillin seems to be a reasonable first choice for initial therapy of HCAP in Denmark as in other regions...

  19. The H₂⁺ molecule in strong magnetic fields

    International Nuclear Information System (INIS)

    Melo, L.C. de; Das, T.K.; Ferreira, R.; Miranda, L.C.M.; Brandi, H.S.

    1976-01-01

    An LCAO-MO treatment of H₂⁺ based on hydrogen-like atomic orbitals is described. Trial wave functions are used to calculate the binding energy and potential curves of H₂⁺ in the presence of magnetic fields in the range of 10⁸-10¹⁰ G. (Author)

  20. Empirical Bolometric Fluxes and Angular Diameters of 1.6 Million Tycho-2 Stars and Radii of 350,000 Stars with Gaia DR1 Parallaxes

    Science.gov (United States)

    Stevens, Daniel J.; Stassun, Keivan G.; Gaudi, B. Scott

    2017-12-01

    We present bolometric fluxes and angular diameters for over 1.6 million stars in the Tycho-2 catalog, determined using previously determined empirical color-temperature and color-flux relations. We vet these relations via fits to the full broadband spectral energy distributions for a subset of benchmark stars and perform quality checks against the large set of stars for which spectroscopically determined parameters are available from LAMOST, RAVE, and/or APOGEE. We then estimate radii for the 355,502 Tycho-2 stars in our sample whose Gaia DR1 parallaxes are precise to ≲10%. For these stars, we achieve effective temperature, bolometric flux, and angular diameter uncertainties of order 1%-2% and radius uncertainties of order 8%, and we explore the effect that imposing spectroscopic effective temperature priors has on these uncertainties. These stellar parameters are shown to be reliable for stars with Teff ≲ 7000 K. The over half a million bolometric fluxes and angular diameters presented here will serve as an immediate trove of empirical stellar radii with the Gaia second data release, at which point effective temperature uncertainties will dominate the radius uncertainties. Already, dwarf, subgiant, and giant populations are readily identifiable in our purely empirical luminosity-effective temperature (theoretical) Hertzsprung-Russell diagrams.

  1. Empirical study of classification process for two-stage turbo air classifier in series

    Science.gov (United States)

    Yu, Yuan; Liu, Jiaxiang; Li, Gang

    2013-05-01

    The suitable process parameters for a two-stage turbo air classifier are important for obtaining ultrafine powder with a narrow particle-size distribution; however, little has been published internationally on the classification process for two-stage turbo air classifiers in series. The influence of the process parameters of a two-stage turbo air classifier in series on classification performance is empirically studied using aluminum oxide powders as the experimental material. The experimental results show the following: 1) When the rotor cage rotary speed of the first-stage classifier is increased from 2 300 r/min to 2 500 r/min with a constant rotor cage rotary speed of the second-stage classifier, classification precision is increased from 0.64 to 0.67. However, in this case, the final ultrafine powder yield is decreased from 79% to 74%, which means the classification precision and the final ultrafine powder yield can be regulated through adjusting the rotor cage rotary speed of the first-stage classifier. 2) When the rotor cage rotary speed of the second-stage classifier is increased from 2 500 r/min to 3 100 r/min with a constant rotor cage rotary speed of the first-stage classifier, the cut size is decreased from 13.16 μm to 8.76 μm, which means the cut size of the ultrafine powder can be regulated through adjusting the rotor cage rotary speed of the second-stage classifier. 3) When the feeding speed is increased from 35 kg/h to 50 kg/h, the "fish-hook" effect is strengthened, which makes the ultrafine powder yield decrease. 4) To weaken the "fish-hook" effect, the equalization of the two-stage wind speeds or the combination of a high first-stage wind speed with a low second-stage wind speed should be selected. This empirical study provides a criterion for process parameter configurations for a two-stage or multi-stage classifier in series, which offers a theoretical basis for practical production.

  2. EbayesThresh: R Programs for Empirical Bayes Thresholding

    Directory of Open Access Journals (Sweden)

    Iain Johnstone

    2005-04-01

    Suppose that a sequence of unknown parameters is observed subject to independent Gaussian noise. The EbayesThresh package in the S language implements a class of Empirical Bayes thresholding methods that can take advantage of possible sparsity in the sequence to improve the quality of estimation. The prior for each parameter in the sequence is a mixture of an atom of probability at zero and a heavy-tailed density. Within the package, this can be either a Laplace (double exponential) density or a mixture of normal distributions with tail behavior similar to the Cauchy distribution. The mixing weight, or sparsity parameter, is chosen automatically by marginal maximum likelihood. If estimation is carried out using the posterior median, this is a random thresholding procedure; the estimation can also be carried out using other thresholding rules with the same threshold, and the package provides the posterior mean, and hard and soft thresholding, as additional options. This paper reviews the method and gives details (far beyond those previously published) of the calculations needed to implement the procedures. It explains and motivates both the general methodology and the use of the EbayesThresh package, through simulated and real data examples. When estimating the wavelet transform of an unknown function, it is appropriate to apply the method level by level to the transform of the observed data. The package can carry out these calculations for wavelet transforms obtained using various packages in R and S-PLUS. Details, including a motivating example, are presented, and the application of the method to image estimation is also explored. The final topic considered is the estimation of a single sequence that may become progressively sparser along the sequence. An iterated least squares isotone regression method allows for the choice of a threshold that depends monotonically on the order in which the observations are made. An alternative
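
    The marginal-maximum-likelihood step can be sketched in Python (rather than the package's R/S code) for the spike-and-Laplace prior. This is a simplified illustration: it selects coefficients by posterior probability of being nonzero, whereas EbayesThresh's default rule is the posterior median; the scale a = 0.5 and the synthetic signal are assumptions:

```python
import numpy as np
from scipy.stats import norm

def laplace_marginal(x, a=0.5):
    """Marginal density of x = theta + N(0,1) when theta ~ Laplace with
    density (a/2) exp(-a|theta|):
    g(x) = (a/2) e^{a^2/2} [ e^{-a|x|} Phi(|x| - a) + e^{a|x|} Phi(-|x| - a) ]."""
    x = np.abs(x)                      # g is symmetric; also avoids overflow
    return (a / 2) * np.exp(a * a / 2) * (
        np.exp(-a * x) * norm.cdf(x - a) + np.exp(a * x) * norm.sf(x + a))

def fit_weight(x, a=0.5):
    """Choose the mixing weight w by marginal maximum likelihood on a grid."""
    grid = np.linspace(1e-3, 1 - 1e-3, 999)
    g, phi0 = laplace_marginal(x, a), norm.pdf(x)
    ll = [np.sum(np.log((1 - w) * phi0 + w * g)) for w in grid]
    return grid[int(np.argmax(ll))]

rng = np.random.default_rng(3)
theta = np.zeros(1000)
theta[:50] = rng.laplace(0, 2.0, 50)             # sparse signal, scale 1/a
x = theta + rng.standard_normal(1000)

w = fit_weight(x)
# Keep a coefficient only if its posterior probability of being nonzero > 1/2
# (a simplified thresholding rule; EbayesThresh uses the posterior median).
g = laplace_marginal(x)
post_nonzero = w * g / ((1 - w) * norm.pdf(x) + w * g)
kept = np.abs(x) * (post_nonzero > 0.5)
print(round(w, 2), int((kept > 0).sum()))
```

    With 5% of the coefficients truly nonzero, the estimated sparsity weight should come out near 0.05, and only the coefficients that clearly exceed the noise level survive the rule.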

  3. Cyriax's deep friction massage application parameters: Evidence from a cross-sectional study with physiotherapists.

    Science.gov (United States)

    Chaves, Paula; Simões, Daniela; Paço, Maria; Pinho, Francisco; Duarte, José Alberto; Ribeiro, Fernando

    2017-12-01

    Deep friction massage is one of several physiotherapy interventions suggested for the management of tendinopathy. The aims were to determine the prevalence of deep friction massage use in clinical practice, to characterize the application parameters used by physiotherapists, and to identify empirical model-based patterns of deep friction massage application in degenerative tendinopathy. The study was an observational, analytical, cross-sectional, national web-based survey. 478 physiotherapists were selected through a snowball sampling method. The participants completed an online questionnaire about personal and professional characteristics as well as specific questions regarding the use of deep friction massage. The deep friction massage parameters used by physiotherapists are presented as counts and proportions. Latent class analysis was used to identify the empirical model-based patterns. Crude and adjusted odds ratios and 95% confidence intervals were computed. The use of deep friction massage was reported by 88.1% of the participants; tendinopathy was the clinical condition where it was most frequently used (84.9%) and, of these, 55.9% reported its use in degenerative tendinopathy. The "duration of application" parameters in the chronic phase and the "frequency of application" parameters in the acute and chronic phases are those that diverge most from the recommendations of the author of deep friction massage. We found a high prevalence of deep friction massage use, namely in degenerative tendinopathy. Our results show that the application parameters are heterogeneous and diverse. This is reflected in the identification of two application patterns, although neither is in complete agreement with Cyriax's description. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Empirical Evidence from Kenya

    African Journals Online (AJOL)

    FIRST LADY

    2011-01-18

    Empirical results reveal that consumption of sugar in Kenya varies ... experiences in trade in different regions of the world. Some studies ... To assess the relationship between domestic sugar retail prices and sugar sales in ...

  5. On selecting a prior for the precision parameter of Dirichlet process mixture models

    Science.gov (United States)

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
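
    A common way to reason about a prior for the precision parameter uses the fact that, under a Dirichlet process, the expected number of clusters among n observations is E[K | α, n] = Σᵢ α/(α + i − 1). The sketch below (a generic calculation, not the paper's specific prior construction; the target of 5 clusters is hypothetical) inverts this relation to find the α matching a prior guess about clustering:

```python
def expected_clusters(alpha, n):
    """E[K | alpha, n] for a Dirichlet process: sum_{i=1}^{n} alpha / (alpha + i - 1)."""
    return sum(alpha / (alpha + i) for i in range(n))

def alpha_for_clusters(k_target, n, lo=1e-6, hi=1e6):
    """Bisection (on a log scale) for the precision alpha whose prior expected
    number of clusters among n observations equals k_target. Works because
    E[K | alpha, n] is monotonically increasing in alpha."""
    mid = lo
    for _ in range(200):
        mid = (lo * hi) ** 0.5
        if expected_clusters(mid, n) < k_target:
            lo = mid
        else:
            hi = mid
    return mid

n = 100
alpha = alpha_for_clusters(5.0, n)
print(round(alpha, 3), round(expected_clusters(alpha, n), 3))
```

    Since E[K | 1, 100] is the 100th harmonic number (about 5.19), the α matching an expected 5 clusters comes out just below 1; sensitivity of posterior clustering to this choice is exactly the issue the paper addresses.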

  6. The influence of hyper-parameters in the infinite relational model

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2016-01-01

    the importance of these priors for discovering latent clusters and for predicting links. We compare fixed symmetric priors and fixed asymmetric priors based on the empirical distribution of links with a Bayesian hierarchical approach where the parameters of the priors are inferred from data. On synthetic data......, we show that the hierarchical Bayesian approach can infer the prior distributions used to generate the data. On real network data we demonstrate that using asymmetric priors significantly improves predictive performance and heavily influences the number of extracted partitions....

  7. Empirical analysis of uranium spot prices

    International Nuclear Information System (INIS)

    Morman, M.R.

    1988-01-01

    The objective is to empirically test a market model of the uranium industry that incorporates the notion that, if the resource is viewed as an asset by economic agents, then its own rate of return, along with the own rate of return of a competing asset, would be a major factor in formulating the price of the resource. The model tested is based on a market model of supply and demand. The supply model incorporates the notion that the decision criterion used by uranium mine owners is to select the extraction rate that maximizes the net present value of their extraction receipts. The demand model uses a concept that allows for explicit recognition of the prospect of arbitrage between a natural-resource market and the market for other capital goods. The empirical approach used for estimation was a recursive, or causal, model. The empirical results were consistent with the theoretical models: the coefficients of the demand and supply equations had the appropriate signs. Tests for causality were conducted to validate the use of the causal model, and the results obtained were favorable. The implications of the findings for future studies of exhaustible resources are: (1) in some cases causal models are the appropriate specification for empirical analysis; (2) supply models should incorporate a measure to capture depletion effects

  8. Development and application of a three-parameter RK-PR equation of state

    DEFF Research Database (Denmark)

    Cismondi, Martin; Mollerup, Jørgen

    2005-01-01

    In this work, we confirm the somehow previously expressed but not widespread idea that the limitations of cubic equations of state like Soave-Redlich-Kwong equation (SRK) or Peng-Robinson equation (PR) are a consequence of their two-parameter density dependence rather than of their empiric......-PR) equation offers the best performance among cubic three-parameter density functionalities. A simple temperature dependence was developed and a straightforward parameterization procedure established. This simple - and optimized from pure compound data - three-parameter equation of state (3P-EoS) will allow...... in a later stage, by systematic study and comparison to other types of 3P-EoS, to find out what the actual possibilities and limitations of cubic EoS are in the modelling of phase equilibria for asymmetric systems. (c) 2005 Elsevier B.V. All rights reserved....

  9. Influence of Wire Electrical Discharge Machining (WEDM) process parameters on surface roughness

    Science.gov (United States)

    Yeakub Ali, Mohammad; Banu, Asfana; Abu Bakar, Mazilah

    2018-01-01

    In obtaining the best quality of engineering components, the surface quality of machined parts plays an important role: it improves the fatigue strength, wear resistance, and corrosion resistance of the workpiece. This paper investigates the effects of wire electrical discharge machining (WEDM) process parameters on the surface roughness of stainless steel, using distilled water as the dielectric fluid and brass wire as the tool electrode. The parameters selected are open voltage, wire speed, wire tension, voltage gap, and off time. An empirical model was developed for the estimation of surface roughness. The analysis revealed that off time has a major influence on surface roughness. The optimum machining parameters for minimum surface roughness were found to be a 10 V open voltage, 2.84 μs off time, 12 m/min wire speed, 6.3 N wire tension, and 54.91 V voltage gap.

  10. Critical Realism and Empirical Bioethics: A Methodological Exposition.

    Science.gov (United States)

    McKeown, Alex

    2017-09-01

    This paper shows how critical realism can be used to integrate empirical data and philosophical analysis within 'empirical bioethics'. The term empirical bioethics, whilst appearing oxymoronic, simply refers to an interdisciplinary approach to the resolution of practical ethical issues within the biological and life sciences, integrating social scientific, empirical data with philosophical analysis. It seeks to achieve a balanced form of ethical deliberation that is both logically rigorous and sensitive to context, to generate normative conclusions that are practically applicable to the problem, challenge, or dilemma. Since it incorporates both philosophical and social scientific components, empirical bioethics is a field that is consistent with the use of critical realism as a research methodology. The integration of philosophical and social scientific approaches to ethics has been beset with difficulties, not least because of the irreducibly normative, rather than descriptive, nature of ethical analysis and the contested relation between fact and value. However, given that facts about states of affairs inform potential courses of action and their consequences, there is a need to overcome these difficulties and successfully integrate data with theory. Previous approaches have been formulated to overcome obstacles in combining philosophical and social scientific perspectives in bioethical analysis; however each has shortcomings. As a mature interdisciplinary approach critical realism is well suited to empirical bioethics, although it has hitherto not been widely used. Here I show how it can be applied to this kind of research and explain how it represents an improvement on previous approaches.

  11. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  12. Empirical direction in design and analysis

    CERN Document Server

    Anderson, Norman H

    2001-01-01

    The goal of Norman H. Anderson's new book is to help students develop skills of scientific inference. To accomplish this he organized the book around the "Experimental Pyramid"--six levels that represent a hierarchy of considerations in empirical investigation: conceptual framework, phenomena, behavior, measurement, design, and statistical inference. To facilitate conceptual and empirical understanding, Anderson de-emphasizes computational formulas and null hypothesis testing. Other features include: *emphasis on visual inspection as a basic skill in experimental analysis to help student

  13. The parameters of the free ions Mn5+ and Fe6+

    International Nuclear Information System (INIS)

    Andreici, E L; Gruia, A S; Avram, N M

    2012-01-01

    The analysis of the behavior of iron-group ions in crystals using a free-ion Hamiltonian that involves terms with only three parameters (B, C and ξ) seems to be erroneous, since it is incapable of correctly predicting the levels of even a free ion. Such calculations may lead to erroneous conclusions concerning crystal-field effects and the electron-phonon interaction. In this paper, we present the results of a more exact calculation of the parameters and energy levels of the free ions Mn5+ and Fe6+ with the 3d2 configuration. In the single-configuration approximation, the effective Hamiltonian of the free ions takes into account not only the electrostatic and spin-orbit interactions, but also the relativistic ones (spin-spin, orbit-orbit and spin-other-orbit) and the linear correlation effect. For both free ions we have calculated the semi-empirical parameters included in the interaction Hamiltonian and the energy level scheme. The values of these parameters are obtained by fitting experimental data with the minimum rms error. The final results are discussed.

  14. Temporal variation and scaling of parameters for a monthly hydrologic model

    Science.gov (United States)

    Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang

    2018-03-01

    The temporal variation of model parameters is affected by catchment conditions and has a significant impact on hydrological simulation. This study evaluates the seasonality and downscaling of model parameters across time scales, based on monthly and mean annual water balance models sharing a common framework. Two parameters of the monthly model, k and m, are assumed to be time-variant across the months of the year. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and the frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is negative for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of k and m. Multiple linear regression is then used to fit the relationship between ε and the means and coefficients of variation of parameters k and m. Based on the empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The resulting model has lower NSEs than the model with time-variant k and m calibrated through SCE-UA, but for several study catchments it has higher NSEs than the model with constant parameters. The proposed method is feasible and provides a useful tool for the temporal scaling of model parameters.
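
    The regression step described above can be illustrated with a minimal sketch. The coefficient values, catchment sample, and noise level below are invented for illustration and are not taken from the MOPEX analysis; only the technique (multiple linear regression relating a mean-annual parameter to the mean and coefficient of variation of a monthly parameter) follows the abstract.

```python
import numpy as np

# Hypothetical illustration: relate a mean-annual parameter (epsilon) to the
# mean and coefficient of variation of a monthly parameter k across catchments.
rng = np.random.default_rng(0)
n_catchments = 121
k_mean = rng.uniform(0.2, 0.8, n_catchments)   # mean of monthly k per catchment
k_cv = rng.uniform(0.05, 0.4, n_catchments)    # coefficient of variation of k
# Synthetic "true" relation plus observation noise (coefficients are invented)
epsilon = 1.5 + 2.0 * k_mean - 0.8 * k_cv + rng.normal(0, 0.05, n_catchments)

# Multiple linear regression: epsilon ~ b0 + b1*k_mean + b2*k_cv
X = np.column_stack([np.ones(n_catchments), k_mean, k_cv])
beta, *_ = np.linalg.lstsq(X, epsilon, rcond=None)
print(beta)  # fitted [b0, b1, b2], close to the synthetic coefficients
```

Once such a relation is fitted, it can be inverted to downscale a known mean-annual ε to monthly parameter values, as the study does using the NDVI correlations.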

  15. Empirical Assessment of the Mean Block Volume of Rock Masses Intersected by Four Joint Sets

    Science.gov (United States)

    Morelli, Gian Luca

    2016-05-01

    The estimation of a representative value of the rock block volume (Vb) is of great interest for rock mass characterization purposes in rock engineering. However, while mathematical relationships to estimate this parameter precisely from the spacing of joints can be found in the literature for rock masses intersected by three dominant joint sets, corresponding relationships do not exist when more than three sets occur. In these cases, a consistent assessment of Vb can only be achieved by directly measuring the dimensions of several representative natural rock blocks in the field or by means of more sophisticated 3D numerical modeling approaches. In practice, however, Palmström's empirical relationship based on the volumetric joint count Jv and on a block shape factor β is commonly used, although it is strictly valid only for rock masses intersected by three joint sets. Starting from these considerations, the present paper investigates the reliability of a set of empirical relationships linking the block volume with the indexes most commonly used to characterize the degree of jointing in a rock mass (i.e. Jv and the mean value of the joint set spacings), specifically applicable to rock masses intersected by four sets of persistent discontinuities. Based on the analysis of artificial 3D block assemblies generated using the software AutoCAD, the most accurate best-fit regression has been found between the mean block volume (Vbm) of the tested rock mass samples and the geometric mean of the spacings of the joint sets delimiting the blocks, indicating this mean value as a promising parameter for the preliminary characterization of block size. Tests on field outcrops have demonstrated that the proposed empirical methodology can predict the mean block volume of multiple-set jointed rock masses with an accuracy acceptable for most practical rock engineering applications.
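
    The key quantity in the regression above is the geometric mean of the joint-set spacings. A minimal sketch of how it could be computed for four sets is given below; the spacing values are illustrative, and the cube of the geometric mean is used only as a naive first-order block-volume proxy (the paper's actual fitted regression coefficients are not reproduced here).

```python
import math

def geometric_mean(spacings):
    """Geometric mean of joint-set spacings (m)."""
    return math.prod(spacings) ** (1.0 / len(spacings))

# Four joint-set spacings in metres (illustrative values, not from the paper)
spacings = [0.4, 0.6, 0.9, 1.2]
s_g = geometric_mean(spacings)

# Naive first-order proxy for mean block volume: cube of the geometric mean
# spacing. The paper instead fits an empirical regression of Vbm against s_g.
v_b_proxy = s_g ** 3
print(s_g, v_b_proxy)
```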

  16. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    Science.gov (United States)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

    The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve an optimization goal such as the shortest duration, the smallest cost, or resource balance, the start and finish of all tasks must be arranged while satisfying the project's timing and resource constraints. In theory, the problem is NP-hard, and many model variants exist. Many combinatorial optimization problems, such as job shop scheduling and flow shop scheduling, are special cases of the RCPSP. The genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results, and many scholars have studied improved genetic algorithms that solve the RCPSP more efficiently and accurately. However, these studies do not optimize the main parameters of the genetic algorithm itself: the parameters are generally chosen empirically, which cannot guarantee that they are optimal. In this paper, we address this blind selection of parameters in the process of solving the RCPSP. We performed a sampling analysis, established a proxy (surrogate) model, and ultimately solved for the optimal parameters.

  17. An empirical method for approximating stream baseflow time series using groundwater table fluctuations

    Science.gov (United States)

    Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May

    2014-11-01

    Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically-meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP so that minimal data is required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in un-gauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.

  18. An empirical model for independent control of variable speed refrigeration system

    International Nuclear Information System (INIS)

    Li Hua; Jeong, Seok-Kwon; Yoon, Jung-In; You, Sam-Sang

    2008-01-01

    This paper deals with an empirical dynamic model for decoupling control of the variable speed refrigeration system (VSRS). To cope with the inherent complexity and nonlinearity of the system dynamics, the model parameters are first obtained from experimental data. In this study, the dynamics of the indoor temperature and the superheat are assumed to follow first-order models with time delay. As the compressor frequency and the opening angle of the electronic expansion valve vary, the indoor temperature and the superheat exhibit mutually interfering behavior in the VSRS. Thus, a decoupling model has been proposed to eliminate this interference. Finally, the experimental and simulation results indicate that the proposed model offers a more tractable means of describing the actual VSRS compared to other models currently available.

  19. Symbiotic empirical ethics: a practical methodology.

    Science.gov (United States)

    Frith, Lucy

    2012-05-01

    Like any discipline, bioethics is a developing field of academic inquiry; and recent trends in scholarship have been towards more engagement with empirical research. This 'empirical turn' has provoked extensive debate over how such 'descriptive' research carried out in the social sciences contributes to the distinctively normative aspect of bioethics. This paper will address this issue by developing a practical research methodology for the inclusion of data from social science studies into ethical deliberation. This methodology will be based on a naturalistic conception of ethical theory that sees practice as informing theory just as theory informs practice - the two are symbiotically related. From this engagement with practice, the ways that such theories need to be extended and developed can be determined. This is a practical methodology for integrating theory and practice that can be used in empirical studies, one that uses ethical theory both to explore the data and to draw normative conclusions. © 2010 Blackwell Publishing Ltd.

  20. Reframing Serial Murder Within Empirical Research.

    Science.gov (United States)

    Gurian, Elizabeth A

    2017-04-01

    Empirical research on serial murder is limited due to the lack of consensus on a definition, the continued use of primarily descriptive statistics, and linkage to popular culture depictions. These limitations also inhibit our understanding of these offenders and affect credibility in the field of research. Therefore, this comprehensive overview of a sample of 508 cases (738 total offenders, including partnered groups of two or more offenders) provides analyses of solo male, solo female, and partnered serial killers to elucidate statistical differences and similarities in offending and adjudication patterns among the three groups. This analysis of serial homicide offenders not only supports previous research on offending patterns present in the serial homicide literature but also reveals that empirically based analyses can enhance our understanding beyond traditional case studies and descriptive statistics. Further research based on these empirical analyses can aid in the development of more accurate classifications and definitions of serial murderers.

  1. Empirical knowledge in legislation and regulation : A decision making perspective

    NARCIS (Netherlands)

    Trautmann, S.T.

    2013-01-01

    This commentary considers the pros and cons of the empirical approach to legislation from the vantage point of empirical decision making research. It focuses on methodological aspects that are typically not considered by legal scholars. It points out weaknesses in the empirical approach that are

  2. Generalized least squares and empirical Bayes estimation in regional partial duration series index-flood modeling

    DEFF Research Database (Denmark)

    Madsen, Henrik; Rosbjerg, Dan

    1997-01-01

    A regional estimation procedure that combines the index-flood concept with an empirical Bayes method for inferring regional information is introduced. The model is based on the partial duration series approach with generalized Pareto (GP) distributed exceedances. The prior information of the model parameters is inferred from regional data using generalized least squares (GLS) regression. Two different Bayesian T-year event estimators are introduced: a linear estimator that requires only some moments of the prior distributions to be specified and a parametric estimator that is based on specified......

  3. Investigation of excitation functions using new evaluated empirical and semi-empirical systematic for 14-15 MeV (n, t) reaction cross sections

    International Nuclear Information System (INIS)

    Tel, E.; Aydin, E. G.; Aydin, A.; Kaplan, A.

    2007-01-01

    The hybrid reactor is a combination of the fusion and fission processes. In a fusion-fission hybrid reactor, tritium self-sufficiency must be maintained for a commercial power plant; for a self-sustaining (D-T) fusion driver, the tritium breeding ratio should be greater than 1.05. Working out the systematics of (n,t) reaction cross sections is of great importance for defining the character of the excitation function for a given reaction taking place on various nuclei at energies up to 20 MeV. In this study, we have investigated the effect of the asymmetry term on (n,t) reaction cross sections at 14-15 MeV incident neutron energy. We discuss the odd-even effect and the pairing effect, considering the binding-energy systematics of the nuclear shell model, for the new experimental data and the new (n,t) cross-section formulas developed by Tel et al. We have determined different parameter groups by classifying nuclei as even-even, even-odd and odd-even for the (n,t) reaction cross sections. Empirical formulas obtained by two-parameter fits for (n,t) reactions are given. All calculated results have been compared with the experimental data, and the results obtained with the new (n,t) cross-section formulas have been discussed and compared with the available experimental data
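
    A two-parameter fit of the kind mentioned above can be sketched as follows. Cross-section systematics of this type are commonly written as an exponential in the asymmetry parameter s = (N - Z)/A, sigma = a * exp(b * s), which linearizes under a logarithm. The data points below are synthetic, not the evaluated values from the paper, and the fitted a and b are therefore purely illustrative.

```python
import numpy as np

# Synthetic (asymmetry parameter, cross section in mb) pairs, for illustration only
s = np.array([0.02, 0.06, 0.10, 0.14, 0.18])       # s = (N - Z) / A
sigma = np.array([5.0, 2.6, 1.4, 0.75, 0.40])      # mb, invented values

# Linearize sigma = a * exp(b * s)  ->  ln(sigma) = ln(a) + b * s,
# then solve by ordinary least squares.
b, ln_a = np.polyfit(s, np.log(sigma), 1)
a = np.exp(ln_a)
print(a, b)  # fitted two-parameter systematics for the synthetic points
```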

  4. Empirical Green's function analysis: Taking the next step

    Science.gov (United States)

    Hough, S.E.

    1997-01-01

    An extension of the empirical Green's function (EGF) method is presented that involves determination of source parameters using standard EGF deconvolution, followed by inversion for a common attenuation parameter for a set of colocated events. Recordings of three or more colocated events can thus be used to constrain a single path attenuation estimate. I apply this method to recordings from the 1995-1996 Ridgecrest, California, earthquake sequence; I analyze four clusters consisting of 13 total events with magnitudes between 2.6 and 4.9. I first obtain corner frequencies, which are used to infer Brune stress drop estimates. I obtain stress drop values of 0.3-53 MPa (with all but one between 0.3 and 11 MPa), with no resolved increase of stress drop with moment. With the corner frequencies constrained, the inferred attenuation parameters are very consistent; they imply an average shear wave quality factor of approximately 20-25 for alluvial sediments within the Indian Wells Valley. Although the resultant spectral fitting (using corner frequency and κ) is good, the residuals are consistent among the clusters analyzed. Their spectral shape is similar to the theoretical one-dimensional response of a layered low-velocity structure in the valley (an absolute site response cannot be determined by this method, because of an ambiguity between absolute response and source spectral amplitudes). I show that even this subtle site response can significantly bias estimates of corner frequency and κ, if it is ignored in an inversion for only source and path effects. The multiple-EGF method presented in this paper is analogous to a joint inversion for source, path, and site effects; the use of colocated sets of earthquakes appears to offer significant advantages in improving resolution of all three estimates, especially if data are from a single site or sites with similar site response.
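
    The corner-frequency-to-stress-drop step above follows the standard Brune (1970) circular-source relations: source radius r = 2.34 * beta / (2 * pi * fc) and static stress drop 7 * M0 / (16 * r^3). A minimal sketch, with event parameters that are illustrative rather than taken from the Ridgecrest data set:

```python
import math

def brune_source_radius(fc_hz, beta_m_s):
    """Brune (1970) source radius (m) from corner frequency and shear velocity."""
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def brune_stress_drop(m0_nm, fc_hz, beta_m_s):
    """Static stress drop (Pa) for a circular crack: 7*M0 / (16*r^3)."""
    r = brune_source_radius(fc_hz, beta_m_s)
    return 7.0 * m0_nm / (16.0 * r ** 3)

# Illustrative M ~3 event (assumed values, not from the paper):
m0 = 3.5e13      # seismic moment, N*m
fc = 5.0         # corner frequency, Hz
beta = 3500.0    # shear-wave velocity, m/s
print(brune_stress_drop(m0, fc, beta) / 1e6, "MPa")  # sub-MPa for these inputs
```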

  5. Parameterization of water vapor using high-resolution GPS data and empirical models

    Science.gov (United States)

    Ningombam, Shantikumar S.; Jade, Sridevi; Shrungeshwara, T. S.

    2018-03-01

    The present work evaluates eleven existing empirical models for estimating Precipitable Water Vapor (PWV) over a high-altitude (4500 m amsl), cold-desert environment. These models have been tested extensively and used globally to estimate PWV for low-altitude sites (below 1000 m amsl). The moist parameters used in the models are the water vapor scale height (Hc), dew point temperature (Td) and water vapor pressure (Es0), derived from surface air temperature and relative humidity measured at high temporal resolution by an automated weather station. The performance of these models is examined statistically against observed high-resolution GPS (GPSPWV) data over the region (2005-2012). The correlation coefficient (R) between the observed GPSPWV and model PWV is 0.98 for daily data and varies diurnally from 0.93 to 0.97. The parameterization of the moisture parameters was studied in depth (from 2 h to monthly time scales) using GPSPWV, Td, and Es0. The slope of the linear relationship between GPSPWV and Td varies from 0.073 °C⁻¹ to 0.106 °C⁻¹ (R: 0.83 to 0.97), while that between GPSPWV and Es0 varies from 1.688 to 2.209 (R: 0.95 to 0.99) at daily, monthly and diurnal time scales. In addition, the moist parameters for the cold-desert, high-altitude environment are examined in depth at various time scales during 2005-2012.

  6. Essays in empirical microeconomics

    NARCIS (Netherlands)

    Péter, A.N.

    2016-01-01

    The empirical studies in this thesis investigate various factors that could affect individuals' labor market, family formation and educational outcomes. Chapter 2 focuses on scheduling as a potential determinant of individuals' productivity. Chapter 3 looks at the role of a family factor on

  7. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

    Full Text Available Credit scoring is a term for a wide spectrum of predictive models, and their underlying techniques, that aid financial institutions in granting credit. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini coefficient, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretisation of the data into bins using deciles, in which case one constraint must be met: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are practical procedures for preserving finite results. As an alternative to the empirical estimates, one can use kernel smoothing theory, which allows unknown densities to be estimated and consequently, using some numerical method for integration, the Information value itself. The main contribution of this paper is the proposal and description of an empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k observations of scores of both good and bad clients in each considered interval, where k is a positive integer. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows a strong dependency on the choice of the parameter k: if we choose too small a value, we get an overestimated Information value, and vice versa. An adjusted square root of the number of bad clients seems to be a reasonable compromise.
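
    The decile-based empirical estimate discussed above can be sketched as follows. The function name and the synthetic score distributions are ours; the computation is the standard binned Information value, IV = sum over bins of (good% - bad%) * ln(good% / bad%), with the nonzero-count constraint the abstract mentions handled by skipping empty cells.

```python
import numpy as np

def information_value(scores, labels, n_bins=10):
    """Empirical Information Value with equal-frequency (decile) binning.

    scores: model scores; labels: 0 = good client, 1 = bad client.
    Bins where either class is empty are skipped, since the log term
    would otherwise be undefined (the constraint discussed in the paper).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    edges = np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    iv = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (scores > lo) & (scores <= hi)
        good = np.sum(in_bin & (labels == 0)) / np.sum(labels == 0)
        bad = np.sum(in_bin & (labels == 1)) / np.sum(labels == 1)
        if good > 0 and bad > 0:
            iv += (good - bad) * np.log(good / bad)
    return iv

# Synthetic portfolio: good clients score higher than bad ones on average.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(1.0, 1.0, 5000),   # goods
                         rng.normal(0.0, 1.0, 1000)])  # bads
labels = np.concatenate([np.zeros(5000), np.ones(1000)])
print(information_value(scores, labels))
```

For two unit-variance normals separated by one standard deviation, the continuous divergence is 1.0, so the binned estimate should land somewhat below that.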

  8. Empires, Exceptions, and Anglo-Saxons: Race and Rule between the British and United States Empires, 1880-1910. Teaching the JAH.

    Science.gov (United States)

    OAH Magazine of History, 2002

    2002-01-01

    Summarizes a teaching document that is part of "Teaching the JAH" (Journal of American History), which corresponds to the article "Empires, Exceptions, and Anglo-Saxons: Race and Rule between the British and United States Empires, 1880-1910" (Paul A. Kramer). Provides the Web site address for the complete installment. (CMK)

  9. Evaluation of theoretical and empirical water vapor sorption isotherm models for soils

    Science.gov (United States)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.

    2016-01-01

    The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models were previously proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93 based on measured data of 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While in general, all investigated models described measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models and due to the different degrees of freedom of the model equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters and clay content and subsequent model application for prediction of measured isotherms showed promise for the majority of investigated soils, for soils with distinct kaolinitic and smectitic clay mineralogy predicted isotherms did not closely match the measurements.
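
    One physically based isotherm model widely applied to such sorption data is the Guggenheim-Anderson-de Boer (GAB) equation (this record does not state whether it is among the nine models evaluated). A minimal sketch of evaluating it over the water activity range studied, with parameter values that are assumed for illustration rather than fitted to any of the 207 soils:

```python
def gab_water_content(aw, wm, c, k):
    """GAB isotherm: gravimetric water content w as a function of water activity aw.

    wm: monolayer water content, c: Guggenheim constant, k: multilayer factor.
    """
    return (wm * c * k * aw) / ((1.0 - k * aw) * (1.0 - k * aw + c * k * aw))

# Illustrative parameters for a clayey soil (assumed values, not fitted data)
wm, c, k = 0.03, 12.0, 0.75
for aw in (0.03, 0.3, 0.6, 0.93):
    print(f"aw = {aw:.2f}  w = {gab_water_content(aw, wm, c, k):.4f}")
```

In practice the three parameters would be fitted to measured adsorption or desorption points, e.g. by nonlinear least squares.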

  10. Towards a single empirical correlation to predict kLa across scales and processes

    DEFF Research Database (Denmark)

    Quintanilla Hernandez, Daniela Alejandra; Gernaey, Krist; Albæk, Mads O.

    Mathematical models are increasingly used in fermentation. Nevertheless, one of the major limitations of these models is that the parameters they include are process specific, e.g. the volumetric mass transfer coefficient (kLa). Oxygen transfer was studied in order to establish a single equation...... different calculations of the average shear rate. The experimental kLa value was determined with the direct method; however, eight variations of its calculation were evaluated. Several simple correlations were fitted to the measured kLa data. The standard empirical equation was found to be best...... scales using on-line viscosity measurements. A single correlation for all processes and all scales could not be established....

  11. Advancing Empirical Scholarship to Further Develop Evaluation Theory and Practice

    Science.gov (United States)

    Christie, Christina A.

    2011-01-01

    Good theory development is grounded in empirical inquiry. In the context of educational evaluation, the development of empirically grounded theory has important benefits for the field and the practitioner. In particular, a shift to empirically derived theory will assist in advancing more systematic and contextually relevant evaluation practice, as…

  12. Modeling Parameters of Reliability of Technological Processes of Hydrocarbon Pipeline Transportation

    Directory of Open Access Journals (Sweden)

    Shalay Viktor

    2016-01-01

    Full Text Available On the basis of methods of system analysis and parametric reliability theory, mathematical modeling of the operation of oil and gas equipment was conducted for reliability monitoring according to dispatching data. To check the goodness of fit of the empirical distributions, an algorithm and mathematical methods of analysis were worked out for on-line use under changing operating conditions. An analysis is made of the cause-and-effect mechanism relating the key factors to the changing parameters of the technical systems of oil and gas facilities, and the basic types of distributions of the technical parameters are defined. The adequacy of the assumed distribution type for the analyzed parameters is evaluated using the Kolmogorov criterion, as the most universal, accurate and adequate test for distributions of continuous processes in complex multi-component technical systems. Calculation methods are provided for supervision by independent bodies for risk assessment and facility safety.

  13. Carbon 13 nuclear magnetic resonance chemical shifts empiric calculations of polymers by multi linear regression and molecular modeling

    International Nuclear Information System (INIS)

    Da Silva Pinto, P.S.; Eustache, R.P.; Audenaert, M.; Bernassau, J.M.

    1996-01-01

    This work deals with empirical calculation of carbon-13 nuclear magnetic resonance chemical shifts by multilinear regression and molecular modeling. Multilinear regression is one way to obtain an equation describing the behaviour of the chemical shift for the molecules in the data base (rigid molecules with carbons). The methodology consists of defining structural descriptor parameters that can be related to the known carbon-13 chemical shifts of these molecules. Linear regression is then used to determine the significant parameters of the equation, which can be extrapolated to molecules that resemble those of the data base. (O.L.). 20 refs., 4 figs., 1 tab

  14. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.

    2014-12-01

    In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
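
    The core empirical interpolation idea described above (sample a nonlinear function a limited number of times, then recover full evaluations by solving a small linear system) can be sketched with the standard discrete empirical interpolation (DEIM) construction. This is not the paper's multiscale variant; the test function, grid sizes, and helper names below are invented for illustration.

```python
import numpy as np

def deim_points(U):
    """Greedy selection of interpolation indices for a POD basis U (n x m)."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Residual of the j-th basis vector w.r.t. interpolation on current points
        c = np.linalg.solve(U[p][:, :j], U[p, j])
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Snapshots of a parameterized nonlinear function on a fine grid (offline stage)
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(1.0, 3.0, 30)
S = np.array([np.exp(-mu * x) * np.sin(4.0 * x) for mu in mus]).T   # 200 x 30

# POD basis from the snapshot matrix, truncated to m modes
Uf, _, _ = np.linalg.svd(S, full_matrices=False)
m = 6
U = Uf[:, :m]
p = deim_points(U)

# Online stage: approximate a new evaluation from only its m sampled entries
f_new = np.exp(-2.2 * x) * np.sin(4.0 * x)
c = np.linalg.solve(U[p], f_new[p])          # small m x m system
err = np.linalg.norm(f_new - U @ c) / np.linalg.norm(f_new)
print(p, err)  # relative error should be small for this smooth family
```

The full function is thus recovered from m = 6 samples instead of all 200 grid values, which mirrors the cost reduction the paper exploits for residual and Jacobian evaluations.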

  15. Empirical research through design

    NARCIS (Netherlands)

    Keyson, D.V.; Bruns, M.

    2009-01-01

    This paper describes the empirical research through design method (ERDM), which differs from current approaches to research through design by enforcing the need for the designer, after a series of pilot prototype-based studies, to a priori develop a number of testable interaction design hypotheses

  16. Empirically sampling Universal Dependencies

    DEFF Research Database (Denmark)

    Schluter, Natalie; Agic, Zeljko

    2017-01-01

    Universal Dependencies incur a high cost in computation for unbiased system development. We propose a 100% empirically chosen small subset of UD languages for efficient parsing system development. The technique used is based on measurements of model capacity globally. We show that the diversity o...

  17. Evaluation of empirical atmospheric diffusion data

    International Nuclear Information System (INIS)

    Horst, T.W.; Doran, J.C.; Nickola, P.W.

    1979-10-01

    A study has been made of atmospheric diffusion over level, homogeneous terrain of contaminants released from non-buoyant point sources up to 100 m in height. Current theories of diffusion are compared to empirical diffusion data, and specific dispersion estimation techniques are recommended which can be implemented with the on-site meteorological instrumentation required by the Nuclear Regulatory Commission. A comparison of both the recommended diffusion model and the NRC diffusion model with the empirical data demonstrates that the predictions of the recommended model have both smaller scatter and less bias, particularly for ground-level sources
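The kind of dispersion estimate being evaluated can be illustrated with a textbook Gaussian plume formula. The dispersion coefficients sigma_y and sigma_z would come from empirical stability-class curves at a given downwind distance (those curves are what such reports assess); the values below are purely illustrative:

```python
import math

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume with total ground reflection for a point source.
    Q: emission rate, u: wind speed, H: effective release height;
    sigma_y/sigma_z: empirical dispersion coefficients at the downwind
    distance of interest (from stability-class curves)."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

Comparing such model predictions against tracer measurements, as the report does, amounts to checking how well the chosen sigma curves reproduce the observed spread.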

  18. Segmentation-free empirical beam hardening correction for CT

    Energy Technology Data Exchange (ETDEWEB)

    Schüller, Sören; Sawall, Stefan [German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg 69120 (Germany); Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich [Sirona Dental Systems GmbH, Fabrikstraße 31, 64625 Bensheim (Germany); Kachelrieß, Marc, E-mail: marc.kachelriess@dkfz.de [German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)

    2015-02-15

    Purpose: The polychromatic nature of x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for general cupping, methods like water precorrection exist; they correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other correction techniques. If only the information of a single energy scan is used, there are two types of corrections. The first is a physical approach, whereby artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions may include the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. The second is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish image quality also disturb correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed so that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the

  19. Segmentation-free empirical beam hardening correction for CT.

    Science.gov (United States)

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc

    2015-02-01

    The polychromatic nature of x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for general cupping, methods like water precorrection exist; they correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other correction techniques. If only the information of a single energy scan is used, there are two types of corrections. The first is a physical approach, whereby artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions may include the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. The second is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish image quality also disturb correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed so that no additional calibration or parameter fitting is needed. To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed

  20. Who supported the Deutsche Bundesbank? An empirical investigation

    NARCIS (Netherlands)

    Maier, P; Knaap, T

    2002-01-01

    The relevance of public support for monetary policy has largely been overlooked in the empirical Central Bank literature. We have constructed a new indicator for the support of the German Bundesbank and present descriptive and empirical evidence. We find that major German interest groups were quite

  1. The relative effectiveness of empirical and physical models for simulating the dense undercurrent of pyroclastic flows under different emplacement conditions

    Science.gov (United States)

    Ogburn, Sarah E.; Calder, Eliza S

    2017-01-01

    High concentration pyroclastic density currents (PDCs) are hot avalanches of volcanic rock and gas and are among the most destructive volcanic hazards due to their speed and mobility. Mitigating the risk associated with these flows depends upon accurate forecasting of possible impacted areas, often using empirical or physical models. TITAN2D, VolcFlow, LAHARZ, and ΔH/L or energy cone models each employ different rheologies or empirical relationships and therefore differ in appropriateness of application for different types of mass flows and topographic environments. This work seeks to test different statistically- and physically-based models against a range of PDCs of different volumes, emplaced under different conditions, over different topography in order to test the relative effectiveness, operational aspects, and ultimately, the utility of each model for use in hazard assessments. The purpose of this work is not to rank models, but rather to understand the extent to which the different modeling approaches can replicate reality in certain conditions, and to explore the dynamics of PDCs themselves. In this work, these models are used to recreate the inundation areas of the dense-basal undercurrent of all 13 mapped, land-confined, Soufrière Hills Volcano dome-collapse PDCs emplaced from 1996 to 2010 to test the relative effectiveness of different computational models. Best-fit model results and their input parameters are compared with results using observation- and deposit-derived input parameters. Additional comparison is made between best-fit model results and those using empirically-derived input parameters from the FlowDat global database, which represent “forward” modeling simulations as would be completed for hazard assessment purposes. Results indicate that TITAN2D is able to reproduce inundated areas well using flux sources, although velocities are often unrealistically high. VolcFlow is also able to replicate flow runout well, but does not capture

  2. Determination of capacity of single-toggle jaw crusher, taking into account parameters of kinematics of its working mechanism

    Science.gov (United States)

    Golikov, N. S.; Timofeev, I. P.

    2018-05-01

    Increasing the efficiency of jaw crushers is made possible by establishing rational kinematics and stiffening the elements of the machine. Establishing rational kinematics requires a connection between the operating-mode parameters of the crusher and its technical characteristics, and the main purpose of this research is to establish such a connection. This article therefore presents an analytical procedure for relating the operating-mode parameters of the crusher to its capacity. Theoretical, empirical and semi-empirical methods of capacity determination of a single-toggle jaw crusher are given, taking into account the physico-mechanical properties of the crushed material and the kinematics of the working mechanism. When developing the mathematical model, the method of closed vector polygons by V. A. Zinoviev was used. The expressions obtained in the article give an opportunity to solve important scientific and technical problems connected with finding the rational kinematics of the jaw crusher mechanism, carrying out a comparative assessment of different crushers, and giving recommendations about updating the available jaw crushers.

  3. An Empirical Analysis of the Performance of Preconditioners for SPD Systems

    KAUST Repository

    George, Thomas

    2012-08-01

    Preconditioned iterative solvers have the potential to solve very large sparse linear systems with a fraction of the memory used by direct methods. However, the effectiveness and performance of most preconditioners is not only problem dependent, but also fairly sensitive to the choice of their tunable parameters. As a result, a typical practitioner is faced with an overwhelming number of choices of solvers, preconditioners, and their parameters. The diversity of preconditioners makes it difficult to analyze them in a unified theoretical model. A systematic empirical evaluation of existing preconditioned iterative solvers can help in identifying the relative advantages of various implementations. We present the results of a comprehensive experimental study of the most popular preconditioner and iterative solver combinations for symmetric positive-definite systems. We introduce a methodology for a rigorous comparative evaluation of various preconditioners, including the use of some simple but powerful metrics. The detailed comparison of various preconditioner implementations and a state-of-the-art direct solver gives interesting insights into their relative strengths and weaknesses. We believe that these results would be useful to researchers developing preconditioners and iterative solvers as well as practitioners looking for appropriate sparse solvers for their applications. © 2012 ACM.
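One of the simplest preconditioner/solver combinations covered by such a study, conjugate gradients with a Jacobi (diagonal) preconditioner for SPD systems, can be sketched as follows (the test matrix is synthetic):

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients for SPD A with a Jacobi
    preconditioner M = diag(A); stops when the residual norm drops below tol."""
    Minv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r          # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Synthetic well-conditioned SPD system for demonstration.
rng = np.random.default_rng(0)
B = rng.standard_normal((30, 30))
A = B @ B.T + 30.0 * np.eye(30)
b = rng.standard_normal(30)
x = pcg_jacobi(A, b)
```

The tunable quantities the abstract refers to (stopping tolerance, iteration cap, choice of M) are exactly the parameters whose sensitivity such a benchmark measures.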

  4. a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    Science.gov (United States)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite images such as Landsat 8 OLI. However, as more and more image data become available from various sensors, we sometimes cannot obtain the accurate sensor calibration parameters and atmospheric conditions needed by a physics-based topographic correction model. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images without accurate calibration parameters. Based on this model we can obtain topographically corrected surface reflectance from DN data, and we tested and verified the model with image data from the Chinese satellites HJ and GF. The results show that the correlation factor was reduced by almost 85 % for the near-infrared bands and the overall classification accuracy increased by 14 % after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
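The semi-empirical family of corrections referred to here can be illustrated with the classic C-correction, which needs only the image and the illumination geometry, not sensor calibration or atmospheric parameters (this is a generic sketch with synthetic values, not the paper's specific model):

```python
import numpy as np

def c_correction(refl, cos_i, cos_sz):
    """Semi-empirical C-correction: regress reflectance on the cosine of the
    local illumination angle cos_i, then normalize each pixel to flat-terrain
    illumination cos_sz (cosine of the solar zenith angle)."""
    slope, intercept = np.polyfit(cos_i, refl, 1)
    c = intercept / slope  # the empirical "C" parameter
    return refl * (cos_sz + c) / (cos_i + c)

# Synthetic scene: reflectance varies linearly with illumination over the slopes.
cos_i = np.linspace(0.3, 0.95, 50)   # per-pixel cosine of local illumination angle
refl = 0.4 * cos_i + 0.05            # uncorrected, illumination-dependent reflectance
flat = c_correction(refl, cos_i, cos_sz=0.8)
```

For this perfectly linear synthetic scene the corrected reflectance is constant, i.e. the dependence on slope orientation is removed, which is the effect the abstract reports for sunlit versus shaded slopes.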

  5. Number of independent parameters in the potentiometric titration of humic substances.

    Science.gov (United States)

    Lenoir, Thomas; Manceau, Alain

    2010-03-16

    With the advent of high-precision automatic titrators operating in pH stat mode, measuring the mass balance of protons in solid-solution mixtures against the pH of natural and synthetic polyelectrolytes is now routine. However, titration curves of complex molecules typically lack obvious inflection points, which complicates their analysis despite the high-precision measurements. The calculation of site densities and median proton affinity constants (pK) from such data can lead to considerable covariance between fit parameters. Knowing the number of independent parameters that can be freely varied during the least-squares minimization of a model fit to titration data is necessary to improve the model's applicability. This number was calculated for natural organic matter by applying principal component analysis (PCA) to a reference data set of 47 independent titration curves from fulvic and humic acids measured at I = 0.1 M. The complete data set was reconstructed statistically from pH 3.5 to 9.8 with only six parameters, compared to seven or eight generally adjusted with common semi-empirical speciation models for organic matter, and explains correlations that occur with the higher number of parameters. Existing proton-binding models are not necessarily overparametrized, but instead titration data lack the sensitivity needed to quantify the full set of binding properties of humic materials. Model-independent conditional pK values can be obtained directly from the derivative of titration data, and this approach is the most conservative. The apparent proton-binding constants of the 23 fulvic acids (FA) and 24 humic acids (HA) derived from a high-quality polynomial parametrization of the data set are pK(H,COOH)(FA) = 4.18 ± 0.21, pK(H,Ph-OH)(FA) = 9.29 ± 0.33, pK(H,COOH)(HA) = 4.49 ± 0.18, and pK(H,Ph-OH)(HA) = 9.29 ± 0.38. Their values at other ionic strengths are more reliably calculated with the empirical Davies equation than any existing model fit.
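The derivative approach mentioned above, reading a conditional pK directly from the titration derivative, can be sketched on a synthetic monoprotic site, where the buffering maximum falls exactly at pH = pK (the site and pK value are invented for illustration):

```python
import numpy as np

pK_true = 4.5
pH = np.linspace(3.0, 7.0, 401)
# Deprotonated fraction of a single acidic site (ideal Henderson-Hasselbalch).
theta = 1.0 / (1.0 + 10.0 ** (pK_true - pH))
# The buffering capacity d(theta)/d(pH) peaks at pH = pK, so the conditional
# pK can be read off the derivative of the titration curve without any model fit.
pK_est = pH[np.argmax(np.gradient(theta, pH))]
```

Real humic-substance curves superpose many such sites, which is why the derivative gives only conditional (apparent) constants rather than a full site distribution.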

  6. Goodness! The empirical turn in health care ethics

    NARCIS (Netherlands)

    Willems, D.; Pols, J.

    2010-01-01

    This paper is intended to encourage scholars to submit papers for a symposium and the next special issue of Medische Antropologie which will be on empirical studies of normative questions. We describe the ‘empirical turn’ in medical ethics. Medical ethics and bioethics in general have witnessed a

  7. Learning to Read Empirical Articles in General Psychology

    Science.gov (United States)

    Sego, Sandra A.; Stuart, Anne E.

    2016-01-01

    Many students, particularly underprepared students, struggle to identify the essential information in empirical articles. We describe a set of assignments for instructing general psychology students to dissect the structure of such articles. Students in General Psychology I read empirical articles and answered a set of general, factual questions…

  8. Principles Involving Marketing Policies: An Empirical Assessment

    OpenAIRE

    JS Armstrong; Randall L. Schultz

    2005-01-01

    We examined nine marketing textbooks, published since 1927, to see if they contained useful marketing principles. Four doctoral students found 566 normative statements about pricing, product, place, or promotion in these texts. None of these statements were supported by empirical evidence. Four raters agreed on only twenty of these 566 statements as providing meaningful principles. Twenty marketing professors rated whether the twenty meaningful principles were correct, supported by empirical...

  9. Effects of Raindrop Shape Parameter on the Simulation of Plum Rains

    Science.gov (United States)

    Mei, H.; Zhou, L.; Li, X.; Huang, X.; Guo, W.

    2017-12-01

    The raindrop shape parameter of the particle size distribution is generally set constant in double-moment bulk microphysics schemes (DBMSs) using a Gamma distribution function, although observations suggest large variations in time and space. Based on the Milbrandt 2-moment (MY) DBMS, four cases during the Plum Rains season are simulated, coupled with four empirical relationships between the shape parameter (μr) and the slope parameter of the raindrop distribution derived from observations of raindrop distributions. The model results suggest that μr has some influence on rainfall. Introducing diagnostic formulas for μr may reduce systematic biases in 24-h accumulated rainfall and shows some ability to correct local characteristics of the rainfall distribution. Besides, the tendency to improve strong rainfall can be sensitive to μr. With the empirically diagnosed μr, μr generally increases in the middle and lower troposphere and decreases with stronger rainfall. It is concluded that the decline in raindrop water content and the increase in raindrop mass-weighted terminal velocity directly related to μr are the direct causes of the variations in precipitation. On the other hand, the environmental conditions, including relative humidity and dynamical parameters, are the key indirect causes, closely related to the changes in cloud particles and rainfall distributions. Furthermore, the differences in the degree of improvement between weak and heavy rainfall mainly come from the distinct response features of their respective variable fields. The extent of variation in the features of cloud particles in warm clouds of heavy rainfall differs greatly from that of weak rainfall, though they share the same trend of variation. Under weak-rainfall conditions, the response of physical characteristics to μr shows consistent trends and some linear features

  10. Error estimation and global fitting in transverse-relaxation dispersion experiments to determine chemical-exchange parameters

    International Nuclear Information System (INIS)

    Ishima, Rieko; Torchia, Dennis A.

    2005-01-01

    Off-resonance effects can introduce significant systematic errors in R₂ measurements in constant-time Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation dispersion experiments. For an off-resonance chemical shift of 500 Hz, ¹⁵N relaxation dispersion profiles obtained from experiment and computer simulation indicated a systematic error of ca. 3%. This error is three- to five-fold larger than the random error in R₂ caused by noise. Good estimates of the total R₂ uncertainty are critical in order to obtain accurate estimates of optimized chemical exchange parameters and their uncertainties derived from χ² minimization of a target function. Here, we present a simple empirical approach that provides a good estimate of the total error (systematic + random) in ¹⁵N R₂ values measured for the HIV protease. The advantage of this empirical error estimate is that it is applicable even when some of the factors that contribute to the off-resonance error are not known. These errors are incorporated into a χ² minimization protocol, in which the Carver-Richards equation is used to fit the observed R₂ dispersion profiles, that yields optimized chemical exchange parameters and their confidence limits. Optimized parameters are also derived, using the same protein sample and data-fitting protocol, from ¹H R₂ measurements in which systematic errors are negligible. Although ¹H and ¹⁵N relaxation profiles of individual residues were well fit, the optimized exchange parameters had large uncertainties (confidence limits). In contrast, when a single pair of exchange parameters (the exchange lifetime, τex, and the fractional population, pa) were constrained to globally fit all R₂ profiles for residues in the dimer interface of the protein, confidence limits were less than 8% for all optimized exchange parameters. In addition, F-tests showed that the quality of the fits obtained using τex and pa as global parameters was not improved when these parameters were free to fit the R

  11. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    Directory of Open Access Journals (Sweden)

    L. A. Bastidas

    2016-09-01

    Full Text Available Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991), utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
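The screening idea behind such a sensitivity analysis can be sketched with a simple one-at-a-time perturbation scheme (far simpler than a full variance-based analysis over 11 parameters; the model and parameter names below are invented stand-ins, not Delft3D inputs):

```python
def oat_sensitivity(model, base, deltas):
    """One-at-a-time screening: perturb each parameter about a base point
    and return the finite-difference effect on the model output."""
    y0 = model(base)
    effects = {}
    for name, d in deltas.items():
        p = dict(base)
        p[name] += d
        effects[name] = (model(p) - y0) / d
    return effects

# Toy surrogate "surge model": sensitive to wind drag, insensitive to eddy viscosity.
surrogate = lambda p: 3.0 * p["wind_drag"] + 0.0 * p["eddy_viscosity"]
base = {"wind_drag": 1.0, "eddy_viscosity": 2.0}
effects = oat_sensitivity(surrogate, base, {"wind_drag": 0.1, "eddy_viscosity": 0.1})
```

Parameters whose effects are negligible across the perturbation range are the ones a study like this can fix, shrinking the dimensionality of the probabilistic hazard runs.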

  12. Gravitation Theory: Empirical Status from Solar System Experiments: All observations to date are consistent with Einstein's general relativity theory of gravity.

    Science.gov (United States)

    Nordtvedt, K L

    1972-12-15

    I have reviewed the historical and contemporary experiments that guide us in choosing a post-Newtonian, relativistic gravitational theory. The foundation experiments essentially constrain gravitation theory to be a metric theory in which matter couples solely to one gravitational field, the metric field, although other cosmological gravitational fields may exist. The metric field for any metric theory can be specified (for the solar system, for our present purposes) by a series of potential terms with several parameters. A variety of experiments specify (or put limits on) the numerical values of the seven parameters in the post-Newtonian metric field, and other such experiments have been planned. The empirical results, to date, yield values of the parameters that are consistent with the predictions of Einstein's general relativity.

  13. Nonlinear soil parameter effects on dynamic embedment of offshore pipeline on soft clay

    Directory of Open Access Journals (Sweden)

    Su Young Yu

    2015-03-01

    Full Text Available In this paper, the effects of nonlinear soft clay on the dynamic embedment of offshore pipelines were investigated. Seabed embedment through pipe-soil interaction affects the structural boundary conditions of various subsea structures such as pipelines, risers, piles, and many other systems. A number of studies have been performed to estimate real soil behaviour, but their estimates of seabed embedment have not been fully validated and many uncertainties remain. In this regard, a comparison of embedment between field surveys and existing empirical models was performed to identify uncertainties and investigate the effect of nonlinear soil parameters on dynamic embedment. From the comparison, it is found that dynamic embedment with installation effects based on a nonlinear soil model has an influence on seabed embedment. Therefore, pipe embedment under dynamic conditions governed by the nonlinear parameters of soil models was investigated through the Dynamic Embedment Factor (DEF) concept, defined as the ratio of the dynamic to static embedment of the pipeline, in order to close the gap between field embedment and currently used empirical and numerical formulas. Although DEF values have been suggested by various studies, their range is too wide, they do not consider the dynamic laying effect, and it is difficult to identify the critical parameters affecting the embedment result. Therefore, a study of the dynamic embedment factor with respect to the soft clay parameters of a nonlinear soil model was conducted, and sensitivity analyses of the nonlinear soil model parameters were performed as well. The tendency of the dynamic embedment factor was found by numerical analyses using the OrcaFlex software. It is found that DEF is influenced more by the shear strength gradient than by other factors. The obtained results will be useful for understanding pipe embedment on soft clay seabeds when applying offshore pipeline designs such as on-bottom stability and free-span analyses.

  14. Overview and benchmark analysis of fuel cell parameters estimation for energy management purposes

    Science.gov (United States)

    Kandidayeni, M.; Macias, A.; Amamou, A. A.; Boulon, L.; Kelouwani, S.; Chaoui, H.

    2018-03-01

    Proton exchange membrane fuel cells (PEMFCs) have become the center of attention for energy conversion in many areas such as the automotive industry, where they confront a highly dynamic behavior resulting in variation of their characteristics. In order to ensure appropriate modeling of PEMFCs, accurate parameter estimation is in demand. However, parameter estimation of PEMFC models is highly challenging due to their multivariate, nonlinear, and complex essence. This paper comprehensively reviews PEMFC model parameter estimation methods with a specific view to online identification algorithms, which are considered the basis of global energy management strategy design, to estimate the linear and nonlinear parameters of a PEMFC model in real time. In this respect, different PEMFC models with different categories and purposes are discussed first. Subsequently, a thorough investigation of PEMFC parameter estimation methods in the literature is conducted in terms of applicability. Three potential algorithms for online applications, Recursive Least Squares (RLS), the Kalman filter, and the extended Kalman filter (EKF), which have escaped attention in previous works, have then been utilized to identify the parameters of two well-known semi-empirical models in the literature, Squadrito et al. and Amphlett et al. Ultimately, the achieved results and future challenges are discussed.
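Of the three online identification candidates, RLS is the simplest to sketch. The regressor and "measured" output below are synthetic stand-ins for a linear-in-parameters fuel-cell voltage model, not a real PEMFC dataset:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least squares update with forgetting factor lam,
    suitable for tracking slowly drifting model parameters online."""
    k = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + k * (y - phi @ theta)     # correct estimate by prediction error
    P = (P - np.outer(k, phi @ P)) / lam      # update covariance
    return theta, P

rng = np.random.default_rng(1)
true_w = np.array([0.9, -0.05, 0.02])  # hypothetical model coefficients
theta = np.zeros(3)
P = 1000.0 * np.eye(3)                 # large initial covariance = weak prior
for _ in range(300):
    phi = rng.standard_normal(3)       # regressor (e.g. functions of current, temperature)
    theta, P = rls_step(theta, P, phi, phi @ true_w)
```

The forgetting factor below 1 is what lets the estimator track the characteristic drift the abstract mentions; with stationary data, as here, the estimate simply converges to the true coefficients.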

  15. Physical property parameter set for modeling ICPP aqueous wastes with ASPEN electrolyte NRTL model

    International Nuclear Information System (INIS)

    Schindler, R.E.

    1996-09-01

    The aqueous waste evaporators at the Idaho Chemical Processing Plant (ICPP) are being modeled using ASPEN software. The ASPEN software calculates chemical and vapor-liquid equilibria with activity coefficients calculated using the electrolyte Non-Random Two Liquid (NRTL) model for local excess Gibbs free energies of interactions between ions and molecules in solution. The use of the electrolyte NRTL model requires the determination of empirical parameters for the excess Gibbs free energies of the interactions between species in solution. This report covers the development of a set of parameters, from literature data, for the use of the electrolyte NRTL model with the major solutes in the ICPP aqueous wastes

  16. A Parameter Selection Method for Wind Turbine Health Management through SCADA Data

    Directory of Open Access Journals (Sweden)

    Mian Du

    2017-02-01

    Full Text Available Wind turbine anomaly or failure detection using machine learning techniques through supervisory control and data acquisition (SCADA) systems is drawing wide attention from academia and industry. While parameter selection is important for modelling a wind turbine's condition, only a few papers have been published focusing on this issue, and in those papers interconnections among sub-components in a wind turbine are used to address the problem. However, interconnections alone are sometimes too general a basis for decision making to provide a parameter list that accounts for the differences between SCADA datasets. In this paper, a method is proposed to provide more detailed suggestions on parameter selection based on mutual information. First, the copula is proven to be capable of simplifying the estimation of mutual information. Then an empirical copula-based mutual information estimation method (ECMI) is introduced for application. After that, a real SCADA dataset is adopted to test the method, and the results show the effectiveness of the ECMI in providing parameter selection suggestions when physical knowledge is not accurate enough.
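A minimal version of an empirical-copula mutual information estimate can be sketched as follows: rank-transform each variable to uniform pseudo-observations (the empirical copula), then apply a plug-in histogram estimate. The binning choice here is ours for illustration, not the paper's:

```python
import numpy as np

def ecmi(x, y, bins=10):
    """Empirical-copula mutual information (in nats): rank-transform x and y
    to [0, 1] pseudo-observations, then estimate MI from a 2-D histogram."""
    u = np.argsort(np.argsort(x)) / (len(x) - 1)
    v = np.argsort(np.argsort(y)) / (len(y) - 1)
    h, _, _ = np.histogram2d(u, v, bins=bins, range=[[0, 1], [0, 1]])
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

Because the rank transform makes the marginals uniform, the estimate depends only on the dependence structure, which is what makes the copula view convenient for ranking SCADA parameters.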

  17. The effect of loss functions on empirical Bayes reliability analysis

    Directory of Open Access Journals (Sweden)

    Camara Vincent A. R.

    1998-01-01

    Full Text Available The aim of the present study is to investigate the sensitivity of empirical Bayes estimates of the reliability function with respect to changing of the loss function. In addition to applying some of the basic analytical results on empirical Bayes reliability obtained with the use of the “popular” squared error loss function, we shall derive some expressions corresponding to empirical Bayes reliability estimates obtained with the Higgins–Tsokos, the Harris and our proposed logarithmic loss functions. The concept of efficiency, along with the notion of integrated mean square error, will be used as a criterion to numerically compare our results. It is shown that empirical Bayes reliability functions are in general sensitive to the choice of the loss function, and that the squared error loss does not always yield the best empirical Bayes reliability estimate.
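The sensitivity to the loss function can be illustrated with a standard conjugate example (exponential lifetimes with a Gamma posterior for the failure rate). This is our own illustration: of the losses it compares, only squared error appears in the paper's list:

```python
import math

def reliability_sq_loss(alpha, beta, t):
    """Bayes estimate of R(t) = exp(-lambda*t) under squared-error loss:
    the posterior mean of R(t) when lambda ~ Gamma(shape alpha, rate beta)."""
    return (beta / (beta + t)) ** alpha

def reliability_log_loss(alpha, beta, t):
    """Under a quadratic loss in ln R the estimate becomes
    exp(E[ln R]) = exp(-t * E[lambda])."""
    return math.exp(-t * alpha / beta)
```

By Jensen's inequality the squared-error estimate is always the larger of the two for t > 0, so the reported reliability genuinely depends on which loss is adopted, which is the paper's point.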

  18. An empirical investigation of spatial differentiation and price floor regulations in retail markets for gasoline

    Science.gov (United States)

    Houde, Jean-Francois

    In the first essay of this dissertation, I study an empirical model of spatial competition. The main feature of my approach is to formally specify commuting paths as the "locations" of consumers in a Hotelling-type model of spatial competition. The main consequence of this location assumption is that the substitution patterns between stations depend in an intuitive way on the structure of the road network and the direction of traffic flows. The demand-side of the model is estimated by combining a model of traffic allocation with econometric techniques used to estimate models of demand for differentiated products (Berry, Levinsohn and Pakes (1995)). The estimated parameters are then used to evaluate the importance of commuting patterns in explaining the distribution of gasoline sales, and compare the economic predictions of the model with the standard home-location model. In the second and third essays, I examine empirically the effect of a price floor regulation on the dynamic and static equilibrium outcomes of the gasoline retail industry. In particular, in the second essay I study empirically the dynamic entry and exit decisions of gasoline stations, and measure the impact of a price floor on the continuation values of staying in the industry. In the third essay, I develop and estimate a static model of quantity competition subject to a price floor regulation. Both models are estimated using a rich panel dataset on the Quebec gasoline retail market before and after the implementation of a price floor regulation.

  19. Sources of Currency Crisis: An Empirical Analysis

    OpenAIRE

    Weber, Axel A.

    1997-01-01

    Two types of currency crisis models coexist in the literature: first generation models view speculative attacks as being caused by economic fundamentals which are inconsistent with a given parity. Second generation models claim self-fulfilling speculation as the main source of a currency crisis. Recent empirical research in international macroeconomics has attempted to distinguish between the sources of currency crises. This paper adds to this literature by proposing a new empirical approach ...

  20. Influence of Weaving Loom Setting Parameters on Changes of Woven Fabric Structure and Mechanical Properties

    Directory of Open Access Journals (Sweden)

    Aušra ADOMAITIENĖ

    2011-11-01

    Full Text Available During the manufacture of fabrics from different raw materials it was noticed that, after removing the fabric from the weaving loom and after stabilization of the fabric structure, the changes in the parameters of the fabric structure are not regular. This investigation analysed how the weaving loom technological parameters (heald cross moment and initial warp tension) should be chosen, and how to predict the resulting changes in fabric structure parameters and mechanical properties. The dependencies of the changes of half-wool fabric structure parameters (weft setting, fabric thickness and projections of fabric cross-section) and mechanical properties (breaking force, elongation at break, static friction force and static friction coefficient) on weaving loom setting parameters (heald cross moment and initial warp tension) were analysed. The orthogonal two-factor Box plan was used, 3-D dependencies were drawn, and empirical equations of these dependencies were established. http://dx.doi.org/10.5755/j01.ms.17.4.780
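
    The empirical equations produced by an orthogonal two-factor Box design are second-order response surfaces. A minimal sketch of how such an equation is fitted by least squares; the coefficients and factor levels below are hypothetical, not the authors' measurements:

```python
# Fit a second-order response surface y = b0 + b1*x1 + b2*x2 + b12*x1*x2
# + b11*x1^2 + b22*x2^2 over a 3x3 factorial design in coded units (-1, 0, +1).
# The "true" coefficients are hypothetical stand-ins for the paper's equations.

def features(x1, x2):
    return [1.0, x1, x2, x1 * x2, x1 * x1, x2 * x2]

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

true_b = [22.0, 1.5, -0.8, 0.3, -0.4, 0.2]          # hypothetical coefficients
design = [(x1, x2) for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)]
X = [features(x1, x2) for x1, x2 in design]
y = [sum(b * f for b, f in zip(true_b, row)) for row in X]

# Normal equations: (X^T X) b = X^T y
p = len(true_b)
XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(p)] for i in range(p)]
Xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(p)]
b_hat = solve(XtX, Xty)
```

    With noiseless synthetic data the nine-run design recovers all six coefficients exactly, which is why the 3x3 grid is the minimal plan for a two-factor quadratic model.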

  1. Evaluation of empirical atmospheric diffusion data

    Energy Technology Data Exchange (ETDEWEB)

    Horst, T.W.; Doran, J.C.; Nickola, P.W.

    1979-10-01

    A study has been made of atmospheric diffusion over level, homogeneous terrain of contaminants released from non-buoyant point sources up to 100 m in height. Current theories of diffusion are compared to empirical diffusion data, and specific dispersion estimation techniques are recommended which can be implemented with the on-site meteorological instrumentation required by the Nuclear Regulatory Commission. A comparison of both the recommended diffusion model and the NRC diffusion model with the empirical data demonstrates that the predictions of the recommended model have both smaller scatter and less bias, particularly for ground-level sources.
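
    The "smaller scatter and less bias" comparison can be quantified with the summary statistics conventionally used for dispersion-model evaluation, fractional bias (FB) and normalized mean square error (NMSE); a minimal sketch with hypothetical concentrations, not the report's data:

```python
# Fractional bias (FB) and normalized mean square error (NMSE), two standard
# summary statistics for comparing predicted (P) and observed (O) concentrations.
def fb(obs, pred):
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)      # 0 for an unbiased model

def nmse(obs, pred):
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / (len(obs) * mo * mp)

obs  = [1.0, 2.0, 4.0, 8.0]     # hypothetical tracer concentrations
pred = [1.2, 1.8, 4.4, 7.0]     # hypothetical model predictions
```

    FB isolates systematic over- or under-prediction while NMSE captures scatter, so reporting both separates the two failure modes the abstract refers to.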

  2. Growth-corruption-health triaca and environmental degradation: empirical evidence from Indonesia, Malaysia, and Thailand.

    Science.gov (United States)

    Azam, Muhammad; Khan, Abdul Qayyum

    2017-07-01

    This study examines the impact of economic growth, corruption, health, and poverty on environmental degradation for three ASEAN countries, namely Indonesia, Malaysia, and Thailand, using annual data over the period 1994-2014. The relationship between environmental degradation (pollution), measured by carbon dioxide (CO2) emissions, and economic growth is examined along with some other variables, namely health expenditure, poverty, agriculture value added growth, industrial value added growth, and corruption. The ordinary least squares (OLS) method is applied as the analytical technique for parameter estimation. The empirical results reveal that almost all variables are statistically significant at the 5% level of significance, whereby the test rejects the null hypotheses of non-cointegration, indicating that all variables play an important role in affecting the environment across countries. Empirical results also indicate that economic growth has a significant positive impact, while health expenditure has a significantly negative impact on the environment. Corruption has a significant positive effect on the environment in the case of Malaysia, while for Indonesia and Thailand the results are insignificant. In the individual analyses across countries, the regression estimates suggest that economic growth has a significant positive relationship with the environment for Indonesia, while it is insignificantly negative and positive in the case of Malaysia and Thailand, respectively, during the period under study. The empirical findings suggest that policy-makers need to foster environmentally friendly technology in order to curb unregulated pollution; steady population transfers from rural to urban areas, poverty alleviation, and better health provision can also help to improve the environment.

  3. Empirical models of the Solar Wind : Extrapolations from the Helios & Ulysses observations back to the corona

    Science.gov (United States)

    Maksimovic, M.; Zaslavsky, A.

    2017-12-01

    We will present extrapolations of the HELIOS & Ulysses proton density, temperature & bulk velocities back to the corona. Using simple mass flux conservation we show a very good agreement between these extrapolations and the current state of knowledge of these parameters in the corona, based on SOHO measurements. These simple extrapolations could potentially be very useful for the science planning of both the Parker Solar Probe and Solar Orbiter missions. Finally, we will also present some modelling considerations, based on simple energy balance equations, which arise from these empirical observational models.
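
    The mass-flux extrapolation mentioned above rests on n(r)·v(r)·r² = const for a spherically expanding wind. A sketch with illustrative numbers, not the HELIOS/Ulysses values:

```python
# Spherically symmetric mass-flux conservation: n(r) * v(r) * r**2 = const.
# Given density and speed measured at r1 (e.g. 0.3 AU) and an assumed wind
# speed at a coronal radius r0, estimate the density at r0.
AU = 215.0                      # 1 AU expressed in solar radii

def density_at(r0, v0, r1, n1, v1):
    """Density at r0 implied by measurements (n1, v1) at r1; units cancel."""
    return n1 * v1 * r1 ** 2 / (v0 * r0 ** 2)

n1 = 30.0                       # cm^-3 at 0.3 AU (illustrative)
v1 = 400.0                      # km/s at 0.3 AU (illustrative)
r1 = 0.3 * AU                   # in solar radii
n0 = density_at(r0=3.0, v0=100.0, r1=r1, n1=n1, v1=v1)   # cm^-3 at 3 Rs
```

    Because only the product n·v·r² is conserved, the extrapolated coronal density is only as good as the assumed coronal wind speed, which is why the abstract cross-checks against SOHO.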

  4. Data on empirically estimated corporate survival rate in Russia.

    Science.gov (United States)

    Kuzmin, Evgeny A

    2018-02-01

    The article presents data on the corporate survival rate in Russia in 1991-2014. The empirical survey was based on a random sample with an average number of non-repeated observations (number of companies) for each survey year equal to 75,958 (24,236 minimum and 126,953 maximum). The actual limiting mean error Δp was 2.24% at the 99% confidence level. The survey methodology was based on a cross joining of various formal periods in the corporate life cycle (legal and business), which makes it possible to talk about the conventionally active lifetime of companies with a number of assumptions. The empirical survey values were grouped by Russian regions and industries according to the classifier and consolidated into a single database for analysing the corporate life cycle and survival rate and for searching for deviation dependencies in the calculated parameters. Preliminary and incomplete figures were available in the paper entitled "Survival Rate and Lifecycle in Terms of Uncertainty: Review of Companies from Russia and Eastern Europe" (Kuzmin and Guseva, 2016) [3]. The further survey led to filtered, processed data with clerical errors excluded; those final values are available in this article. The survey was intended to fill a fact-based gap in fundamental surveys of the corporate life cycle in Russia, given the insufficient statistical framework. The data are of interest for the analysis of Russian entrepreneurship, assessment of market development, and incorporation risks in the current business environment. A further heuristic potential is achievable through forecasting changes in business demography and model building based on the representative data set.
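
    For orientation, the limiting sampling error for an estimated proportion under simple random sampling is Δp = z·√(p(1−p)/n). The sketch below uses the paper's average sample size with assumed p = 0.5 and z = 2.576 (99%); the paper's 2.24% figure will additionally reflect its actual sampling design, so no exact match is implied:

```python
import math

# Limiting sampling error for an estimated proportion at a given confidence
# level, assuming simple random sampling: delta = z * sqrt(p * (1 - p) / n).
def margin_of_error(p, n, z):
    return z * math.sqrt(p * (1.0 - p) / n)

# Worst case p = 0.5, the paper's average yearly sample size, z for 99%:
delta = margin_of_error(0.5, 75958, 2.576)     # roughly half a percent
```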

  5. Agency Theory and Franchising: Some Empirical Results

    OpenAIRE

    Francine Lafontaine

    1992-01-01

    This article provides an empirical assessment of various agency-theoretic explanations for franchising, including risk sharing, one-sided moral hazard, and two-sided moral hazard. The empirical models use proxies for factors such as risk, moral hazard, and franchisors' need for capital to explain both franchisors' decisions about the terms of their contracts (royalty rates and up-front franchise fees) and the extent to which they use franchising. In this article, I exploit several new sources...

  6. Gun Laws and Crime: An Empirical Assessment

    OpenAIRE

    Matti Viren

    2012-01-01

    This paper deals with the effect of gun laws on crime. Several empirical analyses are carried out to investigate the relationship between five different crime rates and alternative law variables. The tests are based on cross-section data from US states. Three different law variables are used in the analysis, together with a set of control variables for income, poverty, unemployment and the ethnic background of the population. Empirical analysis does not lend support to the notion that crime laws would...

  7. Empirical formulae for excess noise factor with dead space for single carrier multiplication

    KAUST Repository

    Dehwah, Ahmad H.

    2011-09-01

    In this letter, two empirical equations are presented for the calculation of the excess noise factor of an avalanche photodiode for single carrier multiplication including the dead space effect. The first is an equation for calculating the excess noise factor when the multiplication approaches infinity as a function of parameters that describe the degree of the dead space effect. The second equation can be used to find the minimum value of the excess noise factor for any multiplication when the dead space effect is completely dominant, the so called "deterministic" limit. This agrees with the theoretically known equation for multiplications less than or equal to two. © 2011 World Scientific Publishing Company.
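
    The empirical dead-space equations themselves are not reproduced in the abstract, but the classical local-model (dead-space-free) baseline they are compared against is McIntyre's result for single-carrier-initiated multiplication, which can be written F(M) = kM + (1 − k)(2 − 1/M):

```python
# Classical McIntyre excess noise factor for electron-initiated avalanche
# multiplication with ionization-coefficient ratio k (local model, no dead
# space).  Algebraically identical to M * (1 - (1 - k) * ((M - 1) / M)**2).
def excess_noise(M, k):
    return k * M + (1.0 - k) * (2.0 - 1.0 / M)
```

    For k = 0 the local model saturates at F = 2 as M grows; dead-space (non-local) models, the subject of the letter, fall below this limit.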

  9. Impact parameter analysis of coherent and incoherent pion production on nuclei by 11.7GeV/c π+

    International Nuclear Information System (INIS)

    Arnold, R.; Barshay, S.; Riester, J.L.

    1976-01-01

    Using complete momentum measurements for 2, 3, 4 and 5 pion final states, the impact parameter structure has been studied, with the following principal results. Evidence is presented for an empirical method that can help in the separation of coherent events on nuclei. Incoherent nuclear production exhibits lower-bound impact parameters which decrease systematically with an increasing number of produced pions. The experimental b-distributions can be very well fitted by a single simple scaled functional form, dσN(b)/d²b ∝ F(N/f(b)); this N-distribution yields a ratio (dispersion/average dispersion) of about 0.35 at any impact parameter. [fr]

  10. Energy transfer by magnetopause reconnection and the substorm parameter epsilon

    International Nuclear Information System (INIS)

    Gonzalez-Alarcon, W.D.; Gonzalez, A.L.C. de.

    1983-01-01

    An expression for the magnetopause reconnection power based on the dawn-dusk component of the reconnection electric field, which reduces to the substorm parameter epsilon in the limit of equal geomagnetic (B sub(G)) and magnetosheath (B sub(M)) magnetic field amplitudes at the magnetopause, is contrasted with the expression based on the whole reconnection electric field vector obtained by Gonzalez. The correlation examples of this report show that this (more general) expression for the reconnection power seems to correlate with the empirical dissipation parameter U sub(T) from Akasofu, with slightly better correlation coefficients than those obtained from similar correlations between the parameter epsilon and U sub(T). These (better) correlations show up for the more familiar values of the ratio B sub(G) / B sub(M) > 1. Nevertheless, the (expected) relatively small difference between these correlation coefficients suggests that, for practical purposes, the parameter epsilon could be used as well (instead of the more general expression) in similar correlation studies, due to its simpler format. On the other hand, studies that refer mainly to the difference in the magnitudes of epsilon and of the more general expression are expected to give results with less negligible differences. (Author) [pt]
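
    For reference, the substorm parameter epsilon discussed above has the standard form ε = (4π/μ0)·v·B²·sin⁴(θ/2)·l0², with θ the IMF clock angle and the empirical scale l0 ≈ 7 Earth radii; a sketch with illustrative solar-wind values:

```python
import math

# Akasofu's epsilon parameter in SI units (watts):
#   eps = (4*pi/mu0) * v * B**2 * sin(theta/2)**4 * l0**2
MU0 = 4.0e-7 * math.pi          # vacuum permeability, H/m
RE = 6.371e6                    # Earth radius, m
L0 = 7.0 * RE                   # empirical scale length

def epsilon(v_ms, B_T, clock_angle_rad):
    return (4.0 * math.pi / MU0) * v_ms * B_T ** 2 \
        * math.sin(clock_angle_rad / 2.0) ** 4 * L0 ** 2

# Purely southward IMF of 5 nT carried at 400 km/s (illustrative):
eps = epsilon(400e3, 5e-9, math.pi)   # of order 1e11 W
```

    The sin⁴(θ/2) gating is what ties the power input to the dawn-dusk reconnection geometry mentioned in the abstract.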

  11. Empirical Scaling Relations of Source Parameters For The Earthquake Swarm 2000 At Novy Kostel (vogtland/nw-bohemia)

    Science.gov (United States)

    Heuer, B.; Plenefisch, T.; Seidl, D.; Klinge, K.

    Investigations on the interdependence of different source parameters are an important task to get more insight into the mechanics and dynamics of earthquake rupture, to model source processes and to make predictions for ground motion at the surface. These interdependencies, providing so-called scaling relations, have often been investigated for large earthquakes. However, they are not commonly determined for micro-earthquakes and swarm-earthquakes, especially for those of the Vogtland/NW-Bohemia region. For the most recent swarm in the Vogtland/NW-Bohemia, which took place between August and December 2000 near Novy Kostel (Czech Republic), we systematically determine the most important source parameters such as energy E0, seismic moment M0, local magnitude ML, fault length L, corner frequency fc and rise time r, and build their interdependencies. The swarm of 2000 is well suited for such investigations since it covers a large magnitude interval (1.5 ≤ ML ≤ 3.7) and there are also observations in the near-field at several stations. In the present paper we mostly concentrate on two near-field stations with hypocentral distances between 11 and 13 km, namely WERN (Wernitzgrün) and SBG (Schönberg). Our data processing includes restitution to true ground displacement and rotation into the ray-based principal co-ordinate system, which we determine by the covariance matrix of the P- and S-displacement, respectively. Data preparation, determination of the distinct source parameters as well as statistical interpretation of the results will be presented by way of example. The results will be discussed with respect to temporal variations in the swarm activity (the swarm consists of eight distinct sub-episodes) and already existing focal mechanisms.

  12. Economic growth and emissions reconsidering the empirical basis of environmental Kuznets curves

    International Nuclear Information System (INIS)

    De Bruyn, S.M.; Van den Bergh, J.C.J.M.; Opschoor, J.B.

    1998-01-01

    Recent empirical research indicates that certain types of emissions follow an inverted-U or environmental Kuznets curve (EKC) as income grows. This regularity has been interpreted as a possible de-linking of economic growth and patterns of certain pollutants for developed economies. In this paper the empirical basis of this result is investigated, by considering some statistical particularities of the various EKC studies performed. It is argued that the inverted-U relationship between income and emissions estimated from panel data need not hold for specific individual countries over time. Based on insights from 'intensity-of-use' analysis in resource economics an alternative growth model is specified and estimated for three types of emissions (CO2, NOx and SO2) in four countries (Netherlands, UK, USA and Western Germany). It is found that the time patterns of these emissions correlate positively with economic growth and that emission reductions may have been achieved as a result of structural and technological changes in the economy. 'Sustainable growth' is defined as the rate of economic growth that does not lead to growth in emissions. Its rate is calculated for each type of emission and country, based on estimated parameter values. The resulting indicators reflect a balance between the positive influence of growth and the negative influence of structural change and technological progress on emission levels.
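
    The 'sustainable growth' rate defined above can be made explicit: if emission growth decomposes as g_E = α·g_Y + β, with g_Y the income growth rate and β the (typically negative) contribution of structural and technological change, the emissions-neutral growth rate is g* = −β/α. A sketch with hypothetical parameter values, not the paper's estimates:

```python
# Emission growth decomposed as g_E = alpha * g_Y + beta, where beta < 0
# captures emission-reducing structural and technological change.
# "Sustainable growth" is the income growth rate at which g_E = 0.
def sustainable_growth(alpha, beta):
    return -beta / alpha

# Hypothetical estimates: 1% extra growth adds 0.8% emissions; autonomous
# structural/technical change removes 1.6% of emissions per year.
g_star = sustainable_growth(alpha=0.8, beta=-0.016)   # 2% per year
```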

  13. Electronic structure of thin films by the self-consistent numerical-basis-set linear combination of atomic orbitals method: Ni(001)

    International Nuclear Information System (INIS)

    Wang, C.S.; Freeman, A.J.

    1979-01-01

    We present the self-consistent numerical-basis-set linear combination of atomic orbitals (LCAO) discrete variational method for treating the electronic structure of thin films. As in the case of bulk solids, this method provides for thin films accurate solutions of the one-particle local density equations with a non-muffin-tin potential. Hamiltonian and overlap matrix elements are evaluated accurately by means of a three-dimensional numerical Diophantine integration scheme. Application of this method is made to the self-consistent solution of one-, three-, and five-layer Ni(001) unsupported films. The LCAO Bloch basis set consists of valence orbitals (3d, 4s, and 4p states for transition metals) orthogonalized to the frozen-core wave functions. The self-consistent potential is obtained iteratively within the superposition of overlapping spherical atomic charge density model with the atomic configurations treated as adjustable parameters. Thus the crystal Coulomb potential is constructed as a superposition of overlapping spherically symmetric atomic potentials and, correspondingly, the local density Kohn-Sham (α = 2/3) potential is determined from a superposition of atomic charge densities. At each iteration in the self-consistency procedure, the crystal charge density is evaluated using a sampling of 15 independent k points in (1/8)th of the irreducible two-dimensional Brillouin zone. The total density of states (DOS) and projected local DOS (by layer plane) are calculated using an analytic linear energy triangle method (presented as an Appendix) generalized from the tetrahedron scheme for bulk systems. Distinct differences are obtained between the surface and central plane local DOS. The central plane DOS is found to converge rapidly to the DOS of bulk paramagnetic Ni obtained by Wang and Callaway. Only a very small surplus charge (0.03 electron/atom) is found on the surface planes, in agreement with jellium model calculations

  14. A predictive thermal dynamic model for parameter generation in the laser assisted direct write process

    International Nuclear Information System (INIS)

    Shang Shuo; Fearon, Eamonn; Wellburn, Dan; Sato, Taku; Edwardson, Stuart; Dearden, G; Watkins, K G

    2011-01-01

    The laser assisted direct write (LADW) method can be used to generate electrical circuitry on a substrate by depositing metallic ink and curing the ink thermally by a laser. Laser curing has emerged over recent years as a novel yet efficient alternative to oven curing. This method can be used in situ, over complicated 3D contours of large parts (e.g. aircraft wings) and selectively cure over heat sensitive substrates, with little or no thermal damage. In previous studies, empirical methods have been used to generate processing windows for this technique, relating to the several interdependent processing parameters on which the curing quality and efficiency strongly depend. Incorrect parameters can result in a track that is cured in some areas and uncured in others, or in damaged substrates. This paper addresses the strong need for a quantitative model which can systematically output the processing conditions for a given combination of ink, substrate and laser source; transforming the LADW technique from a purely empirical approach, to a simple, repeatable, mathematically sound, efficient and predictable process. The method comprises a novel and generic finite element model (FEM) that for the first time predicts the evolution of the thermal profile of the ink track during laser curing and thus generates a parametric map which indicates the most suitable combination of parameters for process optimization. Experimental data are compared with simulation results to verify the accuracy of the model.

  15. Quantitative predictions from competition theory with incomplete information on model parameters tested against experiments across diverse taxa

    OpenAIRE

    Fort, Hugo

    2017-01-01

    We derive an analytical approximation for making quantitative predictions for ecological communities as a function of the mean intensity of the inter-specific competition and the species richness. This method, with only a fraction of the model parameters (carrying capacities and competition coefficients), is able to predict accurately empirical measurements covering a wide variety of taxa (algae, plants, protozoa).
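
    A common concrete form of such a prediction is the equilibrium of a Lotka-Volterra competition community summarized by a single mean competition coefficient; the uniform-coefficient assumption and the numbers below are illustrative, not the paper's full method:

```python
# Equilibrium of a Lotka-Volterra competition community with a uniform mean
# competition coefficient a:  K_i = x_i + a * sum_{j != i} x_j.
# Solved as the linear system (I + a*(J - I)) x = K by Gaussian elimination.
def lv_equilibrium(K, a):
    n = len(K)
    A = [[1.0 if i == j else a for j in range(n)] for i in range(n)]
    m = [A[i][:] + [K[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

# Two species with equal carrying capacities and moderate competition:
x = lv_equilibrium([1.0, 1.0], 0.5)   # each settles at K / (1 + a)
```

    The appeal of such an approximation is exactly what the abstract states: only carrying capacities and a mean competition intensity are needed, not the full coefficient matrix.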

  16. Semi-empirical proton binding constants for natural organic matter

    Science.gov (United States)

    Matynia, Anthony; Lenoir, Thomas; Causse, Benjamin; Spadini, Lorenzo; Jacquet, Thierry; Manceau, Alain

    2010-03-01

    Average proton binding constants (KH,i) for structure models of humic (HA) and fulvic (FA) acids were estimated semi-empirically by breaking down the macromolecules into reactive structural units (RSUs), and calculating KH,i values of the RSUs using linear free energy relationships (LFER) of Hammett. Predicted log KH,COOH and log KH,Ph-OH are 3.73 ± 0.13 and 9.83 ± 0.23 for HA, and 3.80 ± 0.20 and 9.87 ± 0.31 for FA. The predicted constants for phenolic-type sites (Ph-OH) are generally higher than those derived from potentiometric titrations, but the difference may not be significant in view of the considerable uncertainty of the acidity constants determined from acid-base measurements at high pH. The predicted constants for carboxylic-type sites agree well with titration data analyzed with Model VI (4.10 ± 0.16 for HA, 3.20 ± 0.13 for FA; Tipping, 1998), the Impermeable Sphere model (3.50-4.50 for HA; Avena et al., 1999), and the Stockholm Humic Model (4.10 ± 0.20 for HA, 3.50 ± 0.40 for FA; Gustafsson, 2001), but differ by about one log unit from those obtained by Milne et al. (2001) with the NICA-Donnan model (3.09 ± 0.51 for HA, 2.65 ± 0.43 for FA), and used to derive recommended generic values. To clarify this ambiguity, 10 high-quality titration data sets from Milne et al. (2001) were re-analyzed with the new predicted equilibrium constants. The data are described equally well with the previous and new sets of values (R² ≥ 0.98), not necessarily because the NICA-Donnan model is overparametrized, but because titration lacks the sensitivity needed to quantify the full binding properties of humic substances. Correlations between NICA-Donnan parameters are discussed, but general progress is impeded by the unknown number of independent parameters that can be varied during regression of a model fit to titration data. The high consistency between predicted and experimental KH,COOH values, excluding those of Milne et al. (2001), gives faith in the proposed
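
    The Hammett LFER underlying these predictions is log10(K/K0) = ρ·σ, i.e. pKa = pKa0 − ρ·Σσ for a substituted acid; a sketch using textbook benzoic-acid values, illustrative of the approach rather than the paper's RSU parameterization:

```python
# Hammett linear free-energy relationship: log10(K / K0) = rho * sigma,
# equivalently pKa = pKa0 - rho * sum(sigma_i) for a substituted acid.
def pka_hammett(pka0, rho, sigmas):
    return pka0 - rho * sum(sigmas)

# Benzoic acid (pKa0 = 4.20; rho = 1.00 by definition for this series) with
# a para-nitro substituent (sigma_p = 0.78):
pka = pka_hammett(4.20, 1.00, [0.78])   # close to the measured ~3.4
```

    Summing tabulated σ constants over the substituents of each reactive structural unit is what makes the approach "semi-empirical": the functional-group chemistry is predicted, not fitted to titrations.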

  17. Managerial Career Patterns: A Review of the Empirical Evidence

    NARCIS (Netherlands)

    Vinkenburg, C.J.; Weber, T.

    2012-01-01

    Despite the ubiquitous presence of the term "career patterns" in the discourse about careers, the existing empirical evidence on (managerial) career patterns is rather limited. From this literature review of 33 published empirical studies of managerial and similar professional career patterns found

  18. Propagation of a channelized debris-flow: experimental investigation and parameters identification for numerical modelling

    Science.gov (United States)

    Termini, Donatella

    2013-04-01

    Recent catastrophic events due to intense rainfalls have mobilized large amounts of sediment, causing extensive damage in vast areas. These events have highlighted how debris-flow runout estimations are of crucial importance to delineate the potentially hazardous areas and to make reliable assessments of the level of risk of the territory. Especially in recent years, several studies have been conducted in order to define predictive models, but existing runout estimation methods need input parameters that can be difficult to estimate. Recent experimental research has also allowed the assessment of the physics of debris flows, but the major part of the experimental studies analyzes the basic kinematic conditions which determine the evolution of the phenomenon. An experimental program has recently been conducted at the Hydraulic laboratory of the Department of Civil, Environmental, Aerospatial and Materials (DICAM) - University of Palermo (Italy). The experiments, carried out in a laboratory flume constructed for this purpose, were planned in order to evaluate the influence of different geometrical parameters (such as the slope and the geometrical characteristics of the confluences to the main channel) on the propagation phenomenon of the debris flow and its deposition. Thus, the aim of the present work is to give a contribution to defining input parameters in runout estimation by numerical modeling. The propagation phenomenon is analyzed for different concentrations of solid material. Particular attention is devoted to the identification of the stopping distance of the debris flow and of the parameters involved (volume, angle of deposition, type of material) in the empirical predictive equations available in the literature (Rickenmann, 1999; Bethurst et al., 1997). Bethurst J.C., Burton A., Ward T.J. 1997. Debris flow run-out and landslide sediment delivery model tests. Journal of Hydraulic Engineering, ASCE, 123(5), 419-429. Rickenmann D. 1999. Empirical relationships

  19. Empirical isotropic chemical shift surfaces

    International Nuclear Information System (INIS)

    Czinki, Eszter; Csaszar, Attila G.

    2007-01-01

    A list of proteins is given for which spatial structures, with a resolution better than 2.5 Å, are known from entries in the Protein Data Bank (PDB) and isotropic chemical shift (ICS) values are known from the RefDB database related to the Biological Magnetic Resonance Bank (BMRB) database. The structures chosen provide, with unknown uncertainties, dihedral angles φ and ψ characterizing the backbone structure of the residues. The joint use of experimental ICSs of the same residues within the proteins, again with mostly unknown uncertainties, and ab initio ICS(φ,ψ) surfaces obtained for the model peptides For-(L-Ala)n-NH2, with n = 1, 3, and 5, resulted in so-called empirical ICS(φ,ψ) surfaces for all major nuclei of the 20 naturally occurring α-amino acids. Out of the many empirical surfaces determined, it is the 13Cα ICS(φ,ψ) surface which seems to be most promising for identifying major secondary structure types: α-helix, β-strand, left-handed helix (αD), and polyproline-II. Detailed tests suggest that Ala is a good model for many naturally occurring α-amino acids. Two-dimensional empirical 13Cα-1Hα ICS(φ,ψ) correlation plots, obtained so far only from computations on small peptide models, suggest the utility of the experimental information contained therein and thus they should provide useful constraints for structure determinations of proteins

  20. Empirically Testing Thematic Analysis (ETTA)

    DEFF Research Database (Denmark)

    Gildberg, Frederik Alkier; Bradley, Stephen K.; Tingleff, Elllen B.

    2015-01-01

    Text analysis is not a question of a right or wrong way to go about it, but a question of different traditions. These tend to not only give answers to how to conduct an analysis, but also to provide the answer as to why it is conducted in the way that it is. The problem however may be that the li...... for themselves. The advantage of utilizing the presented analytic approach is argued to be the integral empirical testing, which should assure systematic development, interpretation and analysis of the source textual material....... between tradition and tool is unclear. The main objective of this article is therefore to present Empirical Testing Thematic Analysis, a step by step approach to thematic text analysis; discussing strengths and weaknesses, so that others might assess its potential as an approach that they might utilize/develop...

  1. Ontology-Based Empirical Knowledge Verification for Professional Virtual Community

    Science.gov (United States)

    Chen, Yuh-Jen

    2011-01-01

    A professional virtual community provides an interactive platform for enterprise experts to create and share their empirical knowledge cooperatively, and the platform contains a tremendous amount of hidden empirical knowledge that knowledge experts have preserved in the discussion process. Therefore, enterprise knowledge management highly…

  2. Guidelines for using empirical studies in software engineering education

    Directory of Open Access Journals (Sweden)

    Fabian Fagerholm

    2017-09-01

    Full Text Available Software engineering education is under constant pressure to provide students with industry-relevant knowledge and skills. Educators must address issues beyond exercises and theories that can be directly rehearsed in small settings. Industry training has similar requirements of relevance as companies seek to keep their workforce up to date with technological advances. Real-life software development often deals with large, software-intensive systems and is influenced by the complex effects of teamwork and distributed software development, which are hard to demonstrate in an educational environment. A way to experience such effects and to increase the relevance of software engineering education is to apply empirical studies in teaching. In this paper, we show how different types of empirical studies can be used for educational purposes in software engineering. We give examples illustrating how to utilize empirical studies, discuss challenges, and derive an initial guideline that supports teachers to include empirical studies in software engineering courses. Furthermore, we give examples that show how empirical studies contribute to high-quality learning outcomes, to student motivation, and to the awareness of the advantages of applying software engineering principles. Having awareness, experience, and understanding of the actions required, students are more likely to apply such principles under real-life constraints in their working life.

  3. Empirical molecular-dynamics study of diffusion in liquid semiconductors

    Science.gov (United States)

    Yu, W.; Wang, Z. Q.; Stroud, D.

    1996-11-01

    We report the results of an extensive molecular-dynamics study of diffusion in liquid Si and Ge (l-Si and l-Ge) and of impurities in l-Ge, using empirical Stillinger-Weber (SW) potentials with several choices of parameters. We use a numerical algorithm in which the three-body part of the SW potential is decomposed into products of two-body potentials, thereby permitting the study of large systems. One choice of SW parameters agrees very well with the observed l-Ge structure factors. The diffusion coefficients D(T) at melting are found to be approximately 6.4×10⁻⁵ cm²/s for l-Si, in good agreement with previous calculations, and about 4.2×10⁻⁵ and 4.6×10⁻⁵ cm²/s for two models of l-Ge. In all cases, D(T) can be fitted to an activated temperature dependence, with activation energies Ed of about 0.42 eV for l-Si, and 0.32 or 0.26 eV for two models of l-Ge, as calculated from either the Einstein relation or from a Green-Kubo-type integration of the velocity autocorrelation function. D(T) for Si impurities in l-Ge is found to be very similar to the self-diffusion coefficient of l-Ge. We briefly discuss possible reasons why the SW potentials give D(T)'s substantially lower than ab initio predictions.
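
    The activated temperature dependence D(T) = D0·exp(−Ed/kBT) lets the activation energy be read off from diffusion coefficients at two temperatures; a sketch with synthetic values (the temperatures and prefactor are illustrative, not the paper's data):

```python
import math

# Arrhenius form D(T) = D0 * exp(-Ed / (kB * T)).  Given D at two
# temperatures, the activation energy follows from the ratio:
#   Ed = kB * ln(D1/D2) / (1/T2 - 1/T1)
KB_EV = 8.617333e-5                         # Boltzmann constant, eV/K

def activation_energy(T1, D1, T2, D2):
    return KB_EV * math.log(D1 / D2) / (1.0 / T2 - 1.0 / T1)

# Synthetic self-check: generate data with Ed = 0.32 eV and recover it.
Ed_true, D0 = 0.32, 1.0e-3
D = lambda T: D0 * math.exp(-Ed_true / (KB_EV * T))
Ed = activation_energy(1200.0, D(1200.0), 1500.0, D(1500.0))
```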

  4. EMPIRE-II statistical model code for nuclear reaction calculations

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M [International Atomic Energy Agency, Vienna (Austria)

    2001-12-15

EMPIRE-II is a nuclear reaction code comprising various nuclear models and designed for calculations over a broad range of energies and incident particles. A projectile can be any nucleon or Heavy Ion. The energy range starts just above the resonance region, in the case of a neutron projectile, and extends up to a few hundred MeV for Heavy Ion induced reactions. The code accounts for the major nuclear reaction mechanisms, such as the optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full-featured Hauser-Feshbach model. Heavy Ion fusion cross sections can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and {gamma}-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data; relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using the X4TOC4 and PLOTC4 codes, linked to the rest of the system through bash-shell (UNIX) scripts. A graphical user interface written in Tcl/Tk is provided. (author)

  5. Electron momentum density and Compton profile by a semi-empirical approach

    Science.gov (United States)

    Aguiar, Julio C.; Mitnik, Darío; Di Rocco, Héctor O.

    2015-08-01

    Here we propose a semi-empirical approach to describe with good accuracy the electron momentum densities and Compton profiles for a wide range of pure crystalline metals. In the present approach, we use an experimental Compton profile to fit an analytical expression for the momentum densities of the valence electrons. This expression is similar to a Fermi-Dirac distribution function with two parameters, one of which coincides with the ground state kinetic energy of the free-electron gas and the other resembles the electron-electron interaction energy. In the proposed scheme conduction electrons are neither completely free nor completely bound to the atomic nucleus. This procedure allows us to include correlation effects. We tested the approach for all metals with Z=3-50 and showed the results for three representative elements: Li, Be and Al from high-resolution experiments.
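For an isotropic momentum density n(p), the Compton profile is J(q) = 2π ∫ from |q| to ∞ of n(p)·p dp. A minimal numerical sketch of that relation follows; the Fermi-Dirac-like form and its parameters are placeholders, not the authors' fitted expression.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule on a tabulated grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def compton_profile(q, p_grid, n_p):
    """J(q) = 2*pi * integral from |q| upward of n(p) * p dp (isotropic n)."""
    mask = p_grid >= abs(q)
    return 2.0 * np.pi * _trapz((n_p * p_grid)[mask], p_grid[mask])

def fermi_dirac_density(p, mu, w, norm=1.0):
    """Two-parameter Fermi-Dirac-like momentum density (placeholder form)."""
    return norm / (np.exp((0.5 * p ** 2 - mu) / w) + 1.0)
```

As a check, a normalized 3D Gaussian momentum density yields a Gaussian profile, J(q) = (2πσ²)^(-1/2)·exp(-q²/2σ²), which the quadrature reproduces.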

  6. Pluvials, droughts, the Mongol Empire, and modern Mongolia

    Science.gov (United States)

    Pederson, Neil; Hessl, Amy E.; Baatarbileg, Nachin; Anchukaitis, Kevin J.; Di Cosmo, Nicola

    2014-01-01

    Although many studies have associated the demise of complex societies with deteriorating climate, few have investigated the connection between an ameliorating environment, surplus resources, energy, and the rise of empires. The 13th-century Mongol Empire was the largest contiguous land empire in world history. Although drought has been proposed as one factor that spurred these conquests, no high-resolution moisture data are available during the rapid development of the Mongol Empire. Here we present a 1,112-y tree-ring reconstruction of warm-season water balance derived from Siberian pine (Pinus sibirica) trees in central Mongolia. Our reconstruction accounts for 56% of the variability in the regional water balance and is significantly correlated with steppe productivity across central Mongolia. In combination with a gridded temperature reconstruction, our results indicate that the regional climate during the conquests of Chinggis Khan’s (Genghis Khan’s) 13th-century Mongol Empire was warm and persistently wet. This period, characterized by 15 consecutive years of above-average moisture in central Mongolia and coinciding with the rise of Chinggis Khan, is unprecedented over the last 1,112 y. We propose that these climate conditions promoted high grassland productivity and favored the formation of Mongol political and military power. Tree-ring and meteorological data also suggest that the early 21st-century drought in central Mongolia was the hottest drought in the last 1,112 y, consistent with projections of warming over Inner Asia. Future warming may overwhelm increases in precipitation leading to similar heat droughts, with potentially severe consequences for modern Mongolia. PMID:24616521


  9. Investigation of coulomb and pairing effects using new developed empirical formulas for proton-induced reaction cross sections

    International Nuclear Information System (INIS)

    Tel, E.; Aydin, E. G.; Aydin, A.; Kaplan, A.; Boeluekdemir, M. H.; Okuducu, S.

    2010-01-01

We have investigated Coulomb and pairing effects using new empirical formulas, including new coefficients, for (p, α) at 17.9 MeV, (p, np) at 22.3 MeV, and (p, nα) at 24.8 and 28.5 MeV. A new formula is obtained by adjusting Levkovskii's original asymmetry parameter formula and the Tel et al. formula for proton-induced reactions. The new coefficients for these reactions are determined by a least-squares fitting method. In addition, the findings of the present study are compared with the available experimental data.
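Levkovskii-type systematics express the cross section as an exponential in the asymmetry parameter s = (N - Z)/A, roughly σ = C·exp(a·s), so taking logarithms makes the coefficient fit a linear least-squares problem. The sketch below uses made-up data; the actual formulas and fitted coefficients are those of the paper.

```python
import numpy as np

def fit_asymmetry_systematics(s, sigma):
    """Least-squares fit of ln(sigma) = ln(C) + a * s; returns (C, a)."""
    a, ln_c = np.polyfit(s, np.log(sigma), 1)
    return np.exp(ln_c), a
```

On noise-free synthetic data the fit recovers C and a exactly; with experimental scatter it returns the least-squares estimates.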

  10. Poisson and Gaussian approximation of weighted local empirical processes

    NARCIS (Netherlands)

    Einmahl, J.H.J.

    1995-01-01

    We consider the local empirical process indexed by sets, a greatly generalized version of the well-studied uniform tail empirical process. We show that the weak limit of weighted versions of this process is Poisson under certain conditions, whereas it is Gaussian in other situations. Our main

  11. Empiric potassium supplementation and increased survival in users of loop diuretics.

    Directory of Open Access Journals (Sweden)

    Charles E Leonard

Full Text Available The effectiveness of the clinical strategy of empiric potassium supplementation in reducing the frequency of adverse clinical outcomes in patients receiving loop diuretics is unknown. We sought to examine the association between empiric potassium supplementation and (1) all-cause death and (2) outpatient-originating sudden cardiac death (SD) and ventricular arrhythmia (VA) among new starters of loop diuretics, stratified on initial loop diuretic dose. We conducted a one-to-one propensity score-matched cohort study using 1999-2007 US Medicaid claims from five states. Empiric potassium supplementation was defined as a potassium prescription on the day of or the day after the initial loop diuretic prescription. Death, the primary outcome, was ascertained from the Social Security Administration Death Master File; SD/VA, the secondary outcome, from incident, first-listed emergency department or principal inpatient SD/VA discharge diagnoses (positive predictive value = 85%). We identified 654,060 persons who met eligibility criteria and initiated therapy with a loop diuretic, 27% of whom received empiric potassium supplementation (N = 179,436) and 73% of whom did not (N = 474,624). The matched hazard ratio for empiric potassium supplementation was 0.93 (95% confidence interval, 0.89-0.98; p = 0.003) for all-cause death. Stratifying on initial furosemide dose, hazard ratios for empiric potassium supplementation with furosemide < 40 and ≥ 40 milligrams/day were 0.93 (0.86-1.00; p = 0.050) and 0.84 (0.79-0.89; p < 0.0001). The matched hazard ratio for empiric potassium supplementation was 1.02 (0.83-1.24; p = 0.879) for SD/VA. Empiric potassium supplementation upon initiation of a loop diuretic appears to be associated with improved survival, with a greater apparent benefit seen with higher diuretic dose. If confirmed, these findings support the use of empiric potassium supplementation upon initiation of a loop diuretic.

  12. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of the exhaust flue gas in boilers is one of the most effective ways to further improve the thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower. However, when this temperature falls below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of the boiler. Accurate prediction of the acid dew point is therefore essential. By investigating previous models for acid dew point prediction, an improved thermodynamic correlation between the acid dew point and its influencing factors is derived first. A semi-empirical prediction model is then proposed, validated against both field-test and experimental data, and compared with the previous models.
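Semi-empirical correlations in this family typically express the reciprocal of the acid dew point as a linear combination of the logarithms of the H2O and SO3 partial pressures and their product. The sketch below shows only that functional shape; the coefficients C0..C3 are illustrative placeholders, not the paper's fitted values.

```python
import math

# Illustrative coefficient values; a real model would fit these to data.
C0, C1, C2, C3 = 0.002, -3e-5, -9e-5, 6e-6

def acid_dew_point_k(p_h2o_atm, p_so3_atm):
    """Placeholder correlation: 1/Tdp linear in ln(pH2O), ln(pSO3), and their product."""
    lw, ls = math.log(p_h2o_atm), math.log(p_so3_atm)
    inv_t = C0 + C1 * lw + C2 * ls + C3 * lw * ls
    return 1.0 / inv_t
```

With these placeholder coefficients the predicted dew point rises with SO3 partial pressure, which is the qualitative behavior such correlations are built to capture.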

  13. Estimation of CN Parameter for Small Agricultural Watersheds Using Asymptotic Functions

    Directory of Open Access Journals (Sweden)

    Tomasz Kowalik

    2015-03-01

Full Text Available This paper investigates the possibility of using asymptotic functions to determine the value of the curve number (CN) parameter as a function of rainfall in small agricultural watersheds. It also compares the actually calculated CN with the values provided in the Soil Conservation Service (SCS) National Engineering Handbook Section 4: Hydrology (NEH-4) and Technical Release 20 (TR-20). The analysis showed that the empirical CN values presented in the National Engineering Handbook tables differed from the actually observed values. Calculations revealed a strong correlation between the observed CN and precipitation (P). In three of the analyzed watersheds, a typical pattern of observed CN stabilization during abundant precipitation was perceived. It was found that Model 2, based on a kinetics equation, described the P-CN relationship most effectively. In most cases, the observed CN in the investigated watersheds was similar to the empirical CN corresponding to the average moisture conditions set out by NEH-4. Model 2 also provided the greatest stability of CN at 90% sampled event rainfall.
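A common asymptotic form (in the style of Hawkins' CN-rainfall analysis) lets the observed CN decay from 100 at zero rainfall toward a stable value CN∞ as P grows; the kinetics-equation Model 2 of the paper behaves analogously. The exact model and coefficients below are illustrative, not the paper's fit.

```python
import numpy as np

def cn_asymptotic(p_mm, cn_inf, k):
    """Asymptotic CN(P): equals 100 at P = 0 and approaches cn_inf as P grows."""
    return cn_inf + (100.0 - cn_inf) * np.exp(-k * p_mm)
```

The stabilization of observed CN during abundant precipitation reported in the paper corresponds to the flat tail of this curve.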

  14. Towards an Automatic Parameter-Tuning Framework for Cost Optimization on Video Encoding Cloud

    Directory of Open Access Journals (Sweden)

    Xiaowei Li

    2012-01-01

Full Text Available The emergence of cloud encoding services allows many content owners, such as online video vendors, to transcode their digital videos without setting up infrastructure. Such a service provider charges customers based only on their resource consumption. For both the service provider and the customers, lowering the resource consumption while maintaining quality is valuable and desirable. Thus, choosing a cost-effective encoding parameter configuration is essential, and challenging due to the tradeoff between bitrate, encoding speed, and resulting quality. In this paper, we explore the feasibility of an automatic parameter-tuning framework through which this objective can be achieved. We introduce a simple service model that combines the bitrate and encoding speed into a single value: the encoding cost. Then, we conduct an empirical study to examine the relationship between the encoding cost and various parameter settings. Our experiment is based on the one-pass Constant Rate Factor method in x264, which can achieve relatively stable perceptive quality, and we vary each chosen parameter to observe how the encoding cost changes. The experimental results show that the tested parameters can be independently tuned to minimize the encoding cost, which makes the automatic parameter-tuning framework feasible and promising for optimizing cost on a video encoding cloud.
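The idea of folding bitrate and encoding speed into one scalar cost can be illustrated with a toy model that weighs a bandwidth/storage term against a CPU-time term and scans candidate settings for the minimum. The weights and numbers below are hypothetical, not the paper's service model.

```python
def encoding_cost(bitrate_kbps, encode_fps, w_bitrate=1.0, w_cpu=100.0):
    """Toy scalar cost: a bandwidth term plus a CPU term inversely tied to speed."""
    return w_bitrate * bitrate_kbps + w_cpu / encode_fps

def pick_cheapest(settings):
    """settings: list of (name, bitrate_kbps, encode_fps); returns the cheapest name."""
    return min(settings, key=lambda s: encoding_cost(s[1], s[2]))[0]
```

An automatic tuner would evaluate such a cost over each parameter independently, which is exactly the independence the paper's experiments test for.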

  15. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  16. Recalibrating disease parameters for increasing realism in modeling epidemics in closed settings

    Directory of Open Access Journals (Sweden)

    Livio Bioglio

    2016-11-01

    Full Text Available Abstract Background The homogeneous mixing assumption is widely adopted in epidemic modelling for its parsimony and represents the building block of more complex approaches, including very detailed agent-based models. The latter assume homogeneous mixing within schools, workplaces and households, mostly for the lack of detailed information on human contact behaviour within these settings. The recent data availability on high-resolution face-to-face interactions makes it now possible to assess the goodness of this simplified scheme in reproducing relevant aspects of the infection dynamics. Methods We consider empirical contact networks gathered in different contexts, as well as synthetic data obtained through realistic models of contacts in structured populations. We perform stochastic spreading simulations on these contact networks and in populations of the same size under a homogeneous mixing hypothesis. We adjust the epidemiological parameters of the latter in order to fit the prevalence curve of the contact epidemic model. We quantify the agreement by comparing epidemic peak times, peak values, and epidemic sizes. Results Good approximations of the peak times and peak values are obtained with the homogeneous mixing approach, with a median relative difference smaller than 20 % in all cases investigated. Accuracy in reproducing the peak time depends on the setting under study, while for the peak value it is independent of the setting. Recalibration is found to be linear in the epidemic parameters used in the contact data simulations, showing changes across empirical settings but robustness across groups and population sizes. Conclusions An adequate rescaling of the epidemiological parameters can yield a good agreement between the epidemic curves obtained with a real contact network and a homogeneous mixing approach in a population of the same size. The use of such recalibrated homogeneous mixing approximations would enhance the accuracy and

  17. Recalibrating disease parameters for increasing realism in modeling epidemics in closed settings.

    Science.gov (United States)

    Bioglio, Livio; Génois, Mathieu; Vestergaard, Christian L; Poletto, Chiara; Barrat, Alain; Colizza, Vittoria

    2016-11-14

    The homogeneous mixing assumption is widely adopted in epidemic modelling for its parsimony and represents the building block of more complex approaches, including very detailed agent-based models. The latter assume homogeneous mixing within schools, workplaces and households, mostly for the lack of detailed information on human contact behaviour within these settings. The recent data availability on high-resolution face-to-face interactions makes it now possible to assess the goodness of this simplified scheme in reproducing relevant aspects of the infection dynamics. We consider empirical contact networks gathered in different contexts, as well as synthetic data obtained through realistic models of contacts in structured populations. We perform stochastic spreading simulations on these contact networks and in populations of the same size under a homogeneous mixing hypothesis. We adjust the epidemiological parameters of the latter in order to fit the prevalence curve of the contact epidemic model. We quantify the agreement by comparing epidemic peak times, peak values, and epidemic sizes. Good approximations of the peak times and peak values are obtained with the homogeneous mixing approach, with a median relative difference smaller than 20 % in all cases investigated. Accuracy in reproducing the peak time depends on the setting under study, while for the peak value it is independent of the setting. Recalibration is found to be linear in the epidemic parameters used in the contact data simulations, showing changes across empirical settings but robustness across groups and population sizes. An adequate rescaling of the epidemiological parameters can yield a good agreement between the epidemic curves obtained with a real contact network and a homogeneous mixing approach in a population of the same size. The use of such recalibrated homogeneous mixing approximations would enhance the accuracy and realism of agent-based simulations and limit the intrinsic biases of
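The homogeneous-mixing counterpart used for the recalibration can be as simple as a deterministic SIR model whose parameters are tuned until its prevalence curve matches the contact-network simulation in peak time, peak value, and epidemic size. A minimal Euler-integration sketch follows; the parameter values are hypothetical, and the paper's fits are stochastic.

```python
def sir_curve(beta, gamma, n, i0, steps, dt=0.1):
    """Deterministic homogeneous-mixing SIR; returns the prevalence I(t) list."""
    s, i, r = n - i0, float(i0), 0.0
    prevalence = [i]
    for _ in range(steps):
        new_inf = beta * s * i / n * dt   # mass-action infection term
        new_rec = gamma * i * dt          # recovery term
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        prevalence.append(i)
    return prevalence

def peak_time(prevalence, dt=0.1):
    """Time of the prevalence maximum."""
    return max(range(len(prevalence)), key=prevalence.__getitem__) * dt
```

Recalibration then amounts to adjusting beta (and possibly gamma) so that peak_time and the peak value of this curve match those measured on the empirical contact network.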

  18. Empirical Analysis for the Heat Exchange Effectiveness of a Thermoelectric Liquid Cooling and Heating Unit

    Directory of Open Access Journals (Sweden)

    Hansol Lim

    2018-03-01

Full Text Available This study aims to estimate the performance of a thermoelectric module (TEM) heat pump for simultaneous liquid cooling and heating and to propose empirical models for predicting the heat exchange effectiveness. Experiments were conducted to collect performance data for the TEM heat pump with water as the working fluid. A total of 57 sets of experimental data were statistically analyzed to estimate the effects of each independent variable on the heat exchange effectiveness using analysis of variance (ANOVA). To develop the empirical model, six design parameters were measured: the number of transfer units (NTU) of the heat exchangers (i.e., water blocks), and the inlet water temperatures and water-block temperatures at the cold and hot sides of the TEM. As a result, two polynomial equations predicting the heat exchange effectiveness at the cold and hot sides of the TEM heat pump were derived as functions of the six selected design parameters. The proposed models and the theoretical model of a conventional condenser and evaporator were also compared with additional measurement data to validate the reliability of the proposed models. Two conclusions were drawn: (1) the feasibility of using the TEM heat pump for simultaneous cooling and heating was demonstrated, with a maximum temperature difference of 30 °C between the cold and hot sides of the TEM; and (2) the TEM heat pump differs from a conventional evaporator and condenser, as the comparison between the proposed and theoretical models shows, owing to heat conduction and the Joule effect in the TEM.
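Fitting a polynomial effectiveness model in several design parameters reduces to linear least squares once the polynomial terms are laid out as columns of a design matrix. This generic numpy sketch does not reproduce the paper's six parameters or its specific polynomial terms.

```python
import numpy as np

def fit_linear_model(X, y):
    """Least-squares coefficients for y ≈ X @ beta (X includes a ones column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

Each column of X would hold one polynomial term built from the measured design parameters (NTU, inlet temperatures, water-block temperatures), and y the measured effectiveness.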

  19. Managerial Career Patterns: A Review of the Empirical Evidence

    Science.gov (United States)

    Vinkenburg, Claartje J.; Weber, Torsten

    2012-01-01

    Despite the ubiquitous presence of the term "career patterns" in the discourse about careers, the existing empirical evidence on (managerial) career patterns is rather limited. From this literature review of 33 published empirical studies of managerial and similar professional career patterns found in electronic bibliographic databases, it is…

  20. Time-frequency analysis : mathematical analysis of the empirical mode decomposition.

    Science.gov (United States)

    2009-01-01

Invented over 10 years ago, empirical mode decomposition (EMD) provides a nonlinear time-frequency analysis with the ability to successfully analyze nonstationary signals. Mathematical Analysis of the Empirical Mode Decomposition is a...

  1. Updating an empirical analysis on the proton’s central opacity and asymptotia

    International Nuclear Information System (INIS)

    Fagundes, D A; Menon, M J; Silva, P V R G

    2016-01-01

    We present an updated empirical analysis on the ratio of the elastic (integrated) to the total cross section in the c.m. energy interval from 5 GeV to 8 TeV. As in a previous work, we use a suitable analytical parametrization for that ratio (depending on only four free fit parameters) and investigate three asymptotic scenarios: either the black disk limit or scenarios above or below that limit. The dataset includes now the datum at 7 TeV, recently reported by the ATLAS Collaboration. Our analysis favors, once more, a scenario below the black disk, providing an asymptotic ratio consistent with the rational value 1/3, namely a gray disk limit. Upper bounds for the ratio of the diffractive (dissociative) to the inelastic cross section are also presented. (paper)
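A parametrization of the elastic-to-total ratio with a tunable asymptote can be illustrated as a sigmoid in ln s that approaches a limit R∞ (1/2 for the black disk, 1/3 for the gray-disk scenario favored here). The functional form and parameter values below are placeholders, not the paper's four-parameter fit.

```python
import math

def ratio_elastic_total(s, r_inf=1.0 / 3.0, r0=0.05, s0=100.0, width=8.0):
    """Illustrative sigmoid in ln(s) rising from r0 toward the asymptote r_inf."""
    x = (math.log(s) - math.log(s0)) / width
    sigmoid = 1.0 / (1.0 + math.exp(-x))
    return r0 + (r_inf - r0) * sigmoid
```

Fitting such a form to data with r_inf left free is what lets the analysis discriminate between asymptotic scenarios at, above, or below the black disk limit.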

  2. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  3. Empirical Bayes conditional independence graphs for regulatory network recovery

    Science.gov (United States)

    Mahdi, Rami; Madduri, Abishek S.; Wang, Guoqing; Strulovici-Barel, Yael; Salit, Jacqueline; Hackett, Neil R.; Crystal, Ronald G.; Mezey, Jason G.

    2012-01-01

    Motivation: Computational inference methods that make use of graphical models to extract regulatory networks from gene expression data can have difficulty reconstructing dense regions of a network, a consequence of both computational complexity and unreliable parameter estimation when sample size is small. As a result, identification of hub genes is of special difficulty for these methods. Methods: We present a new algorithm, Empirical Light Mutual Min (ELMM), for large network reconstruction that has properties well suited for recovery of graphs with high-degree nodes. ELMM reconstructs the undirected graph of a regulatory network using empirical Bayes conditional independence testing with a heuristic relaxation of independence constraints in dense areas of the graph. This relaxation allows only one gene of a pair with a putative relation to be aware of the network connection, an approach that is aimed at easing multiple testing problems associated with recovering densely connected structures. Results: Using in silico data, we show that ELMM has better performance than commonly used network inference algorithms including GeneNet, ARACNE, FOCI, GENIE3 and GLASSO. We also apply ELMM to reconstruct a network among 5492 genes expressed in human lung airway epithelium of healthy non-smokers, healthy smokers and individuals with chronic obstructive pulmonary disease assayed using microarrays. The analysis identifies dense sub-networks that are consistent with known regulatory relationships in the lung airway and also suggests novel hub regulatory relationships among a number of genes that play roles in oxidative stress and secretion. Availability and implementation: Software for running ELMM is made available at http://mezeylab.cb.bscb.cornell.edu/Software.aspx. Contact: ramimahdi@yahoo.com or jgm45@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22685074
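The conditional independence tests at the heart of such graph-recovery methods are often built on partial correlation: two genes are linked only if their correlation survives conditioning on others. A minimal first-order sketch of that primitive follows; it is not the ELMM algorithm itself, whose empirical-Bayes testing and constraint relaxation are more involved.

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation of x and y given a single conditioner z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

When x and y are both driven by z but not by each other, the marginal correlation is high while the partial correlation given z is near zero, so no edge is drawn between them.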

  4. Semi-empirical and empirical L X-ray production cross sections for elements with 50 ≤ Z ≤ 92 for protons of 0.5-3.0 MeV

    International Nuclear Information System (INIS)

    Nekab, M.; Kahoul, A.

    2006-01-01

In this contribution we present semi-empirical production cross sections of the main X-ray lines Lα, Lβ and Lγ for elements from Sn to U and for protons with energies from 0.5 to 3.0 MeV. The theoretical X-ray production cross sections are first calculated from the theoretical ionization cross sections of the L_i (i = 1, 2, 3) subshells within the ECPSSR theory. The semi-empirical Lα, Lβ and Lγ cross sections are then deduced by fitting the available experimental data, normalized to their corresponding theoretical values, giving a better representation of the experimental data in some cases. On the other hand, the experimental data are directly fitted to deduce the empirical L X-ray production cross sections. A comparison is made between the semi-empirical cross sections and the empirical cross sections reported in this work, the empirical ones reported by Reis and Jesus [M.A. Reis, A.P. Jesus, Atom. Data Nucl. Data Tables 63 (1996) 1], and those of Strivay and Weber [Strivay, G. Weber, Nucl. Instr. and Meth. B 190 (2002) 112].

  5. Phenomenology and the Empirical Turn

    NARCIS (Netherlands)

    Zwier, Jochem; Blok, Vincent; Lemmens, Pieter

    2016-01-01

    This paper provides a phenomenological analysis of postphenomenological philosophy of technology. While acknowledging that the results of its analyses are to be recognized as original, insightful, and valuable, we will argue that in its execution of the empirical turn, postphenomenology forfeits

  6. Empirical ethics as dialogical practice

    NARCIS (Netherlands)

    Widdershoven, G.A.M.; Abma, T.A.; Molewijk, A.C.

    2009-01-01

    In this article, we present a dialogical approach to empirical ethics, based upon hermeneutic ethics and responsive evaluation. Hermeneutic ethics regards experience as the concrete source of moral wisdom. In order to gain a good understanding of moral issues, concrete detailed experiences and

  7. Empirical processes: theory and applications

    OpenAIRE

    Venturini Sergio

    2005-01-01

    Proceedings of the 2003 Summer School in Statistics and Probability in Torgnon (Aosta, Italy) held by Prof. Jon A. Wellner and Prof. M. Banerjee. The topic presented was the theory of empirical processes with applications to statistics (m-estimation, bootstrap, semiparametric theory).

  8. Worship, Reflection, Empirical Research

    OpenAIRE

    Ding Dong,

    2012-01-01

    In my youth, I was a worshipper of Mao Zedong. From the latter stage of the Mao Era to the early years of Reform and Opening, I began to reflect on Mao and the Communist Revolution he launched. In recent years I’ve devoted myself to empirical historical research on Mao, seeking the truth about Mao and China’s modern history.

  9. Relations between source parameters for large Persian earthquakes

    Directory of Open Access Journals (Sweden)

    Majid Nemati

    2015-11-01

Full Text Available Empirical relationships for magnitude scales and fault parameters were derived using 436 Iranian intraplate earthquakes from recent regional databases, since continental events represent a large portion of the total seismicity of Iran. The relations between different source parameters of the earthquakes were derived using input information provided by the databases after 1900. The suggested equations relate the body-wave, surface-wave and local magnitude scales to the scalar moment of the earthquakes. Also, the dependence of source parameters such as surface and subsurface rupture length and maximum surface displacement on moment magnitude was investigated for some well-documented earthquakes. To this end, ordinary linear regression procedures were employed for all relations. Our evaluations reveal fair agreement between the obtained relations and the equations described in other worldwide and regional works in the literature. The M0-mb and M0-MS equations correlate well with the worldwide relations. Also, both the M0-MS and M0-ML relations agree well with regional studies in Taiwan. The equations derived from this study mainly confirm the results of global investigations of the rupture length of historical and instrumental events. However, some relations, such as MW-MN and MN-ML, differ markedly from available regional works (e.g., American and Canadian).
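The ordinary linear regression used for such source-parameter relations is a one-line fit once the rupture length is taken in log10; a relation of the form Mw = a + b·log10(L) is typical of this literature. The coefficient values in the test below are illustrative, not the paper's fits.

```python
import numpy as np

def fit_magnitude_length(rupture_km, mw):
    """OLS fit of Mw = a + b * log10(L); returns (a, b)."""
    b, a = np.polyfit(np.log10(rupture_km), mw, 1)
    return a, b
```

The same pattern applies to the magnitude-moment relations (e.g., fitting mb or MS against log10 of the scalar moment M0).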

  10. RIA system programming by means of kinetic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Silberring, J; Golda, W [Akademia Medyczna, Krakow (Poland). Dept. of Endocrinology and Metabolism

    1979-12-01

    The insulin-¹²⁵I antibody reaction was optimized with respect to its physical-chemical parameters. After the activation energies Esub(a) and Esub(d), for association and dissociation respectively, were calculated from the experimental data, the theoretical values of the reaction rate constants ksub(a) and ksub(d) were determined, as well as the equilibrium constants K. By means of the empirical formulae, the approximate incubation time for the RIA kit and the maximal percentage of insulin-¹²⁵I binding to antibody (%B) as a function of temperature were computed. The proposed method may be applied to the preparation of new antigen-binder systems (new antibodies, shortening of the incubation time, temperature changes, influence of different ions and kind of buffer). (orig.)
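    The link between activation energies and rate constants described here follows the Arrhenius form; a minimal sketch, with entirely hypothetical pre-exponential factors and activation energies (the paper's actual values are not given in the abstract):

    ```python
    import math

    R = 8.314  # gas constant, J/(mol·K)

    def rate_constant(A, Ea, T):
        """Arrhenius rate constant k = A * exp(-Ea / (R*T)),
        with Ea in J/mol and T in kelvin."""
        return A * math.exp(-Ea / (R * T))

    def equilibrium_constant(ka, kd):
        """Equilibrium (affinity) constant of the antigen-antibody
        reaction as the ratio of association to dissociation rates."""
        return ka / kd

    # Hypothetical values for illustration only:
    ka_37 = rate_constant(A=1e9, Ea=50e3, T=310.15)  # association at 37 °C
    kd_37 = rate_constant(A=1e7, Ea=80e3, T=310.15)  # dissociation at 37 °C
    K = equilibrium_constant(ka_37, kd_37)
    ```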

  11. Empirical solution of Green-Ampt equation using soil conservation service - curve number values

    Science.gov (United States)

    Grimaldi, S.; Petroselli, A.; Romano, N.

    2012-09-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular, widely used rainfall-runoff model for quantifying the total stream-flow volume generated by storm rainfall, but it is not appropriate at sub-daily resolutions. To overcome this drawback, the Green-Ampt (GA) infiltration equation is considered and an empirical solution is proposed and evaluated. The procedure, named CN4GA (Curve Number for Green-Ampt), calibrates the Green-Ampt model parameters by distributing in time the global information provided by the SCS-CN method. The proposed procedure is evaluated by analysing observed rainfall-runoff events; the results show that CN4GA agrees with the observed hydrographs better than the classic SCS-CN method.
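    The classic SCS-CN runoff relation that CN4GA redistributes in time can be sketched directly (standard metric form; the CN value and rainfall depth below are illustrative, not from the paper):

    ```python
    def scs_cn_runoff(P, CN, lam=0.2):
        """SCS-CN direct runoff depth (mm) for a storm rainfall P (mm).
        S is the potential maximum retention and Ia = lam*S the initial
        abstraction (lam = 0.2 is the conventional default)."""
        S = 25400.0 / CN - 254.0       # metric form of the retention relation
        Ia = lam * S
        if P <= Ia:
            return 0.0                  # all rainfall abstracted, no runoff
        return (P - Ia) ** 2 / (P - Ia + S)

    # Illustrative event: 100 mm of rain on a CN = 80 catchment.
    Q = scs_cn_runoff(100.0, 80.0)
    ```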

  12. Roughness-reflectance relationship of bare desert terrain: An empirical study

    International Nuclear Information System (INIS)

    Shoshany, M.

    1993-01-01

    A study of the bidirectional reflectance distribution function (BRDF) in relation to surface roughness properties was conducted in arid land near Fowlers Gap Research Station, New South Wales, Australia. Such an empirical study is necessary for investigating the possibility of determining terrain geomorphological parameters from bidirectional reflectance data. A new apparatus was developed to take accurate hemispherical directional radiance measurements (HDRM), and a digitizer for three-dimensional in situ roughness measurements was also developed. More than 70 hemispherical data sets were collected for various illumination conditions and surface types of desert stony pavements and rocky terrain slopes. In general, most of the surfaces exhibited anisotropic reflection with a major backscattering component. The BRDFs of the different surface types were then examined in relation to their roughness properties as determined by the field digitizer. The results showed that sites considered to differ significantly from a geomorphological point of view do not necessarily produce different BRDFs.

  13. Anterior temporal face patches: A meta-analysis and empirical study

    Directory of Open Access Journals (Sweden)

    Rebecca J. Von Der Heide

    2013-02-01

    Full Text Available Studies of nonhuman primates have reported face-sensitive patches in the ventral anterior temporal lobes (ATL). In humans, ATL resection or damage causes an associative prosopagnosia in which face perception is intact but face memory is compromised. Some fMRI studies have extended these findings using famous and familiar faces. However, it is unclear whether these regions in the human ATL are in locations comparable to those reported in non-human primates, which typically used unfamiliar faces. We present the results of two studies of person memory: a meta-analysis of existing fMRI studies and an empirical fMRI study using optimized imaging parameters. Both studies showed left-lateralized ATL activations to familiar individuals, while novel faces activated the right ATL. Activations to famous faces were quite ventral, similar to what has been reported in monkeys. These findings suggest that face memory-sensitive patches in the human ATL lie in the ventral/polar ATL.

  14. Analysis of WEDM Process Parameters on Surface Roughness and Kerf using Taguchi Method

    Directory of Open Access Journals (Sweden)

    Asfana Banu

    2017-12-01

    Full Text Available The quality of the machined surface plays an essential role in obtaining high-quality engineering parts; the fatigue strength, wear resistance, and corrosion resistance of the workpiece are among the qualities that can be improved. This paper investigates the effect of wire electrical discharge machining (WEDM) process parameters on surface roughness and kerf on stainless steel, using distilled water as the dielectric fluid and brass wire as the tool electrode. The selected process parameters are open voltage, wire speed, wire tension, voltage gap, and off time. Empirical models for the estimation of surface roughness and kerf were developed using the Taguchi method. The analysis revealed that off time has the major influence on surface roughness and kerf. The optimum machining parameters for minimum surface roughness and kerf were found to be 10 V open voltage, 2.84 µs off time, 12 m/min wire speed, 6.3 N wire tension, and 54.91 V voltage gap.
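    Taguchi optimization of "smaller the better" responses such as surface roughness and kerf is driven by the signal-to-noise ratio; a minimal sketch with hypothetical roughness replicates (the paper's measured values are not reproduced here):

    ```python
    import math

    def sn_smaller_the_better(values):
        """Taguchi signal-to-noise ratio for a 'smaller the better'
        response: S/N = -10 * log10(mean(y^2)). Larger S/N is better."""
        n = len(values)
        return -10.0 * math.log10(sum(v * v for v in values) / n)

    # Hypothetical Ra replicates (µm) for two parameter settings:
    sn_setting_a = sn_smaller_the_better([2.1, 2.3, 2.2])
    sn_setting_b = sn_smaller_the_better([1.4, 1.5, 1.4])  # rougher is worse
    best = "B" if sn_setting_b > sn_setting_a else "A"
    ```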

  15. The Impact of Variability of Selected Geological and Mining Parameters on the Value and Risks of Projects in the Hard Coal Mining Industry

    Science.gov (United States)

    Kopacz, Michał

    2017-09-01

    The paper attempts to assess the impact of the variability of selected geological (deposit) parameters on the value and risks of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, and the results were verified for three existing bituminous coal seams. The Monte Carlo simulation was based on the nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with the use of an empirical copula. The calculations take into account the uncertainty about the parameters of the empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of the volatility and correlation of the deposit parameters was analyzed in two aspects: by identifying the overall effect of the correlated variability of the parameters, and by identifying the individual impact of the correlation on the NPV and IRR. For this purpose a differential approach was used, which allows the possible errors in the calculation of these measures to be quantified. Based on the study it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV and the IRR by up to 4.4% and 0.4 percentage points, respectively. It should be noted, however, that the differences in NPV and IRR values can vary significantly, and their interpretation depends on the likelihood of implementation. Generalizing the obtained results, based on the average values, the maximum value of the risk premium under the given calculation conditions of the "X" deposit, and for correspondingly large datasets (greater than 2500), should not be higher than 2.4 percentage points. 
The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be
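    The nonparametric-bootstrap NPV simulation described above can be sketched in miniature; the cash-flow figures, horizon and discount rate below are hypothetical, and resampling annual margins stands in for the paper's correlated deposit-parameter draws via an empirical copula:

    ```python
    import random

    def npv(cash_flows, rate):
        """Net present value of yearly cash flows, first entry at t = 0."""
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    def bootstrap_npv(annual_margins, capex, rate, n_boot=2000, horizon=10, seed=1):
        """Nonparametric bootstrap: resample observed annual margins with
        replacement to build an empirical NPV distribution."""
        rng = random.Random(seed)
        samples = []
        for _ in range(n_boot):
            flows = [-capex] + [rng.choice(annual_margins) for _ in range(horizon)]
            samples.append(npv(flows, rate))
        return samples

    # Hypothetical project: observed yearly margins, 10-year horizon.
    margins = [42.0, 55.0, 61.0, 38.0, 50.0]   # illustrative values
    dist = bootstrap_npv(margins, capex=300.0, rate=0.08)
    ```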

  16. Recent extensions and use of the statistical model code EMPIRE-II - version: 2.17 Millesimo

    International Nuclear Information System (INIS)

    Herman, M.

    2003-01-01

    These lecture notes describe new features of the modular code EMPIRE-2.17, designed to perform comprehensive calculations of nuclear reactions using a variety of nuclear reaction models. Compared to version 2.13, the current release has been extended to include the coupled-channels mechanism, the exciton model, a Monte Carlo approach to preequilibrium emission, the use of microscopic level densities, the width fluctuation correction, detailed calculation of the recoil spectra, and powerful plotting capabilities provided by the ZVView package. The second part of this lecture concentrates on the use of the code in practical calculations, with emphasis on aspects relevant to nuclear data evaluation. In particular, the adjustment of model parameters is discussed in detail. (author)

  17. Empirical evidence for site coefficients in building code provisions

    Science.gov (United States)

    Borcherdt, R.D.

    2002-01-01

    Site-response coefficients, Fa and Fv, used in U.S. building code provisions are based on empirical data for motions up to 0.1 g; for larger motions they are based on theoretical and laboratory results. The Northridge earthquake of 17 January 1994 provided a significant new set of empirical data up to 0.5 g. These data, together with recent site characterizations based on shear-wave velocity measurements, provide empirical estimates of the site coefficients at base accelerations up to 0.5 g for Site Classes C and D. These empirical estimates of Fa and Fv, as well as their decrease with increasing base acceleration level, are consistent at the 95 percent confidence level with those in present building code provisions, with the exception of estimates for Fa at levels of 0.1 and 0.2 g, which are less than the lower confidence bound by amounts up to 13 percent. The site-coefficient estimates are consistent at the 95 percent confidence level with those of several other investigators for base accelerations greater than 0.3 g. These consistencies and present code procedures indicate that changes in the site coefficients are not warranted. Empirical results for base accelerations greater than 0.2 g confirm the need for both a short- and a mid- or long-period site coefficient to characterize site response for purposes of estimating site-specific design spectra.

  18. Empirical research on international environmental migration: a systematic review.

    Science.gov (United States)

    Obokata, Reiko; Veronis, Luisa; McLeman, Robert

    2014-01-01

    This paper presents the findings of a systematic review of scholarly publications that report empirical findings from studies of environmentally-related international migration. There exists a small, but growing accumulation of empirical studies that consider environmentally-linked migration that spans international borders. These studies provide useful evidence for scholars and policymakers in understanding how environmental factors interact with political, economic and social factors to influence migration behavior and outcomes that are specific to international movements of people, in highlighting promising future research directions, and in raising important considerations for international policymaking. Our review identifies countries of migrant origin and destination that have so far been the subject of empirical research, the environmental factors believed to have influenced these migrations, the interactions of environmental and non-environmental factors as well as the role of context in influencing migration behavior, and the types of methods used by researchers. In reporting our findings, we identify the strengths and challenges associated with the main empirical approaches, highlight significant gaps and future opportunities for empirical work, and contribute to advancing understanding of environmental influences on international migration more generally. Specifically, we propose an exploratory framework to take into account the role of context in shaping environmental migration across borders, including the dynamic and complex interactions between environmental and non-environmental factors at a range of scales.

  19. A semi empirical formula for the angular differential number albedo of low-energy photons

    Directory of Open Access Journals (Sweden)

    Marković Srpko

    2005-01-01

    Full Text Available Low-energy photon reflection from water, aluminum, and iron is simulated with the MCNP code and the results are compared with similar Monte Carlo calculations. For the energy range from 60 to 150 keV and for normal incidence of the initial photons, a universal shape of the normalized angular differential number albedo is observed and then fitted, by a curve-fitting procedure, with a second-order polynomial in the polar angle. Finally, a one-parameter formula for the angular differential number albedo is developed and verified for water through comparison with the semi-empirical formulae and Monte Carlo calculations of other authors.
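    The second-order polynomial fit over the polar angle described in the abstract can be sketched as follows; the quadratic shape and its coefficients are synthetic stand-ins, not the paper's fitted albedo:

    ```python
    import numpy as np

    def fit_albedo_shape(theta, albedo):
        """Fit the normalized angular differential number albedo with a
        second-order polynomial in the polar angle theta (radians).
        Returns coefficients (c2, c1, c0) of c2*theta^2 + c1*theta + c0."""
        return np.polyfit(theta, albedo, 2)

    # Synthetic check: recover a known quadratic shape exactly.
    theta = np.linspace(0.0, np.pi / 2, 10)
    albedo = -0.3 * theta**2 + 0.1 * theta + 0.5
    c2, c1, c0 = fit_albedo_shape(theta, albedo)
    ```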

  20. Empirical Productivity Indices and Indicators

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2016-01-01

    textabstractThe empirical measurement of productivity change (or difference) by means of indices and indicators starts with the ex post profit/loss accounts of a production unit. Key concepts are profit, leading to indicators, and profitability, leading to indices. The main task for the productivity

  1. Empirical analysis of consumer behavior

    NARCIS (Netherlands)

    Huang, Yufeng

    2015-01-01

    This thesis consists of three essays in quantitative marketing, focusing on structural empirical analysis of consumer behavior. In the first essay, he investigates the role of a consumer's skill of product usage, and its imperfect transferability across brands, in her product choice. It shows that

  2. Appropriate methodologies for empirical bioethics: it's all relative.

    Science.gov (United States)

    Ives, Jonathan; Draper, Heather

    2009-05-01

    In this article we distinguish between philosophical bioethics (PB), descriptive policy-oriented bioethics (DPOB) and normative policy-oriented bioethics (NPOB). We argue that finding an appropriate methodology for combining empirical data and moral theory depends on the aims of the research endeavour, and that, for the most part, this combination is only required for NPOB. After briefly discussing the debate around the is/ought problem, and suggesting that the two sides of this debate misunderstand one another (one side treats it as a conceptual problem, whilst the other treats it as an empirical claim), we outline and defend a methodological approach to NPOB based on work we have carried out on a project exploring the normative foundations of paternal rights and responsibilities. We suggest that, given the prominent role already played by moral intuition in moral theory, one appropriate way to integrate empirical data and philosophical bioethics is to utilize empirically gathered lay intuition as the foundation for ethical reasoning in NPOB. The method we propose involves a modification of a long-established tradition of non-intervention in qualitative data gathering, combined with a form of reflective equilibrium in which the demands of theory and data are given equal weight and a pragmatic compromise is reached.

  3. Empirical analyses of price formation in the German electricity market - the devil is in the details; Empirische Analysen der Preisbildung am deutschen Elektrizitaetsmarkt - der Teufel steckt im Detail.

    Energy Technology Data Exchange (ETDEWEB)

    Ellersdorfer, I.; Hundt, M.; Sun Ninghong; Voss, A. [Stuttgart Univ. (DE). Inst. fuer Energiewirtschaft und Rationelle Energieanwendung (IER)

    2008-05-15

    In view of the dramatic rise in wholesale prices over the past years, model-based empirical analyses of price formation in the electricity markets have become an important basis for the discussion on competition policy in Germany and Europe. Empirical analyses are usually performed on the basis of optimising fundamental models which describe the power supply system of a country in greater or lesser detail, thus making it possible to determine how power plants must be deployed so as to cover the electricity demand at the lowest possible cost. The task of determining the difference between market price and incremental cost, a parameter frequently used in competition analyses, is beset with many difficulties of a methodological or empirical nature. The present study undertakes the first ever systematic quantification of the influence of existing uncertainties on the results of the model calculations.

  4. Back to the Future Betas: Empirical Asset Pricing of US and Southeast Asian Markets

    Directory of Open Access Journals (Sweden)

    Jordan French

    2016-07-01

    Full Text Available The study adds an empirical perspective on the predictive power of using data from the future to predict future returns. The crux of the traditional Capital Asset Pricing Model (CAPM) methodology is the use of historical data in the calculation of the beta coefficient. This study instead uses a battery of Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH) models, with differing lag and parameter terms, to forecast the variance of the market used in the denominator of the beta formula. The covariance of the portfolio and market returns is assumed to remain constant in the time-varying beta calculations. The data span from 3 January 2005 to 29 December 2014. One ten-year, two five-year, and three three-year sample periods were used, for robustness, with ten different portfolios. Out-of-sample forecasts, mean absolute error (MAE) and mean squared forecast error (MSE) were used to compare the forecasting ability of the ex-ante GARCH models, an artificial neural network, and the standard ex-post market model. We find that the time-varying MGARCH and SGARCH betas performed better in out-of-sample testing than the other ex-ante models, although the simplest approach, the constant ex-post beta, performed as well or better within this empirical study.
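    The core mechanics described here, a GARCH(1,1) variance recursion in the denominator of beta with the covariance held constant, can be sketched as follows; the parameter values are illustrative, not the study's estimates:

    ```python
    def garch_variance_path(returns, omega, alpha, beta):
        """GARCH(1,1) conditional-variance recursion:
        sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
        started at the unconditional variance omega / (1 - alpha - beta)."""
        sigma2 = [omega / (1.0 - alpha - beta)]
        for r in returns[:-1]:
            sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
        return sigma2

    def time_varying_beta(cov_pm, market_sigma2):
        """Beta with a constant portfolio-market covariance (as assumed in
        the abstract) over a time-varying market-variance forecast."""
        return [cov_pm / s2 for s2 in market_sigma2]

    # Illustrative: a large market shock raises the next variance forecast
    # and so lowers the next period's beta.
    path = garch_variance_path([0.1, 0.0, 0.0], omega=1e-5, alpha=0.1, beta=0.8)
    betas = time_varying_beta(2e-4, path)
    ```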

  5. Parameter-free resolution of the superposition of stochastic signals

    Energy Technology Data Exchange (ETDEWEB)

    Scholz, Teresa, E-mail: tascholz@fc.ul.pt [Center for Theoretical and Computational Physics, University of Lisbon (Portugal); Raischel, Frank [Center for Geophysics, IDL, University of Lisbon (Portugal); Closer Consulting, Av. Eng. Duarte Pacheco Torre 1 15º, 1070-101 Lisboa (Portugal); Lopes, Vitor V. [DEIO-CIO, University of Lisbon (Portugal); UTEC–Universidad de Ingeniería y Tecnología, Lima (Peru); Lehle, Bernd; Wächter, Matthias; Peinke, Joachim [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Lind, Pedro G. [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Institute of Physics, University of Osnabrück, Osnabrück (Germany)

    2017-01-30

    This paper presents a direct method to obtain the deterministic and stochastic contributions of the sum of two independent stochastic processes, one of which is an Ornstein–Uhlenbeck process and the other a general (non-linear) Langevin process. The method is able to distinguish between the two stochastic processes, retrieving their corresponding stochastic evolution equations. The framework is based on a recent approach for the analysis of multidimensional Langevin-type stochastic processes in the presence of strong measurement (or observational) noise, here extended so that it imposes neither constraints nor parameters and extracts all coefficients directly from the empirical data sets. Using synthetic data, it is shown that the method yields satisfactory results.

  6. The frontiers of empirical science: A Thomist-inspired critique of ...

    African Journals Online (AJOL)

    2016-07-08

    Jul 8, 2016 … of scientism, is, however, self-destructive of scientism because contrary to its … The theory that only empirical facts have epistemic meaning is supported by the … (2002:1436). The cyclic model lacks empirical verification, …

  7. An improved empirical dynamic control system model of global mean sea level rise and surface temperature change

    Science.gov (United States)

    Wu, Qing; Luu, Quang-Hung; Tkalich, Pavel; Chen, Ge

    2018-04-01

    Having great impact on human lives, global warming and the associated sea level rise are believed to be strongly linked to anthropogenic causes. A statistical approach offers a simple and yet conceptually verifiable combination of remotely connected climate variables and indices, including sea level and surface temperature. We propose an improved statistical reconstruction model based on an empirical dynamic control system, taking into account climate variability and deriving parameters from Monte Carlo cross-validation random experiments. For the historic data from 1880 to 2001, we obtained higher correlations than other dynamic empirical models. The averaged root mean square errors are reduced in both reconstructed fields, namely the global mean surface temperature (by 24-37%) and the global mean sea level (by 5-25%). Our model is also more robust, as it notably diminishes the instability associated with varying initial values. These results suggest that the model not only significantly enhances the global mean reconstructions of temperature and sea level but may also have the potential to improve future projections.
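    Monte Carlo cross-validation, the parameter-selection device named in the abstract, consists of repeated random train/test splits; a generic sketch with a toy constant-mean model (the paper's actual reconstruction model is not reproduced):

    ```python
    import random

    def monte_carlo_cv(x, y, fit, predict, n_splits=100, test_frac=0.3, seed=0):
        """Monte Carlo cross-validation: repeated random train/test splits,
        returning the mean test RMSE of the fitted model."""
        rng = random.Random(seed)
        n = len(x)
        n_test = max(1, int(test_frac * n))
        rmses = []
        for _ in range(n_splits):
            idx = list(range(n))
            rng.shuffle(idx)
            test, train = idx[:n_test], idx[n_test:]
            params = fit([x[i] for i in train], [y[i] for i in train])
            errs = [(predict(params, x[i]) - y[i]) ** 2 for i in test]
            rmses.append((sum(errs) / len(errs)) ** 0.5)
        return sum(rmses) / len(rmses)

    # Toy model for illustration: fit a mean and predict it everywhere.
    fit_mean = lambda xs, ys: sum(ys) / len(ys)
    predict_mean = lambda m, _x: m
    ```

    In the paper's setting, `fit` would estimate the control-system parameters on each random training subset, and the split minimizing the averaged test error would guide the final parameter choice.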

  8. Standardless quantification approach of TXRF analysis using fundamental parameter method

    International Nuclear Information System (INIS)

    Szaloki, I.; Taniguchi, K.

    2000-01-01

    A new standardless evaluation procedure based on the fundamental parameter method (FPM) has been developed for TXRF analysis. The theoretical calculation describes the relationship between the characteristic intensities and the geometrical parameters of the excitation and detection system and the specimen parameters: size, thickness, angle of the excitation beam to the surface, and the optical properties of the specimen holder. Most TXRF methods apply empirical calibration, which requires a special preparation technique. However, the characteristic lines of the specimen holder (Si Kα,β) carry information on the local excitation and geometrical conditions at the substrate surface. From the theoretically calculated substrate characteristic intensity, the excitation beam flux can be approximated. Taking into consideration the elements present in the specimen material, a system of non-linear equations can be written involving the unknown concentration values and the geometrical and detection parameters. To solve this mathematical problem, PASCAL software was written which calculates the sample composition and the average sample thickness by a gradient algorithm. This quantitative estimation of the specimen composition therefore requires neither an external nor an internal standard sample. For verification of the theoretical calculation and the numerical procedure, several experiments were carried out using mixed standard solutions containing the elements K, Sc, V, Mn, Co and Cu in the 0.1-10 ppm concentration range. (author)

  9. Comments on: understanding the Larson--Miller parameter, by F.T. Furillo, S. Purushothaman and J. K. Tien

    International Nuclear Information System (INIS)

    DiMelfi, R.J.

    1978-01-01

    The Larson--Miller parameter has been a useful tool in the handling of creep rupture data, essentially because, as an empirical scheme, it has so often succeeded at correlating such data over a wide range of experimental conditions. Its use has proved invaluable to studies of nuclear fuel cladding embrittlement in reactor environments under diverse loading conditions, and the kind of master plot generated with this parameter is very helpful for predicting low-stress, long-time rupture behavior. The basic premise behind the success of the Larson--Miller method lies in the fortuitous elimination of one of the two independent variables that can be controlled during a creep test: the absolute temperature T and the applied stress sigma. When a test sample is stressed at temperature, it fails after a time t/sub r/ that depends upon the values of some unspecified, and usually unknown, material parameters. Larson and Miller (2) presented a simple expression involving these creep rupture variables which is a function of the applied stress only: LMP = T(C + log t/sub r/) = f(sigma). The quantity C is specified as a material constant, but is roughly 20 for a wide variety of commercial alloys. The elimination of the temperature dependence allows one to construct master rupture curves; this is the power of the Larson--Miller method. Once the function f(sigma) is determined graphically, regardless of its algebraic form, only the stress is needed in order to know LMP, and knowing LMP, one can calculate t/sub r/ as a function of temperature. The paper under discussion does not serve to further an understanding of this useful empirical parameter. In fact, the authors' derivation leads to a parameter similar in form to Larson and Miller's, except that it is a function of both stress and temperature, thereby defeating the purpose of the method.
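    The use of the parameter described above, computing LMP at one test condition and inverting it to predict rupture time at another temperature for the same stress, can be sketched directly (the temperatures and rupture time below are hypothetical):

    ```python
    import math

    def larson_miller(T, t_r, C=20.0):
        """Larson-Miller parameter LMP = T * (C + log10(t_r)),
        with T in kelvin, rupture time t_r in hours, and C ~ 20
        for many commercial alloys."""
        return T * (C + math.log10(t_r))

    def rupture_time(lmp, T, C=20.0):
        """Invert the parameter: predicted rupture time at temperature T
        for the same stress (i.e., the same LMP)."""
        return 10.0 ** (lmp / T - C)

    # Hypothetical test: rupture after 1000 h at 900 K; predict the
    # (shorter) rupture time at 950 K for the same stress level.
    lmp = larson_miller(900.0, 1000.0)
    t_r_950 = rupture_time(lmp, 950.0)
    ```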

  10. Empirical Analysis of Closed-Loop Duopoly Advertising Strategies

    OpenAIRE

    Gary M. Erickson

    1992-01-01

    Closed-loop (perfect) equilibria in a Lanchester duopoly differential game of advertising competition are used as the basis for empirical investigation. Two systems of simultaneous nonlinear equations are formed, one from a general Lanchester model and one from a constrained model. Two empirical applications are conducted. In one involving Coca-Cola and Pepsi-Cola, a formal statistical testing procedure is used to detect whether closed-loop equilibrium advertising strategies are used by the c...

  11. Development of efficient air-cooling strategies for lithium-ion battery module based on empirical heat source model

    International Nuclear Information System (INIS)

    Wang, Tao; Tseng, K.J.; Zhao, Jiyun

    2015-01-01

    Thermal modeling is the key issue in the thermal management of lithium-ion battery systems, and cooling strategies need to be carefully investigated to keep the temperature of batteries in operation within a narrow optimal range, as well as to provide cost-effective and energy-saving solutions for the cooling system. This article reviews and summarizes past cooling methods, especially forced air cooling, and introduces an empirical heat source model which can be widely applied in battery module/pack thermal modeling. In the development of the empirical heat source model, a three-dimensional computational fluid dynamics (CFD) method is employed, and thermal insulation experiments are conducted to provide the key parameters. A transient thermal model of a 5 × 5 battery module with forced air cooling is then developed based on the empirical heat source model. Thermal behaviors of the battery module under different air cooling conditions, discharge rates and ambient temperatures are characterized and summarized. Various cooling strategies are simulated and compared in order to obtain an optimal cooling method, and battery fault conditions are predicted from transient simulation scenarios. The temperature distributions and variations during the discharge process are quantitatively described, and it is found that the upper limit of ambient temperature for forced air cooling is 35 °C; when the ambient temperature is lower than 20 °C, forced air cooling is not necessary. - Highlights: • An empirical heat source model is developed for battery thermal modeling. • Different air-cooling strategies on module thermal characteristics are investigated. • Impact of different discharge rates on module thermal responses is investigated. • Impact of ambient temperatures on module thermal behaviors is investigated. • Locations of maximum temperatures under different operation conditions are studied.

  12. Production functions for climate policy modeling. An empirical analysis

    International Nuclear Information System (INIS)

    Van der Werf, Edwin

    2008-01-01

    Quantitative models for climate policy modeling differ in the production structure used and in the sizes of the elasticities of substitution. The empirical foundation for both is generally lacking. This paper estimates the parameters of 2-level CES production functions with capital, labour and energy as inputs, and is the first to systematically compare all nesting structures. Using industry-level data from 12 OECD countries, we find that the nesting structure where capital and labour are combined first, fits the data best, but for most countries and industries we cannot reject that all three inputs can be put into one single nest. These two nesting structures are used by most climate models. However, while several climate policy models use a Cobb-Douglas function for (part of the) production function, we reject elasticities equal to one, in favour of considerably smaller values. Finally we find evidence for factor-specific technological change. With lower elasticities and with factor-specific technological change, some climate policy models may find a bigger effect of endogenous technological change on mitigating the costs of climate policy. (author)
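    The single-nest CES function that the paper cannot reject for most countries and industries can be sketched as follows; the share and substitution parameters below are hypothetical, not the paper's estimates:

    ```python
    def ces_output(K, L, E, shares, rho, A=1.0):
        """Single-nest CES production function with capital K, labour L and
        energy E: Y = A * (a_K*K^rho + a_L*L^rho + a_E*E^rho)^(1/rho).
        The elasticity of substitution is sigma = 1/(1 - rho), so rho -> 0
        recovers the Cobb-Douglas case (sigma = 1) that the paper rejects
        in favour of smaller elasticities."""
        a_K, a_L, a_E = shares
        return A * (a_K * K**rho + a_L * L**rho + a_E * E**rho) ** (1.0 / rho)

    # Hypothetical shares summing to one and a sub-unit elasticity
    # (rho < 0 gives sigma < 1, i.e. poor substitutability):
    y = ces_output(K=100.0, L=50.0, E=20.0, shares=(0.3, 0.5, 0.2), rho=-1.0)
    ```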

  13. Modeling ionospheric foF2 by using empirical orthogonal function analysis

    Directory of Open Access Journals (Sweden)

    E. A

    2011-08-01

    Full Text Available A similar-parameters interpolation method and an empirical orthogonal function (EOF) analysis are used to construct empirical models of the ionospheric foF2, using the observational data from three ground-based ionosonde stations in Japan: Wakkanai (geographic 45.4° N, 141.7° E), Kokubunji (geographic 35.7° N, 140.1° E) and Yamagawa (geographic 31.2° N, 130.6° E), during the years 1971–1987. The impact of different drivers on ionospheric foF2 can be well indicated by choosing appropriate proxies. It is shown that missing data in the original foF2 can be optimally refilled using the similar-parameters method. The characteristics of the base functions and associated coefficients of the EOF model are analyzed. The diurnal variation of the base functions reflects the essential nature of ionospheric foF2, while the coefficients represent the long-term trends. The 1st-order EOF coefficient A1 reflects the components with solar-cycle variation; A1 also contains an evident semi-annual variation component as well as a relatively weak annual fluctuation component, neither of which is as pronounced as the solar-cycle variation. The 2nd-order coefficient A2 contains mainly annual variation components. The 3rd-order coefficient A3 and 4th-order coefficient A4 contain both annual and semi-annual variation components; the seasonal variation, solar rotation oscillation and small-scale irregularities are also included in A4. The amplitude range and trend of all these coefficients depend on the level of solar activity and geomagnetic activity. The reliability and validity of the EOF model are verified by comparison with observational data and with the International Reference Ionosphere (IRI). The agreement between observations and the EOF model is quite good, indicating that the EOF model can reflect the major changes and the temporal distribution characteristics of the mid-latitude ionosphere of the
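    EOF analysis of a time-by-station field, separating spatial base functions from time coefficients such as the A1…A4 discussed above, is commonly computed via SVD of the mean-removed data matrix; a sketch on a synthetic foF2-like field (one dominant mode plus small noise, not the paper's data):

    ```python
    import numpy as np

    def eof_decompose(field, n_modes=2):
        """Empirical orthogonal function analysis of a (time x station)
        field: remove the time mean, then take the SVD. Rows of `eofs`
        are spatial base functions; columns of `pcs` are the associated
        time coefficients (the A1, A2, ... of the abstract)."""
        anom = field - field.mean(axis=0)
        u, s, vt = np.linalg.svd(anom, full_matrices=False)
        pcs = u[:, :n_modes] * s[:n_modes]   # time coefficients
        eofs = vt[:n_modes]                  # spatial patterns
        return pcs, eofs

    # Synthetic field: one sinusoidal mode shared by three "stations".
    rng = np.random.default_rng(0)
    t = np.linspace(0, 4 * np.pi, 200)
    pattern = np.array([1.0, 0.8, 0.6])
    field = np.outer(np.sin(t), pattern) + 0.01 * rng.standard_normal((200, 3))
    pcs, eofs = eof_decompose(field, n_modes=1)
    ```

    With one retained mode, `pcs @ eofs` should reconstruct the anomaly field up to the small noise floor, which the dominant-mode structure of the synthetic data guarantees.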

  14. Empirical projection-based basis-component decomposition method

    Science.gov (United States)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.

  15. Theoretical and Empirical Descriptions of Thermospheric Density

    Science.gov (United States)

    Solomon, S. C.; Qian, L.

    2004-12-01

    The longest-term and most accurate overall description of the density of the upper thermosphere is provided by analysis of changes in the ephemerides of Earth-orbiting satellites. Empirical models of the thermosphere developed in part from these measurements can do a reasonable job of describing thermospheric properties on a climatological basis, but the promise of first-principles global general circulation models of the coupled thermosphere/ionosphere system is that a true high-resolution, predictive capability may ultimately be developed for thermospheric density. However, several issues are encountered when attempting to tune such models so that they accurately represent absolute densities as a function of altitude, and their changes on solar-rotational and solar-cycle time scales. Among these are the crucial ones of getting the heating rates (from both solar and auroral sources) right, getting the cooling rates right, and establishing the appropriate boundary conditions. There are several ancillary issues as well, such as the problem of registering a pressure-coordinate model onto an altitude scale, and dealing with possible departures from hydrostatic equilibrium in empirical models. Thus, tuning a theoretical model to match empirical climatology may be difficult, even in the absence of high temporal or spatial variation of the energy sources. We will discuss some of the challenges involved, and show comparisons of simulations using the NCAR Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIE-GCM) to empirical model estimates of neutral thermosphere density and temperature. We will also show some recent simulations using measured solar irradiance from the TIMED/SEE instrument as input to the TIE-GCM.

  16. EMPIRICAL RESEARCH AND CONGREGATIONAL ANALYSIS ...

    African Journals Online (AJOL)

    empirical research has made to the process of congregational analysis. 1 Part of this ... contextual congregational analysis – meeting social and divine desires") at the IAPT .... methodology of a congregational analysis should be regarded as a process. ... essential to create space for a qualitative and quantitative approach.

  17. Empirical evidence for multi-scaled controls on wildfire size distributions in California

    Science.gov (United States)

    Povak, N.; Hessburg, P. F., Sr.; Salter, R. B.

    2014-12-01

    Ecological theory asserts that regional wildfire size distributions are examples of self-organized critical (SOC) systems. Controls on SOC event-size distributions are, by definition, purely endogenous to the system and include (1) the frequency and pattern of ignitions, (2) the distribution and size of prior fires, and (3) lagged successional patterns after fires. However, recent work has shown that the largest wildfires often result from extreme climatic events, and that patterns of vegetation and topography may help constrain local fire spread, calling into question the SOC model's simplicity. Using an atlas of >12,000 California wildfires (1950-2012) and maximum likelihood estimation (MLE), we fit four different power-law models and broken-stick regressions to fire-size distributions across 16 Bailey's ecoregions. Comparisons among empirical fire-size distributions indicated that most ecoregions' distributions were significantly different, suggesting that broad-scale top-down controls differed among ecoregions. One-parameter power-law models consistently fit a middle range of fire sizes (~100 to 10,000 ha) across most ecoregions, but did not fit the largest and smallest fire sizes. We fit the same four power-law models to patch-size distributions of aspect, slope, and curvature topographies and found that the power-law models fit a similar middle range of topography patch sizes. These results suggest that empirical evidence may exist for topographic controls on fire sizes. To test this, we used neutral landscape modeling techniques to determine whether observed fire edges corresponded with aspect breaks more often than expected by chance. We found significant differences between the empirical and neutral models for some ecoregions, particularly within the middle range of fire sizes. Our results, combined with other recent work, suggest that controls on ecoregional fire size distributions are multi-scaled and likely are not purely SOC.
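    The one-parameter power-law fit mentioned above can be sketched with the standard continuous maximum-likelihood estimator (the Clauset-Shalizi-Newman form); the fire sizes below are synthetic, not the California atlas.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true, x_min = 1.8, 100.0   # exponent and lower cutoff (hectares)

# Inverse-transform sampling from p(x) ~ x^(-alpha) for x >= x_min.
u = rng.uniform(size=50_000)
sizes = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# Continuous MLE of the exponent over the fitted range.
alpha_hat = 1.0 + len(sizes) / np.sum(np.log(sizes / x_min))
```

    In practice the estimate is applied only to the middle range of fire sizes (choosing x_min accordingly), since the abstract reports that the tails do not follow the same power law.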

  18. Chimera states in brain networks: Empirical neural vs. modular fractal connectivity

    Science.gov (United States)

    Chouzouris, Teresa; Omelchenko, Iryna; Zakharova, Anna; Hlinka, Jaroslav; Jiruska, Premysl; Schöll, Eckehard

    2018-04-01

    Complex spatiotemporal patterns, called chimera states, consist of coexisting coherent and incoherent domains and can be observed in networks of coupled oscillators. The interplay of synchrony and asynchrony in complex brain networks is an important aspect in studies of both brain function and disease. We analyse the collective dynamics of FitzHugh-Nagumo neurons in complex networks motivated by their potential application to epileptology and epilepsy surgery. We compare two topologies: an empirical structural neural connectivity derived from diffusion-weighted magnetic resonance imaging and a mathematically constructed network with modular fractal connectivity. We analyse the properties of chimeras and partially synchronized states and obtain regions of their stability in the parameter planes. Furthermore, we qualitatively simulate the dynamics of epileptic seizures and study the influence of the removal of nodes on the network synchronizability, which can be useful for applications to epileptic surgery.
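    A minimal sketch of the building block of such studies: diffusively coupled FitzHugh-Nagumo oscillators, here on a simple ring with nearest-neighbour coupling and explicit Euler integration. The parameters are illustrative and the topology is not the empirical brain network of the paper.

```python
import numpy as np

def simulate_fhn_ring(n=50, eps=0.05, a=0.5, sigma=0.4,
                      dt=0.005, steps=40_000, seed=2):
    """Integrate n FitzHugh-Nagumo units on a ring:
    eps * du/dt = u - u^3/3 - v + sigma * (Laplacian coupling),
         dv/dt = u + a."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-1, 1, n)
    v = rng.uniform(-1, 1, n)
    for _ in range(steps):
        coupling = np.roll(u, 1) + np.roll(u, -1) - 2 * u
        du = (u - u ** 3 / 3 - v + sigma * coupling) / eps
        dv = u + a
        u = u + dt * du
        v = v + dt * dv
    return u, v

u, v = simulate_fhn_ring()
```

    Chimera studies replace the ring Laplacian with the empirical or modular-fractal adjacency matrix and then classify nodes as coherent or incoherent from local order parameters.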

  19. Empirical method to measure stochasticity and multifractality in nonlinear time series

    Science.gov (United States)

    Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping

    2013-12-01

    An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of a time series from a Wiener process, so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed using this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The results show that while developed markets evolve very much like an Ito process, emergent markets are far from efficient. Differences in multifractal structure and leverage effects between developed and emergent markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.
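    One simple way to quantify deviation from a Wiener process, in the spirit of the abstract (though not its exact algorithm), is to estimate the scaling exponent H of the increment variance, Var[x(t+tau) - x(t)] ~ tau^(2H); a Wiener process has H = 0.5, so |H - 0.5| can serve as a stochasticity parameter.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(100_000))   # discrete Wiener path

taus = np.array([1, 2, 4, 8, 16, 32, 64])
variances = np.array([np.var(x[tau:] - x[:-tau]) for tau in taus])

# The slope of log Var versus log tau equals 2H.
slope, _ = np.polyfit(np.log(taus), np.log(variances), 1)
H = slope / 2.0
```

    Applying the same estimator to a financial return series and finding H far from 0.5 would indicate departure from an Ito-like (efficient-market) process.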

  20. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from...

  1. Improved accuracy in estimation of left ventricular function parameters from QGS software with Tc-99m tetrofosmin gated-SPECT. A multivariate analysis

    International Nuclear Information System (INIS)

    Okizaki, Atsutaka; Shuke, Noriyuki; Sato, Junichi; Ishikawa, Yukio; Yamamoto, Wakako; Kikuchi, Kenjiro; Aburano, Tamio

    2003-01-01

    The purpose of this study was to verify whether multivariate analysis improves the accuracy of left ventricular functional parameters derived from gated-SPECT. Ninety-six patients with cardiovascular diseases were studied. Gated-SPECT with the quantitative gated SPECT (QGS) software and left ventriculography (LVG) were performed to obtain left ventricular ejection fraction (LVEF), end-diastolic volume (EDV) and end-systolic volume (ESV). Then, multivariate analyses were performed to determine empirical formulas for predicting these parameters. The calculated values of the left ventricular parameters were compared with those obtained directly from the QGS software and LVG. Multivariate analysis improved the accuracy of LVEF, EDV and ESV estimation. Statistically significant improvement was seen in LVEF (from r=0.6965 to r=0.8093, p<0.05). Although not statistically significant, improvements in correlation coefficients were seen in EDV (from r=0.7199 to r=0.7595, p=0.2750) and ESV (from r=0.5694 to r=0.5871, p=0.4281). The empirical equations obtained by multivariate analysis improved the accuracy of LVEF estimation from gated-SPECT with the QGS software. (author)
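    The multivariate step amounts to an ordinary least-squares fit of an empirical linear formula predicting the LVG-measured LVEF from the QGS-derived parameters. The sketch below uses synthetic data and an assumed generating relation; the coefficients are not those of the study.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 96                                    # mirrors the 96 patients
qgs_lvef = rng.uniform(20, 70, n)         # QGS-derived LVEF (%)
edv = rng.uniform(60, 200, n)             # end-diastolic volume (ml)
esv = edv * (1 - qgs_lvef / 100) + rng.normal(0, 5, n)

# Assumed "true" relation generating the reference LVG measurements.
lvg_lvef = 0.8 * qgs_lvef - 0.05 * edv + 0.02 * esv + 12 + rng.normal(0, 2, n)

# Empirical formula: LVEF_LVG ~ b0*LVEF_QGS + b1*EDV + b2*ESV + b3.
X = np.column_stack([qgs_lvef, edv, esv, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, lvg_lvef, rcond=None)
predicted = X @ coef
r = np.corrcoef(predicted, lvg_lvef)[0, 1]
```

    The improvement reported in the abstract corresponds to r of the multivariate prediction exceeding r of the raw QGS LVEF alone.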

  2. A Multicenter Evaluation of Prolonged Empiric Antibiotic Therapy in Adult ICUs in the United States.

    Science.gov (United States)

    Thomas, Zachariah; Bandali, Farooq; Sankaranarayanan, Jayashri; Reardon, Tom; Olsen, Keith M

    2015-12-01

    The purpose of this study is to determine the rate of prolonged empiric antibiotic therapy in adult ICUs in the United States. Our secondary objective is to examine the relationship between the prolonged empiric antibiotic therapy rate and certain ICU characteristics. Multicenter, prospective, observational, 72-hour snapshot study. Sixty-seven ICUs from 32 hospitals in the United States. Nine hundred ninety-eight patients admitted to the ICU between midnight on June 20, 2011, and June 21, 2011, were included in the study. None. Antibiotic orders were categorized as prophylactic, definitive, empiric, or prolonged empiric antibiotic therapy. Prolonged empiric antibiotic therapy was defined as empiric antibiotics that continued for at least 72 hours in the absence of adjudicated infection. Standard definitions from the Centers for Disease Control and Prevention were used to determine infection. Prolonged empiric antibiotic therapy rate was determined as the ratio of the total number of empiric antibiotics continued for at least 72 hours divided by the total number of empiric antibiotics. Univariate analysis of factors associated with the ICU prolonged empiric antibiotic therapy rate was conducted using Student t test. A total of 660 unique antibiotics were prescribed as empiric therapy to 364 patients. Of the empiric antibiotics, 333 of 660 (50%) were continued for at least 72 hours in instances where Centers for Disease Control and Prevention infection criteria were not met. Suspected pneumonia accounted for approximately 60% of empiric antibiotic use. The most frequently prescribed empiric antibiotics were vancomycin and piperacillin/tazobactam. ICUs that utilized invasive techniques for the diagnosis of ventilator-associated pneumonia had lower rates of prolonged empiric antibiotic therapy than those that did not, 45.1% versus 59.5% (p = 0.03). No other institutional factor was significantly associated with prolonged empiric antibiotic therapy rate. Half of all

  3. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    Science.gov (United States)

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
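    A hedged sketch of the regionalization step: learn a mapping from local environmental characteristics to the calibrated Noah parameters (rs,min, Czil, fxexp) and use it to predict parameters at unmonitored locations. This assumes scikit-learn's ExtraTreesRegressor is available; the features and generating relation are synthetic stand-ins for the FLUXNET sites.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(5)
n_sites = 85                                  # mirrors the 85 FLUXNET sites
features = rng.uniform(0, 1, (n_sites, 4))    # e.g. climate/soil/vegetation indices

# Assumed smooth dependence of the calibrated parameters on the features.
params = np.column_stack([
    40 + 200 * features[:, 0],                      # rs_min
    0.05 + 0.5 * features[:, 1] * features[:, 2],   # Czil
    1 + 3 * features[:, 3],                         # fxexp
])

model = ExtraTreesRegressor(n_estimators=200, random_state=0)
model.fit(features, params)

# "Map" the parameters at a new, unmonitored grid cell.
new_cell = rng.uniform(0, 1, (1, 4))
predicted_params = model.predict(new_cell)[0]
```

    Leave-one-out cross validation, as in the paper, would loop over sites, fitting on 84 and predicting the held-out one.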

  4. Establishing empirical relationships to predict porosity level and corrosion rate of atmospheric plasma-sprayed alumina coatings on AZ31B magnesium alloy

    Directory of Open Access Journals (Sweden)

    D. Thirumalaikumarasamy

    2014-06-01

    Full Text Available Plasma sprayed ceramic coatings are successfully used in many industrial applications, where high wear and corrosion resistance with thermal insulation are required. In this work, empirical relationships were developed to predict the porosity and corrosion rate of alumina coatings by incorporating independently controllable atmospheric plasma spray operational parameters (input power, stand-off distance and powder feed rate) using response surface methodology (RSM). A central composite rotatable design with three factors and five levels was chosen to minimize the number of experimental conditions. Within the scope of the design space, the input power and the stand-off distance appeared to be the two most significant parameters affecting the responses among the three investigated process parameters. A linear regression relationship was also established between porosity and corrosion rate of the alumina coatings. Further, a sensitivity analysis was carried out, and the relative impact of the three process parameters on porosity level and corrosion rate was compared to assess the effect of measurement errors on the uncertainty in the estimated parameters.
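    The response-surface step reduces to fitting a second-order polynomial in the three coded factors (input power P, stand-off distance D, feed rate F) by least squares. Data and coefficients below are synthetic, not the published empirical relationship.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20                              # a 3-factor central composite design has 20 runs
P = rng.uniform(-1.68, 1.68, n)     # coded factor levels (+-1.68 = axial points)
D = rng.uniform(-1.68, 1.68, n)
F = rng.uniform(-1.68, 1.68, n)

# Assumed true quadratic response plus measurement noise.
porosity = (6 + 1.2 * P - 0.8 * D + 0.3 * F + 0.5 * P * D + 0.4 * P ** 2
            + rng.normal(0, 0.05, n))

# Full second-order model: intercept, linear, interaction, quadratic terms.
X = np.column_stack([np.ones(n), P, D, F,
                     P * D, P * F, D * F,
                     P ** 2, D ** 2, F ** 2])
beta, *_ = np.linalg.lstsq(X, porosity, rcond=None)
```

    A true central composite rotatable design would use fixed factorial, axial and center points rather than the random levels used here for brevity.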

  5. Empirical laws, regularity and necessity

    NARCIS (Netherlands)

    Koningsveld, H.

    1973-01-01

    In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject.

    I am referring especially to two well-known views, viz. the regularity and

  6. Psychological Models of Art Reception must be Empirically Grounded

    DEFF Research Database (Denmark)

    Nadal, Marcos; Vartanian, Oshin; Skov, Martin

    2017-01-01

    We commend Menninghaus et al. for tackling the role of negative emotions in art reception. However, their model suffers from shortcomings that reduce its applicability to empirical studies of the arts: poor use of evidence, lack of integration with other models, and limited derivation of testable hypotheses. We argue that theories about art experiences should be based on empirical evidence.

  7. Empirical evaluation and justification of methodologies in psychological science.

    Science.gov (United States)

    Proctor, R W; Capaldi, E J

    2001-11-01

    The purpose of this article is to describe a relatively new movement in the history and philosophy of science, naturalism, a form of pragmatism emphasizing that methodological principles are empirical statements. Thus, methodological principles must be evaluated and justified on the same basis as other empirical statements. On this view, methodological statements may be less secure than the specific scientific theories to which they give rise. The authors examined the feasibility of a naturalistic approach to methodology using logical and historical analysis and by contrasting theories that predict new facts versus theories that explain already known facts. They provide examples of how differences over methodological issues in psychology and in science generally may be resolved using a naturalistic, or empirical, approach.

  8. Computing as Empirical Science – Evolution of a Concept

    Directory of Open Access Journals (Sweden)

    Polak Paweł

    2016-12-01

    Full Text Available This article presents the evolution of philosophical and methodological considerations concerning empiricism in computer/computing science. In this study, we trace the most important events in the history of reflection on computing. The forerunners of Artificial Intelligence, H.A. Simon and A. Newell, started these considerations in their paper Computer Science As Empirical Inquiry (1975). Later the concept of empirical computer science was developed by S.S. Shapiro, P. Wegner, A.H. Eden and P.J. Denning, who showed various empirical aspects of computing. This led to a view of the science of computing (or science of information processing) as a science of general scope. Some interesting contemporary paths towards a generalized perspective on computation are also shown (e.g. natural computing).

  9. Global and nonglobal parameters of horizontal-branch morphology of globular clusters

    International Nuclear Information System (INIS)

    Milone, A. P.; Marino, A. F.; Dotter, A.; Norris, J. E.; Jerjen, H.; Asplund, M.

    2014-01-01

    The horizontal-branch (HB) morphology of globular clusters (GCs) is mainly determined by metallicity. However, the fact that GCs with almost the same metallicity exhibit different HB morphologies demonstrates that at least one more parameter is needed to explain the HB morphology. It has been suggested that one of these should be a global parameter that varies from GC to GC and the other a nonglobal parameter that varies within the GC. In this study we provide empirical evidence corroborating this idea. We used the photometric catalogs obtained with the Advanced Camera for Surveys of the Hubble Space Telescope and analyze the color-magnitude diagrams of 74 GCs. The HB morphology of our sample of GCs has been investigated on the basis of the two new parameters L1 and L2 that measure the distance between the red giant branch and the coolest part of the HB and the color extension of the HB, respectively. We find that L1 correlates with both metallicity and age, whereas L2 most strongly correlates with the mass of the hosting GC. The range of helium abundance among the stars in a GC, characterized by ΔY and associated with the presence of multiple stellar populations, has been estimated in a few GCs to date. In these GCs we find a close relationship among ΔY, GC mass, and L2. We conclude that age and metallicity are the main global parameters, while the range of helium abundance within a GC is the main nonglobal parameter defining the HB morphology of Galactic GCs.

  10. On the Empirical Evidence of Mutual Fund Strategic Risk Taking

    NARCIS (Netherlands)

    Goriaev, A.P.; Nijman, T.E.; Werker, B.J.M.

    2001-01-01

    We reexamine empirical evidence on strategic risk-taking behavior by mutual fund managers. Several studies suggest that fund performance in the first semester of a year influences risk-taking in the second semester. However, we show that previous empirical studies implicitly assume that idiosyncratic

  11. Estimation of static parameters based on dynamical and physical properties in limestone rocks

    Science.gov (United States)

    Ghafoori, Mohammad; Rastegarnia, Ahmad; Lashkaripour, Gholam Reza

    2018-01-01

    Due to the importance of uniaxial compressive strength (UCS), static Young's modulus (ES) and shear wave velocity, it is always worthwhile to predict these parameters from empirical relations suggested for other formations with the same lithology. This paper studies the physical, mechanical and dynamical properties of limestone rocks using the results of laboratory tests carried out on 60 core specimens of the Jahrum and Asmari formations. The core specimens were obtained from the Bazoft dam site, a hydroelectric, double-curvature arch dam in Iran. The dynamic Young's modulus (Ed) and dynamic Poisson ratio were calculated using existing relations. Empirical relations are presented to estimate uniaxial compressive strength, static Young's modulus and shear wave velocity (Vs). Results showed that static parameters such as uniaxial compressive strength and static Young's modulus correlate weakly with water absorption, but correlate strongly with compressional wave velocity and dynamic Young's modulus, respectively. The dynamic Young's modulus was about 5 times larger than the static Young's modulus, and the dynamic Poisson ratio was about 1.3 times larger than the static Poisson ratio. The relationship between shear wave velocity (Vs) and compressional wave velocity (Vp) was a power law with a high positive correlation coefficient. Prediction of uniaxial compressive strength based on Vp was better than that based on Vs. Generally, both UCS and static Young's modulus (ES) had good correlation with Ed.
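    A power-type Vs-Vp relation of the kind reported above is usually fitted in log-log space by least squares. The velocities below are synthetic and the coefficients a, b are assumed for illustration, not those derived for the Bazoft site.

```python
import numpy as np

rng = np.random.default_rng(7)
vp = rng.uniform(3000, 6000, 60)    # compressional velocities (m/s), 60 specimens
# Assumed power relation Vs = a * Vp^b with multiplicative scatter.
vs = 0.6 * vp ** 0.95 * np.exp(rng.normal(0, 0.02, 60))

# log Vs = b * log Vp + log a  =>  linear fit in log space.
b, log_a = np.polyfit(np.log(vp), np.log(vs), 1)
a = np.exp(log_a)
r = np.corrcoef(np.log(vp), np.log(vs))[0, 1]
```

    The same least-squares recipe applies to the UCS-Vp and ES-Ed correlations discussed in the abstract.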

  12. Correlations between plastic deformation parameters and radiation detector quality in HgI2

    International Nuclear Information System (INIS)

    Georgeson, G.; Milstein, F.; California Univ., Santa Barbara

    1989-01-01

    Mercuric iodide radiation detectors of various grades of quality were subjected to shearing forces in the (001) crystallographic planes using a specially designed micromechanical shear testing fixture. Experimental measurements were made of (001) shear stress versus shear strain. Each of the stress-strain curves was described by two empirically determined deformation parameters, s0 and σ, where s0 is a measure of 'bulk yielding' and σ indicates the 'sharpness of yielding' during plastic deformation. It was observed that the deformation parameters of many HgI2 single-crystal samples fit the relation s0 = 8σ^(2/3), and that significant deviation from this relation, with s0 > 8σ^(2/3), indicates poor detector quality. Work hardening by prior plastic deformation was also found to cause s0 to depart (in an increasing manner) from the 8σ^(2/3) relation. For good quality material that has not previously been plastically deformed, the deformation parameter sc = s0 - 2σ < 19 psi; this parameter can be interpreted as the 'onset of plastic yielding'. The results are discussed in terms of dislocation mechanisms for plastic deformation, work hardening, and recovery of work hardening. (orig.)

  13. Weakly intrusive low-rank approximation method for nonlinear parameter-dependent equations

    KAUST Repository

    Giraldi, Loic; Nouy, Anthony

    2017-01-01

    This paper presents a weakly intrusive strategy for computing a low-rank approximation of the solution of a system of nonlinear parameter-dependent equations. The proposed strategy relies on a Newton-like iterative solver which only requires evaluations of the residual of the parameter-dependent equation and of a preconditioner (such as the differential of the residual) for instances of the parameters independently. The algorithm provides an approximation of the set of solutions associated with a possibly large number of instances of the parameters, with a computational complexity which can be orders of magnitude lower than when using the same Newton-like solver for all instances of the parameters. The reduction of complexity requires efficient strategies for obtaining low-rank approximations of the residual, of the preconditioner, and of the increment at each iteration of the algorithm. For the approximation of the residual and the preconditioner, weakly intrusive variants of the empirical interpolation method are introduced, which require evaluations of entries of the residual and the preconditioner. Then, an approximation of the increment is obtained by using a greedy algorithm for low-rank approximation, and a low-rank approximation of the iterate is finally obtained by using a truncated singular value decomposition. When the preconditioner is the differential of the residual, the proposed algorithm is interpreted as an inexact Newton solver for which a detailed convergence analysis is provided. Numerical examples illustrate the efficiency of the method.
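    The empirical interpolation idea at the heart of the method can be sketched with a standard DEIM-style greedy point selection (the paper's weakly intrusive variants refine this, but the skeleton is the same): build an interpolation basis from snapshots so that later only a few entries of the residual need to be evaluated. The parametric function below is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0, 1, 200)
mus = np.linspace(0.1, 1.0, 30)
# Snapshots of a parameter-dependent vector (Gaussian bumps of moving center).
snapshots = np.column_stack([np.exp(-((x - m) ** 2) / 0.01) for m in mus])

# POD basis of the snapshots, then greedy interpolation-point selection.
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
m = 20
basis = U[:, :m]

indices = [int(np.argmax(np.abs(basis[:, 0])))]
for j in range(1, m):
    # Interpolate the next basis vector at the current points and pick the
    # location of the largest interpolation error as the next point.
    c = np.linalg.solve(basis[indices, :j], basis[indices, j])
    residual = basis[:, j] - basis[:, :j] @ c
    indices.append(int(np.argmax(np.abs(residual))))

# Reconstruct a new parameter instance from only the selected entries.
f_new = np.exp(-((x - 0.55) ** 2) / 0.01)
coeff = np.linalg.solve(basis[indices], f_new[indices])
f_approx = basis @ coeff
err = np.max(np.abs(f_approx - f_new))
```

    In the paper's setting, `f_new` would be the residual or preconditioner of the nonlinear equation, so each Newton-like iteration only evaluates it at the selected indices.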

  15. Hour-Ahead Wind Speed and Power Forecasting Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Ying-Yi Hong

    2013-11-01

    Full Text Available Operation of wind power generation in a large farm is quite challenging in a smart grid owing to uncertain weather conditions. Consequently, operators must accurately forecast wind speed/power in the dispatch center to carry out unit commitment, real power scheduling and economic dispatch. This work presents a novel method based on the integration of empirical mode decomposition (EMD) with artificial neural networks (ANN) to forecast the short-term (1 h ahead) wind speed/power. First, significant parameters for training the ANN are identified using the correlation coefficients. These significant parameters serve as inputs of the ANN. Owing to the volatile and intermittent wind speed/power, the historical time series of wind speed/power is decomposed into several intrinsic mode functions (IMFs) and a residual function through EMD. Each IMF becomes less volatile and therefore increases the accuracy of the neural network. The final forecasting results are achieved by aggregating all individual forecasting results from all IMFs and their corresponding residual functions. Real data related to the wind speed and wind power measured at a wind-turbine generator in Taiwan are used for simulation. The wind speed forecasting and wind power forecasting for the four seasons are studied. Comparative studies between the proposed method and traditional methods (i.e., artificial neural network without EMD, autoregressive integrated moving average (ARIMA), and the persistence method) are also introduced.
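    A much-simplified sketch of one EMD sifting pass: find local extrema, build upper and lower envelopes (linear interpolation here, where cubic splines are customary), and subtract the envelope mean. Iterating this, and subtracting each converged IMF from the signal, yields the IMFs that the forecasting method feeds to its neural networks.

```python
import numpy as np

def sift_once(x):
    """One sifting iteration: subtract the mean of the extremal envelopes."""
    idx = np.arange(len(x))
    maxima = idx[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]
    minima = idx[1:-1][(x[1:-1] < x[:-2]) & (x[1:-1] < x[2:])]
    upper = np.interp(idx, maxima, x[maxima])   # linear envelopes for brevity
    lower = np.interp(idx, minima, x[minima])
    return x - 0.5 * (upper + lower)

# Synthetic two-tone "wind" signal: a fast mode riding on a slow mode.
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)

candidate_imf = signal
for _ in range(10):                # fixed number of sifting iterations
    candidate_imf = sift_once(candidate_imf)
residue = signal - candidate_imf   # carries the slower component
```

    The first IMF should isolate the fast oscillation; repeating the whole procedure on the residue extracts the slower modes one by one.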

  16. 'Nobody tosses a dwarf!' The relation between the empirical and the normative reexamined.

    Science.gov (United States)

    Leget, Carlo; Borry, Pascal; de Vries, Raymond

    2009-05-01

    This article discusses the relation between empirical and normative approaches in bioethics. The issue of dwarf tossing, while admittedly unusual, is chosen as a point of departure because it challenges the reader to look with fresh eyes upon several central bioethical themes, including human dignity, autonomy, and the protection of vulnerable people. After an overview of current approaches to the integration of empirical and normative ethics, we consider five ways that the empirical and normative can be brought together to speak to the problem of dwarf tossing: prescriptive applied ethics, theoretical ethics, critical applied ethics, particularist ethics and integrated empirical ethics. We defend a position of critical applied ethics that allows for a two-way relation between empirical and normative theories. Against efforts fully to integrate the normative and the empirical into one synthesis, we propose that the two should stand in tension and relation to one another. The approach we endorse acknowledges that a social practice can and should be judged both by the gathering of empirical data and by normative ethics. Critical applied ethics uses a five stage process that includes: (a) determination of the problem, (b) description of the problem, (c) empirical study of effects and alternatives, (d) normative weighing and (e) evaluation of the effects of a decision. In each stage, we explore the perspective from both the empirical (sociological) and the normative ethical point of view. We conclude by applying our five-stage critical applied ethics to the example of dwarf tossing.

  17. A new empirical formula for 14-15 MeV neutron-induced (n, p) reaction cross sections

    International Nuclear Information System (INIS)

    Tel, E; Sarer, B; Okuducu, S; Aydin, A; Tanir, G

    2003-01-01

    In this study, we suggest a new empirical formula to reproduce the cross sections of (n, p) reactions for 14-15 MeV incident neutron energy. The formula, obtained using the asymmetry parameters, represents a modification of the original formula of Levkovskii. The modified formulae yielded cross sections with smaller χ² deviations from experimental values, and ratios much closer to unity, compared with those calculated using Levkovskii's original formula. The results are discussed and compared with the existing formulae, and found to be in good agreement when used to correlate the available experimental σ(n, p) data of different nuclei
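    A sketch of an asymmetry-parameter systematics of the Levkovskii form, σ(n,p) ≈ C·(A^(1/3)+1)²·exp(−K·(N−Z)/A). The constants C (mb) and K below are the commonly quoted values for the original Levkovskii formula, not the modified parameters proposed in this work.

```python
import numpy as np

def sigma_np(A, Z, C=45.2, K=33.0):
    """Levkovskii-type estimate of the 14-15 MeV (n, p) cross section in mb.

    A: mass number, Z: proton number; s = (N - Z)/A is the asymmetry
    parameter that drives the exponential suppression."""
    N = A - Z
    s = (N - Z) / A
    return C * (A ** (1.0 / 3.0) + 1.0) ** 2 * np.exp(-K * s)
```

    The exponential makes the cross section fall steeply with neutron excess, which is the trend the modified formula is tuned to reproduce more accurately.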

  18. Calculations of the electronic levels, spin-Hamiltonian parameters and vibrational spectra for the CrCl3 layered crystals

    Energy Technology Data Exchange (ETDEWEB)

    Avram, C.N. [Faculty of Physics, West University of Timisoara, Bd. V. Parvan No. 4, 300223 Timisoara (Romania); Gruia, A.S., E-mail: adigruia@yahoo.com [Faculty of Physics, West University of Timisoara, Bd. V. Parvan No. 4, 300223 Timisoara (Romania); Brik, M.G. [College of Sciences, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Institute of Physics, University of Tartu, Ravila 14C, Tartu 50411 (Estonia); Institute of Physics, Jan Dlugosz University, Armii Krajowej 13/15, PL-42200 Czestochowa (Poland); Institute of Physics, Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw (Poland); Barb, A.M. [Faculty of Physics, West University of Timisoara, Bd. V. Parvan No. 4, 300223 Timisoara (Romania)

    2015-12-01

    Calculations of the Cr3+ energy levels, spin-Hamiltonian parameters and vibrational spectra for the layered CrCl3 crystals are reported for the first time. The crystal field parameters and the energy level scheme were calculated in the framework of the Exchange Charge Model of crystal field. The spin-Hamiltonian parameters (zero-field splitting parameter D and g-factors) for the Cr3+ ion in CrCl3 crystals were obtained using two independent techniques: i) semi-empirical crystal field theory and ii) a density functional theory (DFT)-based model. In the first approach, the spin-Hamiltonian parameters were calculated by the perturbation theory method and by the complete diagonalization (of the energy matrix) method. The infrared (IR) and Raman frequencies were calculated for both the experimental and the fully optimized geometry of the crystal structure, using the CRYSTAL09 software. The obtained results are discussed and compared with the available experimental data.

  19. Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui

    2018-02-01

    Rolling element bearings are widely used in various industrial machines, such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings is beneficial for preventing unexpected accidents and reducing economic loss. In past years, many bearing fault detection methods have been developed. Recently, a new adaptive signal processing method called the empirical wavelet transform has attracted much attention from researchers and engineers, and its applications to bearing fault diagnosis have been reported. The main problem of the empirical wavelet transform is that the Fourier segments it requires depend strongly on the local maxima of the amplitudes of the Fourier spectrum of a signal, which means that the Fourier segments are not always reliable and effective if the Fourier spectrum of the signal is complicated and overwhelmed by heavy noise and other strong vibration components. In this paper, a sparsity-guided empirical wavelet transform is proposed to automatically establish the Fourier segments required in the empirical wavelet transform for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed sparsity-guided empirical wavelet transform. Results show that the proposed method can automatically discover the required Fourier segments and reveal single and multiple railway axle bearing defects. In addition, comparisons with three popular signal processing methods, namely ensemble empirical mode decomposition, the fast kurtogram and the fast spectral correlation, are conducted to highlight the superiority of the proposed method.
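    The dependence of Fourier segments on the local maxima of the spectrum, as described in the abstract, can be illustrated with a minimal sketch. This is not the paper's sparsity-guided algorithm; the function name and the peak-selection rule are simplified assumptions:

```python
import numpy as np

def fourier_segments(x, n_segments, fs=1.0):
    """Illustrative construction of EWT-style Fourier segment boundaries.

    Rank the local maxima of the magnitude spectrum, keep the strongest ones,
    and place one boundary at the lowest point between consecutive retained
    maxima. Real EWT implementations (and the sparsity-guided variant) are
    more elaborate; this only shows why noisy spectra make segments fragile.
    """
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # interior local maxima of the magnitude spectrum
    peaks = [i for i in range(1, len(mag) - 1)
             if mag[i] > mag[i - 1] and mag[i] >= mag[i + 1]]
    # keep the n_segments largest peaks, in frequency order
    keep = sorted(sorted(peaks, key=lambda i: mag[i], reverse=True)[:n_segments])
    bounds = []
    for a, b in zip(keep[:-1], keep[1:]):
        valley = a + int(np.argmin(mag[a:b + 1]))   # spectral minimum between peaks
        bounds.append(freqs[valley])
    return bounds

# Two well-separated tones at 5 Hz and 20 Hz: one boundary falls between them.
fs = 128.0
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
print(fourier_segments(x, 2, fs=fs))  # one boundary between 5 and 20 Hz
```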

  20. Gazprom the new russian empire

    International Nuclear Information System (INIS)

    Cosnard, D.

    2004-01-01

    The author analyzes the economic and political impact, in Russia, of the giant Gazprom group, leader of the Russian energy sector. Already number one in the world gas industry, the Group is becoming the right hand of the Kremlin. The author therefore questions this empire's transparency and limits. (A.L.B.)

  1. Collective Labour Supply, Taxes, and Intrahousehold Allocation: An Empirical Approach

    NARCIS (Netherlands)

    Bloemen, H.G.

    2017-01-01

    Most empirical studies of the impact of labour income taxation on the labour supply behaviour of households use a unitary modelling approach. In this paper we empirically analyze income taxation and the choice of working hours by combining the collective approach for household behaviour and the

  2. Pairs of chalcogen impurities in silicon

    International Nuclear Information System (INIS)

    Paula Junior, H.F. de.

    1988-01-01

    The electronic structure of complex defects in silicon involving oxygen and sulfur (O-O, S-O and S-S), occupying different positions in the host crystal, is studied. It is shown that many-electron effects (via configuration interaction) are important for describing the correct ground state. The orbital basis set is obtained through the LCAO-MO-INDO/S method. (author) [pt

  3. Empirical evaluation of cross-site reproducibility in radiomic features for characterizing prostate MRI

    Science.gov (United States)

    Chirra, Prathyush; Leo, Patrick; Yim, Michael; Bloch, B. Nicolas; Rastinehad, Ardeshir R.; Purysko, Andrei; Rosen, Mark; Madabhushi, Anant; Viswanath, Satish

    2018-02-01

    The recent advent of radiomics has enabled the development of prognostic and predictive tools which use routine imaging, but a key question that still remains is how reproducible these features may be across multiple sites and scanners. This is especially relevant for MRI data, where signal intensity values lack tissue-specific, quantitative meaning and depend on acquisition parameters (magnetic field strength, image resolution, type of receiver coil). In this paper we present the first empirical study of the reproducibility of 5 different radiomic feature families in a multi-site setting; specifically, for characterizing prostate MRI appearance. Our cohort comprised 147 patient T2w MRI datasets from 4 different sites, all of which were first pre-processed to correct for acquisition-related artifacts such as bias field, differing voxel resolutions, and intensity drift (non-standardness). 406 3D voxel-wise radiomic features were extracted and evaluated in a cross-site setting to determine how reproducible they were within a relatively homogeneous non-tumor tissue region, using 2 different measures of reproducibility: the Multivariate Coefficient of Variation and the Instability Score. Our results demonstrated that Haralick features were the most reproducible across all 4 sites. By comparison, Laws features were among the least reproducible between sites, and performed highly variably across their entire parameter space. Similarly, the Gabor feature family demonstrated good cross-site reproducibility, but only for certain parameter combinations. These trends indicate that despite extensive pre-processing, only a subset of radiomic features and associated parameters may be reproducible enough for use within radiomics-based machine learning classifier schemes.
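    A simplified stand-in for a cross-site reproducibility check is the per-feature coefficient of variation of site-level means. The actual Multivariate Coefficient of Variation and Instability Score used in the study are more involved, and the data below are synthetic:

```python
import numpy as np

def cross_site_cv(feature_by_site):
    """Per-feature coefficient of variation across site-level means.

    feature_by_site: dict mapping site name -> (n_patients, n_features) array.
    A low CV across sites suggests the feature is more reproducible. This is
    a simplified proxy for the reproducibility measures used in the study.
    """
    site_means = np.stack([np.mean(v, axis=0) for v in feature_by_site.values()])
    return np.std(site_means, axis=0) / np.abs(np.mean(site_means, axis=0))

# Synthetic example: feature 0 agrees across sites, feature 1 drifts badly.
rng = np.random.default_rng(0)
data = {
    "siteA": rng.normal([10.0, 5.0], [0.1, 2.0], size=(30, 2)),
    "siteB": rng.normal([10.1, 9.0], [0.1, 2.0], size=(30, 2)),
    "siteC": rng.normal([9.9, 2.0], [0.1, 2.0], size=(30, 2)),
}
cv = cross_site_cv(data)
print(cv)  # feature 0 is far more stable across sites than feature 1
```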

  4. HMM filtering and parameter estimation of an electricity spot price model

    International Nuclear Information System (INIS)

    Erlwein, Christina; Benth, Fred Espen; Mamon, Rogemar

    2010-01-01

    In this paper we develop a model for electricity spot price dynamics. The spot price is assumed to follow an exponential Ornstein-Uhlenbeck (OU) process with an added compound Poisson process. In this way, the model allows for mean reversion and possible jumps. All parameters are modulated by a hidden Markov chain in discrete time. They are able to switch between different economic regimes representing the interaction of various factors. Through the application of the reference probability technique, adaptive filters are derived, which in turn provide optimal estimates for the state of the Markov chain and related quantities of the observation process. The EM algorithm is applied to find optimal estimates of the model parameters in terms of the recursive filters. We implement this self-calibrating model on a deseasonalised series of daily spot electricity prices from the Nordic exchange Nord Pool. On the basis of one-step-ahead forecasts, we find that the model is able to capture the empirical characteristics of Nord Pool spot prices. (author)
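    A minimal simulation of the base dynamics, an exponential OU process with compound Poisson jumps, before any regime switching is layered on top. All parameter values here are illustrative, not the paper's estimates:

```python
import numpy as np

def simulate_spot(n_days, x0=3.5, kappa=0.1, mu=3.5, sigma=0.05,
                  jump_rate=0.02, jump_scale=0.3, seed=0):
    """Simulate S_t = exp(X_t), with X an OU process plus compound Poisson jumps.

    Euler discretisation with dt = 1 day. In the paper all of these
    parameters would additionally be modulated by a hidden Markov chain;
    the values used here are illustrative only.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n_days)
    x[0] = x0
    for t in range(1, n_days):
        # a jump arrives with probability jump_rate on any given day
        jump = rng.normal(0.0, jump_scale) if rng.random() < jump_rate else 0.0
        x[t] = x[t - 1] + kappa * (mu - x[t - 1]) + sigma * rng.normal() + jump
    return np.exp(x)

prices = simulate_spot(365)
print(prices[:5])
```

The exponential keeps simulated prices positive, while kappa pulls the log-price back toward its long-run mean between jumps.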

  5. DEVELOPMENT OF QUARRY SOLUTION VERSION 1.0 FOR QUICK COMPUTATION OF DRILLING AND BLASTING PARAMETERS

    OpenAIRE

    B. ADEBAYO; A. W. BELLO

    2014-01-01

    Computation of drilling cost, quantity of explosives and blasting cost are routine procedures in quarries, and all these parameters are estimated manually in most of the quarries in Nigeria. This paper deals with the development of the application package QUARRY SOLUTION Version 1.0 for quarries, using Visual Basic 6.0. To achieve this, data on drilling and blasting activities were obtained from the quarry. Also, empirical formulae developed by different researchers were used for computat...

  6. Empirical Fit to Inelastic Electron-Deuteron and Electron-Neutron Resonance Region Transverse Cross Sections

    International Nuclear Information System (INIS)

    Peter Bosted; M. E. Christy

    2007-01-01

    An empirical fit is described to measurements of inclusive inelastic electron-deuteron cross sections in the kinematic range of four-momentum transfer 0 ≤ Q² and final-state invariant mass W above 1.2 GeV, using the ratio R_p of longitudinal to transverse cross sections for the proton and the assumption R_p = R_n. The underlying fit parameters describe the average cross section for the proton and neutron, with a plane-wave impulse approximation (PWIA) used to fit the deuteron data. Pseudo-data from MAID 2007 were used to constrain the average nucleon cross sections for W < 1.2 GeV. The mean deviation of the data from the fit is 3%, with less than 5% of the data points deviating from the fit by more than 10%.

  7. Empirical and theoretical challenges in aboveground-belowground ecology

    DEFF Research Database (Denmark)

    W.H. van der Putten,; R.D. Bardgett; P.C. de Ruiter

    2009-01-01

    …and environmental settings, we explore where and how they can be supported by theoretical approaches to develop testable predictions and to generalise empirical results. We review four key areas where a combined aboveground-belowground approach offers perspectives for enhancing ecological understanding, namely… of the current conceptual succession models into more predictive models can help targeting empirical studies and generalising their results. Then, we discuss how understanding succession may help to enhance managing arable crops, grasslands and invasive plants, as well as provide insights into the effects…

  8. Unveiling the checkered fortunes of the Ottoman Empire

    OpenAIRE

    Dimitrova-Grajzl, Valentina

    2013-01-01

    The Ottoman Empire has been predominantly viewed as the "Sick Man of Europe." The question arises, however, how this perceived inefficiency can be reconciled with the long existence and prosperity of the Empire. I argue that the Ottoman system could have been efficient subject to constraints. More specifically, I explore the role of the technology of predation and the adherence to the law in determining relative changes in the social order and the power of the Sultan, which in turn led to the...

  9. Biological data assimilation for parameter estimation of a phytoplankton functional type model for the western North Pacific

    Science.gov (United States)

    Hoshiba, Yasuhiro; Hirata, Takafumi; Shigemitsu, Masahito; Nakano, Hideyuki; Hashioka, Taketo; Masuda, Yoshio; Yamanaka, Yasuhiro

    2018-06-01

    Ecosystem models are used to understand ecosystem dynamics and ocean biogeochemical cycles, and they require optimum physiological parameters to best represent biological behaviours. These physiological parameters are often tuned empirically, even as ecosystem models have evolved to include ever more of them. We developed a three-dimensional (3-D) lower-trophic-level marine ecosystem model known as the Nitrogen, Silicon and Iron regulated Marine Ecosystem Model (NSI-MEM) and employed biological data assimilation using a micro-genetic algorithm to estimate 23 physiological parameters for two phytoplankton functional types in the western North Pacific. The parameter estimation was based on a one-dimensional simulation that referenced satellite data for constraining the physiological parameters. The 3-D NSI-MEM optimized by the data assimilation improved the timing of a modelled plankton bloom in the subarctic and subtropical regions compared to the model without data assimilation. Furthermore, the model was able to improve not only the surface concentrations of phytoplankton but also their subsurface maximum concentrations. Our results showed that surface data assimilation of physiological parameters from two contrasting observatory stations benefits the representation of the vertical plankton distribution in the western North Pacific.
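    The flavour of a micro-genetic algorithm for parameter estimation can be sketched for a single parameter: a tiny population, elitism, and random restarts when the population collapses. The logistic toy model and all settings below are our own illustration, not the NSI-MEM setup:

```python
import numpy as np

def micro_ga(loss, lo, hi, pop_size=5, generations=200, seed=0):
    """Minimal micro-genetic algorithm for a 1-D parameter (illustrative).

    Tiny population, elitism, and random restarts on convergence are the
    defining traits of a micro-GA. The actual NSI-MEM study searched a
    23-dimensional physiological parameter space.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, pop_size)
    best = min(pop, key=loss)
    for _ in range(generations):
        if np.ptp(pop) < 1e-3 * (hi - lo):           # population collapsed: restart
            pop = rng.uniform(lo, hi, pop_size)
        parents = sorted(pop, key=loss)[:2]          # truncation selection
        children = [np.clip(0.5 * (parents[0] + parents[1])
                            + rng.normal(0, 0.05 * (hi - lo)), lo, hi)
                    for _ in range(pop_size - 1)]
        pop = np.array([best] + children)            # elitism keeps the best
        best = min([best] + children, key=loss)
    return best

# Recover a known growth rate r* = 0.7 from noiseless synthetic observations.
t = np.linspace(0, 10, 50)
obs = 1.0 / (1.0 + np.exp(-0.7 * (t - 5.0)))
loss = lambda r: np.sum((1.0 / (1.0 + np.exp(-r * (t - 5.0))) - obs) ** 2)
print(micro_ga(loss, 0.0, 2.0))  # close to 0.7
```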

  10. Teaching "Empire of the Sun."

    Science.gov (United States)

    Riet, Fred H. van

    1990-01-01

    A Dutch teacher presents reading, film viewing, and writing activities for "Empire of the Sun," J. G. Ballard's autobiographical account of life as a boy in Shanghai and in a Japanese internment camp during World War II (the subject of Steven Spielberg's film of the same name). Includes objectives, procedures, and several literature,…

  11. Empirical Specification of Utility Functions.

    Science.gov (United States)

    Mellenbergh, Gideon J.

    Decision theory can be applied to four types of decision situations in education and psychology: (1) selection; (2) placement; (3) classification; and (4) mastery. For the application of the theory, a utility function must be specified. Usually the utility function is chosen on a priori grounds. In this paper methods for the empirical assessment…

  12. Pluvials, Droughts, the Mongol Empire, and Modern Mongolia

    Science.gov (United States)

    Hessl, A. E.; Pederson, N.; Baatarbileg, N.; Anchukaitis, K. J.

    2013-12-01

    Understanding the connections between climate, ecosystems, and society during historical and modern climatic transitions requires annual-resolution records with high-fidelity climate signals. Many studies link the demise of complex societies with deteriorating climate conditions, but few have investigated the connection between climate, surplus energy, and the rise of empires. Inner Asia in the 13th century underwent a major political transformation requiring enormous energetic inputs that altered human history. The Mongol Empire, centered on the city of Karakorum, became the largest contiguous land empire in world history (Fig. 1 inset). Powered by domesticated grazing animals, the empire grew at the expense of sedentary agriculturalists across Asia, the Middle East, and Eastern Europe. Although some scholars and conventional wisdom agree that dry conditions spurred the Mongol conquests, little paleoenvironmental data at annual resolution are available to evaluate the role of climate in the development of the Mongol Empire. Here we present a 2600-year tree-ring reconstruction of warm-season, self-calibrating Palmer Drought Severity Index (scPDSI), a measure of water balance, derived from 107 live and dead Siberian pine (Pinus sibirica) trees growing on a Holocene lava flow in central Mongolia. Trees growing on the Khorgo lava flow today are stunted and widely spaced, occurring on microsites with little to no soil development. These trees are extremely water-stressed, and their radial growth is well correlated with both drought (scPDSI) and grassland productivity (Normalized Difference Vegetation Index, NDVI). Our reconstruction, calibrated and validated against instrumental June-September scPDSI (1959-2009), accounts for 55.8% of the variability in the regional scPDSI during the season when 73% of the annual rainfall occurs. Our scPDSI reconstruction places historic and modern social change in Mongolia in the context of the range of climatic variability during the Common Era. Our record…

  13. Porphyry of Russian Empires in Paris

    Science.gov (United States)

    Bulakh, Andrey

    2014-05-01

    Porphyry of Russian Empires in Paris. A. G. Bulakh (St Petersburg State University, Russia). The so-called "Schokhan porphyry" from Lake Onega, Russia, surely belongs to the stones of World cultural heritage. One can see this "porphyry" on the facades of the lovely palace of Pavel I and in the pedestal of the monument to Nicholas I in St Petersburg. There are many other cases of this stone's use in Russia. In Paris, the sarcophagus of Napoleon I Bonaparte is constructed of blocks of this stone. In reality, it is a Proterozoic quartzite. The geological setting and the petrographic and mineralogical characteristics will be reported as well. A comparison with the antique porphyry from the Egyptian province of the Roman Empire is given. References: 1) A.G. Bulakh, N.B. Abakumova, J.V. Romanovsky. St Petersburg: a History in Stone. 2010. Print House of St Petersburg State University. 173 p.

  14. Empirical evaluation of lung solubilities of airborne contamination at Harwell facilities

    International Nuclear Information System (INIS)

    Bull, R. K.; Wilson, G.

    2011-01-01

    Lung solubility is the key parameter in determining intakes and doses from inhalation of airborne contamination. However, information on lung solubility can be difficult to acquire, particularly for the historical exposures that are of relevance to lifetime-dose reconstruction. In this study, an empirical approach has been taken in which over 200 dose assessments, mainly for Pu and Am, from the period 1986 to 2005 were re-evaluated and the solubility mix required for the best fit to the data was determined. The average of these solubility mixtures for any building or facility can be used as the default solubility for retrospective dose assessments for that facility. Results are presented for a radiochemistry facility, a materials development facility and a waste-storage/handling building at Harwell. The latter two areas are characterised by aerosols that are predominantly insoluble (type S), whereas the radiochemistry facility has a heterogeneous mixture of insoluble and soluble aerosols. The implications of these results for dose reconstruction are discussed in the paper. (authors)

  15. Transient dynamic and modeling parameter sensitivity analysis of 1D solid oxide fuel cell model

    International Nuclear Information System (INIS)

    Huangfu, Yigeng; Gao, Fei; Abbas-Turki, Abdeljalil; Bouquain, David; Miraoui, Abdellatif

    2013-01-01

    Highlights: • A multiphysics, 1D, dynamic SOFC model is developed. • The presented model is validated experimentally in eight different operating conditions. • Electrochemical and thermal dynamic transient time expressions are given in explicit forms. • Parameter sensitivity is discussed for different semi-empirical parameters in the model. - Abstract: In this paper, a multiphysics solid oxide fuel cell (SOFC) dynamic model is developed using a one-dimensional (1D) modeling approach. The dynamic effect of double layer capacitance on the electrochemical domain and the dynamic effect of thermal capacity on the thermal domain are thoroughly considered. The 1D approach allows the model to predict the non-uniform distributions of current density, gas pressure and temperature in the SOFC during its operation. The developed model has been experimentally validated under different conditions of temperature and gas pressure. Based on the proposed model, explicit time constant expressions for the different dynamic phenomena in the SOFC are given and discussed in detail. A parameter sensitivity study has also been performed and discussed using the statistical Multi Parameter Sensitivity Analysis (MPSA) method, in order to investigate the impact of parameters on the modeling accuracy.
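    The MPSA idea, classifying Monte Carlo runs as acceptable or not and comparing the conditional parameter distributions, can be sketched as follows. The toy objective and all settings are our own illustration, not the SOFC model:

```python
import numpy as np

def mpsa(model, ranges, n=2000, seed=0):
    """Monte Carlo Multi Parameter Sensitivity Analysis (illustrative sketch).

    Sample each parameter uniformly from its range, split the runs into
    'acceptable' and 'unacceptable' halves by the median objective value,
    and score each parameter by the Kolmogorov-Smirnov distance between its
    two conditional distributions: an insensitive parameter looks the same
    in both halves.
    """
    rng = np.random.default_rng(seed)
    names = list(ranges)
    samples = {p: rng.uniform(*ranges[p], n) for p in names}
    objective = np.array([model({p: samples[p][i] for p in names}) for i in range(n)])
    good = objective <= np.median(objective)

    def ks(a, b):                                   # two-sample KS statistic
        grid = np.sort(np.concatenate([a, b]))
        cdf = lambda x: np.searchsorted(np.sort(x), grid, side="right") / len(x)
        return np.max(np.abs(cdf(a) - cdf(b)))

    return {p: ks(samples[p][good], samples[p][~good]) for p in names}

# Toy 'model': the objective depends strongly on p1 and not at all on p2.
sens = mpsa(lambda p: (p["p1"] - 0.3) ** 2, {"p1": (0.0, 1.0), "p2": (0.0, 1.0)})
print(sens)  # p1 scores much higher than p2
```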

  16. Empirical prediction of ash deposition propensities in coal-fired utilities

    Energy Technology Data Exchange (ETDEWEB)

    Frandsen, F.

    1997-01-01

    This report contains an outline of some of the ash chemistry indices utilized in the EPREDEPO (Empirical PREdiction of DEPOsition) PC program, version 1.0 (DEPO10), developed by Flemming Frandsen of the CHEC Research Programme at the Department of Chemical Engineering, Technical University of Denmark. DEPO10 is a 1st-generation FTN77 Fortran PC program designed to empirically predict ash deposition propensities in coal-fired utility boilers. In this study, expectational data (the empirical basis) from an EPRI-sponsored survey of ash deposition experiences at coal-fired utility boilers, performed by Battelle, have been tested for use on Danish coal chemistry and boiler operational conditions. (au) 31 refs.

  17. The effect of loss functions on empirical Bayes reliability analysis

    Directory of Open Access Journals (Sweden)

    Vincent A. R. Camara

    1999-01-01

    Full Text Available The aim of the present study is to investigate the sensitivity of empirical Bayes estimates of the reliability function with respect to changing of the loss function. In addition to applying some of the basic analytical results on empirical Bayes reliability obtained with the use of the “popular” squared error loss function, we shall derive some expressions corresponding to empirical Bayes reliability estimates obtained with the Higgins–Tsokos, the Harris and our proposed logarithmic loss functions. The concept of efficiency, along with the notion of integrated mean square error, will be used as a criterion to numerically compare our results.

  18. Empirical Scientific Research and Legal Studies Research--A Missing Link

    Science.gov (United States)

    Landry, Robert J., III

    2016-01-01

    This article begins with an overview of what is meant by empirical scientific research in the context of legal studies. With that backdrop, the argument is presented that without engaging in normative, theoretical, and doctrinal research in tandem with empirical scientific research, the role of legal studies scholarship in making meaningful…

  19. An optimal estimation algorithm to derive Ice and Ocean parameters from AMSR Microwave radiometer observations

    DEFF Research Database (Denmark)

    Pedersen, Leif Toudal; Tonboe, Rasmus T.; Høyer, Jacob

    Global multispectral microwave radiometer measurements have been available for several decades. However, most current sea ice concentration algorithms still take advantage of only a very limited subset of the available channels. Here we present a method that allows utilization of all available channels as well as the combination of data from multiple sources such as microwave radiometry, scatterometry and numerical weather prediction. Optimal estimation is data assimilation without a numerical model, for retrieving physical parameters from remote sensing using a multitude of available information. The methodology is observation driven, and model innovation is limited to the translation between observation space and physical parameter space. Over open water we use a semi-empirical radiative transfer model developed by Meissner & Wentz that estimates the multispectral AMSR brightness temperatures, i…

  20. Usability of a theory of visual attention (TVA) for parameter-based measurement of attention I

    DEFF Research Database (Denmark)

    Finke, Kathrin; Bublak, Peter; Krummenacher, Joseph

    2005-01-01

    The present study investigated the usability of whole and partial report of briefly displayed letter arrays as a diagnostic tool for the assessment of attentional functions. The tool is based on Bundesen's (1990, 1998, 2002; Bundesen et al., 2005) theory of visual attention (TVA), which assumes four separable attentional components: processing speed, working memory storage capacity, spatial distribution of attention, and top-down control. A number of studies (Duncan et al., 1999; Habekost & Bundesen, 2003; Peers et al., 2005) have already demonstrated the clinical relevance of these parameters. The present study was designed to examine whether (a) a shortened procedure bears sufficient accuracy and reliability, (b) whether the procedures reveal attentional constructs with clinical relevance, and (c) whether the mathematically independent parameters are also empirically independent…

  1. Empirical P-L-C relations for delta Scuti stars

    International Nuclear Information System (INIS)

    Gupta, S.K.

    1978-01-01

    Separate P-L-C relations have been empirically derived by sampling the delta Scuti stars according to their pulsation modes. The results based on these relations have been compared with those estimated from the model based P-L-C relations and the other existing empirical P-L-C relations. It is found that a separate P-L-C relation for each pulsation mode provides a better correspondence with observations. (Auth.)

  2. Economic reasons behind the decline of the Ottoman empire

    OpenAIRE

    Duranoglu, Erkut; Okutucu, Guzide

    2009-01-01

    This study addresses the economic reasons for the decline and fall of the Ottoman Empire. In contrast to previous research, the paper examines the decline of the empire from an economic perspective, taking both global and domestic developments into account. Although international developments such as industrialization in European countries, pressure on the Ottomans to integrate with the world economy, and global economic factors like depressions and war...

  3. An empirical investigation of Australian Stock Exchange data

    Science.gov (United States)

    Bertram, William K.

    2004-10-01

    We present an empirical study of high frequency Australian equity data examining the behaviour of distribution tails and the existence of long memory. A method is presented allowing us to deal with Australian Stock Exchange data by splitting it into two separate data series representing an intraday and overnight component. Power-law exponents for the empirical density functions are estimated and compared with results from other studies. Using the autocorrelation and variance plots we find there to be a strong indication of long-memory type behaviour in the absolute return, volume and transaction frequency.
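    Tail-exponent estimation of the kind reported here is often done with the Hill estimator; below is a sketch on synthetic Pareto data. The estimator choice and the value of k are our assumptions, not necessarily the paper's method:

```python
import numpy as np

def hill_estimator(returns, k):
    """Hill estimator of the power-law tail exponent from the k largest values.

    A standard tool for this kind of tail analysis; the choice of k (how far
    into the tail to look) is the usual judgement call.
    """
    x = np.sort(np.abs(returns))[::-1]           # descending order statistics
    logs = np.log(x[:k]) - np.log(x[k])          # log-excesses over the k-th value
    return 1.0 / np.mean(logs)

# Sanity check on synthetic Pareto data with true tail exponent alpha = 3.
rng = np.random.default_rng(42)
sample = rng.pareto(3.0, size=100_000) + 1.0     # Pareto tail P(X > x) ~ x**-3
print(hill_estimator(sample, k=1000))            # close to 3
```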

  4. The conceptual and empirical relationship between gambling, investing, and speculation.

    Science.gov (United States)

    Arthur, Jennifer N; Williams, Robert J; Delfabbro, Paul H

    2016-12-01

    Background and aims: To review the conceptual and empirical relationship between gambling, investing, and speculation. Methods: An analysis of the attributes differentiating these constructs, as well as identification of all articles speaking to their empirical relationship. Results: Gambling differs from investment on many different attributes and should be seen as conceptually distinct. On the other hand, speculation is conceptually intermediate between gambling and investment, with a few of its attributes being investment-like, some being gambling-like, and several being neither clearly gambling- nor investment-like. Empirically, gamblers, investors, and speculators have similar cognitive, motivational, and personality attributes, with this relationship being particularly strong for gambling and speculation. Population levels of gambling activity also tend to be correlated with population levels of financial speculation. At an individual level, speculation has a particularly strong empirical relationship to gambling, as speculators appear to be heavily involved in traditional forms of gambling, and problematic speculation is strongly correlated with problematic gambling. Discussion and conclusions: Investment is distinct from gambling, but speculation and gambling have conceptual overlap and a strong empirical relationship. It is recommended that financial speculation be routinely included when assessing gambling involvement, and there needs to be greater recognition and study of financial speculation both as a contributor to problem gambling and as an additional form of behavioral addiction in its own right.

  5. Semi-Empirical Calibration of the Integral Equation Model for Co-Polarized L-Band Backscattering

    Directory of Open Access Journals (Sweden)

    Nicolas Baghdadi

    2015-10-01

    Full Text Available The objective of this paper is to extend the semi-empirical calibration of the backscattering Integral Equation Model (IEM), initially proposed for Synthetic Aperture Radar (SAR) data at C- and X-bands, to SAR data at L-band. A large dataset of radar signals and in situ measurements (soil moisture and surface roughness) over bare soil surfaces was used. This dataset was collected over numerous agricultural study sites in France, Luxembourg, Belgium, Germany and Italy using various SAR sensors (AIRSAR, SIR-C, JERS-1, PALSAR-1, ESAR). Results showed slightly better simulations with the exponential autocorrelation function than with the Gaussian function, and with HH than with VV. Using the exponential autocorrelation function, the mean difference between experimental data and Integral Equation Model (IEM) simulations is +0.4 dB in HH and −1.2 dB in VV, with a Root Mean Square Error (RMSE) of about 3.5 dB. In order to improve the modeling results of the IEM for better use in the inversion of SAR data, a semi-empirical calibration of the IEM was performed at L-band by replacing the correlation length derived from field experiments with a fitting parameter. Better agreement was observed between the backscattering coefficient provided by the SAR and that simulated by the calibrated version of the IEM (RMSE of about 2.2 dB).

  6. Empirical distribution function under heteroscedasticity

    Czech Academy of Sciences Publication Activity Database

    Víšek, Jan Ámos

    2011-01-01

    Roč. 45, č. 5 (2011), s. 497-508 ISSN 0233-1888 Grant - others:GA UK(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10750506 Keywords: Robustness * Convergence * Empirical distribution * Heteroscedasticity Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.724, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/visek-0365534.pdf

  7. Inland empire logistics GIS mapping project.

    Science.gov (United States)

    2009-01-01

    The Inland Empire has experienced exponential growth in the area of warehousing and distribution facilities within the last decade and it seems that it will continue way into the future. Where are these facilities located? How large are the facilitie...

  8. Empirical Descriptions of Criminal Sentencing Decision-Making

    Directory of Open Access Journals (Sweden)

    Rasmus H. Wandall

    2014-05-01

    Full Text Available The article addresses the widespread use of statistical causal modelling to describe criminal sentencing decision-making empirically in Scandinavia. The article describes the characteristics of this model, and on this basis discusses three aspects of sentencing decision-making that the model does not capture: (1) the role of law and legal structures in sentencing; (2) the processes of constructing law and facts as they occur in the processes of handling criminal cases; and (3) reflecting newer organisational changes to sentencing decision-making. The article argues for a stronger empirically based design of sentencing models and for a more balanced use of different social scientific methodologies and models of sentencing decision-making.

  9. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
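    The covariance-matrix construction for target 2-point statistics can be sketched in one dimension: build a joint covariance from the auto- and cross-correlations, then sample it by Cholesky factorisation. The field names and the exponential correlation form are illustrative assumptions, not the paper's parameterisation:

```python
import numpy as np

def correlated_fields(n, corr_len, rho, m=1, seed=0):
    """Draw m paired 1-D random fields with target 1- and 2-point statistics.

    Both fields are zero-mean and unit-variance (the 1-point statistics), with
    exponential autocorrelation exp(-|dx|/corr_len) and pointwise
    cross-correlation rho (the 2-point statistics). The joint covariance is
    factorised once by Cholesky and applied to white noise -- the construction
    idea behind a covariance-matrix rupture generator, reduced to 1-D.
    """
    dx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    auto = np.exp(-dx / corr_len)                   # target autocorrelation
    cov = np.block([[auto, rho * auto], [rho * auto, auto]])
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(2 * n))   # jitter for stability
    z = L @ np.random.default_rng(seed).standard_normal((2 * n, m))
    return z[:n], z[n:]            # e.g. slip and rupture-velocity perturbations

# Over 1000 realizations, the sample cross-correlation approaches rho = 0.6.
slip, vrupt = correlated_fields(200, corr_len=20.0, rho=0.6, m=1000)
print(np.corrcoef(slip[100], vrupt[100])[0, 1])  # near the target 0.6
```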

  10. Development of a Reparametrized Semi-Empirical Force Field to Compute the Rovibrational Structure of Large PAHs

    Science.gov (United States)

    Fortenberry, Ryan

energy surface. QFFs can regularly predict fundamental vibrational frequencies to within 5 cm-1 of experimentally measured values. This level of accuracy represents a reduction in discrepancies by an order of magnitude compared with harmonic frequencies calculated with density functional theory (DFT). The major limitation of the QFF strategy is that the level of electronic-structure theory required to develop a predictive force field is prohibitively time consuming for molecular systems larger than 5 atoms. Recent advances in QFF techniques utilizing informed DFT approaches have pushed the size of the systems studied up to 24 heavy atoms, but relevant PAHs can have up to hundreds of atoms. We have developed alternative electronic-structure methods that maintain the accuracy of coupled-cluster calculations extrapolated to the complete basis set limit with relativistic and core correlation corrections applied: the CcCR QFF. These alternative methods are based on simplifications of Hartree-Fock theory in which the computationally intensive two-electron integrals are approximated using empirical parameters. These methods reduce computational time to orders of magnitude less than the CcCR calculations. We have derived a set of optimized empirical parameters that minimizes the energy differences for molecular ions of astrochemical significance. We have shown that it is possible to derive a set of empirical parameters that will produce RMS energy differences of less than 2 cm-1 for our test systems. We propose to adopt this reparameterization strategy, along with some of the lessons learned from the informed DFT studies, to create a semi-empirical method whose tremendous speed will allow us to study the rovibrational structure of large PAHs with up to hundreds of carbon atoms.
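The reparameterization strategy, tuning empirical parameters to minimize an RMS energy difference against reference data, can be illustrated schematically. The reference data, the one-parameter model, and the scan range below are invented for illustration and are not the actual CcCR or semi-empirical Hamiltonian:

```python
import math

# Toy "reference" data: (coordinate, reference energy in cm^-1) for a few
# test modes, plus a simplified one-parameter model. Both are illustrative
# stand-ins, not a real semi-empirical electronic-structure method.
reference = [(1.0, 950.0), (1.5, 1430.0), (2.0, 1905.0)]

def model_energy(q, beta):
    # Hypothetical parametrized energy: a single empirical scaling factor
    # beta multiplies an assumed linear response of 953 cm^-1 per unit q.
    return beta * 953.0 * q

def rms_error(beta):
    """RMS difference between model and reference energies for parameter beta."""
    return math.sqrt(sum((model_energy(q, beta) - e) ** 2 for q, e in reference)
                     / len(reference))

# Scan the empirical parameter and keep the value minimizing the RMS error,
# mirroring (in miniature) the reparameterization loop described above.
best_beta = min((b / 1000.0 for b in range(900, 1100)), key=rms_error)
```

In the actual work the "model energy" would be the semi-empirical two-electron approximation and the references would be CcCR energies; the optimization loop is the same shape.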

  11. First-principles studies of electronic, transport and bulk properties of pyrite FeS2

    Directory of Open Access Journals (Sweden)

    Dipendra Banjara

    2018-02-01

We present results from first-principles, local density approximation (LDA) calculations of electronic, transport, and bulk properties of iron pyrite (FeS2). Our non-relativistic computations employed the Ceperley and Alder LDA potential and the linear combination of atomic orbitals (LCAO) formalism. The implementation of the LCAO formalism followed the Bagayoko, Zhao, and Williams (BZW) method, as enhanced by Ekuma and Franklin (BZW-EF). We discuss the electronic energy bands, total and partial densities of states, electron effective masses, and the bulk modulus. Our calculated indirect band gap of 0.959 eV (≈0.96 eV), obtained using the room-temperature experimental lattice constant of 5.4166 Å, is in agreement with measured indirect values for bulk samples, which range from 0.84 eV to 1.03 ± 0.05 eV. Our calculated bulk modulus of 147 GPa is in close agreement with the experimental value of 145 GPa. The calculated partial densities of states reproduce the splitting of the Fe d bands that constitutes the dominant uppermost valence and lowermost conduction bands, separated by the generally accepted, indirect, experimental band gap of 0.95 eV.
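As a schematic of how an LCAO (tight-binding) basis yields Bloch bands separated by a gap — not the BZW-EF LDA calculation used in the paper — a two-orbital 1-D chain is enough; the on-site energies and hopping amplitude below are arbitrary illustrative values:

```python
import math, cmath

def bands(k, eps_a=-1.0, eps_b=1.0, t=0.4):
    """Eigenvalues of a 2x2 LCAO Bloch Hamiltonian for a diatomic chain:
    eps_a/eps_b are on-site energies, t the nearest-neighbour hopping."""
    h = t * (1.0 + cmath.exp(-1j * k))        # off-diagonal Bloch sum
    avg = 0.5 * (eps_a + eps_b)
    half = 0.5 * (eps_a - eps_b)
    r = math.sqrt(half * half + abs(h) ** 2)  # closed form for a 2x2 Hermitian matrix
    return avg - r, avg + r                   # (valence, conduction)

ks = [math.pi * i / 100 for i in range(101)]  # sample half the Brillouin zone
valence = [bands(k)[0] for k in ks]
conduction = [bands(k)[1] for k in ks]
gap = min(conduction) - max(valence)          # band gap from the band extrema
```

A real LCAO calculation replaces the 2x2 matrix with one block per basis function and per k-point, but the band gap is extracted from the band extrema in exactly this way.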

  12. Ab-initio Computation of the Electronic, transport, and Bulk Properties of Calcium Oxide.

    Science.gov (United States)

    Mbolle, Augustine; Banjara, Dipendra; Malozovsky, Yuriy; Franklin, Lashounda; Bagayoko, Diola

We report results from ab-initio, self-consistent, local density approximation (LDA) calculations of electronic and related properties of calcium oxide (CaO) in the rock salt structure. We employed the Ceperley and Alder LDA potential and the linear combination of atomic orbitals (LCAO) formalism. Our calculations are non-relativistic. We implemented the LCAO formalism following the Bagayoko, Zhao, and Williams (BZW) method, as enhanced by Ekuma and Franklin (BZW-EF). The BZW-EF method involves a methodical search for the optimal basis set that yields the absolute minima of the occupied energies, as required by density functional theory (DFT). Our calculated indirect band gap of 6.91 eV, from Γ towards the L point, is in excellent agreement with experimental values of 6.93-7.7 eV at room temperature (RT). We have also calculated the total (DOS) and partial (pDOS) densities of states as well as the bulk modulus. Our calculated bulk modulus is in excellent agreement with experiment. Work funded in part by the US Department of Energy (DOE), National Nuclear Security Administration (NNSA) (Award No. DE-NA0002630), the National Science Foundation (NSF) (Award No. 1503226), LaSPACE, and LONI-SUBR.

  13. In silico simulations of tunneling barrier measurements for molecular orbital-mediated junctions: A molecular orbital theory approach to scanning tunneling microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Terryn, Raymond J.; Sriraman, Krishnan; Olson, Joel A., E-mail: jolson@fit.edu; Baum, J. Clayton, E-mail: cbaum@fit.edu [Department of Chemistry, Florida Institute of Technology, 150 West University Boulevard, Melbourne, Florida 32901 (United States); Novak, Mark J. [Department of Chemistry and Applied Biological Sciences, South Dakota School of Mines and Technology, 501 E. Saint Joseph Street, Rapid City, South Dakota 57701 (United States)

    2016-09-15

A new simulator for scanning tunneling microscopy (STM) is presented based on the linear combination of atomic orbitals molecular orbital (LCAO-MO) approximation for the effective tunneling Hamiltonian, which leads to the convolution integral when applied to the tip interaction with the sample. This approach intrinsically includes the structure of the STM tip. Through this mechanical emulation and the tip-inclusive convolution model, dI/dz images for molecular orbitals (which are closely associated with apparent barrier height, ϕ_ap) are reported for the first time. For molecular adsorbates whose experimental topographic images correspond well to isolated-molecule quantum chemistry calculations, the simulator makes accurate predictions, as illustrated by various cases. Distortions in these images due to the tip are shown to be in accord with those observed experimentally and predicted by other ab initio considerations of tip structure. Simulations of the tunneling current dI/dz images are in strong agreement with experiment. The theoretical framework provides a solid foundation which may be applied to LCAO cluster models of adsorbate–substrate systems, and is extendable to emulate several aspects of functional STM operation.
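The link between dI/dz and the apparent barrier height ϕ_ap can be sketched in one dimension. Assuming the standard vacuum-tunneling decay I(z) = I0·exp(-2κz) with κ [Å⁻¹] ≈ 0.5123·√(ϕ [eV]), ϕ_ap follows from the slope of ln I versus z; the barrier value and tip heights below are illustrative, and this omits the tip-sample convolution that the simulator itself performs:

```python
import math

# Decay constant in convenient STM units: kappa [1/Angstrom] = 0.5123 * sqrt(phi [eV])
KAPPA_PER_SQRT_EV = 0.5123

def current(z, phi=4.5, i0=1.0):
    """Model tunneling current I(z) = I0 * exp(-2*kappa*z) for barrier phi (eV)."""
    kappa = KAPPA_PER_SQRT_EV * math.sqrt(phi)
    return i0 * math.exp(-2.0 * kappa * z)

def apparent_barrier(zs, currents):
    """Estimate phi_ap from the finite-difference slope of ln I versus z:
    kappa = -(1/2) d(ln I)/dz, then phi_ap = (kappa / 0.5123)^2."""
    slope = (math.log(currents[-1]) - math.log(currents[0])) / (zs[-1] - zs[0])
    kappa = -0.5 * slope
    return (kappa / KAPPA_PER_SQRT_EV) ** 2

zs = [5.0 + 0.05 * i for i in range(11)]  # tip heights in Angstroms
phi_ap = apparent_barrier(zs, [current(z) for z in zs])
```

For the pure exponential model the input barrier of 4.5 eV is recovered exactly; in the simulator, spatial variation of this quantity over the adsorbate is what produces the dI/dz image.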

  14. In silico simulations of tunneling barrier measurements for molecular orbital-mediated junctions: A molecular orbital theory approach to scanning tunneling microscopy

    International Nuclear Information System (INIS)

    Terryn, Raymond J.; Sriraman, Krishnan; Olson, Joel A.; Baum, J. Clayton; Novak, Mark J.

    2016-01-01

A new simulator for scanning tunneling microscopy (STM) is presented based on the linear combination of atomic orbitals molecular orbital (LCAO-MO) approximation for the effective tunneling Hamiltonian, which leads to the convolution integral when applied to the tip interaction with the sample. This approach intrinsically includes the structure of the STM tip. Through this mechanical emulation and the tip-inclusive convolution model, dI/dz images for molecular orbitals (which are closely associated with apparent barrier height, ϕ_ap) are reported for the first time. For molecular adsorbates whose experimental topographic images correspond well to isolated-molecule quantum chemistry calculations, the simulator makes accurate predictions, as illustrated by various cases. Distortions in these images due to the tip are shown to be in accord with those observed experimentally and predicted by other ab initio considerations of tip structure. Simulations of the tunneling current dI/dz images are in strong agreement with experiment. The theoretical framework provides a solid foundation which may be applied to LCAO cluster models of adsorbate–substrate systems, and is extendable to emulate several aspects of functional STM operation.

  15. Evidence-based Nursing Education - a Systematic Review of Empirical Research

    Science.gov (United States)

    Reiber, Karin

    2011-01-01

The project „Evidence-based Nursing Education – Preparatory Stage“, funded by the Landesstiftung Baden-Württemberg within the programme Impulsfinanzierung Forschung (Funding to Stimulate Research), aims to collect information on current research concerned with nursing education and to process existing data. The results of empirical research which has already been carried out were systematically evaluated with the aim of identifying further topics, fields and matters of interest for empirical research in nursing education. In the course of the project, the available empirical studies on nursing education were scientifically analysed and systematised. The over-arching aim of the evidence-based training approach (which extends beyond the aims of this project) is the conception, organisation and evaluation of vocational training and educational processes in the caring professions on the basis of empirical data. The following contribution first provides a systematic, theoretical link to the over-arching reference framework, as the evidence-based approach is adapted from thematically related specialist fields. The research design of the project is oriented towards criteria introduced from a selection of studies and carries out a two-stage systematic review of the selected studies. As a result, the current status of research in nursing education, as well as its organisation and structure, and questions relating to specialist training and comparative education are introduced and discussed. Finally, the empirical research on nursing training is critically appraised as a complementary element in educational theory/psychology of learning and in the ethical tradition of research. This contribution aims, on the one hand, to derive and describe the methods used, and to introduce the steps followed in gathering and evaluating the data. On the other hand, it is intended to give a systematic overview of empirical research work in nursing education. In order to preserve a

  16. Downside Risk And Empirical Asset Pricing

    NARCIS (Netherlands)

    P. van Vliet (Pim)

    2004-01-01

Currently, the Nobel prize winning Capital Asset Pricing Model (CAPM) celebrates its 40th birthday. Although widely applied in financial management, this model does not fully capture the empirical risk-return relation of stocks; witness the beta, size, value and momentum effects. These

  17. Empirical Differential Balancing for Nonlinear Systems

    NARCIS (Netherlands)

    Kawano, Yu; Scherpen, Jacquelien M.A.; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

In this paper, we consider empirical balancing of a nonlinear system by using its prolonged system, which consists of the original nonlinear system and its variational system. For the prolonged system, we define differential reachability and observability Gramians, which are matrix-valued functions

  18. Data-Driven and Expectation-Driven Discovery of Empirical Laws.

    Science.gov (United States)

    1982-10-10

occurred in small integer proportions to each other. In 1809, Joseph Gay-Lussac found evidence for his law of combining volumes, which stated that a... Patrick W. Langley, Gary L. Bradshaw, Herbert A. Simon; The Robotics Institute, Carnegie-Mellon University, Pittsburgh, Pennsylvania. Interim Report, 2/82-10/82.

  19. A REVIEW of WEBERIAN STUDIES ON THE OTTOMAN EMPIRE

    OpenAIRE

    MAZMAN, İbrahim

    2018-01-01

This study examines the secondary literature on Max Weber’s (1864-1920) writings on Islam and the Ottoman Empire. It demarcates the approaches prevalent in the secondary literature. Three basic themes are apparent: Section a) concentrates on authors who applied Weber’s concepts of patrimonialism and bureaucracy to non-Ottoman countries, such as Maslovski (on the Soviet bureaucracy) and Eisenberg (on China). Section b) focuses on authors who studied the Ottoman Empire utilizing non-Weberian, above all ...

  20. Protein-Ligand Empirical Interaction Components for Virtual Screening.

    Science.gov (United States)

    Yan, Yuna; Wang, Weijun; Sun, Zhaoxi; Zhang, John Z H; Ji, Changge

    2017-08-28

A major shortcoming of empirical scoring functions is that they often fail to predict binding affinity properly. Removing false positives from docking results is one of the most challenging tasks in structure-based virtual screening. Postdocking filters, making use of all kinds of experimental structure and activity information, may help in solving the issue. We describe a new method based on detailed protein-ligand interaction decomposition and machine learning. Protein-ligand empirical interaction components (PLEIC) are used as descriptors for support vector machine learning to develop a classification model (PLEIC-SVM) to discriminate false positives from true positives. Experimentally derived activity information is used for model training. An extensive benchmark study on 36 diverse data sets from the DUD-E database has been performed to evaluate the performance of the new method. The results show that the new method performs much better than standard empirical scoring functions in structure-based virtual screening. The trained PLEIC-SVM model is able to capture important interaction patterns between ligand and protein residues for one specific target, which is helpful in discarding false positives in postdocking filtering.
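The descriptor-then-classifier pipeline can be sketched with stand-ins: synthetic four-component "interaction decompositions" replace the real PLEIC descriptors, and a plain perceptron replaces the SVM. Both substitutions are assumptions made so the sketch stays self-contained; they are not the published method:

```python
import random

rng = random.Random(1)

def make_pose(active):
    """Synthetic 4-component 'interaction decomposition' for one docking pose
    (stand-ins for per-residue vdW/electrostatic terms, not real PLEIC values)."""
    base = [-1.2, -0.8, -0.5, -0.3] if active else [-0.4, -0.1, 0.2, 0.1]
    return [b + rng.gauss(0.0, 0.2) for b in base]

def train_perceptron(data, labels, epochs=50, lr=0.1):
    """Linear classifier standing in for the SVM used by PLEIC-SVM."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):  # y is +1 (true positive) or -1 (false positive)
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

poses = [make_pose(True) for _ in range(50)] + [make_pose(False) for _ in range(50)]
labels = [1] * 50 + [-1] * 50
w, b = train_perceptron(poses, labels)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
accuracy = sum(predict(x) == y for x, y in zip(poses, labels)) / len(poses)
```

The published workflow differs in scale (per-residue decompositions, 36 DUD-E targets, a trained SVM per target), but the shape is the same: decompose each pose into interaction components, then train a supervised classifier on labelled actives and decoys.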